
Personal Discussion on Using AI for Code Generation

Do you use AI to generate code, and how experienced are you? Personally, I use AI to deliver web development projects (both front-end and back-end) and Python tasks. While I have some experience, there are times when I receive code that works but that I don't fully understand. Often, I rely on AI to generate code for projects that I couldn't complete on my own. This lets me deliver projects much faster, but it doesn't necessarily improve my programming skills. I'm interested in hearing about your experiences with AI. How has it transformed your life? How do you use it (ethically, legally, or otherwise)? Any advice or opinions you can share would be valuable.

26th Jul 2024, 1:47 AM
Umar Alhaji Baba
6 Answers
+ 4
If you don't understand the code that AI is creating for you, then you are in trouble. How do you know whether it is correct, or whether it contains bugs? What will you do when it turns out to be bad, or just slightly wrong, or fails in 1 out of 1000 cases? How will you fix it? AI can speed up development with code generation, but you must ensure that you are protected from hallucinations. Prompt engineering is also an art: if you start out with the wrong assumptions, or without specific constraints, your code may not work well. And if you create and publish AI-generated code without quality control, it can be fed back into the LLM and mislead future code generation, degrading code quality for everyone who uses this method.
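For illustration, here is a minimal sketch of that kind of quality control. apply_discount is a hypothetical stand-in for a function an AI might generate for you; the tests pin down the behaviour you actually expect, including the boundary cases where rare failures hide:

import unittest

# Hypothetical stand-in for a function an AI generated for you.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100)."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_case(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_boundaries(self):
        # Edge cases are where "works in 999 out of 1000 cases" breaks down.
        self.assertAlmostEqual(apply_discount(100.0, 0), 100.0)
        self.assertAlmostEqual(apply_discount(100.0, 100), 0.0)

    def test_rejects_invalid_input(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

If a test like one of these fails, you know the generated code is wrong before it ships, even if you didn't write it yourself.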
26th Jul 2024, 6:38 AM
Tibor Santa
+ 3
Generative AI cobbles together ideas from its training data in a way that seems sensible, predicting the most average response. The problems are many, including that:
- No new ideas are created.
- The average idea for a particular subject might be of very poor quality in its training data.
- The ideas may not actually work together, so the code will probably be poor at the least, or even unusable.
- The ideas may not have come up often enough for the AI to even figure out how they are meant to connect.
In short, don't use an LLM to generate code; it's going to be meh.
26th Jul 2024, 4:32 AM
Wilbur Jaywright
+ 2
Yesterday I asked Copilot to write some Python code for me. The end result was way off from what I expected, and I made two feed posts about it. If you ask AI to write code for you, try some extreme cases and see whether it still returns a good value, even if you fully understand how the code works. https://www.sololearn.com/post/1766505/?ref=app https://www.sololearn.com/post/1766509/?ref=app
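A minimal sketch of what that probing might look like; str_to_int is purely hypothetical, so substitute whatever function the AI actually wrote for you:

# Hypothetical stand-in for code an AI handed you.
def str_to_int(s: str) -> int:
    return int(s.strip())

# Don't just test the happy path; throw extreme and malformed inputs at it.
for case in ["42", "-7", "  10  ", "", "3.14", "1e6", None]:
    try:
        print(f"{case!r:>10} -> {str_to_int(case)}")
    except (ValueError, TypeError, AttributeError) as e:
        print(f"{case!r:>10} -> {type(e).__name__}: {e}")

Running a loop like this makes it obvious which inputs the generated code silently mishandles.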
26th Jul 2024, 6:52 AM
Wong Hei Ming
+ 2
Wong Hei Ming, I took the description as given by you and fed it to ChatGPT. The result is code that gives the expected result (without any modifications from my side): https://sololearn.com/compiler-playground/c7E9Tm0HLVKI/?ref=app
26th Jul 2024, 3:51 PM
Lothar
+ 2
There goes the ethical part. Using ChatGPT for challenges involving real people is unfair. These challenges are meant to test your own abilities; if you can't complete them independently, you need more practice. It's all about improving yourself. What happens in the real world when you can't use ChatGPT? Imagine there's a big project that everyone thinks only you can do, and then they find out you know nothing at all. You've simply wasted everyone's time, prevented an actually qualified person from getting the opportunity, and jeopardized the company.
26th Jul 2024, 8:01 PM
Chris Coder
+ 1
Lothar I tried 1047, and it returns 741. Some people agree with that result and some don't, and the challenge poster hasn't posted any comment about it. If the comments in the code are what you told GPT, then the code doesn't comply with them.
26th Jul 2024, 4:33 PM
Wong Hei Ming