6 Misconceptions About Using Generative AI to Improve Programmer Productivity

Generative AI can improve developers’ productivity — but only when they correctly calibrate expectations. Here are six ways to get off on the wrong foot.

Attempting to get through a work day without reading or hearing about AI is like trying to send a text message with a carrier pigeon: it isn't going to happen. Software engineers in particular are inundated with articles exhorting them to use generative AI (GenAI) to write their applications. Most of those essays make lofty promises, asserting that adopting GenAI guarantees better developer productivity. And they often warn developers that they’d better jump on board before their careers go the way of the dodo.

Plenty of people are experimenting with or adopting GenAI, certainly. Gartner predicts eight in 10 enterprises will use GenAI APIs and models or deploy GenAI-enabled applications in production environments by 2026, up from less than 5% in early 2023.

They’re experiencing better programming productivity, too — to some degree. Nine in 10 developers surveyed in the Evans Data 2023 Global Development Survey Report say they got some benefit from using GenAI.

But widespread enthusiasm doesn’t mean every developer should dive right in. Generative AI isn’t ideal in all situations — particularly when programmers make wrong assumptions about its use.

Before you adopt GenAI, take the time to debunk these common misconceptions, curated from the experiences of dozens of software engineers. Then you can use these tools to get the most benefit.

Misconception: You just need to ask for what you want. 

For decades, people have looked for foolproof ways to instruct computers. COBOL initially was presented as a way for non-technical businesspeople to create software. The same pipe dream is being offered today: All you need to do is tell the system what you want, and it’ll create it. Poof… here’s your solution!

However, as any software engineer knows, users don’t always know what they want, much less how to ask for it. Experienced programmers are adept at system design, but human languages do not express things as precisely as computer languages. Moreover, it’s unrealistic to expect junior developers to express non-trivial requirements clearly and unambiguously in natural language faster than senior developers can express those requirements in code.

In the real world? Yes, GenAI can create code based on what you tell it you want. But you get the most productivity by using GenAI for the tasks where computers excel: performing narrowly defined and repetitive tasks, thus freeing human effort for things that require thought.

Also, using GenAI doesn’t mean you can abandon programming languages and system design. It means you need to add a new skill: prompt engineering. You need to know when and how to ask an LLM to rewrite code to make it faster or more secure. You must iterate requests to break down complex problems. That’s a whole suite of expertise yet to be fully developed.

So don’t expect GenAI to write a complex application from a single prompt and have it work perfectly on the first try. While LLMs are useful and powerful, you need to educate yourself on how to use them well. It takes practice.

Misconception: GenAI can create the entire application for you.

Certainly, you can use GenAI to write the code for a standard, mundane application. It’s good at generating code based on patterns and examples — the type of software you knock out regularly, just with different values. 

For many developers, GenAI is a useful, fast research assistant that suggests potential solutions, or an auto-complete that’s aware of the context you’re working in. That genuinely delivers on the promise of improved productivity. Go for it.

However, while you can use GenAI to spit out a bunch of new React components or generate standard Apache configuration files, that is not where developers spend most of their time. The important pieces are down in the weeds, which — at least today — require knowledge of an entire system. Don’t expect any GenAI system to understand the context and intent behind the code.

Furthermore, while GenAI can find potential solutions faster than humans can, it cannot judge the appropriateness or quality of those solutions. Expect GenAI applications to be unsuitable for novel or unconventional problems that require creativity.

One human might ask another: While you’re busy writing more and more code, are you taking enough time to ask whether you’re building the right thing? A GenAI won’t. Human programmers bring intuition, insight and imagination to the table, which AI cannot (yet?) replicate.

Misconception: GenAI’s code quality is acceptable without adjustment.

Most of these tools focus on writing code. But code generation is only the first step.

By far, developers’ biggest concern with GenAI is the quality of the code it emits. Human oversight and intervention are crucial to ensure its quality, security and reliability. Is the generated code correct? Was the training input code correct? Did the AI understand what you were asking? Did you specify the problem correctly, with all the boundary conditions? 

That code has to be maintained. But by whom? Who understands it?

Senior developers already look at someone’s generated close-enough code (whether from a Design Patterns book, a previous project, or Stack Overflow), correct it to what they need, and gain productivity thereby. They barely realize they’re doing it. A junior developer lacks the experience or confidence to know what needs to be corrected, and… that doesn’t end well.

GenAI can create shockingly good working code, but that code resembles the example code that tech writers use. The generated code doesn’t take advantage of modularization, has few (if any) functions, relies on imperative control flow rather than higher-level abstractions, doesn’t use organizational or naming features, and so on. Developers, especially junior developers, think “it works” (or, worse, “it works on my machine,” or “it works if called correctly”) is the end of the job. In reality, code quality is iterative: working code is the starting point, not the finish line.
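As a concrete illustration — a hypothetical snippet, not output from any particular tool — here is the kind of flat, inline code an assistant often emits, followed by the modular revision an experienced developer would make:

```python
# Hypothetical example of flat, imperative code a GenAI assistant
# might emit: it works, but everything is inline and unnamed.
orders = [("widget", 3, 4.00), ("gadget", 1, 9.50), ("widget", 2, 4.00)]
total = 0.0
for o in orders:
    total = total + o[1] * o[2]
print(total)

# A human revision: a named type and a small function make the same
# logic reusable, testable, and self-describing.
from dataclasses import dataclass

@dataclass
class Order:
    sku: str
    quantity: int
    unit_price: float

def order_total(orders: list[Order]) -> float:
    """Sum quantity * unit_price across all orders."""
    return sum(o.quantity * o.unit_price for o in orders)
```

Both versions compute the same answer; the difference is what happens six months later, when someone has to change or test the logic.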

Generated code may contain vulnerabilities, bugs or inefficiencies that require human review and testing. Relying solely on AI for code generation without rigorous testing and quality assurance processes can lead to unreliable software.

Worse, it can create false confidence in the code generated. For example, some academic studies suggest developers who use AI assistants write less secure code.
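A classic example of the kind of flaw those studies describe (again, a hypothetical snippet, not output from any specific tool) is SQL built by string interpolation — a pattern that invites injection — versus the parameterized query a careful reviewer would insist on:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern sometimes seen in generated code: interpolating
    # user input directly into the SQL string. An input such as
    # "x' OR '1'='1" rewrites the WHERE clause and returns every row.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value as a literal,
    # so hostile input cannot alter the query's structure.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Both functions look equally plausible at a glance, which is exactly why a developer who trusts generated code without review can ship the first one.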

Before GenAI can be treated as trustworthy — and a true productivity enhancement — it needs to generate test plans and create tests to prove its solutions’ correctness (to whatever degree testing accomplishes that). But today, a developer must decide whether the solutions it presents are viable.

Misconception: You can trust the current crop of tools implicitly.

All the fawning, “Adopt GenAI now!” stories take it as a given that the tools are “good enough.” That perception sets up everyone for disappointment.

Remember: This is new. Using GenAI today is like using Mosaic in 1994. Expect the field to improve rapidly. But don’t expect the tools to do everything or to do it well.

Beyond the technical issues, many uncertainties remain to be addressed.

One concern is the training data. Generative AI models require large amounts of high-quality training data to perform effectively, and that data needs to be updated regularly. The industry also has not yet resolved how to handle the biases and limitations in the training data that can manifest in the generated code.

Another issue is AI’s ethical and legal implications, and the governmental oversight that may ensue. Of current note is the EU AI Act, the first comprehensive regulation on AI proposed by a major regulator. Developer productivity may be hampered by the need to work with lawyers to address data privacy, intellectual property rights and the potential for unintended consequences.

Misconception: Developers’ productivity will instantly skyrocket.

All those breathless articles touting GenAI for application development suggest that integrating the generated code will magically result in long-term productivity increases.

It does make a difference. But set your expectations for “how much” to a real-world scenario. According to the Evans Data survey, 87% of developers who have used GenAI see a positive impact on their project development timeline. Twenty-two percent say GenAI reduced the time they spent by more than 20%. That’s a healthy improvement, surely.

Too many essays, though, start from the premise that a developer’s productivity is measured by the speed of code generation. As if typing is the bottleneck, and the only thing a developer needs is a way to type faster. More code does not mean more value.

Misconception: GenAI will be so good that developers won’t be necessary. 

Every technological innovation is accompanied by warnings that it’ll replace developers or IT personnel. GenAI is the latest to inspire that prediction.

Some people are taking the warnings seriously. According to the Evans Data survey, 28% of developers are extremely concerned that AI and machine learning might eventually leave them without a job, and 46% are very concerned.

That leaves about a quarter of developers who aren’t especially worried about GenAI’s effect on their careers. Their primary reasons echo the sentiments discussed here: their jobs require creativity that AI will never have (34%), AI can only do what it’s trained for and told to do (25%), and the developer expects to retire before AI becomes a factor (13%).

The computer industry regularly warns developers that a new technology will eliminate their jobs. Usually, it shifts their role to a different type of productivity. “GenAI will just allow developers to produce more,” explained one developer. “I have never seen a software house run out of work for developers to do; there is always ‘technical debt.’”

Expect productivity enhancements — not magic.

Understanding these misconceptions is essential for adopting generative AI tools effectively and responsibly in programming workflows. The rate of improvement in the models is breakneck, the GenAI tools are scaling linearly, and there’s no reason to think the architectures are near theoretical or structural performance limits.

GenAI has the potential to enhance productivity and innovation in software development, but it’s crucial to recognize its limitations and to address associated challenges proactively.

Could your business benefit from an expert partner in AI solutions? Reach out and let’s explore how GAP can help.