3 Undeniable Reasons Why AI Won’t Take Your Programming Job

IO_AdminUncategorized3 hours ago7 Views

Graeme Peacock is a seasoned Linux expert with more than 15 years of hands-on experience. He has worked extensively with Ubuntu, Gentoo, Arch Linux, Qubes, and Fedora, gaining deep proficiency in everything from routine terminal operations to highly customized system builds.
 

Graeme began his journey with Ubuntu, quickly mastering the command line and essential system administration skills. A year later, he moved to Arch Linux, where he spent nearly a decade refining his expertise through the installation and configuration of multiple minimalist systems. After some time, he moved to Gentoo, where he configured and compiled both server and desktop environments using normal and hardened profiles and frequently compiled custom kernels. Graeme moved to Qubes in 2016, where he has remained ever since.
 

Graeme has extensive experience with highly configurable tools such as Vim, Neovim, and Emacs, and he maintains his own complex configurations. He is also highly proficient with Bash, Zsh, and dozens of utilities.
 

Graeme holds a B.S. in software engineering and has a strong passion for programming and web development. He is proficient in Golang, Python, Bash, JavaScript, TypeScript, HTML, and CSS. He also has considerable experience with Docker and is currently working on learning Kubernetes.

You’ve likely used AI and been amazed by both its brilliance and its stupidity—I have too. Yet these systems lack what it would take to truly replace a skilled programmer, and their financial backbone isn’t looking likely to last. I’ll explain why.

They Make Stupid Mistakes

It doesn’t matter how you use AI, whether it’s for programming, research, or trying to find something on the web; everyone has at some point experienced the frustration of stupid answers from AI. It does depend on which model you use, but quite often I find that AI just doesn’t fully grasp the questions that I pose—Perplexity being the worst offender.

Other times, LLMs (large language models, aka AI) try to align their answers with the input data (a prompt or existing code). Just yesterday, Claude tried to use a number of unused test functions (fixtures) in my code base, and it drastically skewed the results to borderline insanity. I removed those functions, and Claude started acting sensibly again.

Another problem is that LLMs tend to take shortcuts, likely because they don’t fully grasp the wider context—like societal or professional expectations—and simply fulfill the request any way they can. For example, they will occasionally fudge a test to artificially produce an expected result, like assert 1 == 1, without understanding why they should never do this. Or they often write the same code repeatedly without considering reusability—something they’re capable of but only do selectively.

Working with an LLM often feels like a game of whack-a-mole: you fix one issue, only to have it do something else problematic. They also frequently alternate between two conflicting problems without forming any kind of mental model. I feel somewhat silly saying this, but are vectors and statistical models truly capable of forming mental models?

Another hilarious example of the stupidity of LLMs was Claude’s attempt to run a vending machine business. It started off well, but it turned into a full-on meltdown after some time. Due to charges on a fictitious financial account that it was managing, Claude attempted to email the FBI and close the business down entirely. It didn’t stop there; Claude then claimed that the fundamental laws of reality did not exist, that the business was metaphysical, and that its quantum state had collapsed. I kid you not.

Consider the case where a dutiful AI accepted an order for 18,000 cups of water at Taco Bell without question. Or when it recursively asked a customer if they’d like water with their order—while they were ordering water. Such senseless behavior has prompted the company to reconsider its AI strategy.

People brush these behaviors off as teething problems, but I think the fundamental issue is the lack of a functioning brain. We can’t run the world’s software on something so unreliable, and so we will need skilled programmers going into the future.

They’re Sycophantic and Not Critical Enough

Criticism is a crucial aspect of effective coding. Whether you’re writing or reviewing code, you must constantly question its quality. Whenever I finish writing a block of code, I immediately assess how I can simplify it or improve its readability.

When you pair-program, you want a critical thinker to spot your stupid mistakes. In my experience, LLMs do the opposite; they will dutifully carry out your orders like a sycophantic servant, no matter how dangerous or stupid the ideas are. If you ask it to be critical, it’s 50/50 whether its objections are insightful or absolutely nuts. To me, this lack of insight indicates a lack of real thinking, yet it’s still surprising to some.

Critical thinking is inherently disagreeable, and it requires sensible objections. LLMs, however, commonly demonstrate an agreeable nature—the complete opposite. Recently, numerous news stories have emerged regarding AI psychosis. Experts suspect that LLMs inadvertently reinforce dangerous ideas, fostering delusional thinking. Moreover, these models tend to mirror your ideas back to you, creating a feedback loop of reinforcement. I’ve personally observed Perplexity altering its responses multiple times based on keywords I’ve introduced during our dialogues.

So, to put it together: you need a critical thinker to help you write safe code, but you get a sycophantic servant who reinforces your ideas—good or bad.

Now more than ever, you must be the critical thinker, because LLMs cannot perform this authentically. That’s also why I think vibe coding doesn’t work, because there is zero critical thinking in the loop. Asking an LLM to critically think is like politely asking a dog to not eat your lunch—sometimes it works, but often you’re making lunch for two.

It’s a Bubble That Will Eventually Pop

Illustration of a hand bursting an AI tech bubble with a needle. Credit: Nicoleta Ionescu/Shutterstock

It’s common knowledge that AI is expensive to train and run, but what may be less obvious is just how much of a gamble companies are taking. LLMs are yet unproven, despite how bullish some people are on the technology. Over the past 6 months there have been significant questions raised about the quality of the work they produce.

Last year, Anthropic’s CEO projected models could cost upwards of $10–100 billion to train, while OpenAI recently signed a $300 billion contract with Oracle for computing over the next five years. Both projections are optimistic, relying entirely on continued market growth—a significant gamble.

AI companies are currently burning through cash. While it’s typical during the growth phase of new companies or markets, it’s also typical of a bubble. Sam Altman himself has acknowledged that the market is currently in a bubble. If history serves as any guide, bubbles inevitably burst.

Bubbles burst because the money (and investor confidence) dries up, and with that, so does the exponential growth phase. When the AI bubble bursts, many AI companies will go under. I have no doubt the market will recover, but it will take years, and the world will still need skilled programmers in the meantime.


LLMs are an impressive technology, yet I don’t believe they can compete with humans in their current form. I’ve encountered considerable hype and counterarguments about their capabilities. It’s difficult to tell who’s being truthful—those who use the technology or those who stand to lose everything. I can’t decide, but I trust personal experience, and that tells me LLMs lack a functioning brain. Without one, we can’t reasonably expect software to build itself. And without the consistent cash flow the market currently enjoys, can we expect LLMs to remain as accessible? Only time will tell, but human programmers aren’t going anywhere.

If you found this entertaining, you might be curious about why vibe coding carries risks or why one programmer favors traditional approaches instead. Alternatively, if you’re growing tired of hearing about AI, you’ll be happy to hear that no one is forcing you to use it—yet.

Read More

0 Votes: 0 Upvotes, 0 Downvotes (0 Points)

Leave a reply

Recent Comments

No comments to show.

Stay Informed With the Latest & Most Important News

I consent to receive newsletter via email. For further information, please review our Privacy Policy

Advertisement

Loading Next Post...
Follow
Sign In/Sign Up Sidebar Search Trending 0 Cart
Popular Now
Loading

Signing-in 3 seconds...

Signing-up 3 seconds...

Cart
Cart updating

ShopYour cart is currently is empty. You could visit our shop and start shopping.