AI is making you a worse programmer. And that’s a good thing.

Originally published in Spanish here.

In a post on its blog, Anthropic shares the results of a study that sought to understand the impact of AI usage on the acquisition of technical skills:

In a randomized controlled trial, we examined 1) how quickly software developers acquired a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they had just written.

The results:

We found that using AI assistance led to a statistically significant decrease in skill mastery. On a quiz covering concepts they had used just minutes earlier, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two grade levels. AI use slightly accelerated task completion, but this did not reach the threshold of statistical significance.

Although the context here is AI, these results do not surprise me. Not because of anything particular to AI, but because this is exactly what happens every time a technology eliminates a problem you previously had to solve by hand.

Not our first rodeo

When I learned to program in Objective-C around 2008, before ARC, every time you created an object in memory with [[Object alloc] init], you had to make sure to free that memory with [object release] once you were done using it. Forget the release and you had a memory leak: at best, your app felt slow; at worst, the system killed it for hoarding memory. Release one time too many and you'd hit a SEGFAULT and your app would crash unexpectedly.
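A minimal sketch of what that manual cycle looked like (the names are illustrative, not from any real project, and it only compiles with ARC disabled, e.g. clang -fno-objc-arc):

    // Pre-ARC Objective-C: Manual Retain-Release.
    #import <Foundation/Foundation.h>

    int main(void) {
        // alloc gives us ownership: retain count +1.
        NSMutableArray *items = [[NSMutableArray alloc] init];
        [items addObject:@"first"];

        // Every alloc must be balanced by a release once we're done.
        [items release];
        // Forget that line: memory leak.
        // Call it a second time: over-release, and a likely crash.

        return 0;
    }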

Then ARC arrived, and suddenly those of us programming for iOS no longer had to worry about manually closing that cycle, because the compiler would do it for us. Not only did we write less code, but we also had a compiler-level guarantee that we no longer needed to keep track of references by hand.
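Under ARC, the same sketch loses the manual bookkeeping entirely; the compiler inserts the releases at the right points:

    // The same code under ARC: no release calls allowed or needed.
    #import <Foundation/Foundation.h>

    int main(void) {
        NSMutableArray *items = [[NSMutableArray alloc] init];
        [items addObject:@"first"];
        // No [items release]: the compiler emits it automatically
        // when the last strong reference goes out of scope.
        return 0;
    }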

Overnight, we no longer had to think about it.

Of course, other problems emerged (ARC still can't break a retain cycle for you, for instance), and it wasn't the end of bugs in iOS applications, but that particular problem no longer existed.

Someone who decides to learn Swift as their first programming language in 2026 will not have to learn the memory management concepts I had to learn nearly 20 years ago, just as I never had to learn the processor-register fundamentals that were vital when assembly was the highest level of abstraction we had.

And yet, I've been able to build a career in software without knowing how to move values between registers by hand.

The abstraction layer

What AI is doing for software development has the same practical effect as earlier advances in compilers, type systems, and static code analysis: it creates an abstraction layer over fundamental problems, letting programmers stop worrying about an entire category of issues.

This process is not an anomaly caused by AI; it is the continuation of a long historical trajectory.

Every major advance in computing has given us permission to "ignore" a technical difficulty so we can think about more complex problems. We went from manually managing bits in registers to delegating data organization to C++'s encapsulation. From wrestling with memory management to trusting Java's garbage collector. And from writing infrastructure code to expressing only intent in Ruby or JavaScript.

What we are experiencing today with AI is simply the displacement of that boundary: if we previously abstracted hardware and memory, today we are abstracting syntax itself. Code is becoming an implementation detail so the programmer can focus purely on architecture and purpose.

Worse programmer, better engineer

This is the point most people miss when they look at these studies. Being a "worse programmer", in the sense that you can no longer write a sorting algorithm from memory or that you need to look up the exact syntax of a Python decorator, is not a bug but a feature.

The question has never been "how well do you write code?". The question has always been "how well do you solve problems?". And solving problems requires a completely different set of skills:

Knowing which problem you are actually solving. Not the problem the user explicitly asked for, but the underlying problem that is creating friction.

Knowing when not to write code. When the solution is to change a process, reorganize a team, or simply say no.

Knowing how to verify whether your solution works. Not whether it compiles. Not whether it passes tests. Whether it solves the real problem for the real people who have it.

AI can write the code. It cannot do any of these other things.

From syntax to systems

Just as someone learning Swift today doesn't have to worry about the retain/release cycle, and I didn't have to worry about moving values between processor registers, AI is raising the level of abstraction to the point where code itself is no longer the important part.

Except this time, the abstraction layer is higher than ever. Completely outside the technical domain. It’s no longer about writing syntactically correct code. It’s about knowing which problem to solve and how to verify that the solution—the one you designed and the AI implemented—actually solves it.

That is a completely different skill. And that is the skill that defines an engineer.

If you are navigating this transition—from writing code to designing systems, from executing to deciding what is worth executing—I work with people in technology who want to develop exactly these skills that AI cannot abstract away: strategic thinking, solution architecture, and the ability to distinguish between the real problem and the apparent one.


Schedule a discovery session and let's talk about where you are and where you want to go.
