The dominant narrative in recent times has been that AI will replace us: robots, algorithms, machines that will render us obsolete. Headlines speak of “job displacement,” “the automation of work,” “the end of the human workforce.” I don’t agree with those headlines at all; they fundamentally misunderstand what AI is. It won’t replace us, and it isn’t an independent agent that reasons its way through tasks.

TL;DR: From my point of view, AI is a new level of abstraction in our cognitive chain, one more layer in the stack we’ve been building through decades of technological progress. But here lies the subtle trap: each level of abstraction hides a prior level of understanding.

Artificial intelligence as an abstraction layer

Each layer of simplification lets us operate at a higher level without needing to understand the underlying details, at least in engineering. A driver doesn’t need to know how the engine works; the dashboard itself is an abstraction layer that hides complexity. Operating systems do the same between hardware and our everyday applications. It’s an extraordinarily effective strategy for scaling complexity, but it carries a cost we rarely account for: the loss of deep understanding.

Contemporary AI works exactly like these traditional abstraction layers. Large language models sit between our declarative intentions (prompts) and the results. For a student who doesn’t yet understand the subject at hand, using one means effectively externalizing their thinking.
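To see this layer from the outside, here is a minimal sketch in Python; the ask function and its canned reply are hypothetical, not any particular vendor’s API:

```python
# Hypothetical stand-in for a real model call; in practice this would be
# an HTTP request to some LLM provider's API.
def ask(prompt: str) -> str:
    return "a fluent, plausible answer"

# The caller states an intention and receives a finished result.
# Everything in between (tokenization, billions of parameters, sampling)
# stays hidden, just like the engine behind a dashboard.
essay = ask("Summarize the causes of the French Revolution in 300 words.")
print(essay)
```

The whole point of the layer is that nothing below the function signature needs to be understood in order to use it.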

Don’t misunderstand me: AI is a very powerful tool. Just as someone excellent at mental arithmetic can be outpaced by someone with a calculator, in the modern world someone using artificial intelligence will leave you behind in your own field. My path as a programmer began with Assembly, the “lowest” layer between the physical machine and the software. Today that skill is admirable in the rare person who still has it, but being able to optimize at such a low level no longer matters to almost anyone: compilers and tooling were created and polished to the point that it stopped being important.

When we use a calculator, we don’t need to understand division. When we use an AI, we don’t need to exercise reasoning. The replacement doesn’t come from the machine trying to be intelligent; it comes from us giving up on being so.


Outsource Philosophy

You can’t delegate what you are

As context for the image above (ironically, generated by AI): this is the most provocative intuition in today’s debate. Thinking is not a task that can be outsourced like any other. When we delegate the calculation of a salary to software, we gain efficiency without losing any critical cognitive capacity. But when we delegate reasoning, analysis, and the synthesis of ideas, we lose something more fundamental: the neural plasticity that makes that reasoning possible. At the end of the day, we’re talking about a muscle.

And of course this happens constantly in the modern world. Nowadays I won’t drive anywhere without first turning on Google Maps, even if I’ve been to that place a thousand times and know the route by heart. Google Maps knows the best route better than I do (an accident here, heavy traffic in a specific area there), and I don’t remember 90% of my phone book either, beyond two or three specific numbers. The reasoning that follows from this premise is very simple: everything we delegate to technology is a capacity we lose, so we must be strategic about what we delegate.

The false dichotomy of efficiency

There is an almost irresistible temptation to accept this tradeoff. Yes, we forget how to reason or to create, but we become extremely fast and efficient, qualities that in today’s world make us invaluable. Isn’t that a fair trade?

Let me tell you it isn’t: critical thinking and creative capacity are not luxuries; they are the foundation on which everything else is built.1

When we blindly trust AI and its recommendations without critically evaluating their assumptions, we are not becoming more efficient. We are making ourselves vulnerable to a handful of simple mathematical expressions, with a variability parameter called temperature, deciding for us at random, and creating problems that people who can still reason will then have to solve.
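To make “deciding at random” concrete, here is a minimal sketch of temperature sampling, assuming only NumPy; the function name and the toy scores are mine, not any particular model’s:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample one option from raw model scores (logits).

    Temperature rescales the scores before the softmax: low values
    sharpen the distribution (near-deterministic), high values
    flatten it (more random).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    # Softmax: exponentiate and normalize (shift by the max for stability).
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Three candidate "next tokens" with fixed scores:
logits = [2.0, 1.0, 0.5]
print(sample_with_temperature(logits, temperature=0.2))  # almost always 0
print(sample_with_temperature(logits, temperature=2.0))  # far more spread out
```

At low temperature the highest-scoring option wins almost every time; at high temperature the choice spreads out, and that spread is exactly the randomness we are delegating to.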

An emerging proposal in education suggests using AI strategically as scaffolding that gradually recedes, rather than as a permanent replacement. AI should calibrate cognitive load—reducing it enough to prevent overwhelm, but maintaining sufficient challenge to generate the stress that drives growth. It’s about hitting that “sweet spot” of optimal difficulty that enables genuine development.2

But this requires a cultural resistance that is difficult to imagine in the era of maximum efficiency. It requires valuing slow thinking, the difficult process, the possibility of making mistakes. It requires saying “no” to convenience when that convenience costs more than it yields.

Conclusion

In the end, the question is not whether we should use artificial intelligence, but rather “What kind of person do I want to be?” Every time we delegate an act of thinking to a machine, we become something simpler: less capable of surprising ourselves, less able to navigate uncertainty, and less resilient in the face of the unexpected. And come on, that is the most interesting thing life has to offer.