Written on the 16th of April 2025.

The hidden costs of AI convenience

Artificial intelligence, often abbreviated as AI, is no longer a futuristic concept; it has become an integral part of professional workflows. Whether it's automating routine tasks, generating content, or writing code, artificial intelligence makes things possible that previously felt time-consuming or even unattainable. This naturally offers advantages: AI systems help us work more efficiently and open up tasks to people who could not have performed them before. That sounds appealing, and it is. But it also raises an important question: what happens when we use artificial intelligence not just as a tool, but as a replacement for the thinking normally required to truly learn and master something?

In this article, I want to delve deeper into this. Because although artificial intelligence is incredibly powerful, there is also a risk in the ease with which we deploy it. If we become too dependent on it for important tasks, such as problem-solving or conducting in-depth analyses, we skip the mental work those tasks normally demand. Without that process, it becomes difficult to develop genuine expertise.

A good example is the programmer who uses artificial intelligence to write code. At first glance, this seems ideal: you get quick results and can focus on the bigger picture. But what happens when you rarely need to understand the underlying logic yourself anymore? If you blindly trust what the artificial intelligence suggests, do you still learn to solve problems systematically, or to trace errors to their source? For me, this is not an abstract issue. I work with other developers daily and notice this concern arising more frequently. Artificial intelligence certainly helps us work faster, but I sometimes wonder whether we are outsourcing the core of our profession.
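To make this concrete, here is a hypothetical example (my own, not from any real assistant session) of the kind of subtle bug that plausible-looking generated code can hide. The function below reads correctly at a glance, but its mutable default argument is created once and shared across calls; a reviewer who blindly trusts the suggestion, without understanding how Python evaluates default values, would likely miss it:

```python
def add_tag(item, tags=[]):
    """Append item to tags and return the list.

    Subtle bug: the default list is built once, at function
    definition time, and is shared by every call that omits `tags`.
    """
    tags.append(item)
    return tags


first = add_tag("draft")
second = add_tag("review")  # silently carries state from the first call
print(first)   # ['draft', 'review'] -- the same shared list as `second`
print(second)  # ['draft', 'review']


def add_tag_fixed(item, tags=None):
    """Correct version: create a fresh list per call when none is given."""
    if tags is None:
        tags = []
    tags.append(item)
    return tags


print(add_tag_fixed("draft"))   # ['draft']
print(add_tag_fixed("review"))  # ['review']
```

Spotting the difference between these two versions requires exactly the kind of foundational knowledge the article argues we risk losing when the thinking is outsourced.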

That doesn't mean we should avoid artificial intelligence altogether. It can be a fantastic tool, provided we use it consciously and thoughtfully. By remaining critical about how we use it and, especially, why, we can benefit from the convenience without it coming at our own expense.

Short-term gain versus long-term development

It's understandable that many people let artificial intelligence take over the thinking. It offers immediate benefits: it's often faster, requires less mental effort, and bypasses complex thought processes. That feels productive, especially when the result is usable. But precisely in that convenience lies a tension. The short-term gain can come at the expense of something much more valuable: the long-term learning process. How sustainable is that productivity if you learn less from it yourself? Does your understanding grow along with the output generated by artificial intelligence?

To assess this properly, it's important first to understand how humans naturally develop complex skills. Expertise doesn't arise simply by consuming information. It's an active, iterative process, and often a challenging journey. And that's precisely what makes it valuable. A few key components of that learning process:

- Active practice: performing the task yourself, not just reading about it.
- Making and correcting mistakes: errors expose the gaps in your understanding.
- Feedback and reflection: comparing your result with better solutions and asking why they differ.
- Repetition: revisiting the skill until it becomes second nature.

These are the building blocks of true craftsmanship. But they require time and, above all, effort. When a tool takes these processes out of your hands, you miss a large part of that learning experience. Take the programmer again: someone who uses artificial intelligence to generate code. What happens when that becomes a habit?

The illusion of competence

The ease with which artificial intelligence can perform tasks brings not only practical advantages; it also introduces a less visible pitfall: the illusion of competence. Because you achieve results quickly and smoothly, it might seem as though you master the underlying skill. You get what you need, and that feels productive. But the reality is often different: your own contribution is limited to formulating a prompt, while the thinking, the real expertise, is provided by the tool.

This dynamic can inadvertently lead to dependency. Without the tool, you suddenly find yourself less capable: you may not know how the solution was created, cannot adapt it independently, and when the artificial intelligence fails or needs to operate just outside its scope, you are left empty-handed. This isn't unwillingness or laziness, but a logical consequence of a lack of foundation.

True understanding enables you to judge the work of an artificial intelligence for quality, correctness, and applicability. It allows you to recognize errors, improve ideas, and know the limits. But without that basic knowledge, it becomes difficult to remain critical. You lack the frame of reference to check if something is actually correct. Outsourcing core tasks rarely remains without consequences. This applies not only to individuals but also to teams, organizations, and even society as a whole.

When you routinely delegate complex tasks, your brain simply gets less training. The mental effort required for growth is neglected. And that has consequences. Skills develop more slowly or even stagnate completely. In the long run, this can lead to limited career opportunities, especially in roles where in-depth expertise and creative problem-solving are essential. Professional satisfaction may also decrease: those who primarily consume instead of create often miss the feeling of ownership and growth. And when technology changes or the AI tool you rely on becomes obsolete, you are doubly vulnerable: you lack not only the tool but also the skill to work without it.

What begins at the individual level affects teams. When multiple team members rely heavily on external help for core competencies, a shortage of in-depth knowledge arises within the group. The capacity to understand and solve complex, unforeseen problems diminishes. Furthermore, the innovative potential of an organization can come under pressure. True innovation rarely stems from superficial solutions. It requires a deep, lived understanding of the field. Without that foundation, ideas often remain superficial. And the more dependent a team or organization becomes on specific tools, the less flexible and adaptive it becomes – especially when the situation demands something beyond the standard capabilities of artificial intelligence.

If these trends continue, they could even have an impact at the societal level. We might shift towards a culture where speed and convenience outweigh knowledge and skill. Expertise that people train and study for years could become less valued. This can have major consequences for education, the labour market, and how we approach talent and development. If training professionals seems less important than 'smartly using' AI tools, we risk losing sight of the core of learning – and with it, critical thinking, self-awareness, and understanding.

Conscious use

The potential of artificial intelligence as a tool is undeniable. The challenge lies not in avoiding it, but in developing a conscious, strategic approach to it. The key to that balance starts with a mindset shift: artificial intelligence should not be seen as a replacement for human thinking, but as an extension of it: an instrument that, used correctly, strengthens rather than weakens human capabilities. This requires a re-evaluation of what is fundamental. Routine subtasks can easily be automated, but the thinking, the analysis, and the critical assessment must remain human tasks. That means investing in the basic skills of a discipline: learning the underlying principles, practising the essentials, and developing insight. Only from this solid knowledge base can you effectively use artificial intelligence and assess, adjust, and integrate its results within a broader framework.

Equally important is changing the interaction with artificial intelligence: from passive consumption to active use. Instead of just requesting answers, you can use artificial intelligence as a partner in your learning process. Ask questions, ask for explanations, generate practice material, and test your own reasoning. Use artificial intelligence as a mirror for your own thought process. For example, don't just ask for a solution; investigate why that solution was chosen. What are the alternatives? Can it be better, more efficient, more elegant? This active engagement significantly enhances the learning effect. Improving, adapting, or reproducing AI results requires delving deeper, and that is precisely where learning takes place. The crucial step is setting conscious boundaries: determine which tasks are essential for your development and perform those yourself, even if it takes longer. In this way, artificial intelligence becomes not a shortcut, but a lever for learning.

Labour, capital, and power dynamics

The discussion about artificial intelligence and expertise touches upon a deeper, structural issue. Its rise evokes a clear historical parallel: the Industrial Revolution. Back then, specialized artisans with years of experience and deep-rooted craftsmanship gave way to production processes divided into simple, repeatable tasks. The craftsman was replaced by the factory worker. Control over work shifted to the owners of the machines, and with it, power. A similar power shift now looms again, albeit in a different context. Artificial intelligence enables organizations to partially automate knowledge-intensive tasks such as coding, analysing, designing, or writing. This makes human labour in these areas partly replaceable or less distinctive. The value of knowledge work, once based on expertise and experience, comes under pressure. The deployment of artificial intelligence can lead to cheaper, faster output, even if the quality is lower. This weakens the bargaining position of human employees and strengthens that of the employer.

The fundamental problem is that AI systems are typically developed, managed, and financed by capital-rich entities. The productivity gains made possible by artificial intelligence primarily accrue to the owners of these systems. Employees contribute input through their data, usage, and work processes, but do not automatically share in the economic benefits. History teaches us that technological progress does not automatically lead to a proportional improvement in the position of the working class. During the Industrial Revolution, it took decades for workers' real wages to rise. Productivity gains initially flowed mainly to capital.

Therefore, if we analyze artificial intelligence solely from the perspective of individual development, we miss an important part of the story. Its rise also calls for a reconsideration of socio-economic relationships. How do we ensure that productivity growth also leads to broader prosperity improvement? How do we prevent control, knowledge, and value from concentrating in the hands of a small group? And what is the role of education, policy, and collective bargaining in this? The extent to which workers will benefit from the potential AI revolution or be excluded from its gains is thus one of the critical questions of our time. Not just for technologists or policymakers, but for anyone who takes the future of work, education, and society seriously.

Further reading