We are told, and promised, that artificial intelligence makes us more efficient and productive, freeing up time to focus on more creative and less repetitive tasks. But to what extent is that true? Using AI can be exhausting. A study published in Harvard Business Review in March 2026 by researchers from Boston Consulting Group describes "AI Brain Fry" (the extreme mental fatigue arising from certain patterns of AI use in professional settings) and points out that, at the same time, other patterns of use can reduce burnout.
What tires us the most? The constant monitoring of these systems' output: the human-in-the-loop principle of not trusting 100% of what they generate. Which, by the way, is a recommended practice. The question is: how do you know what to delegate, and to what extent, in each case?
And that is not all: the endless stream of new features and platforms, each arriving with more and better promises, also generates an additional cognitive load. The FOMO of missing out on the "newest" thing and, ultimately, of becoming obsolete is a specter that today haunts the daily lives of professionals across different fields, especially those who work in knowledge-intensive areas.
In this article, three experts in education and technology explain how to integrate AI into our daily lives productively, without ending up overwhelmed and exhausted. Is it a matter of developing new cognitive skills? And how should we position ourselves before this new paradigm of producing content in collaboration with machines?
Mariana Ferrarelli, director of AI strategy in education at the University of San Andrés, highlights that the first question we have to ask ourselves is why we are approaching AI at all.
From there, she distinguishes two types of approaches. The first is transactional: the user has no clear notion of what they want to use the tool for, so they go to the system with generic requests, without a clear ask or context. "This is a problem, because what you get back will surely have biases and hallucinations; it will be very general and not really adapted to what you need. That, again, generates exhaustion, because you realize much of it is useless, you have to edit heavily, and it creates more work than it saves you," Ferrarelli points out.
At the opposite extreme is what the expert calls coupling, where the user narrows down what they want to do: they have a clear objective, they know which materials they want to work with and what result they hope to obtain. "In these cases I have more governance over the process. I understand from the start what final product I am aiming for, I direct all my efforts, and the creative efforts of the AI, toward that product. I can anticipate in more detail where the hallucinations will appear and take precautions," she details.
She also notes that when use is structured and the prompting is guided, the cognitive load is mitigated, and that is where the tools become genuinely productive.
While Ferrarelli focuses on the directionality of the process, Melina Masnatta, educational technology specialist and author of the book Educate in Synthetic Times, delves into the cognitive dimension of that same challenge.
Masnatta contrasts the cognitive exhaustion and emotional fatigue of those who make superficial use of AI, without being clear about what they are looking for, with the fluency of those who understand how to dialogue with the technology and achieve results the former cannot.
The priority meta-skills to develop in this context, according to the specialist, are critical thinking, metacognition (understood as the ability to understand how one thinks and learns) and agency, which she defines as taking charge of one's own learning process. "Agency," she adds, "is what differentiates us as people, and it is linked to the ability to take charge of our own learning."
Far from being separate concepts, they function as a pair: "Those who develop this metacognition understand how AI thinks, and therefore they understand what to ask of it." And she adds: "What we humans have to do is project where we want to go, and that is difficult for us, both in organizations and in learning systems, because there is still a lack of communication and conversation. I would say the biggest challenge is knowing where we want to go, what we want to do with this technology."
Masnatta also warns about another phenomenon already visible in the digital ecosystem: "samefication," the tendency for all content produced with AI to resemble each other. "We see the Internet overflowing with the same LinkedIn posts," she points out, a symptom that we have not yet managed to make the leap toward a use that enhances cognitive diversity and originality. Added to this, she says, are FOMO and FOBO (the fear of being left out and the fear of becoming obsolete, respectively), which generate additional emotional overload. "In this dizzying rush of stimuli and changes, we even end up emotionally overloaded by the use of AI itself."
"I think the most profound impact is not the automation of work, but the automation of judgment and of one's own perspective," says Tomás Balmaceda, PhD in philosophy, teacher, and author of Think Again: Philosophy for the Disobedient.
"Algorithmic logic subordinates compassion, justice and even ethical values to a rationality oriented exclusively toward efficiency. Added to this is the so-called automation bias: the tendency to accept what the machine says as if it were more reliable than one's own experience," he reflects. And he adds that "when this happens at an organizational scale, the company can become faster, but also more unfair and more fragile."
Are we always more productive when we use AI tools? "Not necessarily. What we have is an illusion of productivity. AI accelerates the generation of results, but productivity is not just speed: it is quality, relevance, understanding of context and the ability to generate value," he says.
Balmaceda also warns about the risk of losing acquired skills through lack of use, a process known as deskilling, which is already beginning to happen. It applies especially to capabilities such as arguing, analyzing and writing.
"Highly automated environments tend to produce less innovative, less agile and less resourceful workers. Something very dangerous becomes normalized: doing without thinking," he notes.
"It seems to me that the key is to reposition AI not as a substitute but as an amplifier of my capabilities. That implies, first, maintaining vigilance and doubt. Do not confuse statistical calculation with judgment. Important decisions must continue to be guided by human reasoning, empathy and reflection. AI can suggest, but not decide," explains Balmaceda.
The second point is inserting the human into the process strategically. "Systems need monitoring points where someone evaluates the results not only for their efficiency, but also for their fairness, their common sense and their fit with the context," he emphasizes.
Third, it is about choosing what to automate and what not to. "Use AI to automate the mundane or boring and free up time for the inspiring and human," he suggests.
And finally, "develop true AI literacy. It is not about using it passively, but about learning to co-create with it: knowing how to ask, evaluate, correct and improve. The ideal is not to delegate, but to dialogue."
The three experts converge on something that sounds simple but is challenging in practice: before integrating AI into any workflow, you need to know where you want to go. The tool amplifies clarity and confusion alike. At a time when speed is sold as a virtue, using artificial intelligence well is, paradoxically, an exercise in pausing: knowing what you want, what can be delegated, and what, under no circumstances, should be let go.
