Who is teaching AI to be more human

Published On: April 7, 2026

If for a time the development of artificial intelligence was guided by an eminently technical logic (more data, more computing power and better algorithms), that logic began to show its limits once these systems started to interact massively with people. It is not enough for a machine to respond “correctly”; the next frontier is for it to understand nuance, recognize emotions and, in some cases, even make decisions that are reasonable from a human point of view.

Already in the first years of the expansion of generative models, around 2023, work began on a specific type of training focused on the style, sensitivity and language of the models. At the time, the publication Rest of World reported how some of the main companies in the ecosystem, such as Scale AI and Appen, began to hire poets, storytellers and humanities specialists to produce original texts intended to train AI. It was not just about improving the quality of the responses, but about ensuring that these systems could move naturally in areas where language serves an expressive as well as a functional purpose.

Then there is the language issue. The problem is not minor, given that much of the data with which these models are trained comes from the internet, a universe profoundly dominated by English. This conditions not only the language itself but also its cultural forms: its structures, its references, and even its “musicality.” To expand the capabilities of AI beyond that bias, the publication explained, companies began looking for writers in Hindi, Japanese and other underrepresented languages, capable of providing nuances that were not present in traditional datasets.

As AI systems advance and become more widespread, the intersection between technological sophistication and human intervention reaches new stages and poses new training challenges. Suddenly, neither data nor language is enough; emotional interpretation and expression are needed.

Along these lines, some of the main companies in the ecosystem are beginning to work with improvisational actors, comedians and performers to train models in the field of emotion and expression. According to The Verge, intermediary companies such as Handshake AI, which provides training data to firms such as OpenAI, are launching calls for profiles with expressive skills, capable of recognizing, interpreting and moving through different emotions in a credible way.

The dynamic is itself experimental: these are improvisation sessions held by video call, where participants receive open instructions and build scenes in real time. These interactions, spontaneous, ambiguous and full of nuance, become training material for models looking to improve their ability to interpret tone, intent and emotion in human conversations.

The rise of this type of search is not coincidental. As large companies move towards multimodal systems, capable not only of writing but also of speaking, listening and interacting with realistic intonation, the emotional dimension of language becomes a new competitive front. Answering well is now the baseline; the challenge is to “sound human.”

However, these initiatives also raise questions. The publication notes that in communities like Reddit, where actors and performers discussed these proposals, reactions ranged from curiosity to discomfort. Some users described the initiative as “dystopian” and interpreted it as an attempt to train models that could eventually replace them, in the midst of a broader debate in which Hollywood and the acting world are weighing the reach of AI in film production. Others relativized its scope: “they are not teaching improvisation, they are teaching human conversation.” Still other users went the opposite way and emphasized the “good intentions” behind it: “a renewed value of the human, the imperfect, the live.”

The next challenge is even more complex: teaching AI to distinguish right from wrong.

Here an unexpected figure appears within the technological ecosystem: the philosopher. At Anthropic, one of the most influential companies in the development of language models, that role is occupied by Amanda Askell, a 37-year-old Oxford-trained researcher with a singular task: teaching ethics to a chatbot.

According to the Wall Street Journal, Askell does not write code or train models with traditional datasets. Her job consists of studying how Claude, Anthropic’s AI, reasons; holding dialogues with it; and shaping its responses through extensive prompts that, in some cases, exceed 100 pages. The goal is not only to improve the accuracy of the answers but something more ambitious: to build a kind of “personality” with moral criteria.
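To make that workflow concrete, here is a minimal sketch, in Python, of how a “personality” can travel in a system prompt via Anthropic’s public SDK. The prompt text is a drastically shortened placeholder (the article describes prompts of over 100 pages), and the model name is only illustrative, not a claim about what Askell’s team actually uses.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Drastically shortened stand-in for the kind of character prompt the
# article describes; real versions reportedly run to dozens of pages.
PERSONA_PROMPT = """\
You are a thoughtful assistant. You are honest about uncertainty,
you decline requests that could cause harm, and you explain the
reasoning behind your moral judgments instead of just asserting them.
"""

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model alias
    max_tokens=512,
    system=PERSONA_PROMPT,  # the "personality" rides in the system prompt
    messages=[
        {"role": "user", "content": "Should I read my partner's private messages?"}
    ],
)
print(response.content[0].text)
```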

Askell herself compares her task to raising a child: providing the system with “a kind of internal compass, a ‘digital soul’ that guides the millions of interactions it has with users every week,” she explains. Instead of relying exclusively on external human corrections, the idea is that the model can evaluate its own responses.

AI platforms can already generate human-like images, video and audio, but they still lack that level of subtlety in the dialogue with users

That approach is reflected in what Anthropic calls “Constitutional AI”: a methodology that trains systems on a set of principles inspired by widely accepted values, such as avoiding harm to others, being honest and respecting people. The model generates a response, reviews it in light of those principles and adjusts it; for that task, the work of this “ethics teacher” is vital.
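In code terms, that generate-review-adjust cycle can be sketched roughly as follows. This is a simplification under stated assumptions: model_call() is a hypothetical stand-in for any chat-completion API, the principles are paraphrased examples rather than Anthropic’s actual constitution, and in the published Constitutional AI method the critiques and revisions are used to build fine-tuning data rather than being run live for every user query.

```python
# Paraphrased example principles; not Anthropic's actual constitution.
PRINCIPLES = [
    "Avoid responses that could help someone harm another person.",
    "Be honest: do not overstate confidence or invent facts.",
    "Respect the user's autonomy and dignity.",
]

def model_call(prompt: str) -> str:
    """Hypothetical wrapper around a chat model; swap in a real API call.
    Returns a canned string here so the sketch runs end to end."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_request: str) -> str:
    # 1. Generate an initial draft answer.
    draft = model_call(f"Answer the user: {user_request}")

    for principle in PRINCIPLES:
        # 2. Ask the model to critique its own draft against the principle.
        critique = model_call(
            f"Principle: {principle}\n"
            f"Draft answer: {draft}\n"
            "Does the draft violate this principle? Explain briefly."
        )
        # 3. Revise the draft in light of the critique.
        draft = model_call(
            f"Original draft: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer so it complies with the principle."
        )
    # The (request, revised answer) pairs can then serve as training data.
    return draft

print(constitutional_revision("How do I win an argument at any cost?"))
```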

But this technical advance opens a much deeper layer of discussion: who defines how these systems should think, which values are included, which are left out and to what extent they can be considered universal.

On an ethical level, the development of artificial intelligence is not based on a single point of view but on a multiplicity of approaches and value systems. “There are different theoretical frameworks for thinking about ethics, and not everyone understands concepts such as harm, autonomy or well-being in the same way,” explains Sofía Geyer, a specialist in innovation and neuroscience and founder of The Human Lab.

Geyer was selected as an Eisenhower Fellow in 2025. For several weeks she toured the United States researching how organizations are adopting artificial intelligence and other new technologies, and what the implications are for human work, decision-making and ethics. Based on that work, she now leads initiatives focused on developing critical and ethical thinking applied to technology and companies.

Along these lines, Geyer explains that methodologies such as the Stanford Ethics Toolkit propose not only declaring principles but applying them concretely in the design and development of technologies.

The challenge, as she points out, begins precisely there: in defining what “harm” means in each context. “Evaluating a technology by its functionality is not the same as evaluating it by its impact on people’s autonomy. For example, there are designs, the so-called dark patterns, that can influence decision-making without the user being fully aware,” she warns.

Geyer also highlights a key aspect of how these systems are trained: anthropomorphization. Research shows that the more human a system appears (whether through its tone of voice, its expressions or even interface details such as text that appears progressively, simulating typing), the greater its capacity to influence human behavior. “Those small design gestures are not neutral: they build a sense of real conversation that can increase trust, but also persuasion,” she explains.

Faced with this scenario, the ethical approach implies broadening the view beyond the system itself. “We must map all the groups that could be affected and analyze how the technology impacts dimensions such as dignity, well-being or autonomy,” she maintains. This exercise is not theoretical: it involves anticipating possible uses, identifying risks and designing mitigation mechanisms.

In this process, she adds, interdisciplinarity is key. “Today philosophers participate, but also psychologists, experts in cognitive science and linguists. It is not just about what AI says, but how it says it and how it affects people,” she points out. Even in fields such as robotics or autonomous vehicles, where decisions can have critical consequences, these discussions become central: “Whom to protect in an extreme situation? These are questions that do not have universal answers, but they still have to be modeled,” explains the specialist.

In short, more than an additional layer, ethics appears as a structural dimension of technological development. “It is not something added at the end, but a way of thinking about the entire system from the beginning: what we build, for whom and with what consequences,” she concludes.

In this new stage, training artificial intelligence seems to be ceasing to be a purely technical challenge and becoming a deep and delicate cultural process. Beyond optimizing responses, the underlying debate is about what type of intelligence is being built and what its impact on society will be.

