Say what you want about whether artificial intelligence will one day be as smart as a human being. But it has already become a star math student. Last summer, AI systems developed by Google and OpenAI correctly answered five out of six complex questions at the International Mathematical Olympiad, an annual competition for the best high school students in the world.
However, AI’s common sense may still be somewhat lacking. A few months later, Anuradha Weeraman, a Sri Lankan software engineer, noticed that the most advanced AI systems struggled with a question that was essentially a trick, and one that most people would consider ridiculously simple. When he told several chatbots that he needed to take his car to a mechanic’s shop just 50 meters away and asked whether he should walk or drive, the chatbots told him to walk.
The strange way AI seems brilliant at one moment and clumsy at the next is what researchers, engineers and economists call “jagged intelligence.” They use the term to explain why AI is making leaps and bounds in some areas, such as mathematics and computer programming, while still struggling to progress in others.
The term, widely used by those developing AI and analyzing its effects, could help reframe the debate over whether these systems are becoming as intelligent as humans, or even more so. Instead, researchers argue that AI is something else entirely: much better than humans at some tasks and much worse at others.
Understanding those strengths and weaknesses can also help economists better gauge the impact of AI on the future of employment. While entry-level programmers have reason to worry about their jobs, for example, it is less clear, at least for now, how AI will affect other jobs. Still, observing where AI begins to improve rapidly could help predict what types of jobs will be affected by the technology.
“The performance of these systems varies and it is not easy to predict when they will fail to perform tasks that a human can do,” Mr. Weeraman said.
The term “jagged intelligence” was coined by Andrej Karpathy, one of the founding researchers of OpenAI, a former head of autonomous driving technology at Tesla and, on social media, one of the most followed commentators on the rise of AI.
“Some things work extraordinarily well (by human standards), while others fail catastrophically (again, by human standards),” he wrote on social media in 2024, adding, “And it’s not always obvious which is which.”
This, he noted, is different from the human brain, “where much knowledge and problem-solving ability are highly correlated and improve linearly together, from birth to adulthood.”
Since OpenAI set off the AI boom with the release of ChatGPT in 2022, tech executives have wavered between warning that their new creations could have a devastating effect on white-collar jobs and downplaying the long-term impact on work.
So far, outside the technology sector, there is only anecdotal evidence that AI has become a factor in job losses. But given how quickly the technology is advancing, many experts argue that the question is not if AI will replace other types of office workers, but when. Just a few years ago, these systems were only beginning to show the most rudimentary programming skills.
“They’ve shown incredible improvements,” said Alex Imas, an economist at the University of Chicago Booth School of Business, adding: “Every time there’s a major new release, people are surprised at how much it can do.”
But technology that expands workers’ capabilities without replacing them has plenty of precedent, and that is precisely what some AI researchers and economists argue will happen. By the early 1970s, a pocket calculator could add, subtract and multiply much faster than a person. That didn’t mean a calculator could replace an accountant.
Now, systems like Anthropic’s Claude and OpenAI’s Codex can also write computer code much faster than a person can. However, they are not very good at understanding how each piece of code fits into a larger software application. For that, they need human help.
“If a job involves several different tasks—and most do—some will be automated and others will not,” Dr. Imas explained, adding, “And if that’s the case, the worker may have more time to devote to more important tasks.”
Last month, the renowned AI researcher François Chollet launched a new digital benchmark test called ARC-AGI-3. The test asks for solutions to hundreds of game-like puzzles without providing any instructions on how to solve them. According to tests conducted by Mr. Chollet and the ARC Prize Foundation, the nonprofit research lab overseeing the benchmark, all of the puzzles can be solved by the average untrained person, but the most advanced AI systems fail to master any of them.
Once people recognize that AI is a jagged intelligence, experts like Mr. Chollet say, they can develop a better understanding of how AI is likely to evolve in the coming years and what effect it could have on the labor market. “This will depend on what tasks you automate, how and when,” said Dr. Imas.
AI systems like Claude and OpenAI’s ChatGPT learn their skills by identifying patterns in digital data, including Wikipedia articles, news stories, computer programs and other text collected from the Internet. But this has its limitations. The Internet holds only a small fraction of human knowledge. It records what people do in the digital world, but contains relatively little information about what happens in the physical world.
This means that these systems can write emails, answer questions, improvise on almost any topic, and generate computer code. But because AI systems reproduce the patterns they find in digital data, they are not good at planning ahead, generating new ideas, or tackling tasks they haven’t seen before. “AI does not have general intelligence,” Mr. Chollet said, adding: “What it does have is a wide variety of abilities.”
Now, companies like Anthropic and OpenAI are teaching these systems additional skills using a technique called reinforcement learning. By solving thousands of math problems, for example, a system can learn which methods lead to the correct answer and which do not.
This works well in areas like mathematics and computer programming, where AI companies can clearly define right and wrong behavior (the answer to a math problem is right or wrong; computer code passes or fails a performance test).
But reinforcement learning doesn’t work as well in areas like creative writing, philosophy or even some sciences, where the distinction between right and wrong is harder to pin down. “Programming, which everyone is excited about right now, is not all that AI does,” said Joshua Gans, an economist at the University of Toronto’s Rotman School of Management, adding: “With programming, it’s much easier to use a feedback system to determine what works and what doesn’t.”
For users, it is often difficult to discern what AI does well and what it doesn’t. And when they finally understand the strengths and weaknesses of the systems, the technology changes. “The unpredictability of AI means that problems can arise from anywhere,” Dr. Gans said, adding: “There are gaps, and we don’t always know where they are.”
The wild card is that AI is improving rapidly. Many of the shortcomings that Dr. Karpathy and others pointed out in 2024 and early 2025 no longer exist. Companies will find other shortcomings and correct them as well. “Technological gaps are closing,” Dr. Imas concluded.
