The idea that superintelligent robots are alien invaders “coming to take our jobs” exposes serious flaws in how we think about work, value, and intelligence itself. Work is not a zero-sum game, and robots are not the “other” in competition with us. Like any technology, robots are part of us, growing out of our civilization in the same way that hair and nails grow out of living organisms. Robots are part human, and we are part machine.
When we “otherize” fruit-picking robots, that is, when we think of them as competitors in a zero-sum game, we ignore the real problem: the humans who harvested the fruit were deemed disposable by the farm owners and by society once they were no longer needed for the job. Those workers were already being treated as non-humans, which is to say, as machines. Having already put ourselves in the intolerable position of alienating one another, we slide easily into viewing machines as the other.
Many of our fears about artificial intelligence are rooted in some of our old and unfortunate traditions that emphasize domination and hierarchy. But the larger story of evolution is that cooperation allows simpler beings to join forces to produce something larger, more complex, and more enduring. It’s why eukaryotic cells evolved from prokaryotes, why multicellular animals evolved from single cells, and why human cultures evolved from human populations, livestock, and crops. Mutualism has allowed us to scale.
As an AI researcher, my primary interest is not in computers (the “artificial” in AI), but in intelligence itself. And regardless of how it is embodied, it is clear that intelligence requires scale. An early large-scale language model we built internally at Google Research, the “Language Model for Dialogue Applications,” or LaMDA, convinced me in 2021 that we had crossed an important threshold. While still very hit-or-miss, LaMDA had a staggering (for the time) 137 billion parameters and could nearly sustain a conversation. Three years later, state-of-the-art models have grown by orders of magnitude and improved accordingly. In just a few more years, we will see models with as many parameters as there are synapses in the human brain.
Modern humans as a species are also the result of an explosion in brain size. Over the past few million years, the skull volume of our hominin ancestors has increased fourfold. When researchers correlated primate brain volume with social group size, they found that the two increase in lockstep: bigger brains allow larger groups to cooperate more effectively. And larger groups are more intelligent.
What we think of as “human intelligence” is a collective phenomenon, the product of cooperation among many people with individually narrow intellects, like you and me. Let’s acknowledge how ignorant most of us individually are when we catalog our intellectual achievements: antibiotics and indoor plumbing, art and architecture, advanced mathematics and hot fudge sundaes. Even if you started with a domesticated cow, a cocoa pod, a vanilla bean, sugar cane, and a refrigerator, with 99% of the hard work already done for you, could you make a sundae?
Human intelligence is made up not just of humans but of a wide variety of plants, animals, microbes, and technologies, from the Paleolithic to the present. The cow, the cocoa tree, the rice, the wheat, the ships, the trucks, and the railroads that have supported our explosive population growth are all fundamental to it. To ignore all these companion species and technologies is to imagine us as disembodied brains in vats.
Moreover, our intelligence is becoming ever more embodied and distributed. This trend will intensify as AI systems proliferate, making it harder to claim that our achievements are personal, or even entirely human. Perhaps we should adopt a broader definition of “human” that includes this entire biotechnological package.
Some of our most incredible achievements, like the manufacturing of silicon chips, are truly global. The challenges we face are increasingly global, too. Threats like the climate crisis and the possibility of renewed nuclear war are not caused by any one of us, but by all of us, and we can only solve them collectively. Increasing the depth and breadth of our collective intelligence is a good thing if we want to thrive globally, yet many people see such growth not as cumulative and reciprocal but as a threat. Why?
Simply put, because they worry about who will be on top. But dominance hierarchies are a special trick. Born of internal competition for mates and food, they help cooperative groups of animals that are naturally prone to aggression avoid constant fighting, by settling in advance who would win if a fight for priority broke out. In other words, such hierarchies may be just a hack for half-clever monkeys, not a universal law of nature.
AI models can embody considerable intelligence, much like human brains, but they are not apes competing for status. As products of advanced human technology, AI models depend on humans, wheat, cows, and human culture in general, even more than Homo sapiens does. AI models are not hungry for our food or plotting to steal our lovers. They depend on us. And we may come to depend on them just as deeply. But from the beginning, the development of AI has been shadowed by concerns about dominance hierarchies.
The word “robot,” first used by Karel Čapek in his 1920 play Rossum’s Universal Robots, is derived from the Czech word robota, meaning forced labor. Nearly a century later, a highly regarded AI ethicist published a paper titled “Robots Should Be Slaves.” Though she later regretted her choice of words, the robot debate remains one of domination. AI pessimists now worry that superintelligent robots will enslave or exterminate humans. Meanwhile, AI deniers believe that computers, by definition, lack any agency and are merely tools that humans use to dominate one another. Both perspectives rest on a zero-sum, us-versus-them mentality.
Many laboratories are currently working on AI agents, and these will likely become commonplace within the next few years, not because they will “rule” us, but because cooperative agents are much more useful to both individual humans and human societies than mindless robots.
If there is a threat to the social order here, it comes not from robots, but from inequality between humans. Many of us have not yet understood that we are all interdependent: humans, animals, plants, and machines are all in this together.