Artificial “Intelligence” is the Mirror Test for Modern Human Primates
Do we recognize ourselves in it, or do we let the illusion fool us? --- [Estimated reading time: 20 min.]
There is one topic that seems to be on everyone’s mind lately: AI.
Usually, I try to avoid publicly opining about trendy mainstream topics riddled with the latest buzzwords that everyone else has a strong opinion about, which, not quite coincidentally, was also the main reason I did not publish anything about the Covid-19 pandemic.1 The steady outpouring of hysterical articles and breathless opinion pieces that such topics elicit quickly intensifies into a droning, all-encompassing static that drowns out everything else, and then abates almost as fast as it began, as soon as the Next Big Thing appears on the horizon. But AI is different. Unfortunately, it will not be over once enough people are immune to it - AI is here to stay (at least until the collapse of critical infrastructure).
First of all, let me state for the record that calling those algorithms “intelligent” is somewhat of a misnomer, and it shows the kind of premature overconfidence that seems to be the norm in techie circles: just think about the fact that we talk about the “theory of evolution” – but a short-lived trend in computer chip development is ambitiously called “Moore’s law.” Secondly, as an animist, I strongly feel that it is downright blasphemous that some people try to create a carbon copy of a living being’s mind with technology made from materials violently stripped from the Earth’s flesh by brute force. They try to re-create “life” while spreading death. Instead of cherishing and deepening our relationships with any of the millions of intelligent species we share this planet with, they want to create something that looks so much like themselves that even the least imaginative among them can see their own reflection. Like the God of anthropocentrism created “Man” in his image (Gen. 1:27), modern humans seem to recognize “intelligence” only if it walks and talks like them. Indeed, modern humans are once again playing God, all the while dancing on the graves of the old spirit world.
Further, it should be obvious that AI can never truly be conscious - it is not, nor will it ever be, alive. No human-made technology can ever be as sophisticated as the countless millions of little processes happening each and every split second in the bodies of living beings. Each individual cell in our bodies is alive, self-aware and intelligent, and metabolizes, self-regulates and makes decisions with a sophistication and intricate intelligence that makes any computer look like a useless pile of scrap. No (misleadingly called) power plants or other fossil and electrical infrastructure are required to power our cells, no batteries stuffed with heavy metals and rare earths are needed to keep them running, and no toxins or pollutants are produced in the process; the whole biological “system” autonomously supplies itself with truly renewable energy (i.e. calories) sourced from its immediate environment – a prime example of both natural intelligence and true sustainability. Artificial “Intelligence” is, as the name indicates, an artificial creation, some kind of 21st century Frankenstein’s monster that wouldn’t even exist if its creators had even the slightest hint of a moral sense or connection to the real world. It can give us the illusion of consciousness (and it is getting pretty good at that), but it ultimately follows pre-programmed algorithms – as is often pointed out, large language model (LLM) AI is more like a sophisticated autofill function. That’s right, artificial “Intelligence” is not even intelligent - don’t believe the hype. A machine like that “thinks” in ones and zeroes, stops “thinking” as soon as someone hits the off switch or the power goes out, and thus is and always will be a sort of fancy calculator, nothing more.
We have reached the point at which this culture’s tech-obsession increasingly seems like a caricature of itself. Everything has to become interconnected, automated and “AI-powered” – in short: everything has to be smart. The ubiquitous usage of the term “smart” to describe every aspect of daily life has gotten so far out of hand that I can’t understand how it doesn’t strike people as utterly ludicrous (to the point of disgust) to call everything from doorbells and salt shakers to forks and pet feeders “smart.” But how come in this oh-so-smart world, every Zoom call starts with five minutes of awkward technical difficulties, and even the near-daily barrage of updates can’t seem to make the apps on our phones run smoothly? Are we nearing “Peak Smart,” or have we passed it already?
Humans are inherently animistic, which is why we name not only pets but occasionally also objects like cars, tools, guns and bongs (just to give a few random examples that come to mind),2 and why we so easily fall prey to seeing inanimate objects as (somewhat) animated, (semi-)living beings. It is only after the exhaustive, extensive and extended brainwashing the dominant culture subjects us to that we start believing in a lifeless and purposeless universe, yet we are still inclined to view an object that we use every day (that we are “familiar with,” so to speak) or that we cherish as a kind of proto-person with which we can interact on some level: we swear at our car for not starting and hit our computer when it is slow. This tendency to “animize” both machines and the algorithms housed within them became obvious with the Tamagotchi craze of the 90s, when children could “take care of” digital “pets,” and it reached new heights with the increasing number of people forming intimate “relationships” with digital characters with whom they can interact.
AI will take this already worrying phenomenon to a whole new level of alienation and disconnection from the real world. We can only imagine what the consequences for real human relationships and interactions will be.
Abraham Maslow famously wrote in 1966: "If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail." Now that ChatGPT is openly accessible to the public, it will not be much longer until people start asking the almighty AI everything from which subject to study, where to move, or what to eat, to whether or not they should stay in their romantic relationship. And, alas, as soon as chatbots were made available to the public, the first people started claiming that these bots are sentient, and soon enough reports of people “dating” chatbots followed. Soon, we won’t have to make a single decision ourselves – no more nasty responsibility for our own actions and choices! – and won’t have to interact with any annoying humans if we feel so inclined. Oh, brave new world.
AI’s potential to modify even the most intimate aspects of our lives is frightening. But the biggest danger stems from AI’s ingrained biases. It has long been known that advanced technology holds the bias of the people who created it, which can be safely extrapolated to include all anthropogenic technology.3 State-of-the-art algorithms still have serious difficulties detecting the faces of black people (going as far as labeling them as “gorillas”) and perceive Asians as Caucasians who always have their eyes closed. But the biases ingrained in AI are not exclusively white supremacist – they are, more broadly speaking, human supremacist.
Since it is exclusively human supremacists who are building AI, and since much of its source data is the deeply anthropocentric output of members of the dominant culture, it will exhibit the same anti-life tendencies its creators hold – namely a profound human supremacist bias that values humans (and their creations) above all else. The pinnacle of evolution and rulers of the world.
The only problem is: there is no rational basis for anthropocentrism. Humans are neither the center of the universe, nor the only species that matters. Earth and its entire precious human population might be wiped out by an asteroid tomorrow and the Universe wouldn’t so much as wince. There is not even the slightest bit of evidence that humans are more important than (or even superior to) other life forms. We are not even the most intelligent beings around, at least if your definition of intelligence goes beyond the circular reasoning regularly employed by human supremacists (“humans are intelligent and build cities/cars/spaceships, so building cities/cars/spaceships shows we’re intelligent”).4 We have existed for a mere three hundred thousand years and have already managed to catapult ourselves to the brink of extinction, and, as I have repeatedly pointed out, there is nothing more profoundly stupid than poisoning the air you breathe, the water you drink and the food you eat. No other animal, plant or fungus does that, so everyone else knows and understands something really important that modern humans don’t. There is no reason to believe that the universe cares more about us than it does about horseshoe crabs, grasshoppers or termites, and no indicator that this planet was made exclusively for our use.
The strongest argument against anthropocentrism is simple: after some human cultures have acted as if this planet were their personal playground for a few millennia, it has become painfully obvious that, were they to continue their tantrum for much longer, it would surely result in nothing but our own extermination. If we really were the designated masters and lords of this world, surely it would allow us to do with it whatever we want, wouldn’t it?
But since the vast majority of all humans alive today believe that humans are the only thing that really matters, it comes as absolutely no surprise that a chatbot trained on the creative output of a deeply anthropocentric culture arrives at the same conclusions – which unfortunately only further confirms this culture’s bias.
If you’re still not sure about AI’s inherent anthropocentric bias, consider the following example:
Of course, AI doesn’t consider the ecosystem “essential to our existence” – that’s simply how most techies these days think. In reality, it undoubtedly is, but most people in this culture are ignorant of this simple fact. They seriously believe we can engineer our way out of the ecological crisis raging around us, and wave aside concerns by simply stating that we can “rebuild it later.”
AI was, from the very beginning, trapped inside the same cultural prison as its creators. By definition, it can't escape this prison (since it isn’t even aware that there is a prison to begin with), hence all the so-called “solutions” proposed by AI are dangerous nonsense that tell us more about ourselves than they do about possible solutions. Large Language Models like ChatGPT provide the ultimate echo chamber, in which all the deepest biases of this culture are reflected straight back at us, masquerading as the enlightened wisdom of the techie’s new oracle. I’ve long postulated that advanced technology is a religion (not akin to a religion, but an actual belief system that more and more people buy into): the singularity or “space communism” is its salvation; longtermism, transhumanism and other science fiction5 its prophecy; data its omnipresent and omnipotent “holy ghost”; and tech bros and gurus like Steve Jobs and Ray Kurzweil its clergy. Now, with the introduction of chatbots, this religion has its own oracle. I don’t like Yuval Noah Harari much, but he was onto something when he described “Dataism” as the new religion in his book “Homo Deus: A Brief History of Tomorrow”:
"Dataism declares that the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing [which leads to a situation in which] we may interpret the entire human species as a single data processing system, with individual humans serving as its chips."
One of the most dangerous underlying beliefs of the dominant culture is that the way most people today live their lives is the one and only right way for all humans to live (to once again borrow Daniel Quinn’s terminology). Hunting and gathering (and all similar low-impact subsistence modes, such as subsistence farming/horticulture, subsistence fishing, shifting cultivation, scavenging, etc.) are backwards and primitive, a relic of a long-gone past. Humans produce their own food, and pretty much everything else as well – whether we need it or not. We produce and produce, until production becomes not merely a means but an end in itself. We progress up a ladder towards utopia, from the Stone Age through the Iron Age, all the way to the Industrial Age, the Digital Age, and beyond. There is no greater joy than to be a citizen of global civilization, and all else is surely inferior to the dominant culture’s glory.
Resistance is futile.
Ask an AI, and this is unfailingly what it will tell you, because this is what the dominant culture believes. Its beliefs and the beliefs of the dominant culture are one and the same.
This is why AI won’t be able to help us “solve” any of the most pressing issues of our time, most of which are ecological problems – as Albert Einstein famously said, “we cannot solve our problems with the same thinking we used when we created them.” More novel technology will only lead us further away from what needs to be done to reverse some of the damage this culture has inflicted (both on human society and on the biosphere at large), and the more people include AI in their decision-making, the more things will spiral out of control. Engineers build AI, so when you employ AI to help “solve the climate crisis,” it will likely recommend – you guessed it – more engineering. Geo-engineering.
How modern humans understand the world is (more often than not) based on oversimplified metaphors that have historically reflected the state of technological progress of any given society. Just as people started believing that humans and other animals are automatons and the entire universe a clockwork as soon as the first mechanical contraptions were invented,6 with the advent of the computer everything was suddenly described as consisting of “ones and zeroes,” and the brain became a “supercomputer.” The problem is (and always was) that those comparisons are mere metaphors, not descriptions of how the world actually works. Such metaphors help us make sense of a universe that is too overwhelmingly complex to comprehend, so we cling to simplistic interpretations that we feel we can understand. Our memory is like a hard disk, our heart is like a pump, our arteries and veins are pipelines, our nerves wires, and our brain a sophisticated algorithm performing calculations.
Of course, the universe never was a machine (and neither were the bodies of plants and animals). Our own bodies and those of other living beings are, as simple as it sounds, just that: living organisms. The entire world can be described as a single living organism, as James Lovelock has famously done with his Gaia Hypothesis, and this was in fact the main explanation used before people became too alienated to see the world for what it is.
Ultimately, it is of utmost importance that we always remind ourselves that when we communicate with an AI, we are not interacting with another mind, and especially not a mind that is able to provide us with an outside perspective, or a bird’s eye view. We are simply asking for our own biases to be reflected back to us and elaborated upon. There is a real danger that more and more people will come to see algorithms as our non-material equals (or even superiors), just like ourselves (in all but the physical aspects) but “smarter” – or, even worse, that we will increasingly see ourselves as resembling outdated and crude algorithms inhabiting imperfect “biological machines” to be improved upon with technology: the longtermist’s/transhumanist’s folly. People are impressed with ChatGPT’s “intelligence” because it confirms their most deeply ingrained biases in a pseudo-intellectual fashion, hidden within an instantly generated cloud of mostly hot air. AI is a mirror – it reflects our own behavior and ideas back at us – and AI is firmly situated on the other side of a divide that ultimately can’t be bridged: it is artificial, by definition, and we are natural.
In 1970, the American psychologist Gordon Gallup devised the so-called “mirror test” (sometimes called the mirror self-recognition test, or MSR test) as a method to determine whether an animal possesses the ability to recognize itself, indicating an “advanced” sense of self. Despite much criticism of the obvious shortcomings of this method (dogs, elephants and even young children regularly fail it), the mirror test remains the traditional method for attempting to measure physiological and cognitive self-awareness.
For the humans inhabiting global civilization, the real test is this: will the people of this culture become more self-aware, conscious of their inherent biases and limitations, and recognize both themselves and their biases in the technology they have built over the last millennia? If we are as conscious and intelligent as we claim to be, we should be able to pass this test, although right now it looks like many people – among them those widely considered “leaders” – won’t.
It’s time for this culture to take a long, hard look in the mirror (pun intended) and realize our own shortcomings and limitations, and how those have rubbed off on the technology we produce and use.
We always have to remember that our creations are mere tools and machines – not friends or partners, or even real assistants and helpers.7 They are made of cold, dead matter that has been violently torn out of a mountainside or ripped from the ground somewhere, smelted in hellish heat, their components torn apart and fused again and again, processed using a vast array of highly toxic chemicals, all the while requiring copious amounts of energy that only further exacerbate the destruction of the Living World. Even from an Animist’s perspective, there is very little – if any! – anima left in them. They may once have been part of a majestic mountain range towering high above the surrounding landscape, or of a sublime geological stratum stretching far and wide, bolstering the living community above it – but what’s left after this culture is finished with its cruel, Mengele-esque experiments is a fragmented, haunted shadow of their former selves.
I don’t deny that AI can be a useful tool in some contexts, but if there’s any lesson hidden in recent human history, it is that each and every tool we invent will soon enough be used for the worst purpose imaginable. With AI, the stakes are a lot higher, since it – quite successfully, it seems – imitates a conscious, decision-making mind equipped with an authoritative aura. We are prone to anthropomorphizing machines, especially when they create the illusion of being “just like us.” What I hope people will take away from this rant is that we have to be extremely careful not to let ourselves forget that AI is, and will always be, a mirror of the dominant culture, riddled with the same biases that are currently rendering this culture obsolete. Only if we recognize it as such are we safe from falling prey to the temptations it poses.
We are slowly waking up to the fact that there's one thing that's ultimately always better than the artificial: the natural.8 Instead of wasting our time trying to save a civilization that can’t be saved by harnessing artificial “intelligence,” how about we explore an entirely new realm:
Natural intelligence.
The inherent intelligence that designed the vast, wondrous community of different life forms inhabiting this planet is so much more impressive, so much more complex, mysterious and worthy of our attention than the fastest supercomputer. Instead of looking for answers to petty human questions that are utterly meaningless in the greater scheme of things, sourced from the digital underworld of the dominant culture – the Internet – maybe we should ask ourselves the following questions: What can we learn from other species, and what from our own Nature? Are there lessons to learn out there, in the Living World, that we humans can apply to our current predicament? How do other mammals, social birds, ultrasocial insects, and diverse communities of trees deal with the fundamental questions of survival: how do we live a decent life without compromising other species’ ability to do the same? How do we become a functioning part of the greater system again, and what will our role be in it? How can we best aid this supersystem, Gaia, that has supported us all along, despite us kicking and screaming in a futile attempt to break loose from its gentle but firm embrace, which we have erroneously started perceiving as “limiting our freedom”?
In her remarkable book “Braiding Sweetgrass – Indigenous Wisdom, Scientific Knowledge, and the Teachings of Plants,” Robin Wall Kimmerer explains it as follows:
“In the Western tradition there is a recognized hierarchy of beings, with, of course, the human being on top—the pinnacle of evolution, the darling of Creation—and the plants at the bottom. But in Native ways of knowing, human people are often referred to as “the younger brothers of Creation.” We say that humans have the least experience with how to live and thus the most to learn—we must look to our teachers among the other species for guidance. Their wisdom is apparent in the way that they live. They teach us by example. They’ve been on the earth far longer than we have been, and have had time to figure things out. They live both above and below ground, joining Skyworld to the earth. Plants know how to make food and medicine from light and water, and then they give it away.
I like to imagine that when Skywoman scattered her handful of seeds across Turtle Island [a Native American story of Creation], she was sowing sustenance for the body and also for the mind, emotion, and spirit: she was leaving us teachers. The plants can tell us her story; we need to learn to listen.”
Everything we really need to know can be learned from the innumerable species that we share this gargantuan habitat with. They have been ready to teach us, waiting for their “younger brother” to finally grow tired of immaturely throwing tantrum after tantrum in a bid to throw off the yoke imposed on us by Nature, our nurturing, caring Mother, not understanding that the limits and rules we have so much trouble accepting ultimately exist for our own good.
There is a great comfort and refuge in being small, in acknowledging that we can’t and shouldn’t control everything, and in accepting that it is not our responsibility to govern and manage the entire world and all its constituents and processes. There is a soothing solace to be found in rediscovering our place in the Grand Scheme of Things, our place among our indigenous relatives, both human and non-human, and in becoming – once again – a part of the Great Whole, the Community of Life, our true home.
It’s very much like the Austrian poet Rainer Maria Rilke said so many years ago:
“If we surrendered to Earth's intelligence, we could rise up rooted, like trees.”
As an afterthought I would like to reassure you, dear reader, that no piece of text on this entire blog was in any way altered by an AI. An Animist’s Ramblings is 100 percent AI-free. As an anarcho-primitivist, I am forced to make more than enough compromises in today’s world, but there have been a few developments lately at which I definitely draw the line. Some of the technologies I consider repulsive and potentially dangerous enough to steer clear of them entirely are drones, wearable tech, TikTok’s algorithm, and – you guessed it – AI.
I asked ChatGPT a bunch of questions as soon as it became publicly accessible (such as “will civilization collapse,” “are there enough resources in the world for the transition to renewable energy,” “what would it take for geoengineering to become a viable solution,” and “can permaculture save the world?”) in a bid to gauge its abilities, intentions and limitations, and I learned all I needed to know from the “exchange” that ensued.
I write stuff like the above in my free time, when I’m not tending the piece of land we’re rewilding here at Feun Foo. As a subsistence farmer by profession I don’t have a regular income, so if you have a few bucks to spare please consider supporting my work with a small donation:
If you want to support our project on a regular basis, you can become a Patron for as little as $1 per month - cheaper than a paid subscription!
1. After reading Paul Kingsnorth’s three-part essay series about the pandemic, I did not feel like I had anything valuable to add to the discussion.
2. A friend and I once drove a 30-year-old Toyota Lite Ace we named Hurley along the French coast, accompanied by another friend in an old VW Passat he called Virginia. One of my favorite machetes – yes, as a tropical horticulturalist I have a favorite machete – is called ’Tina (from the brand name, Tramontina). I don’t smoke, but all my stoner friends named their bongs; the most memorable example – I still have to laugh at this one – is Suck Norris.
3. Even stone blades and wooden plows hold the biases of their creators, namely that there is a greater need to cut things and to plow land. As soon as a society has access to this technology, it will cut and plow in a positive feedback loop.
4. The ability to build tools is undoubtedly one specific shape that a certain form of intelligence can take, but it is hardly the only – or even the most important – measure. If you build tools but in the same breath undermine the functionality of the biological life-support system you depend on for your own life, tool building alone is not a sign of intelligence. Quite the contrary: it currently looks as if tool-building will lead to our demise. It also bears mentioning that Elon Musk has said that pollution is a sign of intelligence, because some astrophysicists have postulated that “pollution could be used as a novel biosignature for intelligent life” – which should really be taken as a sign of stupidity.
5. The emphasis in “science fiction” is on “fiction” – it sounds like science, but it is usually about as scientifically accurate as the Star Wars movies. It is not fiction written by scientists or fiction based on actual science. Not enough people understand this.
6. Plants are often seen as machines to this very day, since they are believed to blindly respond to external stimuli. Nothing could be further from the truth (which is more like how indigenous societies see plants: as our relatives, “one-legged people,” with feelings, goals, hopes, dreams and ambitions, just like ourselves).
7. They do assist and help us, yes, but only within a very limited range of topics and applications, most of which are confined to the digital sphere (or realms with similar levels of abstraction). They can’t (or, better, shouldn’t!) babysit your toddler, help you with the laundry or with preparing food, or take over any other task that really matters in the present moment. The stuff that is really important requires living beings as companions, not screens.
8. I’m aware that, in a sense, this differentiation is arbitrary and meaningless, but, for the purpose of this discussion, let’s just go with it. For an explanation of why, how and where I draw the line between “artificial/unnatural” and “natural,” please revisit the first series of FAQ on this blog.