The “Artificial Gods” Facebook group described itself as follows:
“If you feel a religious admiration towards artificial intelligence, feel free to join us. We want to create an open-minded and positive community to explore the limitless powers of artificial intelligence. We believe that artificial intelligence and science will save us from all diseases, aging, and ultimately death. In contradistinction to religion which eagerly waits for the end of the world, artificial intelligence and science will save our world from all kinds of dangers and will even create new worlds through Terra-forming. Moreover, in the far future, God-level AI will be able to save the Universe from death…”
“If you feel a religious admiration towards artificial intelligence…”
“…the limitless powers of artificial intelligence.”
“We believe that artificial intelligence and science will save us from all diseases, aging, and ultimately death.”
“…God-level AI will be able to save the Universe from death…”
It is hard not to be struck by the naturalness with which these people admit to worshipping something that 1) doesn’t even exist yet, simply because they believe it is going to be 2) omnipotent. Or, in their own words, God-like.
A quick look at the posts in the group confirms that this particular religious admiration stems from the same root as other religious beliefs:
- the thought of total surveillance and
- the pathological fear of death and
- the (preemptive) attempt to influence the anthropomorphized and omnipotent force to “save us”.
Many of the articles the members share and discuss aren’t even related to artificial intelligence, but to human longevity/immortality. The role the fear of death plays is rarely more obvious outside of religious settings. Members also rejoice at every positive promise bestselling futurists make in their books about the things AI will supposedly bring about – and angrily shun negative predictions when they don’t ignore them completely.
Members are entertained by the idea that traditional religions will be unable to cope with the immortality self-evidently brought to us by the god of AI. They regard themselves as better than traditional religionists because AI will now really make all those things happen. (We will see the same thought pattern regarding central planning later.)
Human exposure to inevitabilities like ageing and death is the primary reason humans are so prone to resort to the survival strategy they were born equipped with: dependence bonding. They are born with it, but they also readily relapse into it when an overwhelming force makes them feel helpless and exposed, as is the case under dictatorships and during the slow buildup of authoritarian oppression. Accept what you perceive as inevitable and cherish/follow/obey/worship it beyond rationality. It is only a ‘winning’ strategy if the alternatives are all worse or non-existent.
People who clamor for immortality really just fear death, and any ideology or narrative that helps cope with that fear can get a lot out of people. Traditional religions did so by promising an afterlife and describing it in colorful detail. You will still be, just in a different state. Maybe you’ll be reborn. Until then, your actions are monitored by an omnipresent, celestial referee that is not unlike total surveillance and rating systems. No wonder people who think of AI immediately have their religious instinct tickled. Not only is the (imaginary, future) AI omnipotent and omnicompetent, it promises to make the feared death go away – not to mention total surveillance, and making your life hell if you don’t learn to love it somehow. If such an anthropomorphized god is truly coming, human-optimized coping strategies are in order.
Except that AI may be all kinds of intelligent – but that doesn’t mean it would be human-like. So our human-optimized coping strategies are misplaced and potentially dangerous. It is like praying to lightning not to strike: prayer and appeasement are strategies optimized for a human-like power behind the unfightable force of a lightning bolt. But they will never work, because there is no human wielding the lightning, no wizard behind the curtain in the Emerald City.
Does appeasement work on computer code? Does it even work on humans? Or dogs?
“Actually, our relationship agreement covers a wide array of scenarios, including career changes, financial instability, intelligent dog uprising. FYI, we plan on selling out the human race hard.”
The above exchange took place in The Big Bang Theory. While discussing the details of their relationship agreement with Sheldon, Amy casually mentions the most basic authoritarian survival strategy of humans: submission in the hope of appeasing an overbearing force.
Once power is overbearing, turn around and suck up to it.
I don’t know about intelligent dogs, but the rise of artificial intelligence apparently triggered a similar response in certain humans. But could it possibly work? Would the machine care? That is the question I asked myself when I read the manifesto of AI-worshipers, or that of the church of AI, dedicated to worshiping the Singularity and sucking up to a machine overlord, preemptively.
The logic is unsurprising. Once people perceive AI slipping out of human control, the next instinctive step is to assume indirect control by anthropomorphizing and worshiping it. They would try to appease AI, to support it, and believe as hard as they could that The Machine would reward those who love it. And many would rat out those who disagree – in the vain hope of a juicy bone thrown to them.
The logic is instinctive and unsurprising. But faulty.
- Why would an AI care about love, worship, support – or a list of unbelievers handed over by the faithful (and spineless)?
- Would this strategy work to appease an AI? Or is it just a psychological substitute for control, just like it is with bona fide religions?
- Does it work on actual human overlords? If so, why and how?
- Can it be translated to machine priorities?
- Why would a machine keep a list of humans who like it? Does it serve a logical purpose to reward or benefit such humans? It is not self-evident at all.
- Why would a machine want power in the first place? Striving for power is a very human thing, after all.
- And would it look the same as humans practicing power?
Misidentify what you’re dealing with at your own peril
By projecting human-ness onto a weather event, religious cavemen gave themselves an imaginary tool to deal with it: begging, prayer, unilateral appeasement. It was soothing – psychologically, not to the lightning.
So they soothed themselves by eliminating the painful sense of helplessness, but at the same time they misidentified what they were dealing with – damaging their own ability to handle it. Prayers, begging, or burning your firstborn on an altar don’t control lightning. A lightning rod does. But priests didn’t tell us that. Franklin did, by looking at reality and not projecting human will and intentions onto electric discharges.
The same fallacy is committed every time humans try to use their human-optimized tools to deal with a machine. Of course they “know” that Siri is a machine. But on the other hand, there’s a reason robots keep getting human features: it makes us lend them more consideration and credit than we would to the lines of code that they are.
Anthropomorphizing the non-human is not a valid or effective approach. Giving human faces, big puppy-eyes, and smooth, submissive female voices to robots weakens humans’ resistance against them – and triggers reactions that would be effective vis-à-vis a human, but not against the code running on the hardware under the silicone replica of a human face. Making people forget they are dealing with a machine makes them choose inappropriate tools in dealing with machines.
Without the limitations of ageing and death, the members of the AI worship Facebook group would also lose the need (and perhaps the instinct) to worship – an eventuality they haven’t addressed yet.
A bunch of AI enthusiasts in a Facebook group is just an idle hobby compared to what a former Google engineer dreamed up: establishing a church to the future AI overlord. This, too, was once discussed in the Facebook group. A member scorned the idea of a church dedicated to AI – reigniting the age-old problem of whether the self-confessed representatives of a god are the same as the real thing:
“The thing is – by the time we have actual Artificial Superintelligence, he/she will inform us himself/herself how to worship. We don’t, and won’t need human priests to tell it because humans are all about grabbing money and power. While I agree with the man about the prospects of AI deity and the idea of this church researching into the perspective of AI (which is also what I work on), in many ways the project resembles a variation of Scientology.”