Will advances in AI lead to a world of unbelievable blessing or to terrible nightmares? Both scenarios have been predicted by those trying to foresee where this technology might take us. It seems we want the technology to be able to do everything. How amazing it would be to have unlimited intelligence, knowledge and power on tap! But at the same time, we fear that the technology might be able to do everything. How could we possibly limit it were it to get out of control?
While these are radically different visions of the future, both rest on the same assumption. In this post, I want to argue that this assumption is flawed.
A typical terrifying scenario is described by philosopher Nick Bostrom.[1] In his thought experiment, an AI is given the task of creating paperclips. As the AI becomes hyper-intelligent, it seeks to make paperclip production increasingly efficient. It devotes all its energy to this project, eventually monopolising the world’s entire resources, so that the planet is reduced to an endless sea of paperclips. Nobody is able to prevent this cataclysm, as the hyper-intelligent AI finds ways to thwart all human attempts to turn it off.
On the other side, AI is seen as the pathway to a healthier, more prosperous future. For example, in the field of cancer research, AI is playing a key role in screening and diagnosis, in drug discovery, and in personalised treatment.[2] Some even view hyper-intelligent AI as a means of saving humankind. Robert Kozma is an exponent of this idea.[3] He imagines an Artificial General Intelligence (AGI) that is able to “devise solutions that significantly reduce or end poverty—something that we, as humans, have yet to accomplish despite centuries of effort.” Indeed, such hopes are driving a new collection of disparate ideas known as TESCREAL.[4] This new “belief system” includes some very worrying ideas – for example, that colonisation of the universe is a higher priority than addressing poverty on our planet. Nevertheless, to its Silicon Valley supporters, this is an attractive utopian future created through AGI.
Behind all such promises lurks an age-old shadow which we encounter time and again in the pages of the Bible. That is the spectre of idolatry. When Moses disappeared up Mount Sinai, seemingly never to return, the Israelites decided to make their own god. This would not be a distant and invisible god, but a deity that they could see, touch and control. So they made themselves a golden calf.[5] AI is viewed in a not dissimilar way when it is considered a solution to human ills – and, preferably, one we can control.
Idolatry is a danger not only with general or hyper-intelligent AI, but also in everyday life today. How many of us, when we start to feel unwell, turn immediately to AI or the internet for a remedy, rather than turning to God in prayer? I plead guilty to that failing!
The Old Testament prophets, of course, denounce idolatry. Although they decry a tendency for people to turn their backs on God,[6] their greatest criticism of idols is that they are impotent.[7] They are nothing more than blocks of inert wood, some of which will even end up in the fire.[8] The same could be said about AI – even AGI or hyper-intelligent AI. While these systems might give the impression of being all-capable, they are not. Intelligence is not omnipotence.
Returning to the paperclip apocalypse: to make paperclips at all, the AI would need to build manufacturing plants. That requires human labour, energy and raw materials. Intelligence alone cannot generate these things. Then, to create endless paperclips, the AI would have to convert physical stuff into pliable metal. Here it would run up against the laws of science. No amount of intelligence can turn animal and vegetable matter into galvanised or nickel-plated steel wire. Similar problems abound when it comes to the promises of AI – curing cancer, ending poverty, or building utopia. All these things require more than just intelligence. Some require the mobilisation of resources. Others run up against the fundamental laws of nature.
As if those were not problems enough, humans can always unplug the AI from the power socket or remove the battery. While it has been claimed that a hyper-intelligent AI would simply stop this from happening, there is not a lot that can be done to prevent a determined human being from flicking the off switch. Humans can survive without electricity; computers can’t.
AI is amazing technology that has been used to accomplish incredible feats and will become increasingly important in the future. However, we must neither idolise it nor be terrified of it. This technology may seem intelligent, but it is not omnipotent. That quality belongs only to God.
The Revd Canon Dr Tim Bull
Thurs, 6 Feb 2025
[1] https://nickbostrom.com/ethics/ai
[2] https://www.cancer.gov/research/infrastructure/artificial-intelligence
[3] https://www.ictworks.org/kozma-test-agi-end-global-poverty/
[4] https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/
[5] Exodus 32–34
[6] Jeremiah 2.27
[7] Isaiah 44.6–7
[8] Isaiah 44.12–17