I went to a talk last week to see Sam Altman, the CEO of OpenAI, on the last stop of his world tour, in Melbourne. Sitting in a hall of 2000 people, I wasn’t exactly sure what to expect. It turned out that the interviewers were so happy he turned up that they didn’t really test his mettle in any significant depth, and just seemed to be swept up in the almost cult-like visions of the coming new dawn for civilisation. The crowd also seemed to be hypnotised by this guy sitting at the vertex of a massive world pivot – like watching Magneto rotate the world with a flick of his hand – and then all lined up to get some advice on how to run their AI startups. An hour later, that was it. He was heading back to the US, and I plodded home feeling a bit empty.

So why did I just attend a talk with one of the most influential people in AI, in the midst of respected AI scientists all over the world raising the alarm about its potential to wipe out the human race, and yet the topic was barely discussed, let alone debated?

Whilst it annoyed me at first, it has got me more actively researching the debate in AI circles (my old turf) to see if I can better understand the different views, and to see where there is some intelligent public discourse on what seems on the surface to be a pretty serious issue.

So I did, and this blog post is a write-up of what I found. In the process, I’ve realised that, having spent the last 15 years fighting to avoid the very real climate extinction scenario alongside thousands of other passionate, intelligent and hardworking professionals, we have a lot of real-world insights to offer the AI community about extinction events, and about how to take a more interdisciplinary and systems view of the changes that are occurring. Hopefully both the AI and climate communities can take something away from this piece.

So what are the experts saying?

So I have followed the blogs and public discussions of some of the most eminent practitioners in the AI field, as well as some of the most influential voices in the debate, many of whom are in positions of significant power, and have tried to summarise their viewpoints.

Let’s start with Andrew Ng: co-founder of Google Brain, former Chief Scientist at Baidu, former Director of the Stanford AI Lab – yeah, he knows a thing or two – and now head of an AI-focused VC fund in the US. He opened up the debate in the last week in response to the letter released by safe.ai about the threat AI poses of potential human extinction. As he points out in his post, it’s signed by a lot of colleagues he really respects (as do I), but in his words, he “just doesn’t get how AI could pose any meaningful risk to our extinction”. Fair enough: he can’t see it, and thinks it is overblown. Acknowledging that many of his colleagues disagree, he has committed to opening a conversation with thought leaders on the topic. That seems pretty reasonable, and recognises that the issue needs debate. One of these conversations, reported last week, was with Geoffrey Hinton.

Geoffrey Hinton is one of the best-known commentators. He worked with James McClelland and David Rumelhart at UCSD in the 1980s and helped develop the mathematics (backpropagation) that still drives the world of neural networks. He recently left Google, feeling that his independence would allow him to talk more freely and openly about the risks posed by AI. Hinton thinks that LLMs have a deep understanding of the world, and are therefore rapidly developing reasoning capabilities that could outpace humans. He speculates that there are significant risks for the planet on the road ahead, and believes that the AI transformation will be as big as the start of the industrial revolution. OK, so lots of change, and lots of risk to go with it, so start thinking now about how to regulate away the worst outcomes. Reasonably balanced, but certainly in the camp of acting early.

Now it’s hard to mention Geoffrey Hinton in a sentence without mentioning Yann LeCun – another pioneer of convolutional neural networks and computer vision, ex-Bell Labs, New York University professor, and now Chief AI Scientist at Meta. His research, whilst still very focused on neural networks and machine learning, went down a different path from the language-model approaches to cognition. His view on the recent explosion of chatbots like ChatGPT seems to be that LLMs are really “stochastic parrots” – very skilled at mimicry and little else. In a recent post, he said that AI is going to be good for the world, and that there is no existential threat because these things really aren’t that smart yet – more like a mouse brain than that of a human. So overall he’s positive that rapid progress in AI will be good for mankind. He then pointed to an article written by Marc Andreessen.

Marc Andreessen has written a blog post called “Why AI Will Save the World”. For those who don’t know of him, he’s the super-brilliant pioneer of the web who co-authored the Mosaic browser and co-founded Netscape in the early 1990s, and is now one of the most successful venture investors in history. That said, reading this piece makes him sound quite unhinged and radicalised, referring to some who have grave concerns for the future of AI as being part of a cult movement. Andreessen doesn’t see the case for AI regulation, nor does he see merit in exploring the nuance in the debate. Instead he recommends accelerating AI development so the West can beat China, supporting and embracing open source, and not regulating because regulation is “unscientific”. He does address what he sees as the biggest threat to humanity – the geopolitical arms race with weaponised AI – where his answer is to outpace China on all fronts of AI. He is not alone in that view; as he points out, even John von Neumann and Bertrand Russell felt similarly about the nuclear race with Germany. This untethered capitalist view of AI is common amongst the Silicon Valley elite, but doesn’t really add any more insight or wisdom to a very serious question for the tech community.

Andrew Ng has also engaged in a recent conversation with Yoshua Bengio, another pioneer of the neural networks community, about the AI extinction risk issue. Whilst their report on the conversation was brief, they communicated the need for more concrete use cases of the potential risks to humanity, and to try to see through the fog and uncertainty of the current debate. Bengio seems firmly in the “yeah, there is real risk here” camp, which is reinforced by his recent comments to the BBC and his discussions of bad actors and the need for tighter regulation.

Perhaps the most outspoken voices on the human risks from AI are Tristan Harris, the ex-Google design ethicist featured in The Social Dilemma, and the Israeli historian and philosopher Yuval Noah Harari. Both make the case that AI will soon be able to create intimate relationships with humans, and use those relationships to persuade and manipulate people in ways far beyond the attention addictions we’ve seen in social media. Consequently, their aim is to kickstart an open conversation about these issues before they become entangled in our society and we have no way to step back from things we don’t want. They are leading the push for a pause on the public release of powerful LLMs (not on research) to buy some time for these conversations to be held, and for regulation to be considered.

So, that’s a quick snapshot, but there are countless more views. Gary Marcus and Noam Chomsky come more from the symbolic world of AI and don’t see LLMs as very smart, or even on the path towards human reasoning, and therefore don’t see the existential risk. On the flipside, others from the symbolist AI school of thought, such as Stuart Russell, a British AI expert, have been longstanding commentators on the existential risks posed by AI, such as military AI warfare going horribly wrong. There are also signatories to the safe.ai letter such as Audrey Tang, Taiwan’s Digital Minister, whose portfolio includes cybersecurity – not hard to imagine the scenarios she is most worried about.

So that is a snapshot of views – and mind you, a pretty biased set that doesn’t cover the myriad of perspectives from learned philosophers, ethicists, lawyers, psychologists, and so on. Note also that the gender balance is pretty off kilter, which is a reflection of the over-dominance of the computer scientist and technologist viewpoint that is driving much of the debate.

A spectrum of AI pathologies

In my discussions with colleagues on the topic of extinction risk, it seems that differing interpretations of the types of risk involved are what is causing such startling disagreement across the AI community over the last few months. I’m not exactly sure what the full spectrum of these views is, but I think there is some benefit in a more descriptive language for the types of AI pathology, so here is a breakdown of several ways AI can go wrong.

The most extreme – and hopefully the least likely – is the creation of a sentient AI that is capable of creating its own goals and objectives, and at some point decides that ending humanity is its goal. We’ve seen many instances in films and books that describe this type of apocalyptic narrative, and it’s perhaps best characterised as the “evil AI genius”. The root cause is a problem in the motivations, aspirations, ethics and perhaps world view that leads to the creation of an objective that causes the destruction of humanity. OK, that’s bad.

Then there is the scenario best described by the renowned Swedish philosopher and futurist Nick Bostrom, who talks about the paperclip AI that is instructed to build the world’s best paperclips. The result is an AI that keeps building paperclips at the expense of everything else – including humans. It’s kind of silly, but it really illustrates that AI exists within a resource-constrained world, where its actions can have side effects. Extending Bostrom’s thinking, the root cause of some type of runaway AI demise could be one of four things:

  • Due to the poor articulation, communication or perception of the assigned goal – the “muddled AI servant” – the AI executes a diabolical plan that destroys humanity. This is often referred to as the AI value alignment problem.
  • Due to a lack of understanding of the world as a place of limited resources and delicate ecosystems, the AI does not comprehend the costs of its actions – the “ignorant AI servant” – and executes an idiotic plan with dire unintended consequences that drives extinction events (sound familiar?).
  • AIs simply make mistakes. Modern LLMs are becoming increasingly capable of writing and executing code, and it is quite possible for these AIs to get things wrong. Let’s call this the “clumsy AI servant”: it has the right objective, and a good understanding of the costs and tradeoffs, but execution flaws lead to failures and collateral damage.
  • The AI executes something at significant scale and with terrifying exactness, but under direction from an instructor. This is the “dutiful AI servant” under the direction of an evil master, who provides strategic guidance for repeated attempts to reach the objective.

So now we have a few useful categories of how AI might fail badly – possibly badly enough to take the human race out in its wake. Let’s now examine under what conditions AI could pose an extinction-level risk.

How AI might access the physical world

To baseline how some of these AI pathologies might inflict damage, we should start with AI systems already operating today. Many AI systems in operation – whether dutiful, clumsy or ignorant – could already do serious damage if they have access, or were able to gain access, to manipulate critical infrastructure or the biological ecosystems we need to survive. AI with access to physical systems – able to control energy infrastructure, to manipulate DNA to create airborne pathogens, or to trigger dangerous side effects that destroy natural ecosystems – would undoubtedly have dire consequences for society. The impacts of these excursions may not amount to extinction-level events, but let’s assume they are bad enough to warrant serious attention and mitigation.

With this framing, the critical concern is how these AIs might attain access. Let’s run through a few scenarios.

For the evil AI genius, whose skillset belongs in the basket of ASI (AI superintelligence), one could assume it would already be a proficient hacker and would set its sights on cyberattacks against the infrastructure it needs to access physical systems. It would set a goal of penetrating these systems and retry over and over until it succeeds. Given its assumed prowess in IT security, it should be much faster than humans, and therefore likely to succeed sooner and be hard for human cyber experts to defend against. Sounds nasty. But then again, ASI is not here today, so the likelihood today is low.

For the clumsy AI servant, which just makes a blunder in its coding, it’s really a single-shot mistake that results from poor quality testing. An example might be accidentally deleting all the data it has access to (we’ve all done that before). Unlike the evil genius, which keeps iterating in a tight loop – failing, learning and improving with every attempt – to reach a goal, the clumsy AI servant makes a mess, the human supervisors clean it up and fix the issue, and everyone moves on. A bit clumsy, but definitely recoverable, and in the realm of mistakes that humans make on a daily basis. The mitigation here is not to give open access to anything or anyone unless they absolutely need it – a paradigm known as zero trust in cybersecurity circles, sketched below.
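
To make the zero trust idea concrete, here is a minimal, hypothetical sketch of a deny-by-default allow-list wrapped around an AI agent’s tool calls. The names (ToolPolicy, run_agent_action, the action strings) are illustrative inventions, not a real agent framework API.

```python
# Hypothetical sketch of a least-privilege ("zero trust") wrapper around an
# AI agent's tool calls: every action must be explicitly allow-listed and is
# denied by default. Names and scopes are illustrative, not a real API.

from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    # Explicit allow-list: anything not named here is denied by default.
    allowed_actions: set[str] = field(default_factory=set)

    def check(self, action: str) -> bool:
        return action in self.allowed_actions


def run_agent_action(policy: ToolPolicy, action: str, payload: str) -> str:
    if not policy.check(action):
        # Deny by default and surface the attempt for human review.
        return f"DENIED: '{action}' is not allow-listed (escalate to a human)"
    return f"EXECUTED: {action}({payload})"


if __name__ == "__main__":
    # The agent may read reports, but may not delete data or touch plant controls.
    policy = ToolPolicy(allowed_actions={"read_report"})
    print(run_agent_action(policy, "read_report", "daily_summary"))
    print(run_agent_action(policy, "delete_dataset", "all_customer_data"))
```

The point is simply that anything the AI was not explicitly granted is refused and surfaced for review, which limits the blast radius of a clumsy mistake.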

For the muddled AI servant, with a miscommunicated yet dangerous objective, it is very unlikely that someone could mistakenly communicate an objective that would involve illegally gaining access (i.e. hacking) to important infrastructure unless it was intentional, so we can probably treat this scenario as immaterial. However, there may be cases where these AIs are given access to the physical world – say, for example, an AI supervising DNA-altering pathogen experiments. Biohazards of this nature can get ugly, but fortunately our industrial infrastructure already has biohazard and biosecurity controls that should mitigate the worst-case scenarios if they are followed.

For a dutiful AI servant under the direction of an evil master, the scenario is worth careful consideration. It closely represents the use of AI tools to help nefarious actors – nation states or cybercriminal groups – hack into critical or physical infrastructure and execute their desired outcomes, whether ideological, strategic or economic. This could be pretty nasty if threat actors can use AI to gain access to our critical infrastructure, via either digital or biological mediums. On the positive side, the mitigations are at least understood – increased cybersecurity, biosecurity and overall resilience to these types of events, and adopting postures such as zero trust and early detection and response to improve resilience to these sorts of intrusions.

Finally there is the ignorant AI servant, which sets out to achieve an objective but is oblivious to the side effects and impacts on other systems important for human survival – digital, biological or societal. The clear parallel here is the industrial revolution – specifically our addiction to fossil fuels – where ignorance has led to rapid changes to our climate and is already driving a good part of our natural world to extinction. This type of cascading impact from AI could take many forms; one area that warrants careful monitoring is the resource consumption of AI once it is operating at scale. Its energy consumption alone could put other systems under extreme pressure, as could the hardware infrastructure required to build the servers and data centres, and the electricity supplies to service them.

So let’s now visit a couple of these riskier scenarios in more detail.

Critical infrastructure cybersecurity and the dutiful AI servants

The problems associated with warring nations, and the markedly different philosophies between nation states, are definitely not new, nor are they a reason not to develop smarter intelligent systems. However, it is worth emphasising how much the nature of global security has shifted in the last 10 years towards the cyber realm, using the tactics of societal influence, espionage and clandestine armies to wage unseen battles in the digital domain. Some of this we see – cybercriminals, government hacks, ransomware attacks, and so on – and much of it we don’t. Of the part we can see, in a recent conversation at the World Economic Forum, Interpol stated that international cybercrime was set to grow to 12 trillion dollars by 2025, which would make it equivalent to the world’s third-largest economy. That is a huge amount of money changing hands for the few thousand people that make up these private or state-sponsored threat actor groups. It means they are extremely well resourced, and already have both ideological and economic motives to keep selecting targets around the world. And by targets, that really means us – Western countries – for the most part.

For the AI community, building and releasing new tools that make it harder to separate fact from fiction – deep fakes of conversations, vision, voice and so on – makes the job of theft much easier for cybercriminals. This fracturing of trust is not accidental: for some international threat actor groups, it is the whole objective, and a way to sow discord and distrust into our societal systems, including our democracy itself.

With respect to protecting our critical infrastructure, let’s look at our electricity systems, which are undergoing a massive 10+ trillion dollar retrofit over the space of 20 years and will serve the bulk of the planet’s energy needs. They are increasingly connected through global supply chains and data centres, leveraging cloud computing infrastructure and attaching hundreds of millions of active IoT devices. This is making the task of robust cybersecurity against foreign states and cybercriminals much harder, let alone the huge additional burden that would come from continual bombardment by AI-driven intruders.

The largest risk that AI poses to humanity is if it gains access to resources it shouldn’t – whether via clever psychological tactics or sophisticated hacks. The defence of our critical infrastructure will become the future battleground for the containment of AI, whether of the dutiful servant or evil genius kind. Maintaining physical control paths over electricity and data centre systems seems like a sensible idea, to always retain an option of last resort, and there are many other similar design features that would make our systems safer and more future-proof.

AI energy consumption and the ignorant AI servants

Whilst AI super-intelligence is apparently still several years away, the statistics for projected AI energy usage don’t look great based on current trends.

For operations alone, estimates such as Ludvigsen’s suggest that each single request to ChatGPT uses around 20 Wh of energy, and likely more once training costs and the like are included. By comparison, an average mobile phone uses about 5 Wh per day. If we were to take Sam Altman’s view that every child could use AI as a personal tutor, and assume they use it, say, 100 times a day throughout the school year, you get to around 400 kWh of energy (~$160) per child per annum. This is roughly the energy consumption of a laptop computer over the course of a year. OK, it’s a lot, but doesn’t sound that bad, does it?
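
As a sanity check, here is the back-of-the-envelope arithmetic behind that per-child figure, using the assumptions stated above (20 Wh per query, 100 queries a day, roughly 200 school days, and an assumed ~$0.40/kWh retail electricity price – all rough estimates rather than measurements):

```python
# Back-of-the-envelope check of the per-student energy figure quoted above.
# All inputs are rough assumptions carried over from the post, not measurements.
ENERGY_PER_QUERY_WH = 20      # estimated energy per ChatGPT request
QUERIES_PER_DAY = 100         # hypothetical "AI tutor" usage per child
SCHOOL_DAYS_PER_YEAR = 200    # roughly a full school year
PRICE_PER_KWH = 0.40          # assumed retail electricity price ($/kWh)

annual_kwh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY * SCHOOL_DAYS_PER_YEAR / 1000
annual_cost = annual_kwh * PRICE_PER_KWH

print(f"~{annual_kwh:.0f} kWh per child per year (~${annual_cost:.0f})")
# -> ~400 kWh per child per year (~$160)
```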

Whilst we might be able to live with today’s LLM energy consumption, the big problems come from how we scale LLMs and AGI approaches that use very large parametric neural networks at their core.

This is because the energy consumption per query on ChatGPT is roughly proportional to the number of parameters in the model. Looking back over the last few years, going from GPT-2 (1.5 billion parameters) to GPT-4 (a widely circulated but unconfirmed estimate of 170 trillion parameters) represents an increase of around 5 orders of magnitude. Four years of advancement, and five orders of magnitude in energy cost per query. If we keep making our LLMs and AGIs even bigger, and feeding them even more data, the energy consumption goes way out of control.

Based on this sort of trajectory, imagine we are using an AGI in four years – a single query might consume around 2 MWh, enough to run a small household for a year!
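
Here is the rough extrapolation behind that number, assuming (as the post does) that energy per query scales roughly in proportion to parameter count; the GPT-4 parameter count and the size of the jump to a hypothetical future AGI are unverified assumptions carried over from the text:

```python
# Rough extrapolation of per-query energy, assuming energy scales ~linearly
# with parameter count. Parameter figures below are the post's assumptions
# (the GPT-4 number is a widely circulated but unconfirmed estimate).
GPT2_PARAMS = 1.5e9
GPT4_PARAMS_CLAIMED = 170e12
ENERGY_PER_QUERY_TODAY_WH = 20          # estimated ChatGPT query cost today

scale_up = GPT4_PARAMS_CLAIMED / GPT2_PARAMS
print(f"GPT-2 -> GPT-4 (claimed): ~{scale_up:.0e}x parameters (~5 orders of magnitude)")

# If the next four years brought a similar ~10^5x jump for a hypothetical AGI:
future_query_mwh = ENERGY_PER_QUERY_TODAY_WH * 1e5 / 1e6
print(f"Hypothetical AGI query: ~{future_query_mwh:.0f} MWh")   # -> ~2 MWh
```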

On this point, there is clearly an economics and engineering challenge to solve in reducing the energy consumption of these AGIs. Energy may even become the critical limiting factor on how intelligent we can make them. That will create a profound tension between making them smarter and their ecological impact, at least until we have reached net zero electricity systems. Given that is still 20 years away in most geographies, there is a two-decade gap in which AI energy consumption could be a critical factor in whether we are able to hit our net zero emissions targets.

Regardless, if there is one thing we can learn from the blockchain hype cycle, it is that these technological advances COULD pose huge environmental and social risks if they are released to the world without the right checks and balances. It will take close to a decade for the world to recover and repair from the blockchain phenomenon and the wasted hot air it has produced. The AI community needs to be open and transparent about the AI energy consumption problem, and not hide it behind trillions in private capital that is patiently waiting for the world to become addicted before revealing the true impacts on our energy system and the planet.

Take a systems view of existential risk

So let’s go back to the topic of existential threats. There is the sentient version – the evil AI genius secretly plots to starve the planet of resources, and drives up energy consumption so that everyone dies. Too bad. Sounds like an average night on Netflix.

Then there are the practical, societal implications that could result from the explosive growth in AI systems over the next 2-3 years. These aren’t outlier scenarios; they are more probable than not, and they require immediate mitigation measures.

So out of this analysis, here is my message from the climate warriors back to the AI scientists and experts. If you take a more systemic view, you will discover there are very real risks today that require increased monitoring and regulation – by the AI community, and by those protecting the critical systems that keep us alive – to ensure we are not exposed to unacceptable risks from AI. We are already fighting a losing battle to save the world from one extinction event, and we need the AI community and its allies in Silicon Valley to take responsibility for not making it harder than it needs to be.
