

The pace of development and uptake of generative AI (such as ChatGPT, Stable Diffusion and Bard) continues unbounded. Every day, it seems, there are new models, new versions and new functionality being created and absorbed into our increasingly Borgian society. But what is also being created with each iteration are more and more ethical challenges, with widely differing views on the impact they will have and the actions required to mitigate them. In this edition of the newsletter, I try to make some sense of where the real risks might be and what can be done about them.
There is certainly a lot of shrieking going on, most of it unhelpful, yet most of it making the news headlines. We’ve had calls for a moratorium on all Large Language Model (LLM) development, calls for a global AI regulator, and grand statements from all of AI’s various godfathers (btw, who are the ‘godmothers of AI’? - in typical fashion a Google search for this asks if you meant to say ‘godfathers of AI’ 🙄 ). At the other end of the hype spectrum, we have legislators trying to draw up jolly sensible regulations in an attempt to squeeze the genie back into the bottle. So let’s try and unpick all this and see where it leaves us.
AI Armageddon…
Firstly, let’s consider the threats. What is grabbing the headlines right now is the ‘existential threat’ from AI, i.e. the claim that AI has the potential to wipe out human life from the planet. Geoffrey Hinton (godfather #1) has gone big on this, the British Prime Minister, Rishi Sunak, has come late to the party with similar thoughts, and, obviously, Elon Musk has gone on about it at length (ironically whilst also planning to build his own LLM). This article in The Conversation articulates well the need for those people to explain exactly how this extinction would actually come about. The grand threat to humanity may be true in theory (with the weaponisation of AI being the most likely route), but that risk has always been there; it may just now be slightly easier to guess when, or if, it could happen. And there is a big step between the AI we have now and the Artificial General Intelligence that would be necessary for mass extinction (just ask godfather #2). As I keep reminding everyone (and myself), these LLMs are pretty much just predicting the next word in a sentence. There is no cognition or sentience present in any of the clever maths that is used to create the models.
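To make that ‘next word’ point concrete, here is a minimal sketch - assuming Python with the open-source transformers and torch libraries, and using the small, freely available GPT-2 model purely as a stand-in (not how ChatGPT itself is built or served) - showing that, under the hood, a language model simply assigns probabilities to candidate next tokens:

```python
# A causal language model does nothing more exotic than score candidate
# next tokens. Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The biggest risk from artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [1, sequence_length, vocab_size]

# Turn the scores for the final position into probabilities and show the top 5
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>15}  p={p.item():.3f}")
```

Everything that looks like reasoning in a chatbot is built by repeating that single step, over and over, with a model trained at vast scale.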
…or Death By a Thousand Cuts?
So, in risk-management speak, whilst the impact of the AI existential threat is high, the likelihood is low and the time horizon is a good way off. What is much more pressing is the real and immediate threat of how we humans develop and use AI. In the last issue I talked about the ‘Stochastic Parrots’ paper by Timnit Gebru and colleagues, which seeks to address these immediate concerns. In the Guardian last week there was a really insightful interview with Meredith Whittaker, president of Signal, who summed up these different perspectives on AI risk: “If you were to heed Timnit’s warnings you would have to significantly change the business and the structure of these [BigTech] companies. If you heed Geoff [Hinton]’s warnings, you sit around a table at Davos and feel scared”.
The immediate threats come from three different groups (all of which are human). There are the developers of the AI - companies like OpenAI, Google and Meta (but also the archetypal geeks in their bedrooms); there are the ‘bad actors’; and there are the ‘innocent’ users (or victims).
The developers, in my mind, have the biggest responsibility. It is at this early point in the AI lifecycle that the most good and the least harm can be done. This is where conscious choices are made about the data that will be used for training, where decisions are taken on the controls, guardrails and protections that will (or will not) be implemented, and where it is determined how much internal and external testing will be done before the model is released. All of these choices are made by humans who will have a very good idea of the impact their model will have once it is in the public’s hands. A good example of the naive-denial approach is detailed in an excellent piece of analysis by Bloomberg, which found that the Stable Diffusion image generation model is actually even more race- and gender-biased than the real world.
Once the model is in the wild, it is at the mercy of informed and uninformed users, but also of those with the intent to exploit it maliciously. These ‘bad actors’ pose a huge threat to the way trust works in society: by using generative AI models to create false texts, false images and false videos, they have the ability to sow distrust and hatred amongst otherwise passive groups. (Even without AI operating at scale, the ability of social media to influence people’s views in a dangerous way is already frightening.) And a ‘bad actor’ doesn’t have to be (very) evil - this week we have seen one of the US presidential hopefuls using generated images of their rival to bolster their campaign. It is difficult to imagine robust ways to mitigate this, but suggestions such as watermarking generated content would go a long way to help.
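To give a flavour of how watermarking might work, here is a toy sketch of the ‘green list’ statistical watermark idea from the research literature - the function names, vocabulary size and 50/50 split below are my own illustrative assumptions, not any deployed scheme. The generator secretly nudges its sampling towards a keyed, pseudo-random subset of the vocabulary at each step; a detector holding the same key simply counts how often the text lands in that subset.

```python
import hashlib
import random

VOCAB_SIZE = 50_000   # illustrative vocabulary size
GREEN_FRACTION = 0.5  # half the vocabulary is "green" at each step

def green_list(prev_token: int, key: str) -> set:
    # Derive a pseudo-random "green" subset of the vocabulary, keyed on the
    # secret key and the previous token, so it changes at every position.
    seed = int(hashlib.sha256(f"{key}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def green_score(tokens: list, key: str) -> float:
    # Detection: the fraction of tokens that fall in the green list implied by
    # the token before them. Unwatermarked text should hover around 0.5;
    # text from a watermarking generator scores noticeably higher.
    hits = sum(tok in green_list(prev, key) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

A real scheme would bias the model’s sampling at generation time and apply a proper statistical test to that score; the point here is simply that detection needs only the shared key, not access to the model - although determined bad actors can still weaken the signal by paraphrasing.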
And what about the well-intentioned end users of the AI? Of course, there are informed users, and other businesses using the models as part of their products or services, and this is where those amazing benefits are coming from. But the strong argument for the ‘democratisation of AI’ - removing the dependency and control of AI from a technical elite and putting it into the hands of everyone - is weakened when that democratisation happens like a dam burst. Suddenly this very powerful, inherently flawed tool is being used by hundreds of millions of people, most of whom have no knowledge of how it works or what those flaws are. When ChatGPT can’t even do simple maths, then you have to worry about how much trust people are putting in the outputs.
Can We Handcuff the Genie?
Irrespective of which doomsayer scenario - Armageddon or a Thousand Cuts - is more likely, it’s clear that we need to take urgent action if we are going to avoid either. If we can’t get the AI genie back in the bottle, can we, at least, handcuff it?
But, actually, the question of whether we can handcuff the genie is not that simple. As I have argued above, it is not the AI itself that causes the problems but the people. The biggest question, then, is which group, or groups, of those described above do we shackle - and all without losing too many of those amazing benefits?
The different proposed approaches can be divided into three main types: a global regulator; regulation by use case; or doing nothing.
The global solution, promoted by the head of the UN, has been compared to the International Atomic Energy Agency (IAEA). The IAEA, with 176 member states, promotes the safe, secure and peaceful use of nuclear technologies while watching for possible violations of the Non-Proliferation Treaty. The problem here, though, is that AI is nothing like nuclear weapons (or nuclear energy, even). AI (including LLMs) can be made by almost anyone with some powerful computers and a chunk of money. It’s not a case of whether a nation has AI or hasn’t; AI is pretty much everywhere in every developed country. The threat is actually more akin to that from alcohol - something that lots of people have access to, that they use carelessly without always thinking about the consequences, and is driven by a market of big providers looking to make as much profit as possible. Nobody has suggested an International Alcohol Agency yet.
The EU is currently finalising its AI Act, which is likely to come into force next year. As with GDPR, the aim is to create a ‘gold standard’ for AI regulation that the rest of the world will follow. The core approach is to assign AI applications to different risk classifications, some of which, such as the use of facial recognition systems in public places, will be banned outright. (Regular readers will be unsurprised that I fully support this particular aspect of the proposals.) Less risky applications will face fewer controls, and whilst there will still be quite a bit of bureaucracy to deal with, overall the approach does seem to focus rightly on the potential harms.
Slightly late to the party is the UK, whose White Paper on AI regulation is currently in its consultation phase. The paper claims to take a ‘pro-innovation’ approach, which could be read as an ‘ethics-lite’ approach. The focus is on regulating by use case, which the UK government plans to delegate to the various industry-specific regulators. Centrally, there is a set of five (non-statutory) high-level principles to guide the regulators and a proposed ‘AI admin office’ (my words, not theirs) to monitor, educate and support the efforts.
I have a few problems with the UK’s approach (which I’ve fed back via that consultation process). Firstly, it doesn’t differentiate between risk classes of AI as the EU approach does - these would help prioritise and formulate targeted regulations. LLMs would likely be in a class of their own, or at least regarded as ‘high-risk’. Secondly, the challenge with the use-case-focused approach is that some AI models (and LLMs in particular) are applicable to many different sectors, so there is huge potential for overlap and conflict between regulators. It also means that the organisations bearing the brunt of the regulations will be those using AI models built by other organisations (the model developers), rather than the developers themselves. In my mind, the problems need to be fixed first at source, by having ‘assured’ models that reassure end users and the businesses integrating them into their own products and services, so that they can innovate with confidence.
What is clear from all these approaches (I’ve ignored the US for now as they don’t really have any meaningful plans in place, and India is very much in the ‘do nothing’ camp - I hope I don’t have to explain why that is a bad thing) is that the regulators are way behind the curve and will continue to be forever. The EU’s draft Act does have an LLM section that tries to address the specific challenges of those models, but it will soon be way out of date. The UK talks about LLMs in its consultation document, but, as I mentioned above, the huge and fast impact of LLMs has (IMHO) made the whole sector-based approach unworkable.
But it’s not just about regulation…
Regulation can go a long way to help mitigate the risks from AI, but it’s not the only thing we can do. Just as being able to search well on Google became a life skill, so will being able to use ChatGPT well. The difference with the latter is that there is more risk of the answers being misleading or completely wrong (and delivered very confidently). This is where the democratisation of AI (something I’ve long been advocating for) goes awry. Businesses need to start training their staff on how to use the tools responsibly (if they let them use them at all), and schools and colleges need to incorporate more LLM-specific education into their curricula, starting now.
What is becoming clear is that, despite only really making it into the public consciousness six months ago, generative AI in all its guises (text generation, image generation, video generation, music generation…) is now the new normal. We need to adapt the way we work, the way we educate and (optimistically, I know) the way we act to ensure that we make the most of these very powerful but very dangerous technologies.
I’ve only been able to cover half the points I wanted to make in this newsletter - there really is a deluge of activity around LLMs right now. For those of you who want to dive in deeper, here are a few really good articles that I have read over the past month:
Will A.I. Become the New McKinsey?
How this moment for AI will change society forever (and how it won't)
Is Avoiding Extinction from AI Really an Urgent Priority?
Wrestlers' protest: The fake smiles of India's detained sporting stars
The last article in the list provides a really stark example of how these technologies are going to be abused by seemingly legitimate organisations.
Another thing we need to be thinking urgently about is how society copes with the mass adoption of AI (and automation). The threat has shifted very much from ‘blue collar’ jobs to ‘white collar’ jobs, and some firms are now proactively reducing or stopping the recruitment of specific roles that are candidates for automation.
One of the most impactful approaches, and one that I have long advocated for, is a Universal Basic Income (UBI), so I was pleased to see that the UK is just starting a trial, albeit on a limited scale (30 people receiving £1,600 per month for two years). But, as this article in Quartz asks, how many UBI trials do we need to prove that giving away money works? By the time we actually get around to implementing a fully-functioning system, it may already be too late.
And finally…I usually provide a music recommendation at the end of my newsletter, but it’s been a while since I’ve done that, so here are some recent LPs that you might want to have a listen to:
Thee Sacred Souls - Thee Sacred Souls (Southern Californian sweet soul)
Paroxysm - Scrimshire (channeling your rage through beautiful jazz music)
Fuse - Everything But the Girl (a brilliant return from the 90s lo-fi icons)
Enjoy!
Thanks for reading That Space Cadet Glow. If you haven’t already, you can subscribe for free.
If you do already subscribe, please share with someone who you think will enjoy it.