

Issue 128 - December 8th 2022
May a thousand AIs bloom! There is so much to talk about in this issue of That Space Cadet Glow, partly because it's been a few months since my last one, but also because we are seeing a cornucopia of advances in AI that are all wonders to behold (and, at the same time, potential poison chalices). The masthead image this month is actually a portrait of your author generated by an AI, in this case from Vana.
Generation X
The headline grabbers amongst all the AI advances are the 'generative AI' models, the best known of which are the 'large language models'. Many of these are now coming out of their research phase and being made available to a much wider audience, which is where the fun and the fear both begin. Many of them are now being integrated into apps, and people are starting to make real money out of them, especially when someone like OpenAI says that all images created by its DALL-E model are owned by the originator, i.e. people like you and me, and can be commercialised to their heart's content. (The image used here - from the prompt 'an old man on a motorbike, with his wife in a sidecar, flying through the air, on a white background' - was created, and is therefore owned, by me.)

The text-based versions of the models, such as GPT-3, can create stories, poems, essays and even computer code from simple prompts. But the opportunities for abuse, or even just really bad answers, are much greater. You only have to look at the recent experience of Meta, who, after only a few days, had to take down their public demo of Galactica, a large language model aimed at helping scientists "summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more." The reason for the retreat was that Galactica was getting so much wrong, and, crucially, it was doing so authoritatively and with confidence.

These models can also create unethical content. To find out how that might be dangerous, I actually asked GPT-3 the question. Here is its answer: "Generative AI models are potentially dangerous because they can be used to produce content that is false, misleading, or otherwise damaging. For instance, GPT-3 can be used to generate convincing fake news stories, spam emails, or even malicious code. Additionally, generative AI models can be used to create deepfakes, which are videos that are altered to make an individual appear to say something they did not. This could lead to people being falsely accused of saying something they didn't and could cause serious reputational damage". So there you have it, from the horse's mouth.

But GPT-3 has just spawned a new version, called ChatGPT, which does actually look really useful. It's essentially a dialogue system built on the latest GPT-3.5 model, so you can, in theory, have meaningful conversations with it. OpenAI claim that it "answers follow-up questions, admits its mistakes, challenges incorrect premises, and rejects inappropriate requests". From my testing so far, it does seem to do what it says on the tin. One aspect that is causing a lot of excitement is its ability to write computer code from a natural language prompt. It can correct the code if you tell it which bits are wrong, and will explain why it has written the code in that particular way. It's not perfect yet (Stack Overflow have temporarily banned users from posting its answers) but it does give coders a very good place to start (in the same way DALL-E might give designers starting ideas).

It's clear that these large language models are here to stay, so we need to make sure we learn to use them appropriately (even to discover new drugs), that we don't use them to cheat, and that people call them out whenever they do bad stuff. It's also refreshing to see competitors to the big names (some of which I discussed last issue) making headway.
I would recommend giving them a go when you can - learning to use these in the right way will end up being just another life skill.
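For the more technically minded, here is roughly what a GPT-3 call looks like through OpenAI's Python library at the time of writing - a minimal sketch only, with an illustrative prompt, placeholder key and parameter values of my own choosing rather than anything official. (ChatGPT itself is currently only available through the web interface, so this uses the underlying GPT-3 'completion' endpoint instead.)

```python
# A minimal sketch of generating code from a natural language prompt with GPT-3.
# The prompt, key placeholder and settings below are illustrative, not official.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: replace with your own OpenAI key

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3.5-era completion model available in late 2022
    prompt="Write a Python function that checks whether a word is a palindrome.",
    max_tokens=150,
    temperature=0.2,           # low temperature keeps the generated code more predictable
)

print(response["choices"][0]["text"])
```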
War games
Followers of AI, including, of course, readers of this newsletter, will already be familiar with AI models that can play Space Invaders, Go and Poker better than any human players. The latest genre to be targeted by developers is strategy games. These are particularly interesting because they involve planning, communicating strategy, negotiating and building trust, all things that AI has historically struggled with.

The first example is from DeepMind, who have developed a system called DeepNash which can play the Napoleonic strategy game 'Stratego' rather well. The game involves quite a bit of bluffing and sacrifice, and the developers of DeepNash found "the most surprising behaviour was [the AI’s] ability to sacrifice valuable pieces to gain information about the opponent’s set-up and strategy". This article in the New Scientist explains the complexity of the challenge and how DeepMind approached the solution, but of particular interest is the fact that it didn't use any human inputs for its training, which allows it to come up with novel ideas, unencumbered by previous (human) ways of thinking.

The second example is from Meta (aka Facebook), whose AI system, Cicero, has managed to crack the game of Diplomacy. The challenges are very similar to Stratego (but in a World War 1 scenario), and its strategic play was also honed through games between versions of itself. It was rewarded, though, for humanlike play, so that its actions wouldn't confound the human players it was negotiating with. One of its strengths is its language ability, particularly how it is able to replicate the slang terms typically used in the game. Cicero achieved more than double the average score of the human players and ranked in the top 10 per cent of participants who played more than one game.

Neither of these AIs could be said to have fully mastered their games, but they are certainly up there with the leaders. The real challenge, of course, comes when you try to transfer all this learning to the real world, where there are many more variables and nuances to work with. I'm sure it is only a matter of time before this happens, which, as with most AIs, presents both opportunities and threats. Building the necessary ethical reasoning into them, for example, will be a huge challenge that cannot be underestimated.
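If you're wondering what 'learning purely from self-play' actually looks like, here is a deliberately tiny sketch of my own - nothing like DeepMind's actual training code, and using a simple 'regret matching' rule rather than their method. Two copies of the same agent play rock-paper-scissors against each other and, with no human data at all, drift towards the unexploitable one-third/one-third/one-third mix - a Nash equilibrium, which is exactly what DeepNash is named after.

```python
# Toy self-play illustration (my own, far simpler than DeepNash): an agent plays
# rock-paper-scissors against a copy of itself and converges towards equilibrium.
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Payoff for playing action a against action b: +1 win, 0 draw, -1 loss."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy_from_regrets(regrets):
    """Turn accumulated positive regrets into a probability distribution over actions."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    return [p / total for p in positives] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [0.0] * ACTIONS       # the agent's accumulated regrets
strategy_sum = [0.0] * ACTIONS  # running total used to compute the average strategy

for _ in range(100_000):
    strategy = strategy_from_regrets(regrets)
    my_action = random.choices(range(ACTIONS), weights=strategy)[0]
    opp_action = random.choices(range(ACTIONS), weights=strategy)[0]  # a copy of itself
    # Regret: how much better each alternative action would have done this round.
    for alt in range(ACTIONS):
        regrets[alt] += payoff(alt, opp_action) - payoff(my_action, opp_action)
    for a in range(ACTIONS):
        strategy_sum[a] += strategy[a]

total = sum(strategy_sum)
print([round(s / total, 3) for s in strategy_sum])  # roughly [0.333, 0.333, 0.333]
```

Stratego's equilibrium is vastly more complicated, of course, but the principle - improve by playing yourself until no deviation helps - is the same.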
Gabriels - Angels and Queens Pt.1
This issue's music selection comes from Gabriels, who are making huge waves with their debut LP (or, to be precise, half an LP). It will certainly be in many of the lists of top 10 albums of 2022 that are being published around about now. Jacob Lusk, the band's singer, was a choir master and runner-up on American Idol in 2011, but it's clear from this record that he was far too big for that show. His voice is pure soul, the sort of sound you could listen to forever and never get bored. Ari Balouzian and Ryan Hope provide the music, which perfectly complements Lusk's vocals. The sound is all soul, but never retro, and feels unpolished but in the best sort of way. You can stream or buy the LP here. I also highly recommend watching the video for Gabriels' first release, Love and Hate in a Different Time, which is so much more than just a music video. Enjoy!
With great power...
I was going to write about the disturbing news that San Francisco's Board of Supervisors had agreed to let their police department use robots that have the power to kill. Beyond the obvious headlines of 'killer robots', the police were given permission to allow the robots to deploy explosives in extreme circumstances and as a last resort. This may seem reasonable to some, but it holds the very real risk of being the start of a slippery slope. Literally as I was writing this newsletter, however, the news came through that the original decision had been reversed due to the massive backlash from human rights organisations and society as a whole. One of the Supervisors who originally voted against the proposal said "There have been more killings at the hands of police than any other year on record nationwide. We should be working on ways to decrease the use of force by local law enforcement, not giving them new tools to kill people”. It's great to see common sense prevail, but there will be more and more examples of the weaponisation of AI and robots in the years to come, and each one should be challenged just as robustly.
To continue the good news theme in this section, the BBC have used 'deep fake AI' to swap the faces of anti-government protesters in Hong Kong for those of actors, protecting the protesters' identities while maintaining their facial movements and emotional expression. It's great to see a side of AI that has the potential to do great harm (just ask GPT-3) actually being used in a very worthwhile way.
Afterword
Disney have developed a movie-quality AI ageing tool that can make people visibly age or get younger before your eyes.
You can have a go at making yourself into a historical figure using MyHeritage's AI tool.
Regular readers will know my views on the metaverse, so I was tickled to read this story of an EU metaverse party that only five people turned up to. Awks!
Greenhouse Intelligence Ltd thegreenhouse.ai
Andrew Burgess is the founder of Greenhouse Intelligence, a strategic AI advisory firm.
You are receiving this newsletter because you subscribed and/or attended one of our events.