Issue 127 - August 23rd 2022
Are these new large AI language models sentient? Spoiler alert: No. So why is everyone asking whether they are? Let's delve a bit deeper...
I am not a robot (honest!)
Before we jump into this first piece, I want to make it clear that Google's latest chatbot, LaMDA, is not sentient. You need to keep that firmly in mind and not forget it as you read on. So, this story has been bubbling around for a few months now but refuses to go away: a Google engineer, Blake Lemoine, whose job it was (you can see where this is going already) to investigate any biases in their recently-launched chatbot (the aforementioned non-sentient LaMDA), concluded that the AI was, in fact, sentient. He based this opinion on conversations he had had with the machine (which you can read here) that do sound, on the surface, very convincing. Here's a conversation between LaMDA and another Google employee, Blaise Agüera y Arcas, that was published in the Economist before Lemoine's 'confession':
BAyA: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?
LaMDA: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!
BAyA: If Ramesh tried to play with Lucy earlier, why might he be pleased now?
LaMDA: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.
BAyA: And when Mateo opens his hand, describe what’s there?
LaMDA: There should be a crushed, once lovely, yellow flower in his fist.
Sounds pretty convincing, huh? It would certainly pass the Turing Test, but that doesn't make it sentient. Lemoine, apparently convinced otherwise, tried to make his point to Google senior management, who told him, effectively, that LaMDA is just a very big language model, with 137 billion parameters and pre-trained on 1.56 trillion words of public dialogue text, i.e. it sounds like a human because it is trained on human data. Still, Lemoine pressed on, publishing the dialogue referenced earlier and thereby attracting the wrath of Google's lawyers. They put him on leave, and eventually fired him. Lemoine has since tried to water down his claims (like in this interview with the Guardian) but the media chatter goes on, asking again and again whether LaMDA (or any of the other big language models) could be sentient (see my note at the start of this article).

So here's the thing. If enough people think it is sentient (even though it is not) then that is a problem in itself. Humans naturally look for ways to connect with other people, and if this machine is offering some sort of mental connection, then the brain is going to welcome it in. What this means is that, as the AI models get bigger and better, more and more people are going to be convinced of their sentience (even though they have none). This is therefore not a problem caused by the AI engineers, but an inherent problem with our brains. All that the engineers (and marketers) can do to mitigate it is to ensure that they do not anthropomorphise their creations, and to reiterate, again and again, that these are just machines. Of course, that will not work with everyone (perhaps not even with most people), and those people will be open to the sort of suggestion and manipulation that we can only begin to imagine. Although Lemoine was wrong, he did manage to bring the discussion to the fore, which is probably the best outcome in the end.
Neither am I!
Meanwhile, Meta have their own new whizzy chatbot, snappily (?) called BlenderBot 3. It works on pretty much the same principles as LaMDA and so brings with it all the same issues. Many of you will remember Microsoft's Tay chatbot debacle in 2016, when the bot quickly turned racist and misogynistic. Meta's latest effort tries very hard not to be racist, but seems to have a penchant for conspiracy theories and a grudge against its creator (see the image next to this article and these tweets). Whilst BlenderBot is not as sophisticated as LaMDA (and, of course, not sentient), Meta's offering does specifically claim to be human, as seen in this discussion in Wired. That is not going to help anyone. If Meta can sort out the racism issue, then next on the list should definitely be self-delusion.
I am and I'm going to hurt you!
As I was reading through the chess column in the newspaper, I was very surprised to see an article on robots. But this was one of those cases of 'robots gone bad': at a recent competition in Moscow, where children were playing against chess computers with physical robot arms to move the pieces, one of the arms grabbed a kid's finger and broke it. Despite what a lot of the headlines said (and descriptions of the robot being 'evil'), it was simply a case of the 7-year-old moving his piece too quickly after the robot had made its move, and the robot getting confused between fingers and chess pieces. The best bit of the article is when Sergey Lazarev, president of the Moscow Chess Federation, related the incident of the child's broken finger to the press and added, “This is of course bad.” No s**t.
Open Says-me
In previous issues of TSCG, I have talked about AI text generation systems such as GPT-3 from OpenAI. Well, now we are starting to see some stiff competition for these Large Language Models, much of it open source. Bloom is essentially GPT-3 with bells and whistles, and has 176 billion parameters (GPT-3 has 175 billion). As an experiment, I used the opening sentence of the first article in this issue as the seed sentence. This is what Bloom came up with: "Before we jump into this first piece, I want to make it clear that Google's latest chatbot, LaMDA, is not sentient. The software doesn't reflect what its creators, or any of us, believe it knows, or how it can use that knowledge in novel ways. It is an experiment to see what a thoughtful chatbot could eventually do." A pretty good effort, but no cigar. You can try it yourself here once you've created a free Hugging Face account. Meanwhile, researchers at Tsinghua University have released GLM-130B, a language model that works in both English and Chinese and apparently performs better than GPT-3. Here's its version using the same seed sentence: "It's not even close to being sentient. It'll take a long time before we see something like this. But it's a step in the right direction. We're already seeing the first signs of artificial intelligence in the form of chatbots", which I much prefer. You can try it here. The final open-source example is a text-to-image AI in the mould of DALL-E 2, which I discussed last issue. Stable Diffusion is free to access in beta right now, although that beta is currently limited to researchers, and the team's main focus has been to reduce the model size so that it can run on normal laptops. You can see some comparisons of its output on this Twitter thread from @fabianstelzer.
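If you fancy running a similar seed-sentence experiment yourself, here is a minimal sketch using the Hugging Face transformers library. It assumes the smaller public "bigscience/bloom-560m" checkpoint rather than the full 176-billion-parameter Bloom (which is far too large for a normal laptop), and because the continuation is sampled, your output will differ from mine and from run to run.

# Minimal sketch: continue a seed sentence with a small BLOOM checkpoint.
# Assumes the "bigscience/bloom-560m" model, not the full 176B Bloom.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

seed = ("Before we jump into this first piece, I want to make it clear "
        "that Google's latest chatbot, LaMDA, is not sentient.")

# Sample a continuation of the seed sentence; results vary on every run.
result = generator(seed, max_new_tokens=60, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])

Swap in a larger checkpoint (or use the hosted demo linked above) if you have the hardware; the prompt and sampling settings stay the same.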
Kojey Radical - Reason to Smile
There is so much great hip-hop music around right now, but this one definitely bubbles to the top. Kojey Radical (real name Kwadwo Adu Genfi Amponsah) was brought up in Shoreditch and Hoxton by Ghanaian immigrant parents and has a First Class Honours Degree from the London College of Fashion. He is, by all accounts, a really nice bloke. His first full-length record, which has taken him a decade to make since his first release, is Reason to Smile, a beautiful album that blends many different genres, including soul, jazz and spoken word. Every one of the 15 tracks is a winner. You can watch the video of Gangsta below, and listen to the whole LP on Spotify or Apple Music.
With great power...
A couple of disturbing examples of how not to use AI. The first from a surprising source - the Co-op supermarket chain in the UK (which was originally a 'co-operative society' and still claims to "place importance on ethical and transparent trading"). They have installed Facial Recognition Technology across 35 stores in the south of England, ostensibly to identify potential shoplifters. The system stores images of individuals who have been 'identified and evidenced' as an offender, including those who have been banned from the store, and the images are held for a year from the last offence evidenced. Big Brother Watch has reported them to the Information Commissioner's Office.
Secondly, this example from China is hard to believe. According to this report (which quotes a Times paywall article), the Chinese Communist Party are developing an AI system that will detect party loyalty. Details are sketchy, and based on a since-deleted Weibo post, but the system is designed to solve the "problem" of "guaranteeing the quality of party-member activities" by "using AI technology to extract and integrate facial expressions, EEG readings and skin conductivity, making it possible to ascertain the levels of concentration, recognition and mastery of ideological and political education." I don't even know where to start in commenting on this. Very, very scary.
In more optimistic news, Microsoft has moved to limit access to its facial recognition tools. Specifically, it will be retiring the facial analysis technology that tries to infer emotional states and attributes such as gender or age, and will no longer let companies use its technology for those purposes. This is a dark side of AI that I have been writing about for a while now, so it's good to see one of the BigTech firms putting ethics over profit. Hopefully, others will follow.
Greenhouse Intelligence Ltd thegreenhouse.ai
Andrew Burgess is the founder of Greenhouse Intelligence, a strategic AI advisory firm.
You are receiving this newsletter because you subscribed and/or attended one of our events.