That Space Cadet Glow #120
Issue 120 - 29th June 2021
In this issue, I bring together four really interesting developments in how AI is being developed to better understand human traits, and how this could all take us closer to Artificial General Intelligence. Plus, some beautiful data. Happy Pride month!
Trust me, I'm an AI
AI is meant to do a better job than humans; otherwise, what would be the point of having it? Well, a different approach, taken by researchers at Cornell University, is to try to get AI to do just as good, or as bad, a job as humans. The first iteration of their AI, known as Maia, plays chess, and tries to play it just as a human would, mistakes and all. This is an interesting approach because, beyond potentially mimicking real chess players, it could eventually be used in more serious settings, such as healthcare or defence, to better understand why and how mistakes are made. Maia was actually built from an open-source version of AlphaZero, the successor to DeepMind's AlphaGo, which famously beat the world's best players at Go. I've written before about how AlphaZero learnt independently of humans, which led to novel and 'creative' moves being made, and how this could teach us new ways of doing things. With Maia, the researchers modified the code to create a program that favours accurate predictions of human moves over the 'best' moves, training it on games from Lichess, a popular online chess server. Although the Wired article doesn't mention this, I think that this approach of showing human fallibility may actually help people trust machines more.
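Maia's real training pipeline is a deep network trained on millions of Lichess games, but the core objective is easy to sketch: instead of rewarding the engine's best move, you minimise cross-entropy against the move the human actually played. Here is a toy, pure-Python illustration (all the numbers are invented for the example):

```python
import math

# Toy illustration (not Maia's actual code): a policy assigns logits to
# candidate moves; "engine-style" training would target the best move,
# while Maia-style training targets the move a human actually played.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target_idx):
    # Negative log-probability of the target move.
    return -math.log(softmax(logits)[target_idx])

# Three candidate moves; suppose index 0 is the engine's best move,
# but the human played index 2 (a plausible mistake).
logits = [2.0, 0.5, 1.0]
engine_loss = cross_entropy(logits, target_idx=0)  # reward the 'best' move
human_loss = cross_entropy(logits, target_idx=2)   # reward the human move

# Gradient descent on human_loss would raise the logit of move 2,
# teaching the model to predict human play, mistakes and all.
```

Everything else about the two training regimes can stay identical; only the target of the loss changes, which is what makes the Maia idea so elegant.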
Another way that machines could earn our trust is by exhibiting better social and cooperative skills. Researchers at MIT and Brigham Young University have developed a system called S# (pronounced 'S sharp') that has learnt to play cooperation games such as the Prisoner's Dilemma. As reported in KurzweilAI, "machines designed to selfishly maximise their pay-offs can, and should, make an autonomous choice to cooperate with humans across a wide range of situations. Two humans, if they were honest with each other and loyal, would have done as well as two machines. About half of the humans lied at some point, so the AI is learning that moral characteristics are better, since it’s programmed to not lie, and it also learns to maintain cooperation once it emerges". As with Maia, S# could help us understand our fallibilities and provide useful pointers on how to cooperate better for the common good.
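The actual S# algorithm is far more sophisticated, but the game it plays is simple to simulate. The sketch below uses the classic Prisoner's Dilemma payoffs and a tit-for-tat bot, my own stand-in for a cooperative strategy, to show why sustained cooperation out-earns mutual defection:

```python
# Toy repeated Prisoner's Dilemma (not the S# algorithm itself): payoffs
# follow the classic ordering, and a simple tit-for-tat bot illustrates
# how maintaining cooperation beats mutual defection over many rounds.

PAYOFFS = {  # (my_move, their_move) -> my payoff; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Return total payoffs for two strategies over repeated rounds."""
    score_a = score_b = 0
    last_a = last_b = 'C'  # both start by cooperating
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda their_last: their_last   # copy the opponent's last move
always_defect = lambda their_last: 'D'        # the selfish baseline

coop_score, _ = play(tit_for_tat, tit_for_tat)        # sustained cooperation
defect_score, _ = play(always_defect, always_defect)  # mutual defection
```

Two cooperators earn 3 points a round (300 in total) while two defectors limp along at 1 a round (100 in total), which is exactly the gap that makes learning to maintain cooperation worthwhile.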
Both Maia and S# have been trained to solve some tricky challenges, but they are still, at least for now, working within defined realms (chess and the Prisoner's Dilemma). A key part of developing AI beyond these is to give it challenging enough problems to solve. Rui Wang, an AI researcher at Uber, has developed an AI that helps train other AIs by constantly adjusting the challenges to push the training just that little bit harder. At the moment it only trains a rudimentary stick figure to run across an undulating landscape (the image at the head of this piece), but the Paired Open-Ended Trailblazer (POET), as it is called, generates the obstacle courses, assesses the bots' abilities and assigns their next challenge, all without human involvement. As this (paywalled) article in MIT Technology Review says, "POET hints at a revolutionary new way to create supersmart machines: by getting AI to make itself". Using AI to build AI could be an important step towards Artificial General Intelligence.
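Uber's actual POET coevolves 2D obstacle courses with neural-network walkers; the sketch below is only a caricature of the loop described above: propose a mutated environment, keep it only if it is neither trivial nor impossible for the current agent, then let the agent improve on it. The one-dimensional 'difficulty' model and all the numbers are my invention:

```python
import random

# Minimal sketch of POET's idea (not Uber's code): the system mutates
# environments, filters them by a "challenging but solvable" criterion,
# and lets the agent improve on the survivors. The result is an
# automated curriculum with no human in the loop.

random.seed(0)

def score(skill, difficulty):
    # Fraction of the obstacle course the agent clears (toy model).
    return min(1.0, skill / difficulty)

def poet_loop(iterations=50):
    skill, difficulty = 1.0, 1.0
    for _ in range(iterations):
        # Generator: propose a mutated, usually harder, environment.
        candidate = difficulty * random.uniform(1.0, 1.2)
        # Minimal criterion: keep it only if challenging but solvable.
        if 0.5 <= score(skill, candidate) < 0.95:
            difficulty = candidate
        # Agent training: close part of the gap to the current challenge.
        skill += 0.3 * max(0.0, difficulty - skill)
    return skill, difficulty

final_skill, final_difficulty = poet_loop()
# Both skill and difficulty ratchet upwards together, far beyond their
# starting point, without any hand-designed levels.
```

The interesting design choice is the filter: throwing away environments that are too easy or too hard is what keeps the agent permanently at the edge of its ability, which is the essence of the "push the training just that little bit harder" behaviour.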
The researchers at DeepMind, however, think that all we need to reach AGI is Reinforcement Learning (RL). This is the subset of AI that learns by trial and error to maximise rewards, famously demonstrated by the models they built that surpassed the best human players, first at Atari games such as Space Invaders and then at Go (cf. AlphaGo and AlphaZero mentioned earlier). Whereas most people think that AGI will come about through a combination of very specialised models, DeepMind is suggesting that, because RL is the basic way that humans learn, it can cope with challenges such as knowledge, learning, perception, social intelligence, language, generalisation and imitation. It is certainly an attractive hypothesis, but as anyone who has read 'Superintelligence' by Nick Bostrom will tell you, you have to be very careful about what goals you set a machine: even asking for global happiness (a difficult enough thing to define in the first place) can end in disaster as the machine becomes overly obsessed with its task. Perhaps, then, a dash of human fallibility built into these models might actually be a good thing?
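None of this is DeepMind's code, but the trial-and-error principle behind RL fits in a few lines. A tabular Q-learning agent on a toy five-state corridor (my own invented example) receives nothing but a reward signal, yet discovers the policy that reaches the goal:

```python
import random

# Tabular Q-learning on a toy five-state corridor: nothing like
# DeepMind's deep RL systems, but the same principle. The agent only
# ever sees a reward, yet learns by trial and error to walk right.

random.seed(1)
N_STATES, GOAL = 5, 4              # states 0..4, reward only at state 4
ACTIONS = [-1, +1]                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic corridor dynamics with a reward at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    return next_state, (1.0 if next_state == GOAL else 0.0)

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(200):               # 200 episodes of trial and error
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy points right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

The "reward is enough" hypothesis is essentially the claim that this loop, scaled up enormously, covers perception, language and the rest; the Bostrom worry is that the reward we write down (here, "reach state 4") may not be the goal we actually meant.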
After the rather lengthy piece above, I thought it might be best just to show you some pictures. But these are rather special images, made from data. For example, if you look at the orientation of airport runways from very high up, you can start to determine the wind patterns across the planet (because planes almost always take off and land into the wind).
If you map individual neurons in the brain, as Google have done, you can start to understand how the brain works. This data set, by the way, models just 1 cubic millimetre of brain yet requires 1.4 petabytes of storage (1 PB = 1 million GB). By my calculations, a whole human brain would take up 1.75 million petabytes (or 1.75 zettabytes) if mapped at the same resolution. To put that into perspective, all the printed material in the world amounts to around 200 petabytes.
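That scaling claim is easy to sanity-check. The 1.4 PB per cubic millimetre comes from the Google data set; the whole-brain volume of roughly 1.25 million cubic millimetres is my assumption (a typical textbook figure; real brains vary by a few hundred cubic centimetres either way):

```python
# Back-of-envelope check of the storage estimate. 1.4 PB/mm^3 is from
# the article; the assumed brain volume of ~1.25 million mm^3 (about
# 1,250 cm^3) is an approximation, since adult brain sizes vary.

PB_PER_MM3 = 1.4
BRAIN_VOLUME_MM3 = 1.25e6                 # assumed adult brain volume

total_pb = PB_PER_MM3 * BRAIN_VOLUME_MM3  # petabytes for a whole brain
total_zb = total_pb / 1e6                 # 1 ZB = 1 million PB

# 1.75 million PB, i.e. 1.75 ZB: matching the estimate above, and
# nearly 9,000 times all the world's printed material (~200 PB).
print(f"{total_pb:,.0f} PB = {total_zb:.2f} ZB")
```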
Using a 570-megapixel camera on the Víctor M. Blanco telescope in Chile, the Dark Energy Survey (DES) photographed around a quarter of the southern sky between 2013 and 2019. From the image data, the team was able to work out that dark energy, the force that appears to be accelerating the Universe's expansion, has remained roughly constant throughout cosmic history. The good news is that they are only halfway through their work, so there are more findings to come.
And because the Euros are on, someone has mapped all of the 1 million+ passes made in 890 major league football matches. Information really is beautiful.
St Vincent - Daddy's Home
St Vincent's latest LP is a bit of a corker. The title track refers to her father’s release from prison after a 10-year stretch for stock manipulation, and the rest of the tracks are, apparently, inspired by his record collection. There are influences from Pink Floyd, Randy Newman and Stevie Wonder, but all with a contemporary twist. For me, St Vincent is a female wanna-be Prince, but in all the best ways. Have a look below at her performance on Saturday Night Live to see what a showman she is. And of course, you can hear the whole album on Spotify or Apple Music.
Back in March 2020 I asked whether the growing pandemic would be an opportunity to reset the planet's climate emergency. The short answer is...no.
Facial recognition technology can be very intrusive, but at least it requires you to face the camera. Now, though, AI can identify you from the way you walk.
If you want to try out an open-source version of GPT-3, the language model that can write articles and stories, have a look at EleutherAI.
Andrew Burgess is the founder of Greenhouse Intelligence, a strategic AI advisory firm.