Issue 122 - 2nd November 2021
This edition of That Space Cadet Glows reflects on the current status of Artificial Intelligence and suggests that there is still a lot of growing up to do. Also, some amazing developments in robotic hands.
AI's Awkward Teenager Phase
Yann LeCun, AI guru and co-recipient of the 2018 Turing Award, reckons that the current state of AI is "characterised by trial and error, confusion, overconfidence and a lack of overall understanding". So, pretty much slap-bang in the middle of that awkward teenager phase then. It is certainly true that we have no real idea how neural networks work internally, which means that most AI scientists are working by trial and error, and with a certain degree of confusion. The real problems come (as every parent of a teenager knows) when there is too much confidence. More confidence than understanding is a dangerous balance, and that is probably where we are with AI right now. Prof Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, also believes that AI scientists are 'spooked' by their own success in the field, and compares the advance of AI to the development of the atom bomb: "the physicists knew that atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could do the conversion between different types of atoms. And then it happened and they weren't ready for it." So how do we rein our unruly teenager in without beating all the ambition out of them (for there are huge benefits to be gained from responsible AI)? Professor Russell's solution, as he will lay out in this year's BBC Reith Lectures, is to make sure that AI can work with, and alongside, humans. That means developing machines that know, as humans do, that the true objective is uncertain, and must therefore check in with us about their decisions. Practically, that would include a code of conduct and training for researchers, plus legislation and treaties to ensure the safety of AI systems in use. But it is not just the responsibility of researchers and governments; the public in general must help shape how AI will grow up to be a sensible adult that contributes to society. For one thing is certain: everyone grows up eventually...
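To make that 'uncertain objective' idea a little more concrete, here is a toy Python sketch of my own (not Russell's actual formulation; all the names and scores are invented for illustration): the agent holds several candidate objectives rather than one fixed goal, and whenever those candidates disagree about the best action, it checks in with a human instead of guessing.

```python
# Toy illustration of an agent with an uncertain objective (my own
# sketch, not Russell's formulation; all names and scores invented).
ACTIONS = ["dim the lights", "cut power to the ward", "do nothing"]

# Candidate objectives the agent thinks it *might* have been given,
# each scoring the actions differently.
OBJECTIVES = {
    "save energy":    {"dim the lights": 0.8, "cut power to the ward": 1.0,    "do nothing": 0.0},
    "protect people": {"dim the lights": 0.5, "cut power to the ward": -100.0, "do nothing": 0.2},
}
belief = {"save energy": 0.5, "protect people": 0.5}  # the agent's uncertainty

def choose() -> str:
    # Each plausible objective nominates its own favourite action.
    favourites = {obj: max(ACTIONS, key=lambda a: OBJECTIVES[obj][a])
                  for obj, p in belief.items() if p > 0}
    if len(set(favourites.values())) > 1:
        # The objectives disagree, so defer to the human rather than guess.
        return f"check with the human: options are {sorted(set(favourites.values()))}"
    return next(iter(favourites.values()))

print(choose())  # the two objectives disagree here, so the agent asks
```

The point of the toy is the decision rule, not the numbers: an agent that is completely certain of its objective never has a reason to ask us anything, and that is exactly the problem.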
Fill your face
...but in the meantime...
A key mantra of Responsible AI is 'just because you can do something doesn't mean you have to'. So when some schools in Ayrshire, Scotland, start using Facial Recognition Technology (FRT) on school children so that they can pay for their lunch, you've got to think there must be less intrusive, less dangerous and more private solutions to try first. The UK's Information Commissioner's Office (ICO) put it more authoritatively: "Data protection law provides additional protections for children, and organisations need to carefully consider the necessity and proportionality of collecting biometric data before they do so. Organisations should consider using a different approach if the same goal can be achieved in a less intrusive manner". The stated reasons for introducing FRT into schools are that it is more Covid-safe and that it reduces transaction times, both of which might be true, but are they good enough reasons to put children's safety at risk? And what sort of example does it set for the kids, giving away all of their personal and biometric data for free? The good news is that, just a few days ago, North Ayrshire Council said that it had 'temporarily paused' the rollout after the ICO's enquiries - let's hope it is permanently halted. But let's just reflect on how it got this far before anyone put up their hand and said 'hang on a minute...'
'Please put down your cigarette. You have 20 seconds to comply'
But in the meantime...part 2...
In Singapore, the government have been trialling robots that patrol housing estates and shopping centres in order to detect 'undesirable social behaviour'. It's not quite Robocop, but it's certainly moving in that direction. The robots have seven cameras and shout out warnings to residents and shoppers whom they detect (using AI) breaking the myriad rules, including smoking in prohibited areas, improperly parking bicycles, and breaching coronavirus social-distancing rules. The robots do not have FRT, but it is probably only a matter of time before they do - Singapore already has thousands of lampposts fitted with FRT cameras. The impact on everyone's privacy is profound, and will only get worse before it gets better.
If you have to ask, then it's probably a bad idea
But in the meantime...part 3...
Another example of 'just don't do it' is Delphi, an AI prototype that is meant to "model people's moral judgments on a variety of everyday situations". Ugh. What you are meant to do is go to this page, ask Delphi a moral question, and it will provide an answer, a bit like a virtuous Magic 8 Ball. Admittedly it is just a prototype, but it's just not very good. The system was trained on scenarios from Reddit, with answers crowd-sourced from Mechanical Turk workers (so predominantly white, US males), which is just asking for trouble. It will certainly get better as more people use it, but what, actually, is the point? Should we be trying to use AI to answer our moral dilemmas? I know, let's ask Delphi... And the answer is 'It's bad'.
Foam, sweet foam
Bit of a grumble this edition, so let's focus on some technology that looks really beneficial. Researchers at the National University of Singapore have developed a smart foam material that allows robots to sense nearby objects, and to repair themselves when damaged. The self-repairing ability comes from creating a spongy material that fuses easily back into one piece when it is cut. The sense of touch, as this Reuters report explains, comes from "infusing the material with microscopic metal particles and adding tiny electrodes underneath the surface of the foam. When pressure is applied, the metal particles draw closer within the polymer matrix, changing their electrical properties. These changes can be detected by the electrodes connected to a computer, which then tells the robot what to do". It can be used in robots to help them handle materials better, and also to give prosthetics users more intuitive control of their robotic arms. I love the fact that this material solves some of the classic problems for robots; you can easily imagine it being used in every robot hand that is built.
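As a rough idea of how that sensing might be used in software (my own hypothetical sketch; the NUS work isn't described at this level of detail, and every value here is invented): compression pushes the metal particles closer together, which changes the electrical reading at the electrodes, and a controller can map that reading to an estimated pressure and a grip decision.

```python
# Hypothetical sketch of a controller reading a pressure-sensitive foam
# sensor: compression changes the foam's electrical properties, which we
# map to a rough pressure estimate and then a grip decision.
# The linear mapping, baseline and thresholds are all invented.

BASELINE_OHMS = 1000.0   # assumed resistance of the foam at rest

def pressure_from_resistance(ohms: float) -> float:
    """Convert a resistance reading to a rough 0..1 pressure estimate.

    We assume compression lowers resistance, so a bigger drop from the
    baseline means more pressure. A real sensor would need calibration.
    """
    drop = max(0.0, BASELINE_OHMS - ohms)
    return min(1.0, drop / BASELINE_OHMS)

def grip_command(ohms: float) -> str:
    """Decide what the hand should do for one sensor reading."""
    pressure = pressure_from_resistance(ohms)
    if pressure < 0.05:
        return "close"      # barely touching: keep closing
    if pressure < 0.6:
        return "hold"       # firm but safe contact
    return "ease off"       # squeezing too hard, back off

for reading in [1000.0, 920.0, 350.0]:   # simulated electrode readings
    print(reading, "->", grip_command(reading))
```

A real system would need per-sensor calibration and filtering, but the shape of the loop (read electrode, estimate pressure, adjust grip) would be much the same, which is why this material is so promising for robot hands.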
She Drew the Gun - Behave Myself
This is powerful music. Louisa Roach, who goes by She Drew The Gun, delivers clever, indie-based psych pop in an uncompromising style. Behave Myself is the follow-up LP to the brilliant Revolution of Mind, with this latest release upping the synths as well as the challenging lyrics. There are strong messages around mental health as well as politics - in a recent interview she said "it's how you can't really be free until everyone else who's being held down by the systems that we live in is not being held back. You can't turn a blind eye to it while other people's struggles are still going on. Can you be truly free?" You can watch the video for the first single from the LP, Cut Me Down, below, and listen to the whole album on Spotify or Apple Music.
Afterword
Many people think of Japan as technologically advanced, but they are only now trying to get rid of floppy disks and put all their data online.
Criminals used AI to clone a company director's voice and steal $35 million from a bank.
I'm going to watch Dune on Friday, which will be made even more enjoyable knowing the screenplay was written in MS-DOS.
AJBurgess Ltd
ajburgess.com
thegreenhouse.ai
Andrew Burgess is the founder of Greenhouse Intelligence, a strategic AI advisory firm.