Issue 115 - 28th January 2021
Welcome to the first issue of 2021, in which I write about OpenAI's amazing new image generation algorithm, the AIs that try to guess our mental state, how to cope with the stresses of working from home, and how we are all slowly adapting to the way of the machines.
Say What You See
I've already banged on quite a bit about GPT-3, the AI model built by OpenAI. They have now followed up with another beast of a model, this one called DALL-E (the name is a mash-up of the artist Salvador Dalí and the Pixar character WALL-E). DALL-E's party trick is to create entirely new images from text. That might sound quite simple, at least for a human to do, but for a computer it requires clever algorithms, lots of training and plenty of computing power. For example, if we ask DALL-E to generate 'an armchair in the shape of an avocado' it comes up with a number of plausible options, one of which you can see here (you can see lots more, and choose other objects, on the OpenAI blog post). Things can get a little weird, or impractical, though: when DALL-E was asked to generate an image of 'a pentagonal green clock', some of the results were clearly acceptable, but others would be pretty useless if you actually wanted to tell the time. Still, this linking of text and images is a significant advance because it brings together two distinct fields of AI and, importantly, it aligns very much with the way humans think. In some ways, DALL-E is a step closer to Artificial General Intelligence because it is able to generate new artefacts, even if they don't make logical sense (such as 'a snail made of harp'). But we must remember that DALL-E doesn't actually understand anything about what it is creating: it won't grasp in any way at all that a 'harp snail' (or is it a 'snail harp'?) could not be a real thing, just as it won't grasp that a 'pentagonal green clock' could be a useful, and somewhat decorative, way to tell the time. As we found with GPT-3, the closer we get to mimicking human intelligence, the further away we realise we are from actually recreating it.
How Do I Feel?
As mental health has, unfortunately, become such a focus for many of us, researchers and entrepreneurs are trying to help us all out by applying AI to the problem. Most of their efforts, IMHO, should be filed under the category of 'Just because you can, doesn't mean you should'. Facebook researchers tried to predict mental illness based on messages and photos posted on their platform. Volunteers gave full access to their feeds (would anyone do that IRL?) so the researchers could predict whether they had a mood disorder (like bipolar disorder or depression), a schizophrenia spectrum disorder, or no mental health issues. Apparently, "swear words were indicative of mental illness in general, and perception words (like see, feel, hear) and words related to negative emotions were indicative of schizophrenia. And in photos, more bluish colours were associated with mood disorders." The motivation is a good one, but should we really be judging something so serious by what we post on a site where we are generally trying to look better than we actually are? Labelling someone as having a mood disorder because they post lots of sky pictures doesn't sit well with me. An even more serious example comes from the Department of Veterans Affairs in the US, which is trying to predict a veteran's likelihood of suicide. Again, a worthy cause, and one that could save lives. But at what cost? What about the vet who wasn't suicidal and gets a call telling them they are? What about the privacy question, and the fact that the model is a black box? These are really tough ethical issues that are only just starting to play out in society as AI becomes more ubiquitous, and I don't think anyone has the answers yet. But I will leave you with one egregious example that should definitely not be deployed anywhere: a wristband that tells your boss if you are unhappy.
The Virtual Commute
For most of the last 20 years, I've had a home office, so all of the recent excitement about this cultural and social shift of working from home has seemed rather unnecessary. One area of the WFH lifestyle that has got a lot of attention is the commute, or the lack of it. For most people (especially those in big cities), this is a godsend, but it does have its downsides. I've always been aware of the need for a 'stress corridor' - a distinct gap between work and home life so that the work stuff stays at work - and the commute is an ideal way of achieving that. But when working from home, the commute can simply be walking down the stairs, and that doesn't give the brain enough time to adjust. Often, after a day's work, I'll come downstairs and just sit on the sofa staring into space or idling through my phone, telling my wife that I am (metaphorically) "on the train". Then she knows to ignore me for a while until I'm ready to 'arrive home'. (Sometimes, because she works at home as well, we can share a virtual carriage on the train.) Other people seem to have had similar ideas - this article from the BBC describes the German concept of 'Feierabend'. It is explained as "the moment you stop working for the rest of the day...[then] it's the part of the day between work and going to bed." The emphasis is on the period of rest, drawing distinct boundaries between that and work (by going for a cycle ride, for example), something that is so much more important now. Of course, some people can get the wrong end of the stick: Microsoft, in their wisdom (or just in an effort to sell more Teams licences), wants to let users schedule 'virtual commutes' during which they will be asked to "set goals for the day and reflect on wins, losses and to-dos". I would boldly suggest they have completely missed the point. Perhaps it is they who need the stress corridor most of all?
Programming Language
In my work advising on Robotic Process Automation (RPA), I often reflect on the technology's future and imagine a world where there are more RPA 'bots using IT software than there are humans. At that point, the need for a User Interface (UI) becomes obsolete, and the role of the robot is to unseeingly pass data back and forth between systems (effectively acting as intelligent APIs). People who currently design the pretty, user-friendly UIs would no longer have a role. Similar things are happening in the world of report writing. This article from VentureBeat suggests that companies are changing the way they write reports so that they are more easily understandable by machines. This is because a "substantial amount of buying and selling of shares [is] triggered by recommendations made by robots and algorithms which process information with machine learning tools and natural language processing kits." If humans are being left out of the loop, why bother trying to flower up your earnings report? I've always thought the same thing could happen to contracts - these are often read and analysed by machines now, so why not simplify all of that impenetrable legal-speak and just break it down to the bare facts? But then, of course, we wouldn't need as many lawyers, and what sort of world would that be? ;-)
Bicep - Isles
This electronic dance duo have created the ideal LP for 2021 - "rave music for your living room" as it's described by The Guardian. Sublime listening.
Afterword
One of the strangest peripheral impacts of Covid symptoms (specifically, the loss of smell) is the number of complaints received by a scented candle manufacturer. Follow the story in this Twitter thread.
In a sign of growing dissatisfaction with the practices of Big Tech, 400 employees of Google have decided to form a union. This comes on top of a huge row over Alphabet effectively sacking Timnit Gebru, one of their senior AI ethics researchers. Cracks are appearing in the power base.
If you didn't see this Boston Dynamics video of their robots dancing to 'Do You Love Me?' just before Christmas it is definitely worth watching.
Under His Eye
Each month I hope that there is less bad news to write about Facial Recognition Technology (FRT), but each month there are more and more examples of this technology being abused.
To put it all in context, this article from Harvard University highlights how racial bias is still prevalent in many training datasets, and how widespread cameras have become in some US cities. The result is that innocent people are being arrested: in this example, Nijeer Parks was accused of shoplifting and of trying to hit a police officer with a car at a Hampton Inn in New Jersey. The police had identified him using FRT, even though he was 30 miles away at the time. Perhaps even more worryingly, a large corporation (in this case, Huawei) has been found to be planning to build race detection into its software.
There are efforts to mitigate these abuses, though. Competitions have been held to encourage the development of unbiased models, and Amnesty International is crowd-sourcing a map of FRT cameras in New York.
And finally...after reporting last month on an FRT system for cows, I can now reveal that there are also facial recognition systems for pigs.
AJBurgess Ltd
ajburgess.com
thegreenhouse.ai