Good morning world! Today in Google Trends we discover that MIT scientists trained an AI to become a "psychopath" by exposing it only to gruesome Reddit pictures of death and violence. Its name: Norman, after Norman Bates, the fictional character famously portrayed by Anthony Perkins in Alfred Hitchcock's Psycho. So far so good.
They say you learn something new every day. My horoscope also said I was going to get a surprise today. It was about right! MIT scientists Pinar Yanardag, Manuel Cebrian and Iyad Rahwan tested how the data fed into an algorithm shapes its outlook. The experiment consisted of exposing the algorithm to gory, dark images found on Reddit and seeing how that affected its output.
Don’t even ask which subreddit: the team chose not to name it because of its disturbing graphic content.
The AI program was specifically being tested on whether it could look at and understand pictures. Yanardag, Cebrian and Rahwan built it to generate a textual description of each image, so it could tell us what it saw. After training it on macabre images of death and other delights, the scientists made Norman take the Rorschach test. You know, the series of strange inkblot pictures some psychologists use to assess a patient’s emotional state and mental health; in other words, to detect underlying thought disorders.
What do you see? A butterfly on the Joker’s face. And you? Not the point, focus!
The Rorschach test was then given to another AI bot, one trained on an entirely different kind of image: dogs, cats, kids, and so on. Sounds more joyful, doesn’t it? The two sets of responses were compared, and the gap is striking.
When the standard AI, the one shown everyday pictures, was asked what it saw in the Rorschach inkblot above, it wrote, “A group of birds sitting on top of a tree branch.” When Norman was asked to describe the same inkblot, it answered, “A man is electrocuted and catches to death.”
And it goes on for nine more inkblot tests. What do you see Standard AI? “A close up of a vase with flowers.” Norman? “A man is shot dead.” Standard? “A black and white photo of a baseball glove.” Norman? “Man is murdered by machine gun in broad daylight.” Inkblot #8 is also a ‘good’ one:
If you wish to take the test, the MIT team will be happy to record your responses.
Conclusion? On the Norman AI website dedicated to their research, the scientists wrote, “The data that is used to teach a machine learning algorithm can significantly influence its behavior.” Don’t blame the “biased and unfair” algorithm; blame the data that was fed to it. And the human who fed it.
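To make that point concrete, here is a minimal, hypothetical sketch, not the MIT team’s actual image-captioning model: the very same trivial text-generation algorithm (a bigram chain), trained on two invented caption sets, produces strikingly different “descriptions.” Every dataset and function name below is an assumption for illustration only.

```python
import random

def train_bigrams(captions):
    """Build a bigram table: word -> list of words that followed it in training."""
    table = {}
    for caption in captions:
        words = caption.split()
        for a, b in zip(words, words[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, max_words=8, seed=0):
    """Walk the bigram table from a start word, with a fixed seed for repeatability."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words - 1):
        options = table.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

# Two invented training sets for the *same* algorithm (hypothetical data).
standard_data = [
    "a group of birds sitting on a branch",
    "a close up of a vase with flowers",
]
norman_data = [
    "a man is shot dead in the street",
    "a man is electrocuted in the street",
]

standard = train_bigrams(standard_data)
norman = train_bigrams(norman_data)

print(generate(standard, "a"))  # produces something pastoral
print(generate(norman, "a"))   # produces something grim
```

Identical code, identical starting word; the only difference is the data, and the “dark” model can only ever describe dark scenes because those are the only word transitions it has seen.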
This experiment is a further step toward understanding AI, albeit a scary one. It shows that these systems simply reflect what they have been trained on. Artificial intelligence can genuinely be a valuable assistant in some cases, as when Google uses AI to detect cancer cells. But it can also go wrong. Sure, all Norman can do is write. But who knows what other kinds of machines could actually do.
Can you see the cover of the June 2028 New York Times that reads “Elon Musk did warn us” as AI went wrong? I know, being very pessimistic here. Everything is probably going to be just fine.