Would you be afraid of a 3-year-old?
AI news comes in two varieties: hype and doom. Recently, the hype cycle has dominated: AlphaGo defeated the best human player, machine translation became a genuinely useful tool, and ChatGPT now offers a witty response to almost any question. But doom is never far away: warnings of job losses, killer robots, and sentient AI. The fear is understandable, but it reflects our perceptions of AI more than its actual capabilities. The main cause is that we humanize these models, assuming they have drives, motives, and emotions that would lead them to act maliciously, when in reality they simply carry out a task.
To be clear, current AI technology has numerous flaws, so we must use and develop it with caution. These problems stem from bias and end up discriminating against the very people the systems are meant to serve. The automated grading system in the United Kingdom unfairly penalized some students. A machine translation system rendered "Good morning" as "Attack them," landing an innocent person in legal trouble. A speed camera ticketed an innocent driver after mistaking a knitted jumper for a license plate. Worst of all, an Indian man starved after an automated decision system denied him his food rations.
All of these tools, though, failed because of design flaws, not malice. The consequences are still dire, but the distinction points to where the real problem lies.
Despite these concerning reports, I do not anticipate any lethal AI threats, and I am not alone. According to Andrew Ng, a pioneer of neural networks, "Worrying about evil AI superintelligence today is like worrying about overpopulation on the planet Mars. We haven't even landed on the planet yet!" What evil actions would a sentient machine translation system even take? Produce poor translations just to irritate you?
The concerns are understandable, though, given that we now have machines with seemingly human abilities. They play games, answer questions, translate sentences, and identify people in photographs. If they can do all of that, they must be like us, right? So they most likely also have hopes, dreams, and aspirations. But complex artificial intelligence systems must be tailored to each individual task, such as playing Go, analyzing sentences, or recoloring photos, and a system built for one of these cannot perform the other two. Despite the efforts of many intelligent people, AI tools frequently perform like three-year-olds. An AI cannot yet decide to take on tasks beyond its capabilities.
Even AI researchers are affected by this humanization bias. Google engineer Blake Lemoine claimed, after extensive conversations with LaMDA, that the model possessed self-awareness and consciousness. Yet when a journalist posed slightly different questions to LaMDA than Lemoine had, the model denied being conscious.
Maarten Sap, a University of Washington researcher, investigated language models' Theory of Mind, the ability to imagine and understand the thoughts and feelings of others. A person's Theory of Mind can be assessed with a variety of question-based psychological tests, and the same tests can be administered to language models. But this logic is flawed: people answer such questions out of their complex inner lives, whereas language models merely generate a list of likely next words. The behaviors may look similar, but the motivations and paths that lead to them are not.
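To make that difference concrete, here is a minimal sketch of what "answering" looks like from the model's side. It assumes the Hugging Face transformers library and the publicly available GPT-2 model (my choice for illustration, not anything used in the studies above): the model scores every token in its vocabulary, and we simply read off the most likely next words.

# Minimal sketch, assuming the Hugging Face `transformers` library and the
# public GPT-2 checkpoint: asked a question, the model does not "answer",
# it just ranks every token in its vocabulary as a possible next word.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Question: Do you have feelings? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # a score for every vocabulary token

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)      # the five most likely continuations

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob.item():.3f}")

The output is nothing more than a ranked word list. Whatever "answer" eventually appears is assembled from such lists, one token at a time, with no inner life behind it.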
As a result, asking whether models have these psychological abilities is beside the point. They do not have a Theory of Mind, feelings, consciousness, or malice. Why and how would they develop such capabilities without being explicitly built for them? Each AI task must be meticulously defined and trained, and that process never includes granting sentience, emotions, or aspirations. AI models may reflect the naiveté and lack of checks and balances of their designers, but they do not act out of evil.
So should you fear AI turning evil? No. Should you keep an eye on how it is designed? Definitely. Should you try using it yourself, even daily? I invite you to. What we understand cannot scare us.