The human brain is programmed to infer intentions behind words. Every time you start a conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings, and beliefs.
The process of jumping from words to the mental model is seamless and is activated every time you receive a full sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions.
In the case of AI systems, however, this process misfires – building a mental model even when there may be no mind behind the words at all.
A little probing reveals the seriousness of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continues: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also soft and creamy, which helps offset the texture of the feather.”
The text in this case is just as fluent as our pineapple example, but this time the model says something decidedly less sensible. You begin to suspect that GPT-3 has never actually tried peanut butter and feathers.
Attributing intelligence to machines, denying it to humans
A sad irony is that the same cognitive bias that causes people to attribute humanity to GPT-3 can cause them to treat real people in inhumane ways. Socio-cultural linguistics – the study of language in its social and cultural context – shows that assuming too close a link between fluency in expression and fluency in thinking can lead to bias towards people who speak differently.
For example, people with a foreign accent are often perceived as less intelligent and are less likely to get the jobs for which they are qualified. Similar prejudices exist against speakers of dialects that are not considered prestigious, such as Southern English in the US, against deaf people who use sign languages, and against people with speech impediments such as stuttering.
These prejudices are deeply harmful, often feed racist and sexist assumptions, and have repeatedly been shown to be unfounded.
Fluent language alone does not imply humanity
Will AI ever become aware? This question requires in-depth consideration, and philosophers have indeed pondered it for decades. What researchers have found, however, is that you can’t simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.
Kyle Mahowald is an assistant professor of linguistics at the University of Texas at Austin. Anna A. Ivanova is a PhD candidate in brain and cognitive sciences at the Massachusetts Institute of Technology.
This post, “Google’s AI spotlights a human cognitive glitch: mistaking fluent speech for fluent thought,” was originally published at https://www.fastcompany.com/90764346/googles-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought