Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’

We are presently living in an age of “artificial intelligence” — but not in the way the companies selling “AI” would have you believe. According to Silicon Valley, machines are rapidly surpassing human performance on a variety of tasks, from mundane but well-defined and useful ones like automatic transcription to much vaguer skills like “reading comprehension” and “visual understanding.” According to some, these skills even represent rapid progress toward “Artificial General Intelligence,” or systems that are capable of learning new skills on their own.

Given these grand and ultimately false claims, we need media coverage that holds tech companies to account. Far too often, what we get instead is breathless “gee whiz” reporting, even in venerable publications like The New York Times.

If the media helped us cut through the hype, what would we see? We’d see that what gets called “AI” is in fact a set of pattern recognition systems that process unfathomable amounts of data using enormous amounts of computing resources. These systems then probabilistically reproduce the patterns they observe, to varying degrees of reliability and usefulness, but always guided by the training data. For automatic transcription of several varieties of English, the machine can map waveforms to spelling but will get tripped up by newly prominent names of products, people or places. In translating from Turkish to English, machines will map the gender-neutral Turkish pronoun “o” to “he” if the predicate is “is a doctor” and to “she” if it is “is a nurse,” because those are the patterns more prominent in the training data.
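To make that pronoun example concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from any real translation system, and the co-occurrence counts are fabricated; it simply shows how a model that chooses the English pronoun purely from training-data frequencies will reproduce whatever occupational patterns that data happens to contain.

```python
# Toy illustration (not any real MT system): choose the English pronoun for
# Turkish "o" purely from co-occurrence counts in the training data.
from collections import Counter

# Hypothetical counts of (pronoun, predicate) pairs seen in a parallel corpus.
training_counts = Counter({
    ("he", "is a doctor"): 900,
    ("she", "is a doctor"): 300,
    ("he", "is a nurse"): 150,
    ("she", "is a nurse"): 850,
})

def translate_pronoun(predicate: str) -> str:
    """Pick the pronoun most frequently paired with this predicate in the training data."""
    he = training_counts[("he", predicate)]
    she = training_counts[("she", predicate)]
    return "he" if he >= she else "she"

print(translate_pronoun("is a doctor"))  # "he"  -- a pattern in the data, not a fact about the sentence
print(translate_pronoun("is a nurse"))   # "she" -- likewise
```

Nothing in this “model” knows what a doctor or a nurse is; it only reflects the frequencies it was given.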

In both automatic transcription and machine translation, the pattern matching is at least close to what we want, if we are careful to understand and account for the failure modes as we use the technology. Bigger problems arise when people devise systems that purport to do such things as infer mental health diagnoses from voices or “criminality” from pictures of people’s faces: These things aren’t possible.

However, it is always possible to create a computer system that gives the expected type of output (a mental health diagnosis, a criminality score) for an input (a voice recording, a photo). The system won’t always be wrong. Sometimes we might have independent information that allows us to decide that it’s right; other times it will give output that is plausible, if unverifiable. But even when the answers seem right for most of the test cases, that doesn’t mean the system is actually doing the impossible. It can provide answers we deem “correct” by chance, based on spurious correlations in the data set, or because we are too generous in our interpretation of its outputs.
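As an illustration of how apparent accuracy can come from spurious correlations, consider this toy sketch with entirely fabricated data (it does not model any deployed system). The labels have nothing to do with the people involved, yet a trivial rule keyed to an irrelevant artifact of data collection still scores well on the test set.

```python
# Toy illustration of a "plausible but meaningless" classifier: the label has
# nothing to do with the person, but a spurious feature (which camera took the
# photo) happens to correlate with it in this fabricated data set.
import random

random.seed(0)

# Fabricated records of (camera_id, label): camera A was mostly used in a
# context where label 1 was assigned more often, and vice versa for camera B.
data = ([("A", 1)] * 80 + [("A", 0)] * 20
        + [("B", 1)] * 25 + [("B", 0)] * 75)
random.shuffle(data)

def predict(camera_id: str) -> int:
    # "Model": predict 1 whenever the photo came from camera A.
    return 1 if camera_id == "A" else 0

accuracy = sum(predict(cam) == label for cam, label in data) / len(data)
print(f"Apparent accuracy: {accuracy:.0%}")  # ~78%, yet nothing about the person was measured
```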

Importantly, if the people deploying a system believe it is performing the task (no matter how ill-defined), then its outputs will be used to make decisions that affect real people’s lives.

Why are journalists and others so ready to believe claims of magical “AI” systems? I believe one important factor is show-pony systems like OpenAI’s GPT-3, which use pattern recognition to “write” seemingly coherent text by repeatedly “predicting” what word comes next in a sequence, providing an impressive illusion of intelligence. But the only intelligence involved is that of the humans reading the text. We are the ones doing all of the work, intuitively using our communication skills as we do with other people and imagining a mind behind the language, even though it is not there.
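For a sense of the mechanism, here is a deliberately tiny Python sketch of “predict the next word, repeatedly,” using a bigram model over a few sentences. This is only a loose analogy: GPT-3 uses far more sophisticated statistics at vastly greater scale, but the output is likewise a reproduction of patterns, not the product of a mind.

```python
# Toy sketch of "predict the next word, repeatedly": a tiny bigram model
# trained on a few sentences produces locally fluent strings with nothing
# behind them but observed word-to-word patterns.
import random
from collections import defaultdict

corpus = ("the system writes text . the system predicts the next word . "
          "the reader imagines a mind behind the text .").split()

# Count which word follows which in the training text.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample a continuation seen in training
    return " ".join(words)

random.seed(1)
print(generate("the"))
```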

While it might not seem to matter if a journalist is beguiled by GPT-3, every puff piece that fawns over its purported “intelligence” lends credence to other applications of “AI” — those that supposedly classify people (as criminals, as having mental illness, etc.) and allow their operators to pretend that because a computer is doing the work, it must be objective and factual.

We should demand instead journalism that refuses to be dazzled by claims of “artificial intelligence” and looks behind the curtain. We need journalism that asks such key questions as: What patterns in the training data will lead the systems to replicate and perpetuate past harms against marginalized groups? What will happen to people subjected to the system’s decisions, if the system operators believe them to be accurate? Who benefits from pushing these decisions off to a supposedly objective computer? How would this system further concentrate power, and what systems of governance should we demand to oppose that?

It behooves us all to remember that computers are simply tools. They can be beneficial if we set them to right-sized tasks that match their capabilities well and maintain human judgment about what to do with the output. But if we mistake our ability to make sense of language and images generated by computers for evidence that the computers are “thinking” entities, we risk ceding power — not to computers, but to those who would hide behind the curtain.
