Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent.’
Kate Crawford studies the social and political implications of artificial intelligence. She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research.
Her new book, Atlas of AI, looks at what it takes to make AI and what is at stake as it reshapes our world.
Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century archive of phrenological skulls to illustrate the natural resources, human labor, and bad science underpinning some versions of the technology. Crawford, a professor at the University of Southern California and a researcher at Microsoft, argues that many applications and side effects of AI are in urgent need of regulation.
KATE CRAWFORD: It is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.
AI is made from vast amounts of natural resources, fuel, and human labor. And it is not intelligent in the way that human intelligence is. It cannot discern things without extensive human training, and it has a completely different statistical logic for how meaning is made.
Since the very beginning of AI, back in 1956, we have made this terrible error, a sort of original sin of the field: the belief that minds are like computers and vice versa. We assume these things are analogous to human intelligence, and nothing could be further from the truth.
Should we stop using AI?
We need to look at the nose-to-tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside them, or concern for privacy. Data was treated as just "raw" material, reused across thousands of projects.
This evolved into an ideology of mass data extraction, but data is not an inert substance: it always carries a context and a politics. Sentences from Reddit differ from those in children's books. Images from mugshot databases have different histories from those taken at the Oscars, yet they are all used alike. This causes a host of problems downstream. In 2021, there is still no industry-wide standard for noting what kinds of data are held in training sets, how they were acquired, or the potential ethical issues they raise.
The Bottom Line
We have seen research focus too narrowly on technical fixes and narrow mathematical approaches to bias, rather than taking a wider-lens view of how these systems integrate with complex, high-stakes social institutions such as criminal justice, education, and health care. I would love to see research focus less on questions of ethics and more on questions of power. These systems are being used by powerful interests who already represent the most privileged in the world.