A few weeks ago, software engineer Brandon Jackson found himself shut out of his smart home for a full week. When Alexa wouldn’t respond to his commands, he called the Amazon help desk to see what the issue was. As it turned out, the company had locked him out over an accusation of racism: “I was told that the driver who had delivered my package reported receiving racist remarks from my ‘Ring doorbell’ (it’s actually a Eufy, but I’ll let it slide).” Later, without any explanation or apology, Amazon restored his access.
Jackson came away viewing the experience as a lesson in keeping devices local and diversifying smart-home service providers. But the meme Not the Bee ran, of HAL, the evil computer from 2001: A Space Odyssey, responding, “I’m sorry, Dave. I can’t unlock your house,” is the more accurate takeaway. Given people’s increasing dependence on artificial intelligence (AI) to manage their lives, it is all but inevitable that these devices will render users helpless and vulnerable to corporate control.
Around the same time that Jackson was assuring Amazon that he wasn’t racist, the article “Why AI Will Save the World” by Silicon Valley entrepreneur and venture capitalist Marc Andreessen went viral. As the title suggests, Andreessen argues that AI represents a huge technological advance that will boost worker productivity, eliminate global strife, precipitate a cultural renaissance, and “make the world warmer and nicer.”
According to Andreessen, AI is like other technological innovations in that it makes tasks easier to perform and leaves more time for other things. Like a dishwasher or a Roomba freeing up homemakers from the drudgery of cleaning dishes and floors, AI will free workers from so much thinking. Enlightened populations in the future will be able to contend with an infinitely complex world by equipping themselves with “infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful” AI.
Andreessen sanguinely insists that people will work side by side with AI, resulting in ever more social progress. Writer Sam Woods goes further with this idea in a recent article, “Who’s on the Other End of the Chatbot?,” suggesting that AI can function as a thinking partner that helps us better understand ourselves: “You can have LLMs [Large Language Models] interrogate you, argue with you, challenge your assumptions, challenge what you’re saying and thinking.” This would certainly lead to better decision-making—assuming the user is still the one making decisions.
However, what AI boosters like Andreessen and Woods seem to miss is that AI’s capabilities represent a difference in kind, not degree. Unlike construction vehicles or self-service checkouts, which automate basic functions like digging holes or processing orders and purchases, AI automates complex functions like deliberation and communication. Instead of acting as a tool that enhances or supplements human labor, it essentially replaces it.
To say that this will free people to grow smarter and help society progress is like previous generations declaring that television and the internet would do the same thing. In all likelihood, most people will use the free time enabled by AI to “amuse themselves to death.” This outcome was anticipated by Jack Williamson’s sci-fi novella With Folded Hands…, in which androids take over the world and prevent human beings from doing anything that might expose them to stress or harm. Eventually, the androids start lobotomizing everyone, leaving all men and women to sit dumbly in their rocking chairs “with folded hands.” For a more kid-friendly version of this story, one can also watch WALL-E.
As a high school English teacher, I had to laugh at Andreessen’s hypothetical AI tutor “helping [students] maximize their potential with the machine version of infinite love.” Why would any kid listen to a computer try to teach him how to write essays or solve algebra problems, especially when that computer can do these things itself? And what exactly would the “infinitely loving” AI tutor do to make a student more cooperative? Would it be empowered to reward or punish the student by increasing or limiting access to various amenities and recreational applications? “Solve for X, and you will be allowed five minutes of TikTok.”
This dilemma hits on something deeper about AI and its supposed potential for boosting human performance. Sure, AI is infinitely more knowledgeable, rational, and objective than any human being, but this makes it fundamentally unrelatable. Unlike human teachers, who can have relationships with their students (which is how they motivate their students to do the work in the first place), AI software lacks such a capacity. It can’t feel disappointed when its “pupil” slacks off, nor can it take pride when she achieves mastery—it can only impotently simulate these feelings.
Because a true relationship with AI is impossible, it is impossible to trust AI. It’s not that the AI will somehow become self-aware and turn evil; it’s that AI is bound by its programming and lacks a conscience. As in the case of Brandon Jackson, or more recently Fox News, AI programs are designed to spy on their users and report them to an unaccountable megacorporation, which can then punish those users and force compliance.
Andreessen seems to recognize this danger when he mentions the abuse of AI technology in dictatorial regimes like the one overseen by the Chinese Communist Party (CCP): “They view it as a mechanism for authoritarian population control.” Already, the CCP uses AI to monitor Chinese citizens, assign them a social credit score, and reward or punish them based on that score. This forces the entire Chinese population to submit to the CCP’s agenda, no matter how stupid or brutal it might be.
The same could easily happen with any Big Tech company—nearly all of which, not coincidentally, have close ties to the CCP. Whether it’s Amazon, Apple, or Google, these companies have every reason to disempower consumers and make them ever more dependent on their products. Their ideal user is not the talented young visionary discovering ways to colonize Mars but the couch potato discovering new ways to spend his UBI check. And as compensation for sucking the life and soul out of their users, these companies will disincentivize them from using hateful language and expressing problematic views.
With all this acknowledged, the possibility of an AI-driven surveillance state doesn’t necessarily mean that AI technology is intrinsically evil and should be avoided at all costs. Rather, it demonstrates that AI technology is powerful and that its use must be regulated so that all Americans can enjoy its benefits while being protected from its harms. It falls to us to become educated about AI and to hold all levels of government accountable for keeping us both safe and free as this new technology spreads. We cannot assume, as Andreessen does, that governments and businesses will automatically act rationally and try to empower people with AI; rather, we should assume the opposite, cultivate personal discipline in our own technology use, and remain vigilant in curbing excesses and abuses. In practice, this would mean allowing the use of AI in a productive capacity (analyzing and processing data for industrial and commercial use, for example) but not in an invasive personal capacity (monitoring and determining individual behavior). Put simply, we must all make sure that AI remains a tool and doesn’t become an unwanted friend.