Jenna Burrell is Director of Research at Data & Society.
In 2010, Paul Dourish and Genevieve Bell wrote a book about tech innovation that described the way technologists fixate on the “proximate future” — a future that exists “just around the corner.” The authors, one a computer scientist and the other a tech industry veteran, were examining emerging developments in “ubiquitous computing,” which promised that the sensors, mobile devices, and tiny computers embedded in our surroundings would lead to ease, efficiency, and a general improvement in quality of life. Dourish and Bell argue that this future focus distracts us from the present while also absolving technologists of responsibility for the here and now.
We are now presumably living in some version of that future, reliant on Wi-Fi-connected washing machines, smart speakers that keep our shopping lists for us, and RFID tags that help locate a child’s precious favorite toy (all devices in use in my home). Yet our engagement with technology hasn’t removed struggle, chores, and tedium from our lives. While it has inarguably altered our social relationships, it hasn’t miraculously gifted us more time to focus on them. With the good has come the bad, the disappointing, the kludgy, and the intractable. Meanwhile, the techie futurists continue to train their vision on the horizon, and the latest headlines about artificial intelligence serve to keep our eyes fixed there, too, instead of on the messy present and the dilemmas on our doorsteps.
Dominating the current news cycle is a 22-word statement, signed by tech executives and AI researchers, that bluntly declares: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” With its tone of maximal emergency (and coming on the heels of a similar, if wordier, call for a “pause on AI”), the statement seems intent on drowning out other priorities and conversations about AI — which, make no mistake, are legion. From racial bias in facial recognition, to privacy violations in data collection, to the exploitation of labor in building these systems, many of these challenges do not have easy solutions, certainly not ones that prioritize business interests and maximal profitability.
Looking to the “proximate future,” even one as dark and worrying as AI’s imagined existential threat, has some strategic value to those with interests and investments in the AI business: It creates urgency, but is ultimately unfalsifiable. Something not-quite-yet realized cannot be disproven. In this context, consider the fact that Sam Altman, CEO of OpenAI (the company responsible for ChatGPT), signed on to the alarmism of those 22 words. The timing is perhaps not a coincidence. As the EU heads toward significant AI regulation, Altman recently suggested such regulation might force his company to pull out of Europe. The proposed EU regulation, of course, focuses on copyright protection and privacy rights, and suggests a ban on certain uses of AI, particularly in policing — all concerns of the present day. That reality turns out to be much harder for AI proponents to confront than some speculative future.
With AI’s harms (as documented by non-profits like DAIR and AJL, as well as by numerous AI ethics scholars and media outlets) subject to increasing public attention and scrutiny, the blithe optimism of the old futurism now looks clueless. In response, we see those who build tech finally acknowledging that it isn’t inherently a force for good. In the era of social media, old predictions can also come back to haunt their authors. Computer scientist Geoffrey Hinton’s prediction that radiologists would become obsolete, replaced by diagnostic AI, has been dredged up from the archives as we face a major shortage of radiologists that AI has come nowhere near addressing. Elon Musk’s old statements about how self-driving cars would imminently surpass and replace human drivers can be held up against the Tesla whistleblower’s data dump, which shows that safety problems with the system are still rife. We can all participate in the effort to examine these predictions against the reality of what’s happening now, and to reassess the authority and trustworthiness of those who made them.
When powerful tech figures link arms behind an attention-grabbing statement alleging AI superpowers, they are also saying, as the Wizard of Oz did, “pay no attention to the man behind the curtain.” They are saying “the power resides in the machine.” They are distracting us from the enormous amount of wealth and power they stand to gain from the rise of AI — and from the fact that it doesn’t have to be this way. They would like to skip past today’s real work, that of tackling the challenges and wrestling with the possibilities that will shape the world to come. Instead, they ask us to look to the future — but they forget that the future belongs to us all.
Jenna Burrell is Data & Society’s director of research. She oversees all aspects of the institute’s research program, ensuring the rigor and integrity of its work. Before joining D&S she was a professor at the School of Information at UC Berkeley. Her research focuses on how marginalized communities adapt digital technologies to meet their needs and to pursue their goals and ideals.