For years, companies have operated under the assumption that improving their artificial intelligence software and gaining a competitive advantage requires gathering enormous amounts of user data, the lifeblood of machine learning.
But increasingly, collecting massive amounts of user information can be a major risk. Laws like Europe’s General Data Protection Regulation, or GDPR, and California’s new privacy rules now impose heavy fines on companies that mishandle that data, such as by failing to safeguard corporate IT systems from hackers.
Some businesses are now even publicly distancing themselves from what used to be standard practice, such as using machine learning to predict customer behavior. Alex Spinelli, the chief technologist for business software maker LivePerson, recently told Fortune that he has canceled some A.I. projects at his current company and at previous employers because those undertakings conflicted with his own ethical beliefs about data privacy.
For Aza Raskin, the co-founder and program advisor for the Center for Humane Technology non-profit, technology—and by extension A.I.—is experiencing a moment akin to climate change.
Raskin, whose father, Jef Raskin, helped Apple develop its first Macintosh computers, said that researchers spent years studying separate environmental phenomena like the depletion of the ozone layer and rising sea levels. Only later did those issues coalesce into what we now call climate change, a catch-all term that helps people understand the world’s current crisis.
In the same way, researchers have been studying some of A.I.’s unintended consequences related to the proliferation of misinformation and surveillance. The pervasiveness of these problems, like Facebook allowing disinformation to spread on its service or the Chinese government’s use of A.I. to track Uighurs, could be leading to a societal reckoning over A.I.-powered technology.
“Even five years ago, if you stood up and said, ‘Hey, social media is driving us to increase polarization and civil war,’ people would eye roll and call you a Luddite,” Raskin said. But with the recent U.S. Capitol riots, led by people who believed conspiracy theories shared on social media, it’s becoming harder to ignore the problems of A.I. and related technology, he said.
Raskin, who is also a member of the World Economic Forum’s Global A.I. Council, hopes that governments will create regulations that spell out how businesses can use A.I. ethically.
“We need government protections so we don’t have unfettered capitalism pointing at the human soul,” he said.
He believes that companies that take data privacy seriously will have a “strategic advantage” over rivals as more A.I. problems emerge, since those problems can bring financial penalties and damaged reputations.
Companies should expand their existing risk assessments—which help businesses measure the legal, political, and strategic risks associated with certain corporate practices—to include technology and A.I., Raskin said.
The recent Capitol riots underscore how technology can lead to societal problems, which in the long run can hurt a company’s ability to succeed. (After all, it can be difficult to run a successful business during a civil war.)
“If you don’t have a healthy society, you can’t have successful business,” Raskin said.
Jonathan Vanian
@JonathanVanian