Fears about AI ‘very legitimate,’ Google CEO says
Google CEO Sundar Pichai, head of one of the world’s leading artificial intelligence companies, said in an interview last week that concerns about harmful applications of the technology are “very legitimate,” but that the tech industry should be trusted to responsibly regulate its use.
Pichai said that new AI tools, the backbone of innovations such as driverless cars and disease-detecting algorithms, require companies to set ethical guardrails and think through how the technology can be abused.
“I think tech has to realize it just can’t build it, and then fix it,” Pichai said. “I think that doesn’t work.”
Tech giants have to ensure that artificial intelligence with “agency of its own” doesn’t harm humankind, Pichai said. He said he is optimistic about the technology’s long-term benefits, but his assessment of the potential risks of AI parallels that of some tech critics who say the technology could be used to empower invasive surveillance, deadly weaponry and the spread of misinformation. Other tech executives, like SpaceX and Tesla founder Elon Musk, have offered more dire predictions that AI could prove to be “far more dangerous than nukes.”
Google’s AI technology underpins a range of initiatives, from the company’s controversial China project to the surfacing of hateful, conspiratorial videos on its YouTube subsidiary — a problem Pichai vowed to address in the coming year. How Google decides to deploy its AI has also sparked recent employee unrest.
Pichai’s call for self-regulation followed his testimony in Congress, where lawmakers threatened to impose limits on technology in response to its misuse, including as a conduit for spreading misinformation and hate speech. His acknowledgment of the potential threats posed by AI was significant because the Indian-born engineer often has touted the world-shaping implications of automated systems that could learn and make decisions without human control.
Pichai said in the interview that lawmakers around the world are still trying to grasp AI’s effects and the potential need for government regulation. “Sometimes I worry people underestimate the scale of change that’s possible in the mid-to-long term, and I think the questions are actually pretty complex,” he said. Other tech giants, including Microsoft, recently have embraced regulation of AI — both by the companies that create the technology and the governments that oversee its use.
But AI, if handled properly, could have “tremendous benefits,” Pichai said, including helping doctors detect eye disease and other ailments through automated scans of health data. “Regulating a technology in its early days is hard, but I do think companies should self-regulate. This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”
Pichai, who joined Google in 2004 and became chief executive 11 years later, in January called AI “one of the most important things that humanity is working on.” He said it could prove to be “more profound” for human society than “electricity or fire.” But the race to perfect machines that can operate on their own has rekindled familiar fears that Silicon Valley’s corporate ethos — “move fast and break things,” as Facebook once put it — could result in powerful, imperfect technology eliminating jobs and harming average people.
Within Google, the company’s AI efforts also have created controversy: It faced heavy criticism this year for its work on a Defense Department contract involving AI that could automatically tag cars, buildings and other objects for use in military drones. Some employees resigned over what they called Google’s profiting off the “business of war.”
Asked about the employee backlash, Pichai said that his workers were “an important part of our culture.”
In June, after announcing that Google wouldn’t renew the contract next year, Pichai unveiled a set of AI ethics principles that included general bans on developing systems that could be used to cause harm, damage human rights or aid in “surveillance violating internationally accepted norms.”