
AI researchers with ties to OpenAI and Google speak up for whistleblower protections

By Webb Wright, NY Reporter

June 4, 2024 | 5 min read

A new open letter calls attention to what it paints as an industry that prioritizes speed over safety and discourages dissent.

OpenAI has become a leader in the race to build AGI, even as it faces one controversy after another. / Adobe Stock

A group of researchers hailing from leading AI companies, including OpenAI, argued in an open letter published today that their industry needs to do more to protect whistleblowers in light of the potential consequences of the technology it’s building.

AI, according to the open letter, has the power “to deliver unprecedented benefits to humanity,” but only if it’s developed with adequate guardrails – along with reliable channels of communication through which individuals with insider knowledge can voice their concerns. If built with too cavalier an attitude around safety and security, the authors write, advanced AI systems could eventually spiral out of control, “potentially resulting in human extinction.”

The companies building AI are motivated by the rewards of what’s already shaping up to be an immensely lucrative industry. Those same companies, today’s open letter argues, have little incentive to be transparent about the potential safety risks of the AI-powered products they’re bringing to market.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter states. “However, they currently have only weak obligations to share some of this information with governments and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

Traditional legal protections for whistleblowers, the letter also notes, are reserved for cases involving illegal activity and so would not apply to the as-yet unregulated AI industry.

The letter was signed by 13 people with current or former ties to OpenAI and Google DeepMind, most of whom remained anonymous. It was also endorsed by AI luminaries Geoffrey Hinton (who left his post at Google last year to bring attention to the issue of AI safety), Yoshua Bengio and Stuart Russell.

“I think [the open letter is] quite brave and effective, particularly as it’s coming from the inside,” Russell, a professor of computer science at the University of California, Berkeley, told The Drum. “OpenAI cannot continue telling governments, ‘Trust us, we’re the experts and we know what we’re doing.’”

OpenAI has struggled to remain in the public’s good graces since the release of ChatGPT thrust it into the cultural spotlight in late 2022. While it has received billions of dollars in investment from Microsoft and come to be viewed as one of the leaders in the race to build artificial general intelligence (AGI), it has also attracted the ire of artists and news publishers, who claim that the company has stolen their intellectual property to train its large language models.

Jan Leike and Ilya Sutskever, who had previously led OpenAI’s superalignment team – which focused on optimizing the safety of AI models – both left the company last month. Leike, who has since been hired by Anthropic, accused his former employer in an X post of sidelining safety as it rushed headlong to launch new products.

Then, on May 18, Vox published a report drawing attention to non-disclosure and non-disparagement clauses found in employee off-boarding documents at OpenAI. Violating or refusing to sign those clauses could reportedly result in employees forfeiting their vested equity in the company, which for some is valued in the millions of dollars.

OpenAI’s PR challenges became even more severe two days later, when actress Scarlett Johansson issued a public statement effectively claiming that her voice had been used without consent by OpenAI to train a new model called GPT-4o. (OpenAI has denied that the model’s voice was based on Johansson.)

Incidents like these have led critics to perceive some leading AI companies – and OpenAI in particular – as reckless juggernauts that, operating in a legal vacuum, have steamrolled over individuals and institutions in an effort to capture market share. And it’s exactly this scenario, the authors of the new open letter argue, that necessitates mechanisms to protect whistleblowers.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter reads.

OpenAI announced last week that it had formed a new safety and security committee composed of company board members. The company has not responded to The Drum’s request for comment.

For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.
