Former employees of OpenAI and other tech giants criticized the companies' safety and security practices at a US Senate hearing and warned of the potential dangers of artificial intelligence.
William Saunders, a former OpenAI employee, made serious accusations against his former employer, claiming that OpenAI neglected safety in favor of rapid AI development.
“Without rigorous testing, developers can miss these kinds of dangerous capabilities,” Saunders warned, citing an example in which OpenAI’s new AI system could help experts plan the reproduction of a known biological threat.
According to Saunders, an artificial general intelligence (AGI) system could be developed in as little as three years. He pointed to the performance of OpenAI’s recently released o1 model in math and coding competitions as evidence: “OpenAI’s new system went from failing to qualify to winning the gold medal, outperforming me in an area relevant to my own work. There are still significant gaps to fill, but I think it’s plausible that an AGI system could be built in as little as three years.”
Such a system could perform most economically valuable work better than humans, which would entail significant risks, such as autonomous cyberattacks or assistance in developing biological weapons.
Saunders also criticized OpenAI’s internal security measures. “When I was at OpenAI, for a long time there were security holes that would have allowed me or hundreds of other engineers at the company to bypass access controls and steal the company’s most advanced AI systems, including GPT-4,” he said.
To minimize risk, Saunders called for stronger regulation of the AI industry. “If any organization is creating technology that imposes significant risk on everyone, society and the scientific community need to be involved in deciding how to avoid or minimize that risk,” he said.
Former OpenAI board member Helen Toner also criticized the companies’ weak internal controls, reporting instances where safety concerns were ignored in order to get products to market faster. Toner said companies would be unable to fully consider the interests of the general public if they alone were responsible for detailed decisions about safety measures.
David Evan Harris, a former Meta employee, criticized the downsizing of safety teams across the industry, warning against relying on voluntary self-regulation and calling for binding legislation.
Margaret Mitchell, who previously worked on AI ethics at Google, criticized the lack of incentives for safety and ethics at AI companies, saying that employees who focus on these issues are less likely to advance in their careers.
The experts’ recommendations include mandatory transparency requirements for high-risk AI systems, greater investment in AI safety research, a robust ecosystem of independent audits, stronger whistleblower protections, more technical expertise in government agencies, and clearer liability for AI-related harm.
The experts emphasized that appropriate regulation would not hinder innovation but promote it: clear rules would increase consumer confidence and give companies planning certainty.
The criticism from Saunders and the other witnesses is part of a series of warnings from former OpenAI employees. It was recently revealed that OpenAI apparently shortened safety testing of its GPT-4 Omni model. According to a report in the Washington Post, the tests had to be completed in just one week, sparking discontent among some employees.
OpenAI has rejected the accusations, saying the company “did not cut corners in our safety process” and conducted “extensive internal and external” testing to meet its policy commitments.
Since November of last year, OpenAI has lost several AI safety staff members, including former chief scientist Ilya Sutskever and Jan Leike, who co-led the Superalignment team.
This week, OpenAI introduced a new “Safety and Security Committee” chaired by Zico Kolter, which has far-reaching authority to oversee safety measures in the development and deployment of AI models. A few weeks earlier, OpenAI reached an agreement with the US National Institute of Standards and Technology (NIST) giving the US AI Safety Institute access to new AI models before and after their release to collaborate on AI safety research, testing, and evaluation.