Exec tells first UN council meeting that big tech can’t be trusted to guarantee AI safety
By EDITH M. LEDERER
Associated Press
UNITED NATIONS (AP) — An artificial intelligence company executive told the first U.N. Security Council meeting on AI's threats to global peace that the handful of big tech companies leading the race to commercialize AI can't be trusted to guarantee the safety of systems we don't yet understand and that are prone to "chaotic or unpredictable behavior."

Jack Clark, co-founder of the AI company Anthropic, said that is why the world must come together to prevent the technology's misuse.

Clark said the most useful things that can be done now "are to work on developing ways to test for capabilities, misuses and potential safety flaws of these systems."