
When do we take AI doomers seriously? That’s a key subtext of Elon Musk’s attempt to shut down OpenAI’s for-profit AI business. His attorneys argue that the organization was set up as a charity focused on AI safety and lost its way in pursuit of lucre. To prove that, they cite old emails and statements from the organization’s founders about the need for a public-spirited counterweight to Google DeepMind.

Today, they called their only expert witness: Stuart Russell, a University of California, Berkeley computer science professor who has studied AI for decades. His job was to offer background on AI and to establish that the technology is dangerous enough to worry about. Russell co-signed an open letter in March 2023 calling for a six-month pause on training the most powerful AI systems. In a sign of the contradictions here, Musk signed the same letter, even as he was launching xAI, his own for-profit AI lab.

Russell told jurors and Judge Yvonne Gonzalez Rogers that there are a variety of risks associated with the development of AI, ranging from cybersecurity threats to misalignment and the winner-take-all nature of the race toward artificial general intelligence (AGI). Ultimately, he said that there was a tension between the