One of the more encouraging developments in artificial intelligence is that some of the people building it have started acting like it might be dangerous. Not in the Skynet sense or the HAL 9000 sense or even the "oops, it deleted all my emails" sense, though AI might be dangerous in all of those ways too. The question is whether the latest models are dangerous to infrastructure, to privacy, to security, and to the blurry line between public and private. For years, Big Tech has been heavy on the gas and light on the brakes, and we have all benefited tremendously, even as angry debates about the downsides have raged. But with AI, at least in a few notable cases, the companies themselves have begun doing something unusual. They have started saying no.

Anthropic has announced that it will not broadly release Claude Mythos Preview, a frontier model that it says has already found "thousands of high-severity vulnerabilities," including in every major operating system and web browser. Instead, the company is confining access to a consortium that includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and some other organizations.