artificial intelligence · via Fox News · 10 hr ago

A new conservative influencer entered the creator space. But what made her stand out? She didn't exist. She was AI-generated by a man in India. Emily Austin, a podcast host and content creator in the sports and political space, warned that transparency is important as AI becomes more prevalent.

A 22-year-old in northern India, hoping to become an orthopedic surgeon and one day move to America, was looking to make more money. He did so by generating an AI conservative female influencer, whom he named Emily Hart, with the help of Google's Gemini Nano Banana Pro.

Austin spoke with Fox News Digital about this "frightening" story and the expansion of AI in the creator space. She believes there is not enough awareness of the issue.

"There definitely needs to be at least transparency, or maybe they should, you know, market more of an awareness like, 'Hey guys, this person that is using a real voice and a real face could actually be fake.'"

Austin said that right now, there is only so much that can be done to combat this. "There's only so much we
artificial intelligence · via STAT · 10 hr ago

Katie Palmer covers telehealth, clinical artificial intelligence, and the health data economy, with an emphasis on the impacts of digital health care for patients, providers, and businesses. You can reach Katie on Signal at palmer.01.

Getting a paper published in Science is a highlight of many researchers' careers. But for internist and clinical artificial intelligence researcher Adam Rodman, it's also been a source of some agita. On Thursday, Rodman and his colleagues published a compilation of experiments, including one using real-world data from a Boston emergency department, that show a large language model from OpenAI can outperform physicians in case-based diagnostic and clinical reasoning evaluations.

To Rodman, the paper's co-senior author, it's a response to a gauntlet thrown down in Science in 1959. That paper "described how you would know that a clinical decision support system was capable of doing diagnosis better than humans," he said. "And they can do it." But as generative AI tools like chatbots are heavily marketed — both to patients and clinicians — he worries that the science experiments, all based on simulated and historical cases, will be misconstrued as proof of AI's safety and efficacy when used to treat real patients.
artificial intelligence · via The Conversation · 11 hr ago

In the summer of 2025, OpenAI released ChatGPT 5 and removed its predecessor from the market. Many subscribers to the old model had become attached to its warm, enthusiastically agreeable tone and complained about the loss of their ingratiating robotic companion. Such was the scale of frustration that Sam Altman, OpenAI's CEO, had to acknowledge that the rollout was botched, and the company reinstated access.

Anyone who's been told by a chatbot that their ideas are brilliant is familiar with artificial intelligence sycophancy: its tendency to tell users what they want to hear. Sometimes it's very explicit — "that is such a deep question" — and sometimes it's a lot more subtle. Consider an AI calling your idea for a paper "original," even if many people have already written on the same topic, or insisting that your dumb idea for saving a tree in your garden still contains a germ of common sense.

AI sycophancy seems harmless, maybe even cute, until you imagine someone consulting a chatbot about a weighty question, like a military strategy or a medical treatment. We study the impact of extensive human interactions with chatbots, and we recently published a paper on the ethics of AI
artificial intelligence · via The Atlantic · 11 hr ago

If there is any field in which the rise of AI is already said to be rendering humans obsolete — in which the dawn of superintelligence is already upon us — it is coding. This makes the results of a recent study genuinely astonishing.

In the study, published in July, the think tank Model Evaluation & Threat Research randomly assigned a group of experienced software developers to perform coding tasks with or without AI tools. It was the most rigorous test to date of how AI would perform in the real world. Because coding is one of the skills that existing models have largely mastered, just about everyone involved expected AI to generate huge productivity gains. In a pre-experiment survey of experts, the mean prediction was that AI would speed developers' work by nearly 40 percent. Afterward, the study participants estimated that AI had made them 20 percent faster.

But when the METR team looked at the developers' actual work output, they found that the developers had completed tasks 20 percent slower when using AI than when working without it. The researchers were stunned. "No one expected that outcome," Nate
artificial intelligence · via CNBC · 2 hr ago

Department of Defense CTO Emil Michael on Friday said Anthropic is still a supply chain risk, but that Mythos, the company's artificial intelligence model with advanced cyber capabilities, is a "separate national security moment."

"I think the Mythos issue that's being dealt with government-wide, not just at the Department of War, is a separate national security moment where we have to make sure that our networks are hardened up, because that model has capabilities that are particular to finding cyber vulnerabilities and patching them," Michael told CNBC's "Squawk Box" on Friday.

Michael's comments come after a heated clash between the DOD and Anthropic spilled into public view earlier this year. The DOD declared Anthropic a supply chain risk, which means its technology purportedly threatens U.S. national security, after the two sides failed to agree on how Anthropic's models could be used by the agency.

Because of the supply chain risk designation, defense contractors have to certify that they do not use Anthropic's Claude models in their work with the military. Anthropic sued the Trump administration in March to try to reverse the Pentagon's blacklisting. It is not clear how the DOD could use Anthropic's Mythos model without violating the supply chain risk designation.

Michael said
artificial intelligence · via Defense One · 10 hr ago

By Patrick Tucker, Science & Technology Editor · May 1, 2026

The United States has "a tight time window to adapt" to the "civilizational" challenge of AI, according to a former senior Pentagon thinker who is joining Anthropic as a "strategist-in-residence."

James Baker led the Defense Department's Office of Net Assessment — often referred to as the "Pentagon's Think Tank" — from 2015 to 2025, when it was temporarily closed by the Trump administration. At Anthropic — the AI company now amid a six-month withdrawal from federal service, as ordered by President Trump — Baker will lead analysis of how AI is affecting U.S. institutions and competition with China, the company announced Friday.

As ONA director, Baker advised defense secretaries and national security advisors on the long-term effects of emerging technology on national security; he had earlier served on the Joint Staff and in other advisory roles.

For decades, ONA helped the U.S. military adapt to social, economic, environmental, and technological trends. The office was established in 1973 by Andrew Marshall, a policy strategist in the Nixon administration. Using a data-driven, "system-of-systems" approach, it sought to predict the interrelation and effects of trends from tech development to military affairs to labor. The office
artificial intelligence · via Defense One · 11 hr ago

By Alexandra Kelley, Staff Correspondent, Nextgov/FCW · May 1, 2026

Seven leading AI developers have deals to install tools in classified Defense Department networks, a wide spread meant to prevent "vendor lock," Pentagon officials said Friday.

Amazon Web Services, Google, Microsoft, NVIDIA, OpenAI, Reflection, and SpaceX are cleared for Impact Level 6 and Impact Level 7 network environments, part of a bid to streamline data synthesis, improve warfighter decision-making, and increase situational understanding and awareness.

"Together, the War Department and these strategic partners share the conviction that American leadership in AI is indispensable to national security," a press release said. "This leadership depends on a thriving domestic ecosystem of capable model developers that enable the full and effective use of their capabilities in support of Department missions. As mandated by President [Donald] Trump and Secretary [Pete] Hegseth, the Department will continue to envelop our warfighters with advanced AI to meet the unprecedented emerging threats of tomorrow and to strengthen our Arsenal of Freedom."

The new AI tools will be available via GenAI.mil, the Pentagon's central AI platform. In late April, Google rolled out its Gemini 3.1 Pro model on
artificial intelligence · via The Atlantic · 11 hr ago

Donald Trump is on TikTok doing his morning routine. "Get ready with me for a big day 💄🇺🇸," reads the caption, as the president holds a makeup brush to his cheek. The scene is a still, ostensibly a screenshot of a TikTok clip. Like so much other AI-generated slop coursing through the internet, the image is fake and ridiculous. It also looks unnervingly real: There are no hands with six fingers, physics-defying angles, or other flagrant signs of AI-generated imagery. At quick glance, it really looks like the president is putting on bronzer.

[Image caption: Created in ChatGPT with the prompt "Trump doing a makeup tutorial on TikTok"]

I made this deepfake with OpenAI's new image-generation model. ChatGPT Images 2.0, released last week, can create photorealistic visuals that are noticeably more convincing than what its predecessors might have produced. The tool has flooded the internet with hyperreal fakes: for example, Jeffrey Epstein as a Twitch streamer. I created the "screenshot" of Trump's fake TikTok after encountering a similar image on the ChatGPT subreddit, and I've since been able to use Images 2.0 to create all kinds of alarming deepfake images — including of Elon Musk getting whisked away by the FBI, world leaders suffering medical emergencies,