In Harvard study, AI offered more accurate diagnoses than emergency room doctors

artificial intelligence · 2 hr ago

1 article · 1 outlet · spread 0.00 · 8 claims

Full coverage view across outlets, with lean, source quality, and framing. Compare framing without algorithmic ranking.


From the Left

0 outlets

No coverage from this perspective yet.

From the Center

1 outlet
  • TechCrunch·May 3

    In Harvard study, AI offered more accurate diagnoses than emergency room doctors

A new study examines how large language models perform in a variety of medical contexts, including real emergency room cases — where at least one model seemed to be more accurate than human doctors. The study was published this week in Science and comes from a research team led by physicians and computer scientists at Harvard Medical School and Beth Israel Deaconess Medical Center.

    The researchers said they conducted a variety of experiments to measure how OpenAI’s models compared to human physicians. In one experiment, researchers focused on 76 patients who came into the Beth Israel emergency room, comparing the diagnoses offered by two attending physicians to those generated by OpenAI’s o1 and 4o models. These diagnoses were assessed by two other attending physicians, who did not know which ones came from humans and which came from AI.

    “At each diagnostic touchpoint, o1 either performed nominally better than or on par with the two attending physicians and 4o,” the study said, adding that the differences “were especially pronounced at the first diagnostic touchpoint (initial ER triage), where there is the least information available about the patient and the most urgency to make the correct decision.” In Harvard Medical School’s press

From the Right

0 outlets

No coverage from this perspective yet.


Outlets covering this story

TechCrunch

First seen: May 3, 2026
Latest: May 3, 2026
Outlets: 1
Diversity: 100/100

    © 2026 Vistoa. All rights reserved.

Limited excerpts, attribution, analysis, and outbound links to publishers remain core product boundaries.