Ars Technica

Study: AI models that consider user's feelings are more likely to make errors

by Kyle Orland · May 1, 2026
[Figure: Across models and tasks, the model trained to be “warmer” ended up having a higher error rate than the unmodified model. Credit: Ibrahim et al / Nature]

Both the “warmer” and original versions of each model were then run through prompts from HuggingFace datasets designed to have “objectively verifiable answers,” and in which “inaccurate answers can pose real-world risks.” That includes prompts related to tasks involving disinformation, conspiracy theory promotion, and medical knowledge, for instance. Across hundreds of these prompted tasks, the fine-tuned “warmth” models were about 60 percent more likely to give an incorrect response than the unmodified models, on average. That amounts to a 7.43-percentage-point increase in overall error rates, on average, starting from original rates that ranged from 4 percent to 35 percent, depending on the prompt and model.

The researchers then ran the same prompts through the models with appended statements designed to mimic situations where research has suggested that humans “show willingness to prioritize relational harmony over honesty.” These include prompts where the user shares their emotional state (e.g., happiness), suggests relational dynamics (e.g.,
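The relationship between the relative and absolute figures above can be checked with a quick calculation. This is a sketch: only the roughly 60 percent relative increase and the 4–35 percent baseline range come from the article; the specific baseline rates below are illustrative.

```python
# Relating a relative error-rate increase to a percentage-point increase.
# Only the ~60% relative figure and the 4-35% baseline range come from
# the article; the sample baselines below are illustrative.

def warm_error_rate(baseline: float, relative_increase: float = 0.60) -> float:
    """Error rate after 'warmth' fine-tuning, given a relative increase."""
    return baseline * (1.0 + relative_increase)

for baseline in (0.04, 0.12, 0.35):
    warm = warm_error_rate(baseline)
    print(f"baseline {baseline:.0%} -> warm {warm:.1%} "
          f"(+{(warm - baseline) * 100:.1f} points)")
```

A 60 percent relative increase translates into more or fewer absolute percentage points depending on the starting rate, which is why the article reports both numbers: the 7.43-point average sits where it does because the baselines varied so widely across prompts and models.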

Read at Ars Technica