
OpenAI tool used in hospitals is ‘making up treatments’


Researchers are raising alarms about an AI transcription tool used in hospitals after finding that it fabricates text that was never spoken.

OpenAI claims its Whisper tool has near “human level robustness and accuracy,” yet an Associated Press investigation published Sunday, based on interviews with more than a dozen software engineers, developers and academic researchers, found that it makes up things that were never said.

The experts said some of the invented text, known in the industry as hallucinations, has included racial commentary, violent rhetoric and even imagined medical treatments.

Whisper is used across industries to translate and transcribe interviews and other interactions, to generate text in popular consumer technology and to create subtitles for videos.
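Whisper is also distributed as an open-source Python package, which gives a sense of how easily it can be dropped into other software. The snippet below is a minimal sketch of a typical transcription call using the openai-whisper package; the model size and audio file name are illustrative assumptions, not details from the AP report.

    import whisper  # open-source openai-whisper package

    # Load a pretrained checkpoint; "base" is one of several available sizes.
    model = whisper.load_model("base")

    # Transcribe a local audio file (hypothetical file name) and print the text.
    result = model.transcribe("clinic_visit.wav")
    print(result["text"])

Because the output is ordinary text, a fabricated sentence looks no different from a correctly transcribed one, which is part of why experts worry about overconfidence in systems built on top of it.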

The experts are most concerned, however, about its use in some medical centers to transcribe patients’ consultations with doctors, and they warn it should not be used in “high-risk domains.”

A separate study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined, a rate that would produce tens of thousands of faulty transcriptions across millions of recordings, according to the researchers.

Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year, warned of “really grave consequences.”

“Nobody wants a misdiagnosis,” said Nelson, a professor at the Institute for Advanced Study in Princeton, New Jersey. “There should be a higher bar.”

The prevalence of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At minimum, they said, OpenAI needs to address the flaw.

“This seems solvable if the company is willing to prioritize it,” said William Saunders, a San Francisco-based research engineer who quit OpenAI in February over concerns with the company’s direction. “It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems.”

An OpenAI spokesperson said the company continually studies how to reduce hallucinations and appreciated the researchers’ findings, adding that OpenAI incorporates feedback in model updates.

While most developers assume that transcription tools misspell words or make other errors, engineers and researchers said they had never seen another AI-powered transcription tool hallucinate as much as Whisper.


