
AI in health care must check for racial bias

Google, Microsoft officials share how racial bias could hinder the spread of AI in health care

Big Tech: Racial bias in data must be fixed before AI expands to health care


With generative AI in the news, Google and Microsoft officials talked about using AI in health care, highlighting the implicit racial bias embedded in health data. Meanwhile, CIDRAP reports that prominent Twitter users have helped change public opinion about the response to the pandemic.

ARLINGTON, Virginia - As new generative AI models like ChatGPT become more common, some experts say health care must address the implicit racial bias embedded in health data to ensure such tools work. Google and Microsoft executives discussed the use of artificial intelligence in health care at the Healthcare Datapalooza event held Thursday in Arlington, Virginia.


There is a lot of excitement about the potential of AI models in health care, such as ChatGPT, a chatbot that crunches massive data sets to generate text and code. The goal is for AI to someday support clinical decision making [and] improve patient literacy with learning tools that reduce jargon, said Jacqueline Shreibati, M.D., Google's senior clinical director.


However, these models have shortcomings when used in health care. The most important is that clinical evidence is constantly evolving and changing. Another key issue is that the underlying data may carry racial bias that needs to be mitigated. "A lot of our data has structural racism in the code," Shreibati said.


The challenge is how to address these biases so that AI delivers equitable results, a priority for the health care industry. Microsoft, an investor in OpenAI, the company that developed ChatGPT, is aware of the anxiety that artificial intelligence creates in the health industry, said Michael Uohara, chief medical officer of the software giant's federal health services.

"Our approach at Microsoft is not just looking at our products and saying, 'Hey, we need to be more responsible,' we actually start at the end of the product development life cycle," he said. Another important thing is to be transparent during the design process and clearly state what the AI ​​tool can achieve. There are still many questions about the application of ChatGPT in healthcare.


A recent Stat News editorial found that while the chatbot's dataset includes Current Procedural Terminology codes, there were problems with the sources it provided for certain diagnoses. Meanwhile, the digital platform Doximity has released a beta version of its ChatGPT tool for doctors to help with administrative tasks such as preparing pre-authorization requests for insurance companies. Doximity hopes to create a set of medical guidelines that can be used to draft letters to insurers, appeals of denials and post-procedure instructions for patients. (Kaiser Health News)


