Healthcare AI Should Be Trained on “Gold Standard Data Sets”
Last week, healthcare sector leaders urged Congress to pass regulations on the use of artificial intelligence (AI) in the industry, citing issues such as implicit bias and patient privacy.
The overall sentiment of the hearing was that, when drafting legislation, Congress must account for the training procedures that could introduce bias in order to ensure the equitable use of AI in medicine.
According to this Hill article, Dr. David Newman-Toker of the Johns Hopkins University School of Medicine's neurology department said AI systems should be trained on "gold-standard data sets" to ensure healthcare professionals aren't "converting human racial bias into hard and fast AI-determined rules."
Along these lines, a novel normative framework for healthcare AI launched last week asserts that medical knowledge, procedures, practices, and values should all be considered when integrating the technology into clinical settings.
Developed by researchers from Carnegie Mellon University, The Hospital for Sick Children, the Dalla Lana School of Public Health, Columbia University, and the University of Toronto, this new framework is designed to help stakeholders holistically evaluate AI in healthcare.
The framework advocates for healthcare AI to be viewed as part of a larger "intervention ensemble," or a set of practices, procedures, and knowledge that enable care delivery. This conceptual shift frames AI models as reflections of the values and processes of the people and environments surrounding them.
It is clear that thoughtful consideration is going into bringing AI to life in the healthcare sector. Guardrails like these will help make healthcare AI a safe, effective, and ethical reality.
RosettaHealth can assist with any health information challenges you might have. Book a free consultation with one of our interoperability experts.