A.I. Could Worsen Health Disparities
Artificial intelligence is beginning to meet (and sometimes exceed) assessments by doctors in various clinical situations. A.I. can now diagnose skin cancer like dermatologists, seizures like neurologists, and diabetic retinopathy like ophthalmologists. Algorithms are being developed to predict which patients will get diarrhea or end up in the ICU, and the FDA recently approved the first machine learning algorithm to measure how much blood flows through the heart — a tedious, time-consuming calculation traditionally done by cardiologists.
It’s enough to make doctors like myself wonder why we spent a decade in medical training learning the art of diagnosis and treatment.
One of the big problems with relying on machine learning algorithms is placing too much faith in their results without regard for the training data. If the data used to build the AI is flawed, the outcomes will be flawed as well. And people often trust a computer's decision more than a human's, believing it to be unbiased, but that is not always the case:
It turns out Amazon’s AI hiring tool discriminated against women
When Algorithms Discriminate
Artificial Intelligence’s White Guy Problem
Machine Bias
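The mechanism behind stories like these is simple enough to sketch in a few lines. Below is a toy, hypothetical example (not any real company's system): a naive "hiring" model trained on historical records in which one group was hired far more often for reasons unrelated to qualification. The model dutifully learns the bias in its training set.

```python
# A minimal sketch of how biased training data yields biased predictions.
# All data here is invented for illustration.
from collections import defaultdict

# (group, qualified, hired) — the "hired" label encodes past human bias:
# group "A" candidates were hired regardless of qualification.
training_data = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("A", False, True), ("B", True, False), ("B", True, False),
    ("B", False, False), ("B", True, True),
]

# Naive model: predict "hire" if the historical hire rate for the
# candidate's group exceeds 50%, ignoring qualification entirely.
counts = defaultdict(lambda: [0, 0])  # group -> [hired_count, total]
for group, _qualified, hired in training_data:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict(group):
    hired, total = counts[group]
    return hired / total > 0.5

# Two equally qualified candidates get different predictions:
print(predict("A"))  # True  — the model favors group A
print(predict("B"))  # False — the model penalizes group B
```

Nothing in the code "intends" to discriminate; the skew comes entirely from the historical labels. A real system with thousands of features can absorb the same bias in far less obvious ways, which is exactly why the training set matters as much as the algorithm.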