For certain tasks, AI has long been more accurate than doctors. But it isn't used because doctors and patients don't trust it. So perhaps PR is exactly the missing piece of the puzzle.
I don't understand this mindset of not using technology due to a lack of trust. I can easily understand not blindly following an AI's findings, but they should probably be item #1 for the attending doctor to investigate.
The worst-case scenario is that the doctor disagrees with the findings and continues investigating. The best case is that you hit the nail on the head much more quickly than if it were an entirely manual process.
One answer is very simple: $$$. Study 12 years of your life to be a specialist only to be ultimately replaced by some installed machine at a Walmart Pharmacy.
> I don't understand this mindset of not using technology due to a lack of trust.
The more complex a tool is, the more likely it is to have flaws. Is it any surprise that the medical industry is slow to trust tools where the feared negatives outweigh the few clear positives?
I would rather trust a robot to operate on me than to diagnose me. A human will be able to adapt and communicate with me while doing so.
Well in that case, why bother going to the hospital at all!
Doctors are a social conduit for understanding symptoms, explaining the reasoning behind an eventual diagnosis, and carrying it out themselves. A computer would not be, and certainly not in a way a distraught patient could easily communicate with. I also don't envision computers being trusted to write prescriptions any time soon, which is the other reason I would use a doctor.
Statistical analysis is not always correct, and it's hardly new. I believe it was the primary evidence behind germ theory: germ theory wasn't exactly new, but it was hard to refute once you looked at how sanitation correlated with better recovery rates.
Using terms like "AI" is exactly the kind of behavior that leads to doctors not trusting it. We should be teaching doctors why statistics are a valuable tool, not telling them to trust a black box because it's "AI". How do you know when to question it?
It isn't used, not because patients lack trust, but because of resistance from the medical community: some warranted, some unwarranted. And most of that resistance isn't about trust at all. The last thing that will change the minds of good doctors is a PR piece.
The paper "Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err" [1] has references to examples.
> Dawes subsequently gathered a large body of evidence showing that human experts did not perform as well as simple linear models at clinical diagnosis, forecasting graduate students’ success, and other prediction tasks (Dawes, 1979; Dawes, Faust, & Meehl, 1989).
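The models Dawes studied really were that simple. A toy sketch of a unit-weight ("improper") linear model of the kind described in Dawes (1979): standardize each predictor, give every one equal weight, and rank candidates by the sum. The predictors and numbers below are made up purely for illustration.

```python
def standardize(xs):
    """Convert a list of raw values to z-scores (population std dev)."""
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

# Hypothetical grad-school applicants: (GRE score, GPA, essay rating)
applicants = {
    "A": (650, 3.2, 7),
    "B": (700, 3.9, 5),
    "C": (620, 3.5, 9),
}

names = list(applicants)
# Standardize each predictor column, then sum with equal weights.
columns = [standardize(list(col)) for col in zip(*applicants.values())]
scores = {name: sum(col[i] for col in columns) for i, name in enumerate(names)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # strongest candidate first
```

The surprising empirical finding was that even this, with no fitted weights at all, tends to beat expert judgment, because it applies the same rule consistently to every case.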
Given that a majority of people (and doctors) fail even the simplest probabilistic reasoning tests, this is not particularly surprising. [2]
> We know from several studies that physicians, college students (Eddy, 1982), and staff at Harvard Medical School (Casscells, Schoenberger, & Grayboys, 1978) all have equally great difficulties with this and similar medical disease problems. For instance, Eddy (1982) reported that 95 out of 100 physicians estimated the posterior probability p(cancer|positive) to be between 70% and 80%, rather than 7.8%.
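For anyone who wants to see where the 7.8% comes from, it's a one-line Bayes'-rule calculation. The figures below are the commonly cited parameters of the mammography problem (assumed here, since the quote doesn't spell them out): a 1% base rate, 80% sensitivity, and a 9.6% false-positive rate.

```python
# Bayes' rule on the mammography problem from Eddy (1982).
p_cancer = 0.01              # P(cancer): prevalence in screened population
p_pos_given_cancer = 0.8     # P(positive | cancer): sensitivity
p_pos_given_healthy = 0.096  # P(positive | no cancer): false-positive rate

# Total probability of a positive test, over both groups.
p_positive = (p_cancer * p_pos_given_cancer
              + (1 - p_cancer) * p_pos_given_healthy)

# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_positive
print(round(p_cancer_given_pos, 3))  # 0.078, i.e. ~7.8%, not 70-80%
```

The intuition failure is neglecting the base rate: healthy people vastly outnumber sick ones, so even a modest false-positive rate produces far more false alarms than true positives.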
The first documented success was the system MYCIN in the mid-1970s, which beat expert human performance on diagnosing blood infections: http://en.wikipedia.org/wiki/Mycin
I can't seem to find the paper I'm thinking of in some quick Google Scholar searching, but I believe there was a follow-up article that looked into why such a fairly simple system was able to beat humans when it clearly lacked the human experts' range of expertise (and was entirely missing information on some real conditions). If I'm remembering correctly, it concluded that the win was almost entirely due to one specific failing displayed by humans (even expert specialists) but not shared by computers: very bad intuition for conditional probabilities.