It’s estimated that about 12 million people in the U.S. are misdiagnosed in outpatient care every year.
And that’s a very conservative estimate, according to Harvard T.H. Chan School of Public Health’s Michael Barnett.
Now, a new study led by Barnett, assistant professor of health policy and management, suggests that pooling the diagnoses of multiple physicians into ranked lists, facilitated by online tools, could help improve diagnostic accuracy. In resource-rich settings, the use of smartphones and the internet could potentially enable group diagnoses in near real-time, say the authors. But even in low-resource settings where paper and pencil are the norm, diagnostic accuracy could potentially be improved by using collective intelligence.
The study was published in JAMA Network Open.
For centuries the prevailing model of diagnosis has been for an individual physician to assess a patient and arrive at a diagnosis. One exception is in hospitals, where teams routinely meet to discuss cases. But although collaborative, team-based diagnosis is widely considered superior to individual diagnosis, there has been little evidence to prove that it is.
To test whether combining multiple individuals’ diagnoses, or a “collective intelligence” approach, could improve diagnostic accuracy, Barnett, senior author David Bates, professor in the Department of Health Policy and Management, and colleagues analyzed data from the Human Diagnosis Project (Human Dx), a large online database through which physicians and medical trainees solve user-submitted cases. Participants in the Human Dx community are able to create cases from their own clinical practice with information such as a patient’s medical history, physical exam, and diagnostic test results. Respondents submit a ranked list of possible diagnoses for a case and learn from the Human Dx platform how their answers compared to the final diagnosis.
The new study — the largest to date of “collective intelligence” in medicine — included more than 2,000 physicians and trainees solving more than 1,500 clinical cases. The researchers compared the accuracy of individual physicians or trainees solving cases to the accuracy of pooling together multiple physicians’ diagnoses and picking the highest-ranked collective diagnoses.
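The paper does not spell out the exact aggregation rule, but the idea of pooling several ranked diagnosis lists into one collective ranking can be sketched with a simple Borda-style scoring scheme, in which a diagnosis earns more points the higher it appears on each individual list. The function name, the scoring rule, and the example diagnoses below are illustrative assumptions, not the study's actual method.

```python
from collections import defaultdict

def pool_diagnoses(ranked_lists, top_n=3):
    """Combine several ranked diagnosis lists into one collective ranking.

    Uses a simple Borda-style score (an assumption for illustration):
    first place on a list of length L scores L points, second place
    L-1 points, and so on. Higher total score ranks higher.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for position, diagnosis in enumerate(ranking):
            scores[diagnosis] += len(ranking) - position
    # Sort diagnoses by descending collective score.
    collective = sorted(scores, key=scores.get, reverse=True)
    return collective[:top_n]

# Three hypothetical physicians each submit a ranked list for one case.
physician_lists = [
    ["pulmonary embolism", "pneumonia", "pericarditis"],
    ["pneumonia", "pulmonary embolism", "bronchitis"],
    ["pulmonary embolism", "pericarditis", "pneumonia"],
]

print(pool_diagnoses(physician_lists))
# → ['pulmonary embolism', 'pneumonia', 'pericarditis']
```

A diagnosis ranked highly by several physicians outscores one ranked first by only a single physician, which is the intuition behind picking the highest-ranked collective diagnoses.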
The study found that combining multiple diagnoses into a ranked list outperformed individual accuracy even in groups as small as two: groups of two were 75.1 percent accurate, versus 62.5 percent for individuals, and accuracy kept rising with group size, reaching 85.6 percent for groups of nine. The pattern held across a broad range of medical cases and common symptoms such as chest pain or fever. “The magnitude of the increased accuracy with even small teams was surprising,” said Barnett.
The study also found that groups outperformed individual specialists even when solving cases matched to the specialist’s area of expertise.
The findings suggest that virtual team-based diagnosis could be an important new tool to tackle the difficult challenge of misdiagnosis, Barnett said. In the U.S. and similar settings, the ability to gather collective intelligence about diagnoses, either online or through another technology, could be a big help in the busy medical world because it could provide superior results with little coordination. “It could also be a cheap way to improve diagnostic accuracy even in low-resource settings, where diagnostic expertise can be hard to come by,” he said.