July 27, 2017
When something is wrong, we go to the doctor and begin what is ideally a two-step process: diagnosis and therapy. Artificial intelligence (AI) will transform both aspects of health care by adding powerful new tools to the doctor’s bag.
A range of diverse, compelling research projects on AI-driven diagnosis is underway. For example, an international community (including Google’s Brain project) competes in an annual challenge to correctly diagnose breast cancer in 400 expert-labeled microscopic images of biopsy samples. The best result so far? About 99 percent of the slides were identified correctly by a project from Harvard and the Massachusetts Institute of Technology.
Another example is Google’s DeepMind project, which has achieved 90 to 97 percent accuracy in diagnosing diabetic retinopathy. Importantly, the algorithm’s training required a huge medical effort: labeling the correct diagnosis for 128,000 retinal images.
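To make that labeling effort concrete, here is a minimal sketch of the supervised-learning setup such diagnostic projects rely on: expert-labeled images grouped by diagnosis, a standard convolutional network fine-tuned on them, and accuracy measured against held-out labels. The folder names, image size, and network choice below are illustrative assumptions, not details of the Google or Harvard/MIT systems.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each subfolder of train/ and val/ holds images for one expert-assigned diagnosis class.
train_set = datasets.ImageFolder("retina/train", transform=preprocess)
val_set = datasets.ImageFolder("retina/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# Start from a network pretrained on everyday images and replace its final layer.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # brief fine-tuning loop
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss_fn(model(images), labels).backward()
        optimizer.step()

# Accuracy on held-out, expert-labeled images is the figure these challenges report.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
print(f"validation accuracy: {correct / total:.3f}")

The point of the sketch is that every one of those labels comes from a clinician looking at an image, which is exactly the expensive step discussed next.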
Asking the right questions
In 2017, with today’s huge computational resources and the growing availability of significant training sets, such algorithms perform within the variability of human medical experts. Before this capability has a broad impact, however, questions remain. Can such automated tools be deployed in a way that truly increases access to medical care for patients? Can they be shown to deliver better outcomes? Can they improve our knowledge of which procedures work? These are hurdles that any new medical innovation must eventually clear.
Experts’ time is expensive, and when they are labeling training data for computers, they are not using their skills on real patients. A Seattle startup called C-SATS is using crowd workers to evaluate surgical skill in videos of student operations, a method they validated as equivalent to ratings by expert surgeons. This could be a scalable way to generate the huge training data sets machine learning needs to eventually automate the evaluation.
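A minimal sketch of the validation idea behind that claim, using invented numbers rather than C-SATS data: aggregate several crowd-worker ratings per video and check how closely the consensus tracks an expert surgeon’s scores (the correlation helper requires Python 3.10 or later).

from statistics import correlation, mean

# Skill ratings on a 1-5 scale; each video was rated by several crowd workers and one expert.
crowd_ratings = {
    "video_01": [4, 5, 4, 4],
    "video_02": [2, 3, 2, 2],
    "video_03": [5, 5, 4, 5],
    "video_04": [3, 3, 4, 3],
}
expert_scores = {"video_01": 4.5, "video_02": 2.0, "video_03": 5.0, "video_04": 3.5}

videos = sorted(crowd_ratings)
crowd_consensus = [mean(crowd_ratings[v]) for v in videos]
expert = [expert_scores[v] for v in videos]

# A high Pearson correlation is one way to argue the crowd consensus matches expert judgment.
print(f"crowd-expert correlation: {correlation(crowd_consensus, expert):.2f}")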
Integrating algorithms
Once you are diagnosed, your treatment will also be augmented with AI. Recent National Institutes of Health-sponsored work in our laboratory at the University of Washington, using the Raven-II research system from Applied Dexterity, is showing how surgeons’ finely honed human skills can be augmented by AI while preserving their ability to exercise experience and judgment. AI algorithms such as behavior trees from video games are being adapted to medicine and extended to allow the surgeon to grant a degree of trust to the robot. As a result, tumor-margin cells carrying faint fluorescent labels, invisible to the surgeon, can be treated to prevent recurrence. Other groups, such as the University of California, Berkeley; the University of California, Los Angeles; and the Sheikh Zayed Institute for Pediatric Surgical Innovation, have similar projects in the works. All face rigorous regulatory processes before treating patients.
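As a rough illustration of the behavior-tree idea, the sketch below gates an autonomous subtask behind a surgeon-set trust level and falls back to manual control otherwise. The node types, trust scale, and task names are assumptions for illustration, not the Raven-II software.

from dataclasses import dataclass, field
from typing import Callable, List

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

@dataclass
class Selector:
    """Tries children in order; succeeds on the first child that succeeds."""
    children: List[Callable[[], str]] = field(default_factory=list)
    def __call__(self) -> str:
        return next((SUCCESS for child in self.children if child() == SUCCESS), FAILURE)

@dataclass
class Sequence:
    """Runs children in order; fails as soon as one fails."""
    children: List[Callable[[], str]] = field(default_factory=list)
    def __call__(self) -> str:
        return SUCCESS if all(child() == SUCCESS for child in self.children) else FAILURE

def surgeon_trust_at_least(level: int, state: dict) -> Callable[[], str]:
    # Condition node: the surgeon explicitly sets state["trust"] (0 = manual control only).
    return lambda: SUCCESS if state["trust"] >= level else FAILURE

def act(name: str) -> Callable[[], str]:
    # Action node stub: a real system would command the robot here.
    def run() -> str:
        print(f"executing: {name}")
        return SUCCESS
    return run

state = {"trust": 2}
tree = Selector([
    Sequence([surgeon_trust_at_least(2, state), act("autonomously treat labeled margin cell")]),
    act("hand control back to surgeon"),
])
tree()  # with trust=2, the autonomous branch runs; lower the trust level to force the fallback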
AI can help plan difficult surgery, such as on the pituitary gland, located roughly in the center of the skull. Using multi-factorial optimization, an algorithm also developed at the University of Washington searches the patient’s 3-D scan for the safest and least disruptive pathway to the tumor through the face, eyes, nose or skull.
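A toy sketch of that kind of multi-factor scoring: each candidate corridor is reduced to a few risk measures, combined into a single weighted cost, and the lowest-cost path is selected. The candidate paths, risk measures, and weights are hypothetical, not the University of Washington planner.

from dataclasses import dataclass

@dataclass
class CandidatePath:
    name: str
    length_mm: float                 # distance traversed through tissue
    min_vessel_clearance_mm: float   # closest approach to a major vessel
    bone_removed_mm3: float          # bone that must be drilled away

WEIGHTS = {"length": 1.0, "vessel": 50.0, "bone": 0.01}  # assumed trade-off weights

def cost(path: CandidatePath) -> float:
    # Longer paths, tighter vessel clearance, and more bone removal all add cost.
    vessel_penalty = 1.0 / max(path.min_vessel_clearance_mm, 0.1)
    return (WEIGHTS["length"] * path.length_mm
            + WEIGHTS["vessel"] * vessel_penalty
            + WEIGHTS["bone"] * path.bone_removed_mm3)

candidates = [
    CandidatePath("transnasal", length_mm=95, min_vessel_clearance_mm=3.0, bone_removed_mm3=400),
    CandidatePath("transcranial", length_mm=80, min_vessel_clearance_mm=1.2, bone_removed_mm3=2500),
    CandidatePath("transorbital", length_mm=70, min_vessel_clearance_mm=2.0, bone_removed_mm3=900),
]

best = min(candidates, key=cost)
print(f"least disruptive candidate: {best.name} (cost {cost(best):.1f})")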
Before you are sick and after you are discharged from the hospital, AI will deeply analyze big data to monitor your recovery and overall well-being. Data collected with the permission of individual patients, as in the 10,000-subject Baseline study conducted by Google’s Verily organization, will help validate the effectiveness of therapies and find statistically valid predictors based on everything from your DNA to your walking and sleep patterns. The goal is to catch cancer before it causes noticeable symptoms, when it can be more easily cured.
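As a loose illustration of what a statistically valid predictor means in practice, the sketch below fits a simple model relating made-up activity, sleep, and genetic-risk features to a later diagnosis and evaluates it on held-out subjects. The data and features are synthetic placeholders, not Baseline study variables.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(7000, 2000, n),  # average daily step count
    rng.normal(7.0, 1.2, n),    # average nightly sleep hours
    rng.normal(0.0, 1.0, n),    # standardized genetic risk score
])
# Synthetic outcome: risk rises with the genetic score and falls with activity.
p = 1 / (1 + np.exp(-(0.8 * X[:, 2] - 0.0003 * (X[:, 0] - 7000) - 2.0)))
y = rng.binomial(1, p)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Discrimination on held-out subjects is one test of whether a predictor is statistically valid.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")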
There is little doubt that AI’s current rapid pace will revolutionize health care, even if only a fraction of today’s promising results bear fruit.
Blake Hannaford, IEEE Fellow and Professor of Electrical Engineering and Biorobotics Laboratory Director, University of Washington