Three technological developments under the umbrella of artificial intelligence (AI) are synergistically changing the practice of medicine: big data, machine learning, and robotics. Individually, they are already significant forces; together, they reinforce each other. Although it may take years, these ideas are likely to lead to disruptive changes in all medical specialties.
“In the next 10 years, we’ll see diagnostics as well as medicine change. And that will come from both medicine and devices, and the information systems that support them.”
– Jonathan Rothberg; CEO, Butterfly Network; Adjunct Professor, Yale School of Medicine
The common denominator of these technologies is “big data”: our increasing ability to collect, store, and analyze information about things previously thought of as “non-measurable.”
For example, video data, the binary description of the pictures on our screens, can be stored and automatically analyzed to determine what a video is about, to identify specific people in it, and even to recognize the emotions they are expressing. As doctors, we rely strongly on what we observe. The increasing ability of computers to accurately analyze video and photographic data will eventually present new opportunities — for example, faster, more accurate melanoma detection.
Similarly, auditory data can be collected and analyzed across the acoustic spectrum. This allows recognition of speech content as well as “acoustic fingerprints” not audible to the human ear. Industries outside of medicine are already taking advantage of novel ways to analyze acoustic data, such as identifying the location where a gun was fired, as well as the type of ammunition used. This technology is already deployed in cities across the United States and, going forward, may provide ways to reduce gun injuries and deaths.
Medicine is increasingly using acoustic technology in the form of ultrasound imaging. A fascinating example is a startup technology company that is designing an ultrasound system with AI to automate the diagnosis of certain conditions. When integrated with telemedicine, this platform could expand global access to diagnostic imaging, and provide quality care in remote clinics that don't otherwise have access to a trained technician.
Making Machines Smarter
"I’m convinced that if it’s not already the world’s best diagnostician, it will be soon."
– Andrew McAfee, MIT, on the IBM Watson supercomputer
The huge increase in the amount of data we collect, including data from our vision, hearing, and other senses, will create new problems. Namely, once enormous sets of data have been collected, what do we do with them? How do we write programs to analyze data when the data in question are far too complex for us to understand?
That’s where machine learning makes its impact. It gives us the ability to examine data sets on a scale so large that our human brains struggle to create algorithms to make sense of them. Machine learning allows computers to improve their own understanding of data sets, find interconnections among discrete data, and make predictions and recommendations based on the connections within those immense data sets.
I want to emphasize a point that surprised me. Machine learning is exactly what it says. The machine (i.e., the computer program) is teaching itself how to analyze the data and deal with complex situations for which there may be no “right” answer.
In other words, machine learning is not defined as getting the program to follow a complex algorithm written by humans. It is having the computer write its own program based on its ability to examine, trial, and iterate at amazing speed. Although humans are responsible for the “machine’s” ability to learn, the improvements that occur happen at a pace and scale that only computers can achieve.
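The trial-and-iterate loop described above can be sketched in a few lines of code. This is a minimal, purely illustrative example with hypothetical data: the program is never told the rule linking input to output; it repeatedly tries a guess, measures its error, and nudges its own parameter, converging on the rule far faster than a human could by hand.

```python
# A minimal sketch of machine learning's trial-and-iterate loop.
# Hypothetical observations generated by a hidden rule (output = 3 * input),
# which the program must discover for itself.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]

weight = 0.0          # the program's initial guess at the rule
learning_rate = 0.01  # how large each corrective step is

for step in range(1000):                      # iterate at machine speed
    for x, y in data:
        prediction = weight * x               # trial: apply the current guess
        error = prediction - y                # compare with the observed outcome
        weight -= learning_rate * error * x   # correct the guess slightly

print(round(weight, 2))  # the learned rule: approximately 3.0
```

No human wrote the rule “multiply by 3” into the program; it emerged from thousands of small corrections — the same principle, at vastly greater scale, that underlies the medical applications discussed here.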
Machine learning is already being put to use in playfully creative applications such as recipe design and games. But, more importantly, it is increasingly being used in fields such as medicine and transportation. Physicians will eventually need to know how to work effectively with machine learning applications so that we can continue to provide the best care for our patients.
Ramping Up Robotics
“Autonomous robotic surgery—removing the surgeon’s hands—promises enhanced efficacy, safety, and improved access to optimized surgical techniques.” – Shademan et al., Science Translational Medicine, 4 May 2016: Vol. 8, Issue 337, pp. 337ra64.
No matter how far artificial intelligence advances, humans have always held one advantage: we have hands connected to our minds that give us the ability to touch and manipulate our surroundings. When it comes to interacting with the environment, we win. However, with advancements in robotics, that may be changing.
A recent article in The Economist describes the Smart Tissue Autonomous Robot, or STAR, an autonomous surgical robot created at Children’s National Health System in Washington, D.C. In lab trials, the STAR has been trained to use haptic feedback (touch data) along with visual and other inputs to re-anastomose severed pig intestines…all by itself.
This technique stands in contrast to the robots currently used in surgery, which are controlled by a surgeon, usually seated 10 feet away from the patient.
It will still be quite a while before we see any autonomous surgical robots in hospitals or clinics. Although the STAR successfully reconnected the severed pig intestines, the entire process was set up by humans, allowing the robot to successfully complete a very specific and limited task.
Anesthesiologists have already faced a similar development: the Sedasys machine, developed by Johnson & Johnson. Sedasys was an automated system for delivering sedation to patients during endoscopies. The manufacturer claimed that Sedasys could measure various patient vital signs and adjust sedation delivery, keeping patients comfortable for the procedure without deepening the level of sedation into general anesthesia.
The American Society of Anesthesiologists opposed the approval of Sedasys by the FDA, arguing that machine intelligence would be ill-equipped to make nuanced clinical decisions or cope with emergencies. Some hospitals using Sedasys reported cost savings and improved efficiencies. However, Sedasys may ultimately have been ahead of its time. The machine was withdrawn from the market earlier this year because of weak sales.
Bracing for Disruption?
Despite the removal of Sedasys from the market, it would be a mistake to dismiss it or other nascent technologies. Disruptive innovation depends on the incumbents ignoring technological upstarts as “not ready for prime time.” This gives new technologies time and space to become effective. Even more importantly, it gives new companies time to develop effective business models.
The combination of innovative technologies and innovative business models has led to disruption in other industries. The practice of medicine and the business of healthcare are not immune.
Whether we describe AI in terms of potential or hype, or consider its arrival to be wonderful or catastrophic, reality is likely to be somewhere in between. The technologies that are being created today have potential that is mind-boggling, but they are not a guaranteed panacea for patient care.
A significant amount of trial and error lies ahead before we can reliably and easily use AI in medicine. However, based on the trajectory and pace of its development, it seems inevitable that AI decision-support tools will be part of our patient care, especially for those of us who are still decades away from retirement.
Despite the justified concerns and appropriate skepticism about AI in medicine, I remain cautiously optimistic. I believe the advances in big data, machine learning, and robotics will allow us to practice in ways we never expected, and to treat patients more effectively than we ever thought possible. Which, in my opinion, makes big data a big deal.
This article was originally published in the CEP America blog on June 22, 2016.