Alphabet, through Verily Life Sciences, is invested in improving healthcare by applying technology and data science. Google’s in-house Brain team is as well, with the machine learning division running a pilot study that applies existing voice recognition technology to transcribing medical conversations.
Google notes that much of a modern doctor’s day is spent managing records and adding notes to patient case files:
Unfortunately, physicians routinely spend more of their time on documentation than on what they love most: caring for patients. Part of the reason is that doctors spend roughly 6 hours of an 11-hour workday on documentation in Electronic Health Records (EHR) systems.1 Consequently, one study found that more than half of surveyed doctors report at least one symptom of burnout.2
This work is tedious but important, as “good documentation helps produce good clinical care by communicating a doctor’s thinking, their issues, and their plan to the rest of the team.”
Building off the rise of medical scribes, who listen to patient-doctor conversations and take notes, Google is investigating the use of the voice recognition technology already available in Assistant, Home, and Translate.
Google specifies that it has been possible to leverage Automatic Speech Recognition (ASR) models to transcribe medical conversations. Its work has already explored solutions that record both the doctor and the patient.
Furthermore, the technology is not limited to predictable medical terminology, but supports “everything from weather to complicated medical diagnosis.”
Google is currently running a pilot study with Stanford University to investigate “what kinds of clinically relevant information can be extracted from medical conversations to help physicians reduce their interactions” with medical systems. On the privacy front, Google notes that patients must provide consent and that data will be de-identified to protect their privacy.
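To make the de-identification idea concrete, here is a minimal, hypothetical sketch of pattern-based redaction on a transcript. This is not Google’s or Stanford’s method: production medical de-identification (e.g., under the HIPAA Safe Harbor standard) covers many more identifier categories and relies on far more robust NLP. All names, patterns, and the sample text below are illustrative assumptions.

```python
import re

# Hypothetical identifier patterns; a real de-identification pipeline would
# handle many more categories (addresses, record numbers, ages, etc.).
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "NAME": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b"),
}

def deidentify(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Illustrative transcript line (fabricated for the example).
transcript = "Dr. Smith saw the patient on 03/14/2017; callback at 650-555-0199."
print(deidentify(transcript))
# → [NAME] saw the patient on [DATE]; callback at [PHONE].
```

The placeholder tags preserve the conversational structure that downstream note-generation models would need, while stripping the direct identifiers.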