ComputerWorld's Juergen Fritsch writes cogently on the failure of EHRs to transform physician clinical documentation from unstructured narrative text to structured data:
Not surprisingly then, at least 60 percent of all clinical documentation is still captured in unstructured narrative form – for example via dictation/transcription or a front-end speech recognition tool – and simply stored as text blobs in the EHR. While this keeps physicians productive and at the same time results in more meaningful and higher quality documentation, such unstructured documentation has not been very accessible to electronic systems in the past.
National Coordinator for Health IT Farzad Mostashari tweeted a link to the article:
Yes, integration of NLP (with voice front-end) into EHRs is the future (also helps w ICD10) bit.ly/PnOZTC
— Farzad Mostashari (@Farzad_ONC) September 19, 2012
It is refreshing that Stage 2 of Meaningful Use recognizes the need for structured clinical documentation. It is disappointing, however, that thought leaders in the US still treat voice recognition as the only future for physician user interfaces. In practice, voice recognition can be very effective, and it is true that natural language processing (NLP) can extract meaningful data from free-form text after a voice recognition engine has transcribed it. But it doesn't have to be this hard.
In Canada, the province of Ontario has collected structured pathology reports on almost every prevalent type of cancer for years. Ontario needed consistent, standard data to learn more and make decisions about cancer care. It did not implement voice recognition, let pathologists dictate at will, and then attempt to use artificial intelligence to translate the resulting narrative into the data elements recommended by the College of American Pathologists and Cancer Care Ontario. Instead, it issued a data standard, certified various vendors as effective implementers of that standard, and let pathologists choose among structured clinical reporting tools. More than half of the hospitals chose mTuitive's structured reporting tool, xPert for Pathology. Physicians use a simple user interface to complete standardized checklists that result in apples-to-apples comparisons for cancers across the province. By comparison, using sophisticated artificial intelligence just to arrive at a standard set of data elements on a checklist looks like a Rube Goldberg machine.
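To make the "apples-to-apples" point concrete, here is a minimal sketch in Python of what a synoptic report looks like when it is captured as discrete, coded fields rather than narrative text. The field names and values are illustrative stand-ins, not the actual CAP or Cancer Care Ontario checklist items.

```python
from dataclasses import dataclass

# Illustrative only: these fields are simplified stand-ins for the discrete
# data elements a CAP/CCO-style checklist actually defines.
@dataclass
class SynopticBreastReport:
    histologic_type: str        # chosen from a controlled list, not free text
    tumour_size_mm: int         # a number, directly comparable across hospitals
    margin_status: str          # e.g. "negative" or "positive"
    lymph_nodes_examined: int
    lymph_nodes_positive: int

# Because every report carries the same coded fields, province-wide questions
# ("what fraction of resections had positive margins?") become simple queries
# instead of natural language processing over dictated narrative.
reports = [
    SynopticBreastReport("invasive ductal carcinoma", 22, "negative", 12, 1),
    SynopticBreastReport("invasive lobular carcinoma", 35, "positive", 8, 3),
]
positive_margin_rate = sum(r.margin_status == "positive" for r in reports) / len(reports)
print(f"Positive margin rate: {positive_margin_rate:.0%}")
```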
Fritsch's claim that structured reporting modules built into EHRs "put the burden of creating structured patient data on the physicians by making them point-and-click through templates full of checkboxes, radio buttons and drop-down menus, thereby slowing them down significantly" is true of most structured reporting tools built into EHRs today. It does not follow, however, that every user interface built on mouse-and-keyboard or touch is hard to use. He is also correct that many EMRs "encourage the use of one-size-fits-all forms and copy-and-paste behavior that often results in cluttered, lower quality clinical documentation that fails to represent the subtleties and uniqueness of each individual patient's story." With intelligent defaults that remember a physician's most common answers, while keeping additional answers and more descriptive terms within easy reach, you can have the best of both worlds: speedy entry of structured documentation that is also personalized and specific to the patient encounter.
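One way to read "intelligent defaults" is a per-physician memory of the most frequent answer for each checklist field, pre-selected but always overridable. The sketch below is a hypothetical illustration of that idea, not a description of any particular product's implementation.

```python
from collections import Counter, defaultdict

class DefaultMemory:
    """Remembers, per physician and per field, which answer is chosen most often.

    Hypothetical sketch: the point is only that a likely answer can be
    pre-selected while the other choices stay one click away.
    """

    def __init__(self):
        self._counts = defaultdict(Counter)  # (physician, field) -> answer counts

    def record(self, physician: str, field: str, answer: str) -> None:
        self._counts[(physician, field)][answer] += 1

    def default_for(self, physician: str, field: str) -> str | None:
        counts = self._counts.get((physician, field))
        if not counts:
            return None  # no history yet; present the field unanswered
        answer, _ = counts.most_common(1)[0]
        return answer

memory = DefaultMemory()
memory.record("dr_lee", "margin_status", "negative")
memory.record("dr_lee", "margin_status", "negative")
memory.record("dr_lee", "margin_status", "positive")

# "negative" is pre-selected the next time this physician opens the form,
# but "positive" remains immediately available for the atypical case.
print(memory.default_for("dr_lee", "margin_status"))  # -> negative
```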
Applications like mTuitive's xPert for Pathology and OpNote bring fast, easy-to-use interfaces to pathology and operative reporting. Rather than forcing physicians to speak (and remember!) every data element in a highly detailed clinical protocol, a simple, checklist-style user interface replaces pathology and operative report dictation. By focusing on areas of deep domain expertise (pathology and the surgical specialties) and applying proven usability design methods, smaller companies like mTuitive can create extremely usable point-and-click (and now tap-and-swipe) interfaces that are far faster and easier to use than voice recognition, with no need for Watson on the back end to translate speech into simple findings. We embrace the ideas behind Atul Gawande's The Checklist Manifesto and extend them to clinical documentation. It is a lot easier to report 50 pathologic findings correctly when you follow a standard checklist.
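The completeness argument can also be stated in code: a checklist is simply an explicit list of required findings, so software can tell the author what is still missing before sign-out. The template and field names below are made up for illustration; a real protocol would supply the required items and can run to dozens of findings.

```python
# Hypothetical template: in practice the required items would come from the
# applicable reporting protocol, not be hard-coded like this.
REQUIRED_FINDINGS = [
    "specimen_type", "procedure", "histologic_type", "histologic_grade",
    "tumour_size_mm", "margin_status", "lymphovascular_invasion",
    "lymph_nodes_examined", "lymph_nodes_positive",
]

def missing_findings(report: dict) -> list[str]:
    """Return the required findings that are absent or blank in a draft report."""
    return [f for f in REQUIRED_FINDINGS if report.get(f) in (None, "")]

draft = {
    "specimen_type": "lumpectomy",
    "histologic_type": "invasive ductal carcinoma",
    "tumour_size_mm": 22,
}

# A dictated narrative offers no such safety net; the checklist does.
print(missing_findings(draft))
```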
And as for that ICD-10 conversion? Synoptic, structured reporting helps with that too.
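Here is a short sketch of why discrete fields ease an ICD-10 conversion: when site, laterality, and behaviour are captured as coded answers, code assignment can become a lookup rather than an interpretation of prose. The mapping table is a tiny illustrative fragment, not a complete or authoritative crosswalk.

```python
# Illustrative fragment only; a real crosswalk is far larger and is maintained
# against the official ICD-10-CM code set.
ICD10_LOOKUP = {
    ("breast", "left", "malignant"): "C50.912",
    ("breast", "right", "malignant"): "C50.911",
}

def icd10_for(site: str, laterality: str, behaviour: str) -> str | None:
    """Look up a code from structured findings; None means 'needs human review'."""
    return ICD10_LOOKUP.get((site, laterality, behaviour))

print(icd10_for("breast", "left", "malignant"))  # -> C50.912
```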