Tools & Strategies News

Natural Language Processing Tool Predicts Severe Maternal Morbidity

An automated natural language processing tool that uses clinical notes and EHR data performed as well as a previously validated, manual risk stratification tool for severe maternal morbidity.


By Shania Kennedy

A new study published this month in JAMA Network Open found that an automated natural language processing (NLP) tool relying on clinical notes and EHR data achieved similar predictive performance to a manual, previously validated tool for severe maternal morbidity (SMM) risk stratification.

Maternal health has become a significant population health concern, especially in light of growing maternal morbidity and mortality in the US. The Commonwealth Fund reported in a November 2020 issue brief that the US has the highest maternal mortality rate among similarly developed countries.

The Centers for Disease Control and Prevention (CDC) indicate that approximately 700 people die annually in the US during pregnancy or in the following year, and an additional 50,000 have unexpected complications during labor and delivery that result in short- or long-term adverse health impacts. Many of these complications constitute severe maternal morbidity, which the study describes as a public health priority in the US.

The study further explains that maternal morbidity is typically addressed via risk stratification tools routinely used in obstetrics, which help care teams assess and communicate risks associated with delivery for a particular patient. However, the study authors highlight that these tools rely on manual user input, which can burden care teams.

To improve SMM risk stratification, the researchers set out to develop an automated NLP method capable of analyzing clinical notes and EHR data to forecast patient risk.

NLP is the arm of artificial intelligence (AI) that enables computers to “understand” human language in the form of voice or text data by leveraging machine learning or deep learning and computational linguistics. In clinical or research settings, NLP is used to extract relevant data from free-text sources like EHRs and clinical notes, which can be difficult to obtain and use otherwise.  
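At its simplest, this kind of extraction can be sketched as matching a fixed vocabulary of clinical terms against free-text notes. The term list and note below are illustrative, not drawn from the study:

```python
import re

# Hypothetical mini-vocabulary of comorbidity terms; the study's actual
# vocabulary is far larger, so these entries are purely illustrative.
COMORBIDITY_TERMS = {"preeclampsia", "hypertension", "diabetes", "hemorrhage"}

def extract_terms(note: str) -> set:
    """Return vocabulary terms found in a free-text clinical note."""
    tokens = set(re.findall(r"[a-z]+", note.lower()))
    return tokens & COMORBIDITY_TERMS

note = "Pt with chronic hypertension; history of gestational diabetes."
print(sorted(extract_terms(note)))  # ['diabetes', 'hypertension']
```

Real clinical NLP pipelines go well beyond keyword matching (handling negation, abbreviations, and context), but the core task of surfacing structured signals from unstructured text is the same.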

To develop their tool, the research team gathered data from 19,794 patients receiving care at Brigham and Women’s Hospital and Massachusetts General Hospital between July 1, 2016, and Feb. 29, 2020. For a testing subset of 4,043 patients, they computed the Obstetric Comorbidity Index (OB-CMI), a previously validated, comorbidity-weighted score for stratifying SMM risk. They used data from the remaining 15,760 patients to train the tool. A total of 115 individuals in the testing cohort and 468 in the training cohort experienced SMM.

The researchers developed the tool using a vocabulary of 2,783 words drawn from the training-set records. Following training, the tool was evaluated on the held-out cohort for which OB-CMI scores had been computed, allowing a direct comparison with the previously validated risk-scoring tool. To compare the two, the researchers prioritized positive predictive value and model sensitivity for identifying individuals at the highest risk of SMM.
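The study does not detail its model architecture here, but a fixed vocabulary typically feeds a classifier as a bag-of-words representation. A minimal sketch, with a made-up five-word vocabulary standing in for the study's 2,783 words:

```python
# Illustrative vocabulary (the real model used 2,783 words, not listed here)
vocab = ["preeclampsia", "hypertension", "diabetes", "hemorrhage", "anemia"]
index = {w: i for i, w in enumerate(vocab)}

def featurize(tokens):
    """Binary bag-of-words vector over the fixed vocabulary."""
    vec = [0] * len(vocab)
    for t in tokens:
        if t in index:
            vec[index[t]] = 1
    return vec

x = featurize(["hypertension", "anemia", "cough"])
# x == [0, 1, 0, 0, 1]; words outside the vocabulary are ignored.
# A downstream classifier would map x to an SMM risk score.
```

This is an assumption about the general technique, not a description of the study's exact model.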

Overall, the researchers found that the area under the receiver operating characteristic curve of the NLP-based model was 0.76, comparable to that of the OB-CMI model. The two also performed similarly on sensitivity and positive predictive value: the NLP model achieved 28.7 percent sensitivity versus the OB-CMI’s 24.4 percent, and a positive predictive value of 19.4 percent versus the OB-CMI’s 17.6 percent.
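For readers unfamiliar with these metrics: sensitivity is the share of true SMM cases the model flags, while positive predictive value (PPV) is the share of flagged patients who truly experience SMM. Both follow directly from the confusion-matrix counts, shown here with toy labels rather than the study's data:

```python
def sensitivity_ppv(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tp / (tp + fp)

# Toy outcome labels and model flags (not the study data)
y_true = [1, 1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0, 0]
sens, ppv = sensitivity_ppv(y_true, y_pred)
# sens == 0.5 (2 of 4 true cases flagged); ppv == 2/3 (2 of 3 flags correct)
```

The low absolute PPV values reported in the study (under 20 percent for both tools) reflect how rare SMM is in the cohort, a common challenge when predicting infrequent outcomes.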

According to the study, these findings indicate that AI and NLP have the potential to improve risk stratification and patient care while reducing non-patient-facing tasks for care teams and clinical staff.

However, more research is needed to validate the NLP approach and determine its role in clinical practice, the researchers noted.