Identifying Heart Disease Risk Factors from Clinical Text

ODSC - Open Data Science
Oct 18, 2019

People die needlessly every single day from preventable heart attacks. The clues are hiding right within the notes doctors and clinicians take during routine health care visits. In this presentation, we uncover how these clues can be deciphered and converted into actionable insights using advanced NLP and Deep Learning techniques. Machine Learning and AI have come a long way in processing unstructured text, and this presentation showcases those advances.

[Related Article: ODSC East 2019: Major Applications of AI in Healthcare]

Why BERT for Identifying Heart Disease?

The field of Natural Language Processing had one of its biggest breakthroughs in October 2018 with the release of BERT (Bidirectional Encoder Representations from Transformers). BERT was revolutionary because it surpassed the previous state-of-the-art scores on major NLP tasks. It created as much excitement in the NLP world as ImageNet did in Computer Vision. This is the capability we wanted to apply to clinical text data to extract risk factors for a disease.

To demonstrate the capabilities of BERT, we used it both as a classifier and as an embedding in our NLP/Deep Learning models. An embedding is a mapping that converts text into vectors. The key advantage of using BERT was its power in understanding the context of a word, thanks to the bidirectional nature of the embedding itself.
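As a minimal sketch of what extracting such contextual vectors looks like, here is how per-token BERT embeddings can be pulled out with the Hugging Face transformers library (an illustrative choice; the presentation does not specify the exact tooling):

```python
import torch
from transformers import BertModel, BertTokenizer

# Load a pretrained BERT model and its matching tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

text = "Patient has a history of hypertension and is a current smoker."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per input token: (batch, num_tokens, hidden_size)
token_vectors = outputs.last_hidden_state
print(token_vectors.shape)
```

Because the encoder attends in both directions, the vector for each token already reflects the words around it.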

We used i2b2’s shared task dataset, which comprised clinical text data annotated by humans. With the premise that the embedding plays a key role in the performance of the model, we explored using BERT both as a dynamic (contextual) embedding and as a classifier. The data was available in XML format, with annotations, as shown below.
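As a rough illustration, annotated records of this kind can be read with Python's standard library. The element names below (TEXT, TAGS) and the file name are stand-ins, since the shared task's exact schema is not reproduced here:

```python
import xml.etree.ElementTree as ET

# Assumed layout: <root><TEXT>note text</TEXT><TAGS><TAG .../></TAGS></root>
tree = ET.parse("record.xml")  # illustrative file name
root = tree.getroot()

note_text = root.findtext("TEXT")  # the raw clinical note

# Each child of <TAGS> is one human annotation (a risk-factor mention)
for annotation in root.find("TAGS"):
    print(annotation.tag, annotation.attrib)
```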

We built several models using BERT, such as a token classifier, a sentence classifier, and ensemble models, on Cloud TPUs (GCP). In addition, we came up with a novel approach of stacking embeddings, sketched below.
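One way to reproduce the stacking idea is with the Flair library, which can concatenate BERT's contextual vectors with character-level vectors for every token. This is a sketch of the general technique under that assumption, not necessarily our implementation:

```python
from flair.data import Sentence
from flair.embeddings import (CharacterEmbeddings, StackedEmbeddings,
                              TransformerWordEmbeddings)

# Stack (concatenate) contextual BERT vectors with character-level vectors
stacked = StackedEmbeddings([
    TransformerWordEmbeddings("bert-base-uncased"),
    CharacterEmbeddings(),
])

sentence = Sentence("Patient reports chest pain and elevated LDL cholesterol.")
stacked.embed(sentence)

for token in sentence:
    # Each token now carries a single concatenated embedding vector
    print(token.text, token.embedding.shape)
```

The character-level component can help with misspellings and rare clinical terms that BERT's wordpiece vocabulary fragments.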

The BERT+Character stacked embedding model performed best among all the models we experimented with. Analyzing the results, we identified predictions that were accurate yet had been missed by the human annotators. The results also demonstrated the power of contextual embeddings: the model was able to identify risk factors based on the context in which the relevant text appeared.
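To make "contextual" concrete: the same surface word receives different vectors in different contexts. A small check (again using Hugging Face transformers as a stand-in) might look like this:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def vector_for(sentence, word):
    """Return the contextual vector of `word` inside `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    idx = tokens.index(word)  # assumes `word` survives as one wordpiece
    with torch.no_grad():
        return model(**enc).last_hidden_state[0, idx]

v1 = vector_for("the patient complains of a severe cold", "cold")
v2 = vector_for("the examination room felt cold last night", "cold")

# Same word, different context: cosine similarity is well below 1.0
print(torch.cosine_similarity(v1, v2, dim=0).item())
```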

More than 30 models were built on Cloud TPUs as well as on multiple GPU instances, one to predict each tag of interest.
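In code, that per-tag design amounts to a set of independent binary classifiers. The tag names below are an illustrative subset, not the task's full tag set:

```python
from transformers import BertForSequenceClassification

# Illustrative subset of risk-factor tags; the shared task defines more
TAGS = ["HYPERTENSION", "DIABETES", "SMOKER", "OBESITY"]

# One binary (tag present / tag absent) classifier per tag of interest
models = {
    tag: BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )
    for tag in TAGS
}
# Each classifier is then fine-tuned separately on examples for its tag.
```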

[Related Article: Machine Learning and Compression Systems in Communications and Healthcare]

Interested in understanding why dynamic embeddings are better, or what stacked embedding entails? Want to see where the model outperforms human annotators, or learn how to build models on Cloud TPUs? Join us for a highly informative session at ODSC Europe 2019.

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.
