State-of-the-art NLP Made Easy with AdaptNLP

Andrew and Brian are speakers for ODSC East 2020 this April 13–17 in Boston. Be sure to check out their talk, “State-of-the-art NLP Made Easy,” there!

Natural Language Processing (NLP) has advanced significantly since 2018, when ULMFiT and Google's BERT language model approached human-level performance on a range of use cases. Since then, models with similarly distinctive names such as XLM, GPT-2, XLNet, and ALBERT have been released in quick succession, each improving on its predecessors. While these state-of-the-art models can perform at or near human level on certain language-based tasks over large volumes of unstructured text, getting a handle on what to use, when to use it, and how to use it can be a challenge.

At Novetta, we explored what it would take to streamline the implementation of state-of-the-art models for different NLP tasks, so that practitioners can put them to use quickly.


We have developed an open-source framework, AdaptNLP, that lowers the barrier to entry for practitioners to use these advanced capabilities. AdaptNLP is built atop two open-source libraries: Transformers (from Hugging Face) and Flair (from Zalando Research). AdaptNLP enables users to fine-tune language models for text classification, question answering, entity extraction, and part-of-speech tagging.

To show how AdaptNLP can be put to use, we will address a Question Answering (QA) task using BERT. This task automates the answering of questions, posed by humans, against a corpus of text.

Using AdaptNLP starts with a Python pip install.
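Assuming a standard Python environment, that install is a single command (the package name below is as published on PyPI):

```shell
# Install AdaptNLP from PyPI; it pulls in Transformers and Flair as dependencies
pip install adaptnlp
```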

First, we import EasyQuestionAnswering which abstracts transformer-based Question Answering tasks to their most basic components.

We can now frame our question as a simple string. The context variable holds the source text that we want to search through for an answer. Because a question may have multiple valid answers, we specify how many results to return using top_n.

We now call predict_qa(), which defaults to a pre-trained BERT-based QA model, to determine what part of the corpus may be our answer, passing it the question, the context data, and the number of answers we would like to see. The results contain the text the model believes to be the answer, a probability score, and the location of that answer within the original corpus.

We can now take a look at best_answer to see the most relevant result or best_n_answers to see the number of answers that we previously specified.

Note: We have limited the example output to three results for brevity and to demonstrate variety.


These outputs can easily be integrated into user-built systems, since they provide text-based metadata such as the extracted answer text, start/end indices, and confidence scores. By standardizing the input and output data and the function calls, developers can use NLP algorithms without worrying about which model runs in the backend. Before AdaptNLP, we would individually integrate each newly released model and its pre-trained weights and then rebuild our NLP task pipeline around it, a time-consuming and repetitive process made worse by the rapid pace at which new NLP models are released. AdaptNLP overcomes this by providing a streamlined process that can leverage new models in existing workflows without having to overhaul code.

Using the latest transformer embeddings, AdaptNLP makes it easy to fine-tune and train state-of-the-art token classification (NER, POS, Chunk, Frame Tagging), sentiment classification, and question-answering models. We will be giving a hands-on workshop on using AdaptNLP with state-of-the-art models at ODSC East 2020 in Boston.

About the speakers/authors:

Brian Sacash is a Lead Machine Learning Engineer in Novetta’s Machine Learning Center of Excellence. He helps various organizations discover the best ways to extract value from data. His interests are in the areas of Natural Language Processing, Machine Learning, Big Data, and Statistical Methods. Brian holds a Master of Science in Quantitative Analysis from the University of Cincinnati and a Bachelor of Science in Physics from Ohio Northern University.

Andrew Chang is a Senior Machine Learning Engineer in Novetta's Machine Learning Center of Excellence. A graduate of Carnegie Mellon University, Andrew focuses on researching state-of-the-art machine learning models and rapidly prototyping ML technologies and solutions across the scope of customer problems. He has an interest in open-source projects and research in natural language processing, geometric deep learning, reinforcement learning, and computer vision. Andrew is the author and creator of AdaptNLP.


