State of the Art Natural Language Processing at Scale


The two-part presentation below, from the Spark+AI Summit 2018, is a deep dive into key design choices made in the NLP library for Apache Spark. The library natively extends the Spark ML pipeline APIs, enabling zero-copy, distributed, combined NLP, ML & DL pipelines that leverage all of Spark’s built-in optimizations. The library implements core NLP algorithms including lemmatization, part-of-speech tagging, dependency parsing, named entity recognition, spell checking, and sentiment detection.
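To illustrate the annotator-pipeline pattern described above, here is a toy sketch in plain Python. This is not the actual Spark NLP or Spark ML API; all class and method names below are invented for illustration. The key idea it shows is that each stage appends a new annotation column to the same record, so downstream stages consume upstream output without copying data:

```python
# Toy sketch of chained NLP annotator stages, mimicking the
# transform-pipeline pattern that Spark ML (and Spark NLP) follow.
# All names here are illustrative, not the real Spark NLP API.

class Tokenizer:
    def transform(self, row):
        row["tokens"] = row["text"].split()
        return row

class Lemmatizer:
    # A tiny lookup-based lemmatizer; real lemmatizers use
    # dictionaries or trained models.
    LEMMAS = {"running": "run", "cats": "cat"}

    def transform(self, row):
        row["lemmas"] = [self.LEMMAS.get(t.lower(), t.lower())
                         for t in row["tokens"]]
        return row

class Pipeline:
    def __init__(self, stages):
        self.stages = stages

    def transform(self, row):
        # Each stage adds a new annotation field to the same record,
        # so later NLP or ML stages read earlier output in place.
        for stage in self.stages:
            row = stage.transform(row)
        return row

pipeline = Pipeline([Tokenizer(), Lemmatizer()])
result = pipeline.transform({"text": "Cats running fast"})
print(result["lemmas"])  # ['cat', 'run', 'fast']
```

In the real library, the record is a distributed Spark DataFrame rather than a dict, which is what lets the same pattern run unchanged on a single machine or a cluster.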

With the dual goals of state-of-the-art performance and accuracy, the primary design challenges are:

  1. Using efficient caching, serialization & key-value stores to load large models (in particular very large neural networks) across many executors
  2. Ensuring fast execution on both single machine and cluster environments (with benchmarks)
  3. Providing simple, serializable, reproducible, optimized & unified NLP + ML + DL pipelines, since NLP pipelines are almost always part of a bigger machine learning or information retrieval workflow
  4. Providing simple extensibility APIs for deep learning training pipelines, since most real-world NLP problems require domain-specific models.

This talk will be of practical use to people using the Spark NLP library to build production-grade apps, as well as to anyone extending Spark ML and looking to make the most of it.

The presenters are Alex Thomas, a data scientist at Indeed, and David Talby, chief technology officer at Pacific AI. The slides are available HERE.
