Starting with the basics, this course teaches you how to choose among various text pre-processing techniques and how to select the best neural network architecture for your NLP problem.
Deep Learning for Natural Language Processing starts off by highlighting the basic building blocks of the natural language processing domain.
The course goes on to introduce the problems you can solve using state-of-the-art neural network models. It then delves into the various neural network architectures and their specific areas of application, helping you understand how to select the model best suited to your needs.
As you advance through this deep learning course, you’ll study convolutional, recurrent, and recursive neural networks, as well as long short-term memory (LSTM) networks. Understanding these networks will help you implement their models using Keras. In the later chapters, you will develop a trigger word detection application using NLP techniques such as attention models and beam search.
If you are an aspiring data scientist looking for an introduction to deep learning in the NLP domain, this is the course for you.
A good working knowledge of Python, linear algebra, and machine learning is a must.
By the end of this course, you will be able to:
- Understand various pre-processing techniques for deep learning problems
- Build a vector representation of text using word2vec and GloVe
- Create a named entity recognizer and part-of-speech tagger with Apache OpenNLP
- Build a machine translation model in Keras
- Develop a text generation application using LSTM
- Build a trigger word detection application using an attention model
Course attendance certificate issued by Semos Education