
Lecture 8: Recurrent Neural Networks and Language Models



Lecture 8 covers traditional language models, recurrent neural networks (RNNs), and RNN language models. It also reviews important training problems and tricks, RNNs for other sequence tasks, and bidirectional and deep RNNs.
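As a concrete taste of the lecture's core topic, here is a minimal sketch of an RNN language model in PyTorch. This is an illustrative assumption of the general architecture, not code from the lecture; the class name and layer sizes (RNNLanguageModel, embed_dim, hidden_dim) are hypothetical. At each step the model embeds a token, updates a hidden state, and predicts a distribution over the next word.

import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Illustrative sketch of an RNN language model (not the lecture's code)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # token id -> vector
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)            # hidden state -> logits

    def forward(self, tokens, hidden=None):
        x = self.embed(tokens)           # (batch, seq, embed_dim)
        h, hidden = self.rnn(x, hidden)  # (batch, seq, hidden_dim)
        return self.out(h), hidden       # logits over the vocabulary at each step

# Training objective: cross-entropy between the logits at position t
# and the true token at position t+1 (toy batch of random token ids).
model = RNNLanguageModel(vocab_size=10000)
tokens = torch.randint(0, 10000, (2, 20))
logits, _ = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 10000), tokens[:, 1:].reshape(-1))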

----------------------------------------

Natural Language Processing with Deep Learning

Instructors:
– Chris Manning
– Richard Socher

Natural language processing (NLP) is a key artificial intelligence technology for understanding complex human language. This lecture series provides a thorough introduction to cutting-edge research in deep learning applied to NLP, an approach that has recently achieved very high performance across many NLP tasks, including question answering and machine translation. It emphasizes how to implement, train, debug, visualize, and design neural network models, covering the main technologies of word vectors, feed-forward models, recurrent neural networks, recursive neural networks, convolutional neural networks, and recent models involving a memory component.

For additional learning opportunities please visit:
http://stanfordonline.stanford.edu/

Published 2017-04-03 by Stanford University School of Engineering. Duration: 1:18:03.
