
MIT 6.S191 (2023): Recurrent Neural Networks, Transformers, and Attention
MIT Introduction to Deep Learning 6.S191: Lecture 2
Recurrent Neural Networks
Lecturer: Ava Amini
2023 Edition

For all lectures, slides, and lab materials: http://introtodeeplearning.com

Lecture Outline
0:00 - Introduction
3:07 - Sequence modeling
5:09 - Neurons with recurrence
12:05 - Recurrent neural networks
13:47 - RNN intuition
15:03 - Unfolding RNNs
18:57 - RNNs from scratch
21:50 - Design criteria for sequential modeling
23:45 - Word prediction example
29:57 - Backpropagation through time
32:25 - Gradient issues
37:03 - Long short term memory (LSTM)
39:50 - RNN applications
44:50 - Attention fundamentals
48:10 - Intuition of attention
50:30 - Attention and search relationship
52:40 - Learning attention with neural networks
58:16 - Scaling attention and applications
1:02:02 - Summary

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!

Alexander Amini