NLP Course | For You
This is an extension to the (ML for) Natural Language Processing course I teach at the Yandex School of Data Analysis (YSDA) since fall 2018 (from 2022, in the Israel branch). For now, only part of the topics is likely to be covered here.
This new format of the course is designed for:
- convenience
Easy to find, learn or recap material (both standard and more advanced), and to try it in practice.
- clarity
Each part, from front to back, is a result of my care not only about what to say, but also how to say it and, especially, how to show something.
- you
I wanted to make these materials so that you (yes, you!) could study on your own, what you like, and at your own pace.
My main goal is to help you start your own very personal journey.
For you.
If you want to use the materials (e.g., figures) in your paper/report/whatnot and
to cite this course, you can do it using the following BibTeX:
@misc{voita2020nlpCourseForYou,
  title={{NLP} {C}ourse {F}or {Y}ou},
  url={https://lena-voita.github.io/nlp_course.html},
  author={Elena Voita},
  year={2020},
  month={Sep}
}
Lecture-blogs
which I tried to make:
- intuitive, clear and interesting;
- complete: full lecture and more;
- up-to-date with the field.
Bonus:
Seminars & Homeworks
For each topic, you can take notebooks from our 8.8k-☆ course repo.
From 2020, both PyTorch and TensorFlow!
Interactive parts & Exercises
Sometimes I ask you to go over "slides" visualizing some process, play with something, or just think.
Analysis and Interpretability
Since 2020, top NLP conferences (ACL, EMNLP) have had an "Analysis and Interpretability" area: one more confirmation that analysis is an integral part of NLP. Each lecture has a section with relevant results on the internal workings of models and methods.
Research Thinking
Learn to think like a research scientist:
- find flaws in an approach,
- think about why/when something can help,
- come up with ways to improve it,
- learn about previous attempts.
It is well known that you learn something more easily if you are not just given the answer right away, but think about it first. Even if you do not want to be a researcher, this is still a good way to learn things!
Demo: Research Card
Here I define the starting point: something you already know.
Then look at the possible answers.
?
Why can this or that be useful?
Possible answers
Here you will see some possible answers. This part is a motivation
to try a new approach: usually, this is what a research project begins with.
?
How can we use this to improve that model?
Existing solutions
Here I will summarize some previous attempts. You are not supposed to come up with
something exactly like this – remember, each paper usually takes the authors several
months of work. It is the habit of thinking about these things that counts: you have
several ideas, you try them; if they do not work, you think again. Eventually, something
will work – and that is what papers tell you about.
Have Fun!
Just fun.
Here you can see some NLP games related to a lecture topic.
Word Embeddings
- Distributional semantics
- Count-based (pre-neural) methods
- Word2Vec: learn vectors (toy sketch below)
- GloVe: count, then learn
- Evaluation: intrinsic vs extrinsic
Analysis and Interpretability
Bonus: Seminar & Homework
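A minimal sketch for the Word2Vec part of this topic, assuming the gensim library (my choice for illustration; the course notebooks may implement things differently):

```python
# Toy Word2Vec example with gensim (an assumption, not the course's own code).
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

# Skip-gram (sg=1) with tiny toy settings, just to show the API.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

# Intrinsic evaluation in miniature: nearest neighbours and cosine similarity.
print(model.wv.most_similar("cat", topn=3))
print(model.wv.similarity("cat", "dog"))
```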
Text Classification
- Intro and Datasets
- General Framework
- Classical Approaches: Naive Bayes, MaxEnt (Logistic Regression), SVM (toy sketch below)
- Neural Networks: RNNs and CNNs
Analysis and Interpretability
Bonus: Seminar & Homework
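A minimal sketch of a classical classifier, assuming scikit-learn (my choice for illustration; the actual homework lives in the course notebooks):

```python
# Bag-of-words features + MaxEnt (logistic regression) with scikit-learn (an assumption).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie, loved it", "boring and way too long",
         "what a wonderful film", "terrible acting, awful plot"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["loved the wonderful acting"]))  # expected: [1]
```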
Language Modeling
- General Framework
- N-gram LMs (toy sketch below)
- Neural LMs
- Generation Strategies
- Evaluating LMs
- Practical Tips
Analysis and Interpretability
Bonus: Seminar & Homework
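A minimal sketch of an n-gram LM in plain Python (unsmoothed bigram counts only, far simpler than what the lecture covers):

```python
# Bigram language model estimated by relative frequency (no smoothing).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigram_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def bigram_prob(prev, word):
    """P(word | prev) = count(prev, word) / count(prev, *)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][word] / total if total else 0.0

print(bigram_prob("the", "cat"))  # 0.25: "the" is followed by cat/mat/dog/rug
print(bigram_prob("sat", "on"))   # 1.0: "sat" is always followed by "on"
```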
Seq2seq and Attention
- Seq2seq Basics (Encoder-Decoder, Training, Simple Models)
- Attention
- Transformer
- Subword Segmentation (e.g., BPE) (toy sketch below)
- Inference (e.g., beam search)
Analysis and Interpretability
Bonus: Seminar & Homework
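A minimal sketch of how BPE learns merges, in plain Python (real systems use libraries such as subword-nmt or sentencepiece; the word counts below are the classic toy example, not course data):

```python
# Learn a few BPE merges from word frequencies (illustration only).
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs over all words, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with the concatenated symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Each word is split into characters plus an end-of-word marker.
words = {tuple("low") + ("</w>",): 5, tuple("lower") + ("</w>",): 2,
         tuple("newest") + ("</w>",): 6, tuple("widest") + ("</w>",): 3}

for _ in range(5):                      # learn 5 merges
    pair = most_frequent_pair(words)
    words = merge_pair(words, pair)
    print("merge:", pair)
```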
Transfer Learning
- What is Transfer Learning?
- From Words to Words-in-Context (CoVe, ELMo)
- From Replacing Embeddings to Replacing Models (GPT, BERT) (toy sketch below)
- (A Bit of) Adapters
Analysis and Interpretability
Seminar & Homework
Weeks 5 and 6 in the course repo.
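A minimal sketch of getting contextual representations from a pretrained BERT, assuming the Hugging Face transformers library (my choice for illustration, not necessarily what the course repo uses):

```python
# Extract contextual token representations from a pretrained BERT.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Transfer learning reuses a pretrained model.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per (sub)word token: shape (1, seq_len, 768).
print(outputs.last_hidden_state.shape)
```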
To be continued…
Convolutional Networks for Text
- Intuition
- Building Blocks: Convolution (and parameters: kernel size, stride, padding, bias)
- Building Blocks: Pooling (max/mean, k-max, global)
- CNN Models: Text Classification (toy sketch below)
- CNN Models: Language Modeling
Analysis and Interpretability
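A minimal sketch of a convolutional text classifier in PyTorch (toy hyperparameters and random token ids, just to show the convolution-then-global-max-pooling pattern):

```python
# Toy CNN text classifier: embeddings -> 1D convolution -> global max pooling -> logits.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, num_filters=64,
                 kernel_size=3, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Convolve over the time axis: input shape (batch, emb_dim, seq_len).
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size, padding=1)
        self.out = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))                 # (batch, num_filters, seq_len)
        x = x.max(dim=2).values                      # global max pooling over time
        return self.out(x)                           # (batch, num_classes) logits

logits = TextCNN()(torch.randint(0, 10_000, (4, 20)))  # 4 "sentences" of 20 tokens
print(logits.shape)                                     # torch.Size([4, 2])
```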
To be continued…