Language Models
Learning Outcomes
Language Models (LMs) are one of the main building blocks of Natural Language Processing (NLP) for understanding and generating text. We will focus on statistical language models such as n-grams, as well as on the more recent Transformer-based architectures (e.g., the BERT and GPT families of language models). By the end of the course, students will be able to use and adapt language models for tasks such as sentiment analysis or text classification, and they will also understand how language models work under the hood. In particular, students will learn:
a) what the main components of a language model are;
b) how language models are trained;
c) how to adapt language models for new tasks;
d) what the opportunities, risks, and ethical considerations around language models are.