Course Information
- Time: Monday & Wednesday 9:30-10:45AM
- Location: Rice 340
- Instructor and office hours: Chen-Yu Wei, Tuesday 3-4PM @ Rice 409
- TA and office hours: Braham Snyder, Friday 4-5PM @ Rice 442
Overview
Reinforcement learning (RL) is a powerful learning paradigm through which machines learn to make (sequential) decisions. It has been playing a pivotal role in advancing artificial intelligence, with notable successes including mastering the game of Go and enhancing large language models.
This course focuses on the design principles of RL algorithms. As in statistical learning, a central challenge in RL is generalizing learned capabilities to unseen environments. However, RL also faces additional challenges, such as the exploration-exploitation tradeoff, credit assignment, and the distribution mismatch between behavior and target policies. Throughout the course, we will delve into solutions to these challenges and provide theoretical justifications for them.
Prerequisites
This course is mathematically demanding. Students are expected to have strong foundations in probability, linear algebra, and calculus. A basic understanding of machine learning and convex optimization will be beneficial. Proficiency in Python programming is required.
Topics
Bandits, online learning, dynamic programming, Q-learning, policy evaluation, policy gradient.
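To give a flavor of the first topic, below is a minimal sketch of the ε-greedy strategy for a multi-armed Bernoulli bandit. The arm probabilities, ε, and horizon are illustrative choices, not values from the course materials:

```python
import random

def epsilon_greedy(arm_probs, epsilon=0.1, horizon=1000, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit; return total reward and pull counts."""
    rng = random.Random(seed)
    n_arms = len(arm_probs)
    counts = [0] * n_arms    # number of pulls per arm
    means = [0.0] * n_arms   # empirical mean reward per arm
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                     # explore uniformly
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit best empirical arm
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]   # incremental mean update
        total += reward
    return total, counts

total, counts = epsilon_greedy([0.2, 0.5, 0.8])
```

With enough exploration, the arm with the highest success probability ends up pulled most often; the course analyzes how the choice of ε trades off exploration against exploitation.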
Platforms
- Piazza: Discussions
- Gradescope: Homework submissions
Grading
- (40%) Written assignments
- (30%) Programming assignments
- (30%) Final project
Late policy for assignments: 10 free late days may be used across all assignments. Each additional late day incurs a 10% deduction from the semester's total assignment grade. No assignment may be submitted more than 7 days after its deadline.
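One way to read the deduction rule, as a sketch (the function name and its defaults are illustrative, not part of the official policy):

```python
def assignment_grade_multiplier(total_late_days, free_late_days=10, penalty_per_day=0.10):
    """Fraction of the semester's assignment grade retained under the late policy."""
    extra_days = max(0, total_late_days - free_late_days)   # days beyond the free allowance
    return max(0.0, 1.0 - penalty_per_day * extra_days)     # grade cannot go below zero

assignment_grade_multiplier(12)  # 2 days beyond the allowance -> 0.8
```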
Schedule
Date | Topics | Materials | Assignments |
---|---|---|---|
1/13 | Introduction | Slides, Recording | HW0 (no submission needed) |
1/15 | Bandits: Explore-then-exploit, ε-greedy | Slides, Recording | |
1/20 | MLK Holiday | | |
1/22 | Bandits: Boltzmann exploration, Inverse gap weighting, Reduction | | |
1/27 | Bandits: UCB, LinUCB | | |
1/29 | | | |
2/3 | | | |
2/5 | | | |
2/10 | | | |
2/12 | | | |
2/17 | | | |
2/19 | | | |
2/24 | | | |
2/26 | | | |
3/3 | | | |
3/5 | | | |
3/10 | Spring recess | | |
3/12 | Spring recess | | |
3/17 | | | |
3/19 | | | |
3/24 | | | |
3/26 | | | |
3/31 | | | |
4/2 | | | |
4/7 | | | |
4/9 | | | |
4/14 | | | |
4/16 | | | |
4/21 | | | |
4/23 | | | |
4/28 | | | |
Resources
- Bandit Algorithms by Tor Lattimore and Csaba Szepesvari
- Reinforcement Learning: An Introduction by Richard Sutton and Andrew Barto
- Reinforcement Learning: Theory and Algorithms by Alekh Agarwal, Nan Jiang, Sham Kakade, and Wen Sun
- Statistical Reinforcement Learning and Decision Making: Course Notes by Dylan Foster and Sasha Rakhlin
Previous Offerings
- CS 6501 Reinforcement Learning (Spring 2024)
- CS 6501 Topics in Reinforcement Learning (Fall 2022) by Prof. Shangtong Zhang