Abstract: Recent years have witnessed great progress in online machine learning and reinforcement learning. Motivated by this progress, there has been considerable interest in applying these learning techniques to continuous control problems, with applications in robotics, autonomous vehicles, data center cooling, smart grids, etc. However, many issues emerge when considering real-world environments and physical systems. For example, despite online learning’s great successes in i.i.d. and adversarial environments, most real-world environments are more complex than the i.i.d. case yet more benign than the adversarial case. For instance, wind directions evolve in a highly nonstationary pattern, but wind predictions are usually available, especially for the near future. In the first part of my talk, I will discuss how to utilize limited predictions to improve online control performance. I will also provide a theoretical lower bound that justifies the fundamental value of these predictions. Another major challenge is how to guarantee safety for physical systems with uncertainties. In the second part of my talk, I will present a novel online control algorithm that guarantees constraint satisfaction despite bounded disturbances in the system. The algorithm also achieves an O(√T) regret bound, thus enjoying both performance and safety guarantees.
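As background for the O(√T) claim, the following is a standard way to write regret in constrained online control; the precise cost model, comparator class Π, and constraint sets used in the talk may differ. Assuming linear dynamics x_{t+1} = A x_t + B u_t + w_t with bounded disturbances \|w_t\| \le W and per-step costs c_t:

\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t(x_t^{\pi}, u_t^{\pi}),

where an O(√T) bound means the average per-step suboptimality relative to the benchmark class Π vanishes as T grows, while the constraints x_t \in \mathcal{X} and u_t \in \mathcal{U} must hold at every step despite the disturbances.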

Bio: Yingying Li is currently a postdoctoral research fellow at the Coordinated Science Laboratory at UIUC. She obtained her Ph.D. from Harvard University in 2021 and her bachelor’s degree from the University of Science and Technology of China in 2015, both in applied mathematics. She was a research intern at the MIT-IBM Watson AI Lab in the summer of 2020. Her work was selected as an Editor’s Choice by Automatica, a journal of IFAC, the International Federation of Automatic Control.