
I am a lead research scientist and activity lead at the Bosch Center for AI. I am interested in the fundamental problems behind data-efficient and safe reinforcement learning, with the goal of enabling learning on real-world systems.
Previously, I completed my PhD at ETH Zurich, for which I received the ELLIS PhD award. My PhD advisors were Andreas Krause and Angela Schoellig. I held an AI fellowship from the Open Philanthropy Project, was an Associated Fellow at the Max Planck ETH Center for Learning Systems, and was a postgraduate affiliate at the Vector Institute. I also served as workflow co-chair for ICML 2018 and completed research internships at Microsoft Research and DeepMind.
Recent News
- Jul. 2023: Invited talk at the Reinforcement Learning Summer School
- Dec. 2022: Invited talk at the NeurIPS Trustworthy AI Workshop
- Jul. 2022: Invited talk at IJCAI Workshop on Safe RL
- Oct. 2021: Invited talk at the Control Seminar, University of Oxford
- Oct. 2021: Outstanding reviewer award for NeurIPS 2021
- Sep. 2021: Invited talk at TU Darmstadt
- Sep. 2021: Panel speaker at IROS workshop on Safe Real-World Robot Autonomy
- Mar. 2021: Guest lecture on safe reinforcement learning at UCSD
- Feb. 2021: Two papers accepted at ICLR and AISTATS
Talks and Lectures
- Guest lecture on Safe Bayesian Optimization for CS 159: Data-Driven Algorithm Design at Caltech
- ETH Day 2018 short presentation (in German)
- Invited talk at the Workshop on Reliable AI 2017
- NIPS/CoRL 2017: "Safe Model-based Reinforcement Learning with Stability Guarantees"