My goal is to enable robots to safely and autonomously learn in uncertain real-world environments. This requires new reinforcement learning algorithms that respect the physical limitations and constraints of dynamic systems and provide theoretical safety guarantees.
I currently hold an AI Fellowship from the Open Philanthropy Project and am an Associated Fellow at the Max Planck ETH Center for Learning Systems. Previously, I was the Workflow Co-chair for ICML 2018 and a Postgraduate Affiliate at the Vector Institute. I have completed research internships at Microsoft Research and DeepMind.
ETH Day 2018 short presentation (in German)
Invited talk at the Workshop on Reliable AI 2017
NIPS/CoRL 2017: "Safe Model-based Reinforcement Learning with Stability Guarantees".