Dr. Efstratios Gavves (https://www.egavves.com/) is an Associate Professor at the University of Amsterdam, teaching Deep Learning in the MSc in AI, an ELLIS Scholar, and co-founder of Ellogon.AI. He is a director of the QUVA Deep Vision Lab with Qualcomm and of the POP-AART Lab with the Netherlands Cancer Institute and Elekta. Efstratios received the ERC Starting Grant 2020 and the NWO VIDI grant 2020 to research the Computational Learning of Time for spatiotemporal sequences and video. His background is in computer vision, and he now focuses on temporal machine learning and dynamical systems, efficient computer vision, and machine learning for oncology. He is currently supervising 15 PhD students on various topics pertaining to the theory and applications of Deep Learning and Computer Vision, including neural networks, dynamical systems, and physical laws in learning algorithms; causal and object-centric representation learning; open-world video understanding; deep probabilistic and generative models; deep learning geometry; and applications to histopathological analysis and adaptive radiotherapy.
A brief description follows.
Foundation Models have taken the AI world by storm with their ability to generalize to novel, complex tasks beyond the scope of their training data. Whether they achieve this through true generalization or by ‘remembering’ vast training sets is debatable, but their performance is undeniably superior to that of previous approaches. Given this success, the critical questions now are: What are the limitations of current large-scale learning? What frontiers remain? And what are the implications of these developments?
In this talk, I argue that Robot Learning represents one of the most exciting frontiers, with Robot General Intelligence standing as the quintessential goal for AI. However, current approaches to Robot Learning face fundamental limitations that prevent them from achieving this ideal. I will present my recent work on a novel paradigm that incorporates explicit physical and causal priors into world models, enabling real robots to perform one-shot policy learning. This approach allows robots to learn new tasks from as little as a single demonstration, pushing the boundaries of what’s possible in real-world robot learning.
