Details

The lecture will be held on 26 January 2021, 17:00-18:00 CET (8:00-9:00 am PST; 12:00 am-1:00 am CST). Please use the following Zoom link to attend the lecture.

https://authgr.zoom.us/j/96526281132?pwd=azBsNDUxb2JGVGlUOEpYcFZ6SXhLZz09

Abstract

A core assumption behind most machine learning methods is that the training data should be representative of the data seen at test time. While this seems almost trivial, it is in fact a particularly challenging condition to meet in real-world applications of machine learning: the world evolves and distributions shift over time in unpredictable ways (think of changing weather conditions, fashion trends, social hypes, wear and tear, etc.). This means models become outdated and in practice need to be re-trained over and over again. A particular subfield of machine learning, known as continual learning, aims at addressing these issues. The goal is to develop learning schemes that can learn from non-i.i.d. distributed data. The challenge is to realise this without storing all the training data (ideally none at all), with fixed memory and model capacity, and without forgetting previously learned concepts. In this talk, I will give an overview of recent work in this direction, with a focus on learning deep models for computer vision.

Short CV

Tinne Tuytelaars is a full professor at KU Leuven, Belgium, working on computer vision and, in particular, topics related to image representations, vision and language, continual learning, and more. She has been program chair for ECCV14 and CVPR21, and general chair for CVPR16. She also served as associate editor-in-chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence from 2014 to 2018. She was awarded an ERC Starting Grant in 2009 and received the Koenderink test-of-time award at ECCV16.