This course provides a detailed introduction to computer vision, covering: image and video sampling, image and video acquisition, camera geometry, stereo and multiview imaging, structure from motion, structure from X, 3D robot localization and mapping, semantic 3D world mapping, 3D object localization, multiview object detection and tracking, and object pose estimation.
When we approach a Machine Learning (ML) project in which we want to solve a specific task with an existing or to-be-designed ML model, we immediately think about data: their availability, always; their quantity, often; their quality and mode of creation, less frequently. If at all, it is often only at the testing stage that we attempt to better understand the dataset used to train the model, in order to identify the causes of possible model failures and the improvements that could be made to the architecture or the training process. In doing so, we may come across biases in data representation, wrong labels, or uneven model performance. We may question whether the inductive biases the model exploits are truly representative of the generalization capabilities we claim the model to have. We may think of failure, of data, of ethics. Let us unwrap these issues and take a step back. To do so, in this series of three posts, we analyze and reflect on articles published recently in the ML research community.
This first post discusses some deployment failures of AI systems, and how these failures call into question the way we approach data when designing ML systems. These failures are analyzed in the article by Raji et al., The Fallacy of AI Functionality.
The second post will take a more holistic perspective on data creation and the expectations we place on ML approaches, discussing the article by Paullada et al., Data and its (dis)contents: A survey of dataset development and use in machine learning research.
The third post will reflect on how we should question our ML education practices in order to contribute to alleviating the current AI ethics crisis, as analyzed in the article by Raji et al., You Can’t Sit With Us: Exclusionary Pedagogy in AI Ethics Education.
The talk discusses the long-standing vision of creating autonomous robots capable of assisting humans in daily life. A crucial step toward this goal is enabling robots to learn tasks from environmental cues or higher-level instructions. However, current learning techniques face challenges in scaling to high-dimensional manipulator or humanoid robots. The speaker presents a general…
A central challenge for contemporary AI is to integrate learning and reasoning. This integration has been studied for decades in the fields of statistical relational artificial intelligence (StarAI) and probabilistic programming. StarAI has focused on unifying logic and probability, the two key frameworks for reasoning, and has extended these probabilistic logics…
The acquisition and self-improvement of novel motor skills are among the most important problems in robotics. I will discuss various learning techniques we developed that enable robots to have complex interactions with their environment and with humans. Complexity arises from dealing with high-dimensional input data, non-linear dynamics in general and contacts in particular, multiple reference frames,…
Europe has taken a clear stand that we want AI, but not just any AI: we want AI that we can trust. This talk will present the European approach to Trustworthy AI and give an overview of some of the interesting research problems related to it. Special emphasis will be placed…