Foundations of Trustworthy AI

Level
Foundation, Broad, Theory, Methodological.
This topic covers the dimensions of Trustworthy AI: (i) Explainability, (ii) Safety, (iii) Fairness, (iv) Accountability and Reproducibility, (v) Privacy, and (vi) Sustainability.

Learning outcomes

Content / Knowledge

Students should be able to: 

  • Understand how user trust in AI systems can be guaranteed through explanation, and how to formulate explanations as a machine-human conversation depending on context and user expertise.
  • Understand how to bridge the gap from safety engineering, formal methods, verification, and validation to the way AI systems are built, used, and reinforced.
  • Understand how to build algorithms that respect fairness constraints by design, using causal influences among variables to deal with bias-related issues.
  • Understand how to uncover accountability gaps with respect to the attribution of AI-related harm to humans.
  • Understand how to guarantee privacy while preserving the desired utility functions.
  • Understand whether there is any chance to reduce energy consumption for more sustainable AI, and how AI can contribute to solving some of the big sustainability challenges facing humanity today (e.g. climate change).
  • Understand how to deal with the properties of, and tensions in, the interactions among multiple dimensions: for instance, accuracy vs. fairness, privacy vs. transparency, convenience vs. dignity, personalisation vs. solidarity, efficiency vs. safety and sustainability.
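The privacy-vs-utility tension listed above can be made concrete with differential privacy. The sketch below implements the standard Laplace mechanism via inverse-CDF sampling; the function name, the released statistic, and the budget values are illustrative assumptions, not part of the source. A stricter privacy budget (smaller epsilon) means larger noise and hence lower utility of the released answer.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a differentially private version of true_value.

    The Laplace mechanism adds noise with scale sensitivity/epsilon:
    a smaller privacy budget epsilon gives a stronger privacy
    guarantee but a noisier, less useful answer.
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sample from Laplace(0, scale).
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

rng = random.Random(0)
true_mean = 42.0  # hypothetical statistic to release

# The same query answered under a strict budget (epsilon = 0.1)
# and a loose budget (epsilon = 10).
strict = [laplace_mechanism(true_mean, 1.0, 0.1, rng) for _ in range(2000)]
loose = [laplace_mechanism(true_mean, 1.0, 10.0, rng) for _ in range(2000)]

def mean_abs_error(samples):
    return sum(abs(s - true_mean) for s in samples) / len(samples)

# Stronger privacy (smaller epsilon) costs utility (larger error).
print(mean_abs_error(strict) > mean_abs_error(loose))
```

The same pattern generalises to the other tensions in the list: fixing one dimension (here, the privacy guarantee) constrains what can be achieved on another (here, accuracy of the released value).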
Methodological skills
Students should be able to:
  • Apply their critical and analytical faculties to specific case studies, in order to argue about the need for, and content of, AI trustworthiness requirements.
Transferable / Application
Students should be able to:
  • Work effectively with others in an interdisciplinary and/or international team.
  • Clearly and succinctly communicate their ideas to technical and non-technical audiences.