Students should be able to:
- Explore the Motivation and Definition of Explainability
  - Understanding the importance of explainability in AI systems.
  - Defining related terms such as interpretability and transparency.
- Gain a Solid Background in Accounts of Explanation in the Social Sciences Literature
  - Examining the primary perspectives on explanation in the social sciences.
  - Drawing insights from existing theories and frameworks.
- Understand Local and Global Explanation Methods
  - Differentiating between local and global explanation techniques.
  - Exploring model-agnostic methods for generating explanations.
- Understand Interpretable Models and Post-hoc Explanations
  - Investigating the concept and utility of interpretable models.
  - Understanding post-hoc explanations and their role in XAI.
- Understand Explainability in One-shot and Sequential Decision-Making Models
  - Examining explainability in one-shot decision scenarios.
  - Analysing the unique challenges of sequential decision-making models.
- Understand Explainability in Embodied and Non-Embodied AI Systems
  - Investigating the role of explainability in AI systems with physical embodiment.
  - Analysing the implications of explainability for non-embodied AI systems.
- Understand the Role of Causality and Interactivity in XAI
  - Understanding how causality influences explainability in AI systems.
  - Exploring the significance of interactivity in generating explanations.