T4R - Learning journey

Learning with Microlearning Units

Trust in Data and Data Handling – Explainability

FRAMEWORK:
ETHICS, INCLUSION, DEMOCRATIZATION

MODULE: 
Trust your city’s twin

EQF 5

EID-103

In an era driven by artificial intelligence, understanding the decisions made by AI systems is crucial. The challenge lies in enhancing transparency and explainability so that stakeholders can make informed decisions free of bias. Explainable AI offers a way to understand complex processes, ultimately fostering trust and accountability among users and decision-makers. Addressing this challenge involves implementing explainability methods that provide insights into AI decisions. This encompasses using visual tools such as decision trees, which help demystify the “black box” intricacies of AI systems. Tailoring transparency to different stakeholders’ needs ensures that even non-experts can comprehend and trust the system’s outcomes.

This MLU mediates the principles of AI explainability by demonstrating various methods and tools. Learners engage in practical exercises to visualize decision processes and learn how to discuss the topic. Activities include using haptic tools and story-based scenarios, which enable learners to grasp and present AI decision-making transparently and effectively.
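As a minimal illustration of the decision-tree idea mentioned above, the sketch below hand-builds a tiny rule tree for a hypothetical urban-planning decision (approving a green roof) and records the rule trace that justifies each outcome. The rules, thresholds, and field names are illustrative assumptions, not part of the module.

```python
# Hypothetical sketch: a hand-built decision tree whose every decision
# comes with a human-readable trace, illustrating explainability.
# All rules and thresholds below are invented for illustration.

def approve_green_roof(building):
    """Return a decision plus the rule trace that justifies it."""
    trace = []
    if building["roof_load_kg_m2"] >= 150:
        trace.append("roof load >= 150 kg/m2: structure can carry a green roof")
        if building["roof_slope_deg"] <= 10:
            trace.append("roof slope <= 10 degrees: planting is feasible")
            return "approve", trace
        trace.append("roof slope > 10 degrees: planting is not feasible")
        return "reject", trace
    trace.append("roof load < 150 kg/m2: structure too weak")
    return "reject", trace

decision, reasons = approve_green_roof(
    {"roof_load_kg_m2": 180, "roof_slope_deg": 5}
)
print(decision)        # approve
for step in reasons:
    print("-", step)
```

Because each branch appends the reason it fired, the output is not just a verdict but a short argument a non-expert can follow, which is the core of the transparency this MLU discusses.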

EID-100

second loop

Trust in Data and Data Handling - Explainability

The learner knows explainability as a concept. The learner coherently relates facts about explainability in AI to the context of urban planning. The learner is able to sketch decision-making processes in order to exchange ideas with colleagues.
