The integration of AI into Local Digital Twins (LDTs) transforms these tools into dynamic decision-support systems capable of simulation, prediction, and intervention recommendation. This potential, however, comes with significant challenges, including risks of bias and opacity that can erode public trust. In highly scrutinized urban environments especially, citizens become less willing to accept these tools when AI decisions appear unfair or lack transparency.

To address these challenges, the learning unit emphasizes a structured approach to Ethical AI, drawing on standards such as ISO/IEC TR 24027 for fairness and the EU AI Act for governance. The methodology includes Triple Loop Learning, which fosters critical reflection on both technical biases and societal values. Essential components of this framework are traceability, accountability, and transparency across all stages of AI model use and decision-making.

The unit teaches these solutions through practical exercises and tools from the EU LDT Toolbox. Participants work through relatable urban scenarios that stress ethical considerations, transparency in AI decisions, and co-governance. By emphasizing that ethics strengthens legitimacy, the unit aligns technological innovation with broader societal needs, ensuring that AI-driven LDTs remain trusted and resilient.
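To make the fairness dimension concrete, the sketch below computes a demographic parity gap, one of the group-fairness measures discussed in ISO/IEC TR 24027. The scenario, group names, and data are purely illustrative assumptions, not material from the learning unit or the EU LDT Toolbox.

```python
def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: list of 0/1 model decisions (1 = favorable outcome)
    groups: list of group labels, aligned with decisions (exactly two groups)
    """
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)


# Hypothetical LDT scenario: a model recommends priority maintenance
# interventions (1 = recommended) across two city districts.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["north", "north", "north", "north",
          "south", "south", "south", "south"]

# North receives interventions at rate 0.75, south at 0.25 -> gap 0.50.
gap = demographic_parity_difference(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")
```

A gap near zero suggests the two districts receive favorable decisions at similar rates; a large gap is a signal to trigger the kind of critical reflection the unit's Triple Loop Learning methodology calls for.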




