
Overview
Generative AI presents an exciting opportunity to enhance learning, but how can we ensure it is reliable, transparent, and aligned with user needs?
In this hands-on workshop, we will explore the SAGE-Responsible AI (RAI) project—an initiative funded by Responsible AI UK investigating responsible AI assistants for education.
Unlike general-purpose AI, SAGE-RAI uses Retrieval-Augmented Generation (RAG) to provide responses grounded in carefully controlled content, reducing hallucination risks and improving trust.
This session will offer participants the chance to compare the experience of using a general foundation model with a task-specific AI assistant tutor, highlighting the challenges and benefits of each approach.
We will also discuss the broader implications for AI transparency, linking to insights from the ODI’s AI Data Transparency Index and the challenges of data governance in foundation model development.
Key learning takeaways from this session:
✅ Understand the benefits and limitations of task-specific AI assistants
✅ Gain hands-on experience comparing foundation models vs. AI tuned for specific tasks
✅ Learn about AI data transparency and its implications for responsible AI
✅ Explore future directions, including user-controlled AI interactions with Solid
Who Should Attend?
This workshop is ideal for educators, policymakers, AI developers, and anyone interested in the practical application of AI in learning and beyond. Participants will leave with hands-on experience, a better understanding of responsible AI practices, and insights into how AI assistants could be designed with greater transparency and user control.
Workshop outline (1.5 hours)
Introduction to SAGE-RAI (15 min)
- Overview of the project and its objectives
- The role of RAG in reducing misinformation and improving trust in AI-generated responses
- Key challenges, including dataset transparency, content retrieval, and user control
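As a rough sketch of the RAG idea above: retrieve the most relevant passages from a controlled corpus, then constrain the model to answer only from them. Everything below is an illustrative stand-in, not the SAGE-RAI implementation — the corpus, the simple word-overlap scoring, and the prompt wording are assumptions; a production system would use embedding search and an actual LLM call.

```python
# Minimal RAG sketch: keyword-overlap retrieval over a controlled
# corpus, then a prompt that grounds the answer in those passages.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model: answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the sources below; "
        "say 'not covered' otherwise.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical controlled content (stand-in for course material):
corpus = [
    "SAGE-RAI grounds tutor answers in approved course content.",
    "Retrieval-Augmented Generation retrieves passages before generating.",
    "Foundation models answer from training data alone.",
]
passages = retrieve("How does Retrieval-Augmented Generation work?", corpus)
print(build_prompt("How does Retrieval-Augmented Generation work?", passages))
```

Restricting generation to retrieved, vetted passages is what reduces hallucination risk relative to a general foundation model answering from its training data alone.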
Hands-on Task: Comparing AI Approaches (45 min)
- Participants will complete a structured task with support from:
  - A general foundation model
  - The task-specific SAGE-RAI AI assistant tutor
- Discussion on the differences in responses, accuracy, and user experience
- Reflection on where task-specific AI assistants could be valuable beyond education
Future Possibilities: AI, Transparency, and Data Control (30 min)
- Linking to ODI’s work on AI Data Transparency and the need for clearer insights into AI training datasets
- Exploring the potential of Solid (Tim Berners-Lee’s initiative) to give users more granular control over the data AI assistants can access, and to consider how data from different sources could be joined up
- Possible applications and integrations using the Model Context Protocol (MCP, an open standard introduced by Anthropic) to improve user experience and reliability, and to connect subject-matter expertise (SME) from multiple sources
- Open discussion on the future of AI assistants and how to make them more responsible and user-centric
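The Model Context Protocol mentioned in the outline is built on JSON-RPC 2.0; as a rough illustration, the sketch below constructs the kind of `tools/call` request a client might send to an MCP server. The tool name `search_course_content` and its arguments are hypothetical, and a real integration would use an MCP SDK rather than hand-built messages.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialise a JSON-RPC 2.0 'tools/call' request for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool exposing curated course content to an assistant:
msg = mcp_tool_call(1, "search_course_content", {"query": "RAG"})
print(msg)
```

Because any MCP-compliant server can expose tools this way, an assistant could draw on curated knowledge from several independent sources through one common interface.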
About the event
The event will be held on Zoom, and you will be sent the link before the event. Please ensure you can access Zoom on the device you will be using.