- January 24, 2026
- Hybrid
- 9:00 am - 6:00 pm
This hands-on workshop focuses on the core aspects of building, evaluating, and testing AI and GenAI systems. Participants will engage with engineering workflows, prompt evaluation, reliability assessments, and practical approaches to AI quality in real-world scenarios.
Through this programme, attendees will gain insights into the end-to-end design, deployment, and testing of AI and GenAI systems within live project settings. The curriculum encompasses model development workflows, data pipeline management, prompt engineering techniques, and evaluation frameworks. Key test strategies are explored, including validation for accuracy, bias, safety, performance, and observability. Participants will learn to validate AI outputs, assess associated risks, implement governance controls, and apply structured testing methodologies commonly used in leading enterprises.
Agenda
AI Engineering Foundations
AI versus GenAI: Understanding architectures and workflows
Exploration of data pipelines, tokenisation, prompts, and embeddings
Model selection and key evaluation concepts
AI Testing Techniques
Validating output quality and checking for hallucinations
Testing for bias, safety, and reliability
Developing prompt testing strategies and implementing guardrails
Practical Implementation
Hands-on validation scenarios
Testing dashboards and evaluation metrics
Real-world case study followed by a Q&A session