About the course
As artificial intelligence systems become increasingly embedded in the core operations of businesses and public services, the stakes are rising—especially when these systems influence critical decisions, safety outcomes, or legal responsibilities. In domains such as healthcare, finance, transportation, and government, the deployment of AI introduces not only immense opportunity, but also complex challenges related to compliance, governance, ethics, and risk management.
This workshop addresses the urgent need for cross-functional understanding of AI implementation in high-stakes contexts. It equips decision-makers and practitioners with the knowledge and frameworks needed to navigate evolving regulatory landscapes (such as the EU AI Act and NIST AI RMF), anticipate operational and reputational risks, and design systems that are not only effective, but also transparent, auditable, and aligned with organizational values and legal obligations.
Rather than focusing purely on technical models or data science, this session explores organizational readiness, design principles for responsible AI, and the human roles essential to keeping AI trustworthy in critical environments.
We are happy to tailor the workshop to your specific domain or industry, for instance AI Compliance in Healthcare, or Risk Management and AI.
We're happy to offer custom AI training online, in person at our London training centre, or at your offices. Please get in touch to find out about flexible options to suit your team.
-
- Understand key risks of AI in mission-critical and life-impacting domains
- Explore governance and compliance frameworks (e.g., EU AI Act, ISO/IEC 42001, NIST AI RMF)
- Learn practical strategies for implementing AI responsibly and safely
- Engage in case-based discussion and risk scenario planning
-
This workshop is designed for a cross-disciplinary audience involved in the planning, deployment, oversight, or governance of AI systems, especially in regulated or mission-critical domains. Ideal participants include:
- Chief Technology Officers (CTOs) and AI/ML Leads driving enterprise AI adoption
- Compliance Officers and Risk Managers overseeing regulatory and ethical adherence
- Product Managers and Solution Architects responsible for designing AI-powered systems
- Data Governance Teams and Information Security Officers managing data integrity and access
- Legal Counsel and Policy Specialists concerned with AI regulation, liability, and fairness
- Healthcare, Finance, Transport, and Public Sector Leaders deploying AI in high-trust domains
By bringing together professionals from across the organisation, the workshop fosters a shared vocabulary and a risk-aware culture, enabling teams to move forward with AI innovation that is both ambitious and responsible.
-
No previous experience with AI or LLM platforms is assumed or required. We welcome open minds, any and all questions, and the chance to discuss your challenges.
-
This AI training course is available for private, custom delivery to your team: face-to-face, on-site at your location of choice, or remotely via MS Teams or your own platform. Get in touch to find out how we can deliver tailored training focused on your project requirements and learning goals.
-
Welcome & framing
- The high-stakes AI landscape
- Why governance is not optional
- Quick poll: Where are you using or considering AI?
Understanding the risk landscape
- Categories of risk: operational, ethical, legal, existential
- Case studies: AI in healthcare, finance, transportation
- Key failures and what they teach us
AI governance and compliance frameworks
- Overview of current and emerging regulations (EU AI Act, ISO/IEC 42001, NIST AI RMF)
- Risk classification of AI systems
- The role of documentation, traceability, and human oversight
Group exercise: AI risk scenario simulation
- Breakout into small groups
- Analyse a fictional AI deployment in a critical setting (e.g., an automated triage system in emergency healthcare)
- Identify potential risks, compliance requirements, and mitigation strategies
- Share back to the main group
Patterns of responsible AI design
- Tools: model cards, data statements, red teaming, model monitoring
- Organisational roles: AI risk officer, ethical review board, cross-functional governance teams
- Open discussion: what works, what doesn't?
Wrap-up & takeaways
- Key principles to remember
- Checklist: AI readiness in regulated environments
- Q&A, next steps, and further reading
Optional add-ons:
- Custom scenario planning based on your sector (health, finance, government, etc.)
- Extended version with hands-on work using governance toolkits or AI explainability tools
- Follow-up consulting
-
The EU Artificial Intelligence Act: https://artificialintelligenceact.eu/
NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
Centre for the Governance of AI: https://www.governance.ai/