The EU Artificial Intelligence Act (AI Act) establishes the EU’s first comprehensive, risk‑based legal framework for AI. This two-day, in-person course provides a practical, hands-on exploration of the Act, specifically designed for marketing professionals.
Through interactive workshops, real-world case studies, and group exercises, participants will learn how to:
The course combines legal concepts, technical controls, and organisational governance in an accessible, job-relevant format. By the end of the two days, learners will be able to apply these principles directly within their marketing teams and campaigns, ensuring responsible and compliant use of AI.
Participants who complete the course and pass the final assessment will receive a Certificate of Completion, documenting:
The course is delivered as a series of practical, hands-on modules. Each module combines interactive discussions, real-world case studies, group exercises, and scenario-based activities. Participants work with downloadable templates, checklists, and worksheets to apply what they learn immediately to their marketing projects and campaigns.
| Module | What We Cover | Key Learning Outcomes | Interactive Session |
| --- | --- | --- | --- |
| Day 1: Understanding AI Risks and the EU AI Act | | | |
| 1.1 Overview of the EU AI Act and its Scope | Structure of the AI Act, risk-based approach, key definitions, intersection with GDPR and other EU rules | Understand the Act's scope, risk-based framework, and roles of providers, deployers, and users | Group discussion: identify AI systems used in your marketing campaigns and map roles and responsibilities |
| 1.2 Definition and Examples of High-Risk Systems | What constitutes high-risk AI; examples in marketing, HR, finance, and health; classification criteria | Identify high-risk AI systems in practice; differentiate high-risk from low/limited-risk systems | Workshop: classify AI tools used by participants into risk categories |
| 1.3 Principles of Risk Management for AI | Core principles of AI risk management, mitigation strategies, human oversight requirements | Apply risk management principles to marketing AI projects; understand prohibited practices | Case study: evaluate a marketing AI tool for risks and propose mitigation steps |
| 1.4 Detailed Steps in the Risk Management Process | Risk identification, assessment, mitigation, monitoring, and documentation | Develop a step-by-step risk management workflow for AI systems | Exercise: create a mini risk management plan for an AI-driven campaign |
| Day 2: Responsible AI Use and Entity Obligations | | | |
| 2.1 Obligations for Providers of High-Risk AI Systems (HRAIS) | Compliance requirements, technical documentation, logging, transparency obligations | Recognise provider obligations and plan for documentation and reporting | Group task: draft a sample transparency notice or user label |
| 2.2 Establishing and Maintaining a Quality Management System (QMS) | Importance of a QMS; planning, processes, responsibilities, monitoring | Understand how a QMS supports AI governance and regulatory compliance | Workshop: map QMS processes to marketing AI workflows |
| 2.3 Key Components of a QMS | Risk management integration, documentation, audits, continuous improvement | Identify key QMS elements and how they apply to AI projects | Activity: create a checklist of QMS components relevant to participants' teams |
| 2.4 Balancing Innovation and Compliance | Strategies for responsible innovation, scaling AI while maintaining compliance, internal governance | Apply compliance principles without stifling marketing creativity | Scenario discussion: propose an AI campaign balancing creativity and regulatory obligations |
Module structure (16 hours of instruction in total across the two days)
Each learner receives:
After completing the course, participants will be able to: