At a glance

  • Hands-on workshop on algorithmic bias, half-day or full-day
  • Onsite or online, up to 20 participants
  • Audience: HR, marketing, leadership, product, innovation
  • No technical prerequisites required
  • Analysis framework, checklists, and a mini-guide provided to each participant
  • Grounded in CNIL references and AI Act considerations

Official references

  • CNIL guidance for responsible AI use — AI practical sheets
  • AI Act topics: transparency, governance, non-discrimination, AI literacy — official text on EUR-Lex
  • These principles translated into concrete steps: testing, review, criteria, documentation

AI bias in companies: a subtle risk, but very real

AI bias appears when an artificial intelligence system produces, reinforces, or normalises unfair treatment. In companies, this bias does not always come from intent. It can stem from the data used, a poorly formulated prompt, overly vague business criteria, or insufficient human oversight.

The effects are often subtle at first. Yet they can affect very concrete uses: CV screening, scoring, marketing targeting, content generation, internal chatbots, request prioritisation, or decision support. When a result looks "objective", it is sometimes less questioned. That is precisely what makes these algorithmic biases hard to spot.

This workshop helps your teams recognise warning signs: results that change depending on prompt wording, recommendations skewed toward certain profiles, criteria that are difficult to explain, indirectly discriminatory proxy variables, automation adopted too quickly because it "seemed objective".

The goal is not to dramatise AI use. On the contrary. It is about establishing simple, useful reference points tailored to your context. Your teams learn to test, question, document, and fix earlier. They leave with an analysis framework, checklists, and safeguards that can be reused in day-to-day work.

To take the reflection further, see our use cases on ethical AI and algorithmic bias and explore our company AI usage diagnostic.

Which warning signs should alert your teams?

  • Results change depending on prompt wording — for no obvious reason
  • Recommendations always favour the same profiles or segments
  • AI-driven decisions are hard to explain or justify
  • Some indirect variables seem to act as discriminatory criteria
  • Automation was adopted quickly because it "seemed objective"
  • No one on the team knows what data the tool was trained on
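One of these signals, recommendations that always favour the same profiles, can be made measurable with a simple selection-rate comparison. Below is a minimal sketch in Python using the classic "four-fifths rule" from HR analytics; the data and the 0.8 threshold are illustrative, not a legal standard for every context.

```python
# Minimal sketch: compare selection rates across groups to flag a skew.
# The decisions below are made-up, and the 0.8 cutoff (the "four-fifths
# rule") is an illustrative heuristic, not a universal standard.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 screening decisions."""
    return {group: sum(votes) / len(votes) for group, votes in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest selection rate to the highest (1.0 = no skew)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical CV-screening decisions (1 = shortlisted, 0 = rejected)
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% shortlisted
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% shortlisted
}

ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning sign: selection rates differ markedly between groups.")
```

A check like this does not prove discrimination on its own, but it turns a vague impression ("the tool seems to prefer certain profiles") into a number the team can discuss, document, and escalate.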

What the workshop covers in practice

Each case is chosen based on your sector and your tools. The workshop never starts from generic examples.

AI-assisted recruitment

CV screening, candidate scoring, proxy criteria, HR prompt design. Selection biases are among the best documented — and the most avoidable.

Marketing and content generation

Stereotypes in generated visuals, advertising targeting, personalisation, representation bias in automatically produced copy.

Chatbots and internal assistants

Unequal responses depending on the user profile, discriminatory tone, lack of explainability, side effects on customer or employee experience.

Steering and decision support

Automated dashboards, false sense of objectivity, decisions that are hard to audit. The workshop helps document and challenge these uses.

What this workshop changes in practice

Even when AI "suggests", it already shapes decisions. Bias can slip into data, a prompt, or an internal process — without anyone noticing.

Your teams

Fewer decisions made without reflection. A shared language and shared reflexes with AI tools — applicable the very next day.

Your customers

More consistency, more fairness, fewer side effects. Trust strengthened at every interaction.

Your brand

A responsible stance without empty talk. You show you take the topic seriously — with concrete, documented action.

Your AI governance

Better oversight, therefore more reliable business use cases. Teams better prepared for the AI Act's phased requirements.

What your teams will experience

"Blind-spot-proof glasses." That's exactly the spirit of this algorithmic bias workshop.

1. Everyday examples

Cases close to your reality. HR: screening, scoring. Marketing: targeting, generated content. Leadership: arbitration, reporting.

2. Hands-on work

Your teams test, compare, and challenge. They observe how bias is created and reinforced — and learn to fix it without breaking everything.

3. Durable tools

Algorithmic bias analysis framework, checklists, mini self-assessment guide. The workshop becomes a starting point, not a box to tick.


A flexible format

  • Duration: Half-day or full-day, depending on your needs
  • Delivery: Onsite or online — both work
  • Participants: Up to 20 people per session
  • Audience: HR, marketing, leadership, product, innovation — no technical prerequisites

What you take away

  • Complete, clear, shareable materials
  • Algorithmic bias analysis framework
  • Reusable mini self-assessment guide
  • Exercises based on your prompts and real cases
  • Role-based ready-to-use checklists
  • Curated resources to go further

By the end, your teams can

Identify bias

Selection, representation, confirmation, automation — recognise them in a prompt, a score, or a process

Ask the right questions

Before using an AI tool in production: what to verify, what to challenge, and when to escalate to an audit

Set safeguards

Review, double validation, edge-case testing, documentation — simple, realistic reflexes
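The edge-case testing reflex can be sketched as a tiny harness: run the same request under several rewordings and flag any change in outcome. Everything below is illustrative; `screen_cv` is a hypothetical stand-in for whatever AI tool is being tested, and in practice you would call your real system and normalise its output.

```python
# Illustrative sketch of a prompt-wording consistency check.
# `screen_cv` is a hypothetical stand-in for the AI tool under test:
# here it reacts to a single word, to make the inconsistency visible.

def screen_cv(prompt: str) -> str:
    # Toy stand-in: a real tool would return a model decision here.
    return "shortlist" if "motivated" in prompt else "reject"

def consistency_check(prompt_variants):
    """Run each rewording and return per-variant outcomes plus the distinct set."""
    outcomes = {variant: screen_cv(variant) for variant in prompt_variants}
    return outcomes, set(outcomes.values())

variants = [
    "Evaluate this motivated junior candidate for the analyst role.",
    "Evaluate this junior candidate for the analyst role.",
]
outcomes, distinct = consistency_check(variants)
if len(distinct) > 1:
    print("Warning sign: the outcome changes with prompt wording alone.")
    for variant, outcome in outcomes.items():
        print(f"  {outcome!r} <- {variant!r}")
```

The point of the exercise is not the code itself but the habit: before trusting a tool, deliberately reword, vary edge cases, and record what changed.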

This workshop is for you if…

  • You already use AI tools without a clear framework or validation process
  • You want to train non-technical teams in operational reflexes
  • You have a concrete challenge in HR, marketing, customer relations, or governance
  • You are looking for a pedagogical intervention that is immediately applicable
  • You want to prepare your AI use cases for the AI Act's phased requirements
  • You are an SME or mid-sized company starting with AI and prefer to start right

Who facilitates this workshop?

The workshop is led by Dieneba Kouyaté-Maillard, founder of Prompt & Pulse. She supports organisations on responsible AI use, governance, and reducing bias in business practices.

The approach is pedagogical, concrete, and adapted to non-technical teams as well as more advanced profiles.

What people often ask us

Who is the AI bias workshop for?
For HR, marketing, leadership, product, and innovation teams who use — or evaluate — AI tools and want to limit bias in their decisions. No technical prerequisites are required.
What format and duration should we plan for?
Half-day or full-day depending on how deep you want to go. The workshop runs onsite or online, with a maximum of 20 people per session to keep it genuinely interactive.
What do participants take away?
A complete, shareable pack, an algorithmic bias analysis framework, ready-to-use checklists, and a reusable mini self-assessment guide for their own tools.
Do we need AI prerequisites?
No. The workshop starts from concrete, everyday examples and builds simple reflexes you can apply immediately — whatever participants' familiarity with AI.
How can you detect algorithmic bias in a company AI tool?
Several warning signs help: recommendations skewed toward certain profiles, results that change depending on prompt wording, or decisions that are hard to explain. The workshop provides a structured analysis framework and exercises to identify bias in your own tools.
How does this workshop help prepare your teams for the AI Act requirements?
The workshop is grounded in the governance, transparency, and non-discrimination expectations for high-risk systems. It helps your teams build the AI literacy reflexes required since 2 February 2025 — with concrete steps adapted to your context, not a list of additional risks.
Which AI tools can you analyse during the workshop?
Recruitment screening or decision-support tools, conversational assistants, content generation tools, scoring systems, or internal automations. The workshop adapts to your real-world uses — not generic examples.
Is the workshop suitable for an SME that is just getting started with AI?
Yes. No technical maturity is required. The workshop helps you set simple reference points before uses become hard to control. That is often the best time to intervene.

Ready to equip your teams?

You describe your context. We propose a format that fits. No commitment.

Discover our complementary offers: