Why critical thinking is the real AI skill

Bias starts in the prompt

What you ask AI shapes what it produces. A poorly worded prompt, an overly leading prompt, or a culturally biased prompt can generate outputs that look reliable — and are not.

Outputs are not checked automatically

Teams using AI day to day do not always have the reflexes to spot an error, a hallucination, or a representation bias in an AI result.

Decisions made with AI create accountability

Recruitment, communications, data analysis, writing — if AI influences a decision, responsibility remains human. Without a clear framework, that responsibility becomes blurred.

When to seek ethical AI support

Do you recognise one of these situations?

Your teams use AI, but without distance

The tools are there. The uses are there. But no one has asked yet: is what AI produces reliable, neutral, verifiable?

You have doubts about output quality

The results look good. But you do not know exactly why — or how to detect when that stops being true.

You want clear rules

Who uses AI for what? With which limits? How do we validate an AI output before using it? These questions deserve concrete answers.

You need to understand the regulatory landscape in France

The EU AI Act entered into force in August 2024. In France, the CNIL adds a specific layer for AI systems that involve personal data. Understanding what applies to your uses — without legal jargon — is exactly where we start.

EU AI Act + CNIL — what applies in France

France follows the EU AI Act framework, with obligations phased in progressively from 2024 to 2026. The CNIL adds a specific layer for AI uses that involve personal data and GDPR compliance. For international teams operating in France, both reference points are relevant. The consulting sessions cover the key watch points for your context in plain language; they do not replace legal advice. If your situation requires a lawyer specialised in AI regulation or data protection, I can help you find one in France.

Learn more: EU AI Act (European Commission) · CNIL AI practical guides

How the approach works

Workshop, ethical AI diagnosis, and tailored governance support are not three separate services. They are three entry points into the same consulting approach — depending on where you are and what you need.

Not sure where to start?

The ethical AI diagnosis maps your current AI uses, identifies the main ethical risk areas, and clarifies which component makes sense next. It is the right starting point when you want a clear picture before committing.

Ready to train your teams?

The algorithmic bias workshop — half day or full day — gives teams the language, reflexes, and tools to identify bias in their AI uses. It works as a standalone session or as a step within broader support.

Need lasting governance guardrails?

Tailored support builds the framework your organisation keeps: an AI usage charter, a role-specific prompt library, an evaluation protocol, and regular check-ins. It is the component for teams that want structured, long-term change — not a one-off session.

All three components can be combined. Many organisations start with a diagnosis, run a workshop, and then build governance from there. Others start directly with tailored support. The entry point depends on your context, not on a fixed programme.

Formats — workshop, full-day, or tailored support

Workshop — half day

Bias and practical AI reflexes

A first level of awareness and practical tools, directly applicable to your team's AI uses.

  • Identify and name the main biases: selection, representation, confirmation, automation
  • Recognise warning signs in a prompt, a score, or an AI-enabled process
  • Ask the right questions before using an AI result in a decision, a piece of content, or a process
  • Put simple, lasting reflexes in place to safeguard decisions

Workshop — full day

Bias, governance and action plan

A full day to analyse your real AI uses and build a collective governance framework adapted to your business context.

  • Spot bias risks in your team's AI uses
  • Distinguish algorithmic bias, quality errors and hallucinations
  • Analyse an AI use case with a structured framework
  • Run a bias test on your own prompts and processes
  • Install guardrails adapted to the team's context
  • Contribute to a collective continuous-improvement action plan
  • Understand the key EU AI Act and CNIL reference points for your context

Tailored support

Ethical framing and governance for AI uses

For organisations that want to go further: governance framework, AI usage charter, and long-term follow-up.

  • Mapping of existing AI uses and key watch points
  • AI usage charter adapted to your business context
  • Ethical evaluation framework for outputs
  • Role-specific prompt library reviewed to reduce identified bias
  • Regular follow-up check-ins at your pace

What you leave with

🔍 An algorithmic bias analysis framework adapted to your uses
⚠️ Practical reflexes to detect warning signs in AI outputs
📋 An AI usage charter adapted to your organisation
💬 A role-specific prompt library reviewed to reduce identified bias
🗺️ A collective continuous-improvement action plan
⚖️ Key EU AI Act and CNIL reference points for your context of use in France

What this support is not

  • Not advice on choosing AI tools or technical architecture
  • Not legal validation or a formal compliance audit — but if your situation requires a lawyer specialised in AI regulation or data protection, I can help you find one in France
  • Not a promise that AI will be perfect after the workshop
  • Not a theoretical talk disconnected from your real uses

Who is this for?

SME leaders

Typical situation: you use AI, or your teams have started using it, without a shared framework.

You want to understand real risks — not theoretical ones — and put rules in place that hold over time.

Operational teams

Typical situation: you use AI daily and want to know when to trust it and when not to.

You want concrete reflexes, not a lecture on ethics.

HR and recruitment teams

Typical situation: AI is involved in CV screening, job descriptions, or candidate analysis.

You need to identify selection and representation bias before it influences decisions.

Marketing and communications teams

Typical situation: AI produces text, visuals, or data analysis that you use without always checking.

You want to build a critical eye on what AI generates for your brand.

International teams working with French-speaking markets

Typical situation: your teams are anglophone but operate in France or with French-speaking clients, partners, or data.

You need to understand how EU AI Act and CNIL obligations apply to your AI uses — without getting lost in French regulatory language.

What your teams can do afterwards

They can name what they see

Selection bias, representation bias, confirmation bias, automation bias — they have the language to identify what is really happening in an AI result.

They ask the right questions

Before using an AI output in production, they know what to check, what to question, and when human validation is required.

They safeguard decisions

Guardrails are in place. Rules are clear. AI remains a tool they steer — not an authority they follow.

They know the useful regulatory reference points

EU AI Act, CNIL, accountability for use — they understand the essentials that apply to their context in France, without legal jargon. And if they need a lawyer specialised in AI regulation, they know where to start looking.

AI does not replace human judgement. It influences it — and keeps doing so as long as we do not notice.

Who leads the sessions

Dieneba LESDEMA — Founder of Prompt & Pulse

Algorithmic bias and ethical AI specialist · Certified prompt engineer (Jedha Bootcamp) · Member of SheLeadsAI · Member of Hub France IA

I support organisations that want to use AI with critical thinking — not those looking for one more tool. My approach draws on 25 years of international corporate experience (Sanofi, Baxter, Aga Khan Academies) across France, Zimbabwe, South Africa, and the UK — and one simple conviction: what we give AI shapes what it returns, and what it returns influences our decisions.

I work at both levels — inputs and outputs — because bias operates everywhere. That international background is also what sharpens my eye for cultural blind spots in AI systems.

Frequently asked questions

What does the ethical AI consulting cover?

It covers three connected areas: bias detection in prompts and outputs, team training to build critical thinking around AI, and governance framing — AI usage charter, guardrails, and regulatory watch points. The starting point and depth depend on your context: a half-day workshop, a full-day session, or tailored ongoing support.

How does the algorithmic bias workshop relate to the full consulting offer?

The algorithmic bias workshop is one component of the broader consulting approach. It focuses on awareness and practical reflexes for teams. The full consulting offer adds governance framing, an AI usage charter, a role-specific prompt library, and long-term follow-up. Not sure where to start? The ethical AI diagnosis is designed for that.

Do you work with international or anglophone teams?

Yes. Remote sessions are available regardless of location. The consulting approach is particularly relevant for international or anglophone teams operating in France or working with French-speaking markets, where the EU AI Act and CNIL frameworks apply to their AI uses.

Does the EU AI Act apply to my organisation?

Yes, depending on your uses and your role in the value chain. The EU AI Act entered into force in August 2024 and applies progressively. In France, the CNIL adds a specific layer for AI systems that involve personal data. Together, these frameworks define the key watch points for your AI uses. The consulting sessions cover those reference points in plain language. They do not replace legal advice — but if your situation requires a lawyer specialised in AI regulation or data protection, I can help you find one in France.

Is this legal advice or a compliance certification?

No. The consulting covers ethical governance and practical regulatory awareness — not legal validation or compliance certification. That said, if your situation requires a lawyer specialised in AI regulation or data protection, I can help you identify and connect with the right person in France. I am not a lawyer, but I know the field well enough to point you in the right direction.

What do we leave with?

Depending on the format: an algorithmic bias analysis framework adapted to your uses, practical reflexes to detect AI warning signs, an AI usage charter adapted to your organisation, a role-specific prompt library reviewed to reduce identified bias, a collective continuous-improvement action plan, and key EU AI Act and CNIL reference points for your context.

Can sessions be run on-site or remotely?

Yes. Workshops and support sessions adapt to your constraints — on-site at your premises or remote via video conference. Both formats are available for teams based anywhere in France, and for international teams working remotely with French-speaking markets.

Want your teams to use AI with critical thinking?

Let's talk about your AI uses, the bias you want to prevent, and the guardrails to put in place. The goal is not to use more AI. The goal is to use AI with more discernment.

Explore the full Prompt & Pulse approach