Why critical thinking is the real AI skill
Bias starts in the prompt
What you ask AI shapes what it produces. A poorly worded prompt, an overly leading prompt, or a culturally biased prompt can generate outputs that look reliable — and are not.
Outputs are not checked automatically
Teams using AI day to day do not always have the reflexes to spot an error, a hallucination, or a representation bias in an AI result.
Decisions made with AI create accountability
Recruitment, communications, data analysis, writing: if AI influences a decision, responsibility remains human. Without a clear framework, that responsibility becomes blurred.
When to seek ethical AI support
Do you recognise one of these situations?
Your teams use AI, but without distance
The tools are there. The uses are there. But no one has asked yet: is what AI produces reliable, neutral, verifiable?
You have doubts about output quality
The results look good. But you do not know exactly why — or how to detect when that stops being true.
You want clear rules
Who uses AI for what? With which limits? How do we validate an AI output before using it? These questions deserve concrete answers.
You need to understand the regulatory landscape in France
The EU AI Act entered into force in August 2024. In France, the CNIL adds a specific layer for AI systems that involve personal data. Understanding what applies to your uses — without legal jargon — is exactly where we start.
How the approach works
Workshop, ethical AI diagnosis, and tailored governance support are not three separate services. They are three entry points into the same consulting approach — depending on where you are and what you need.
Not sure where to start?
The ethical AI diagnosis maps your current AI uses, identifies the main ethical risk areas, and clarifies which component makes sense next. It is the right starting point when you want a clear picture before committing.
Ready to train your teams?
The algorithmic bias workshop — half day or full day — gives teams the language, reflexes, and tools to identify bias in their AI uses. It works as a standalone session or as a step within broader support.
Need lasting governance guardrails?
Tailored support builds the framework your organisation keeps: an AI usage charter, a role-specific prompt library, an evaluation protocol, and regular check-ins. It is the component for teams that want structured, long-term change — not a one-off session.
All three components can be combined. Many organisations start with a diagnosis, run a workshop, and then build governance from there. Others start directly with tailored support. The entry point depends on your context, not on a fixed programme.
Formats: half-day workshop, full-day workshop, or tailored support
Bias and practical AI reflexes
A first level of awareness and practical tools, directly applicable to your team's AI uses.
- Identify and name the main biases: selection, representation, confirmation, automation
- Recognise warning signs in a prompt, a score, or an AI-enabled process
- Ask the right questions before using an AI result in a decision, a piece of content, or a process
- Put simple, lasting reflexes in place to safeguard decisions
Bias, governance and action plan
A full day to analyse your real AI uses and build a collective governance framework adapted to your business context.
- Spot bias risks in your team's AI uses
- Distinguish algorithmic bias, quality errors and hallucinations
- Analyse an AI use case with a structured framework
- Run a bias test on your own prompts and processes
- Install guardrails adapted to the team's context
- Contribute to a collective continuous-improvement action plan
- Understand the key EU AI Act and CNIL reference points for your context
Ethical framing and governance for AI uses
For organisations that want to go further: governance framework, AI usage charter, and long-term follow-up.
- Mapping of existing AI uses and key watch points
- AI usage charter adapted to your business context
- Ethical evaluation framework for outputs
- Role-specific prompt library reviewed to reduce identified bias
- Regular follow-up check-ins at your pace
What you leave with
What this support is not
Who is this for?
SME leaders
Typical situation: you use AI, or your teams have started using it, without a shared framework.
You want to understand real risks — not theoretical ones — and put rules in place that hold over time.
Operational teams
Typical situation: you use AI daily and want to know when to trust it and when not to.
You want concrete reflexes, not a lecture on ethics.
HR and recruitment teams
Typical situation: AI is involved in CV screening, job descriptions, or candidate analysis.
You need to identify selection and representation bias before it influences decisions.
Marketing and communications teams
Typical situation: AI produces text, visuals, or data analysis that you use without always checking.
You want to build a critical eye on what AI generates for your brand.
International teams working with French-speaking markets
Typical situation: your teams are anglophone but operate in France or with French-speaking clients, partners, or data.
You need to understand how the EU AI Act and CNIL obligations apply to your AI uses, without getting lost in French regulatory language.
What your teams can do afterwards
They can name what they see
Selection bias, representation bias, confirmation bias, automation bias — they have the language to identify what is really happening in an AI result.
They ask the right questions
Before using an AI output in production, they know what to check, what to question, and when human validation is required.
They safeguard decisions
Guardrails are in place. Rules are clear. AI remains a tool they steer — not an authority they follow.
They know the useful regulatory reference points
EU AI Act, CNIL, accountability for use — they understand the essentials that apply to their context in France, without legal jargon. And if they need a lawyer specialised in AI regulation, they know where to start looking.
AI does not replace human judgement. It influences it, and all the more so when we fail to notice.
Who leads the sessions
Dieneba LESDEMA — Founder of Prompt & Pulse
Algorithmic bias and ethical AI specialist · Certified prompt engineer (Jedha Bootcamp) · Member of SheLeadsAI · Member of Hub France IA
I support organisations that want to use AI with critical thinking — not those looking for one more tool. My approach draws on 25 years of international corporate experience (Sanofi, Baxter, Aga Khan Academies) across France, Zimbabwe, South Africa, and the UK — and one simple conviction: what we give AI shapes what it returns, and what it returns influences our decisions.
I work at both levels — inputs and outputs — because bias operates everywhere. That international background is also what sharpens my eye for cultural blind spots in AI systems.
Frequently asked questions
Want your teams to use AI with critical thinking?
Let's talk about your AI uses, the bias you want to prevent, and the guardrails to put in place. The goal is not to use more AI. The goal is to use AI with more discernment.
Explore the full Prompt & Pulse approach
Ethical AI consulting for SMEs in France
Governance, bias awareness and practical guidance to use AI responsibly — without complexity.
Explore ethical AI consulting →
Algorithmic bias workshop for businesses
Build concrete reflexes to detect bias and set ethical guardrails across real use cases.
See the algorithmic bias workshop →
Ethical AI diagnosis
Map your AI uses, identify ethical risk areas, and prioritise actions — a clear starting point.
Start with the AI diagnosis →