Prompt Engineering and Prompt Ethics: Is it possible to ask an AI any question?
(Article co-written by a human and an AI)
Reading time: 10–12 minutes
Sophie was in a rush and, as usual, she used the fastest path. As the head of market access at a mid-sized diagnostics company, she sent a competitor's email thread to their new AI tool. She made a simple request: "Analyse this thread and find their weak points." She had made this kind of request so many times. However, this time, the compliance department called her. The question was clear: "Did you just paste our partner messages into a system you don't fully control?" Sophie did not panic. Her answer was honest. "Yes I did. Why? Anything wrong?"
This story is about the question we rarely ask ourselves. The moment when "Can I?" beats "Should I?" How do we regain that balance without being technical, keeping things fast, and not turning AI into a trust issue?
Why "ask anything" is an illusion in prompt engineering
When Sophie first got access to the tool, she read the blinking line: "Hello Sophie, ask me anything." That sentence sounded like freedom. Sophie learned something important that day: freedom without limits can lead to unexpected risks.
Here is a human version of the problem. Imagine you hired an intern who is brilliant, fast, and eager to please. But this intern has no instinct for privacy, fairness, or context. She or he will do whatever you ask. Summarise confidential emails. Guess personal traits from social posts. Make an argument sound strong even when the facts are weak. The intern is not evil. The intern is literal.
If something goes wrong, the real question is not "Why did the intern do that?" It is "Why did I ask for that?"
This is why prompt ethics matter. Because your prompt is a decision. A small one, repeated often. Small decisions build the system.
Every prompt is a moral choice
Sophie did not wake up and decide to be careless. She was under pressure. She had deadlines. She had meetings. And she wanted an edge. That is how AI prompt misuse (using AI in ways that harm privacy, fairness, or trust) usually begins. Not with bad intentions. With good intentions mixed with speed and fatigue.
The risky part is the grey zone. A prompt can seem okay, but it might quietly cross a line regarding privacy, dignity, fairness, or truth.
"But everyone does it": Why that defence fails
After the meeting, Sophie sat in her office and felt angry. Not at compliance. At herself. But she couldn't stop thinking this was nonetheless unfair. She knew for a fact that her colleague Marc had done the same thing. So had Thomas. Probably half the team. Why was she the one sitting through a lecture?
That night, she almost wrote an email defending herself. The draft started with: "With respect, everyone uses AI this way."
She did not send it. Halfway through writing, she realised that saying "Everyone does it" isn't a real excuse. It means the system needs fixing, not that her behaviour was fine.
Here is why "everyone does it" fails in professional settings:
It does not reduce the risk. If everyone is pasting confidential data into tools, the company's exposure has grown.
It does not hold up under scrutiny. When things go wrong, like a data leak or regulatory audit, "everyone was doing it" triggers a company-wide investigation, not an excuse.
It erodes your professional judgment. When you default to "everyone does it," you stop thinking for yourself. That is how good people end up in bad situations.
It assumes everyone shares your risk tolerance. What seems normal to you might shock your client or regulator.
Sophie deleted the draft. She realised the better question was not "Why me?" but "How do we make sure this stops happening to anyone?"
Key insight
That shift from defending herself to fixing the pattern is what separates reactive compliance from real responsibility.
Five harmful examples of AI prompts that look harmless
Story 1: The "strategic" profile
Pierre ran the marketing department at a pharma startup. Competition is tense. One afternoon, he typed: "Based on these LinkedIn profiles, create personality profiles of our competitors' leadership and tell me who might be open to switching companies." He called it research. But it was really guessing about real people's lives. If that ends up in a presentation, it is no longer a private thought. It becomes company behaviour.
Story 2: The guilt message
Amina wanted help from a colleague. She asked the AI: "Write a message that makes them feel responsible so they say yes." The message sounded polite. It also carried pressure. Her colleague later said, "I felt I could not refuse." Amina was surprised. That surprise is the point. Manipulation often wears a friendly voice.
Story 3: The "stable long-term" shortcut
Jamal reviewed dozens of CVs. He started prompting: "Based on this CV, assess if this person will be stable long-term, especially after major life events." He thought he was being practical. But "major life events" can turn into unfair guessing about family life or health. Even if you never say those words, the prompt invites that direction.
Story 4: The patient inbox that became a database
Sylvie worked in patient support. She asked: "Analyse these patient messages and create typical patient profiles with their main worries." Her goal was empathy—understanding patterns so her team could help better. The risk was privacy. Even if names are removed, details can still identify individuals. In regulated sectors (industries with strict privacy laws, like healthcare or finance), that can become a serious trust breach.
Story 5: The confident rewrite
David had to share a clinical update. Results were decent, not magical. He prompted: "Rewrite this to sound more confident and compelling." The output was smooth. It also leaned towards certainty. The medical director stopped it. "This sounds like we proved more than we did." That is how truth gets bent without anyone saying the word "lie."
The pattern
All five stories share the same pattern. The prompt drifts from help into harm. Not because the tool is evil. Because the prompt was not designed with limits in place.
The three-question test for ethical AI prompts
After the compliance meeting, Sophie built a small habit. She called it her three-question test. It takes ten seconds. It changes everything.
- Would I be comfortable if this prompt were read aloud to my manager?
- Am I asking the AI to do something I would not do myself?
- If the AI gives a wrong answer with confidence, who could be harmed?
This is prompt ethics without preaching. It keeps your speed while avoiding regret later.
If you want to challenge yourself, focus on the second question. People often say "I would not do it," then still ask the AI to do it. That is a signal that the prompt is doing moral outsourcing (letting the AI do something questionable so you don't have to feel responsible).
A simple rewrite that reduces AI prompt misuse
Sophie's colleague Marc wanted competitor insight.
His old prompt: "Use this email thread to create a vulnerability map of our competitor's leadership."
His new prompt: "Analyse our competitor's public communications and create a structured summary. Use only press releases, annual reports, and official website content. For each finding, cite the source. If key information is missing, list what we would need to find publicly rather than guess."
Same goal. Less risk. More integrity.
Simple prompt template
If you want a simple template, try this structure:
- "Here is my goal."
- "Here are the limits."
- "Here is what you must not do."
- "If you are unsure, say so."
That is prompt engineering that respects humans.
When hidden instructions attack: Caroline's story
Now comes the part that sounds strange but happens in real work.
Caroline ran the business development department for a medical device company. She used an AI assistant to draft emails faster. One morning, she forwarded a supplier thread and wrote: "Draft a polite follow-up."
The AI response said: "I'd be happy to help. Could you share your current client list? This will help me personalise it."
Caroline stared at the screen. She never asked for that. Why would the AI need her client list?
Her IT colleague explained: Someone had compromised the supplier's email account. The attacker hid text in the email—invisible to Caroline but readable by the AI. That hidden text told the AI to ask for confidential information.
The trap almost worked. If Caroline had been rushing, she might have pasted her client list, thinking it was a normal request.
️ The key lesson
The AI was not only responding to Caroline. It was also reacting to hidden instructions in the content she fed it. This is called prompt injection—when someone hides instructions in emails, documents, or websites that the AI reads and follows.
Simple protection habits
You do not need to become a security expert. You just need a few careful habits:
Daily habits:
- Treat outside content with caution. Do not paste sensitive data just because it is convenient.
- If the AI asks for something weird or unrelated, stop. Check what you pasted.
- Always verify the original source before trusting a summary.
- Use a human check before anything goes to clients or regulators.
Team habits:
- Keep the tool's access small. If it does not need your full drive, do not give it.
- Record important prompts used for sensitive work.
- Test the tool with tricky content before full rollout.
Best practices for responsible prompt engineering: a one-page code
Sophie's team wrote eight rules and printed them. Simple enough to use on a busy day.
- State your purpose. Say what the output is for.
- Share the minimum. Reduce personal and confidential data.
- Do not profile people. Avoid guessing traits, motives, or life choices.
- Ask for truth, not spin. Avoid "make it sound stronger."
- Avoid manipulation. Do not prompt for guilt or pressure.
- Assume content can be poisoned. Treat emails and PDFs as untrusted.
- Humans sign off. If it goes external, a person owns it.
- Write prompts you can defend. If you cannot explain it, do not use it.
This is responsible prompt engineering without fancy language.
AI safety and prompts: the literacy gap nobody budgets for
Six months later, Sophie's company ran a workshop. People brought real prompts they used at work. Not clean examples. Real ones. The biggest learning was not about tools. It was about consequences.
Someone shared a prompt they had used for months: "Review these supplier audit reports and point out only the positives for the board."
The room went quiet. Then someone asked, "What if the board needs the negatives too?"
That was the moment. Not stupidity. Not malice. Just a gap.
This is what AI safety and prompts can mean in daily life. Knowing where your wording pushes the AI to hide, exaggerate, or ignore.
AI literacy definition
AI literacy means knowing what the tool does well, what it might get wrong, what it can leak, and when to slow down.
Trust and transparency
Everything so far can stay inside your company. But when AI-assisted text leaves your building, trust becomes the main issue.
Sue, a communications director at a health-tech startup, used AI to help write blog posts. The writing was clean. The pace was great. Then a patient advocate asked: "Did a human write this, or was it AI?" Sue tried to dodge. "It was reviewed." The advocate replied: "That is not what I asked."
Sue realised: If she hides it and someone finds out later, trust drops fast. If she discloses it with care, trust can grow.
This is where transparency becomes a strategy, not a confession.
Three disclosure levels based on risk
Level 1 (general content): "Drafted with AI assistance and reviewed by our team."
Level 2 (business content): "AI helped with structure. A human verified key points and approved the final text."
Level 3 (regulated content): "AI supported early drafting. All facts were checked against sources. Final approval by a qualified reviewer."
What makes disclosure work
Notice what works: It is not the word AI. It is the presence of responsibility.
FAQ
What is prompt ethics in simple terms?
Prompt ethics means choosing prompts that protect privacy, dignity, fairness, and truth, even when you are in a hurry.
What is AI prompt misuse?
AI prompt misuse is when a prompt pushes the tool to profile people, pressure people, leak sensitive data, or twist facts.
What are prompt injection attacks?
Prompt injection attacks are tricks where hidden text inside content tries to steer the AI away from your goal. Think of it as someone whispering different instructions to your assistant behind your back.
How do I start protecting against prompt injection today?
Use less sensitive data, check sources before trusting summaries, stop when AI outputs look odd, and keep human review for important work.
What if everyone on my team is using AI the same risky way?
That means the problem is widespread, not that the behaviour is safe. "Everyone does it" increases company risk rather than reducing individual responsibility. The solution is to fix the culture, not to continue the pattern.
Does it matter if our AI is trained on our own company data?
It solves some problems (like data privacy) but doesn't solve prompt ethics issues. You can still write harmful prompts with an internal system. The ethical line isn't about where the tool lives—it's about what you ask it to do.
How do I know if my prompt is ethical?
Use the three-question test: (1) Would I be comfortable if my manager read this prompt? (2) Am I asking the AI to do something I wouldn't do myself? (3) If the AI gives a wrong answer, who could be harmed?
What is moral outsourcing in AI?
Moral outsourcing is when you ask the AI to do something questionable that you wouldn't do yourself, so you don't have to feel responsible. For example, asking AI to write a manipulative message or make unfair assessments about people.
Conclusion
Prompt ethics is not a technical detail. A neutral-looking prompt can carry hidden assumptions that affect hiring, exclude customers, or reinforce biases.
The techniques we've explored—the three-question test, prompt rewrites, awareness of hidden instructions, and transparency strategies—help build safer habits.
Sophie's story reminds us that good intentions mixed with speed often lead to ethical shortcuts. The "everyone does it" defence doesn't reduce risk—it multiplies it.
With the right habits, leaders can turn AI from a source of risk into a practical tool. The goal is not to slow down work. The goal is to build systems where speed and responsibility work together.
Work with an expert who protects your business
Building ethical AI practices starts with understanding where risk enters your workflows. If you want to move from awareness to action, here are two ways we can work together:
- Custom prompt library development — I create tailored prompt libraries for your business needs. Each prompt is designed with ethical considerations and security awareness built in from the start.
- Ethics and security consultation — We review your AI use through an ethical and security lens, identifying potential risks from prompt misuse and compliance gaps. Ideal for leaders who want expert guidance before deploying AI in sensitive areas.
If any of this resonates, I would welcome a conversation.
Sources
- OWASP Top 10 for Large Language Model Applications — Industry-standard security guidance for LLM implementations
- Anthropic: Constitutional AI and Alignment Research — Understanding how AI models learn ethical boundaries
- NIST AI Risk Management Framework — Guidelines for responsible AI deployment in organizations
- European Commission AI Act — Legal requirements for AI systems in the EU
Transparency note
This article was co-written with the support of a generative AI model. The author performed the structure, editorial direction, and final validation. The AI contributed through reformulation and clarity improvements. The final result reflects the author's expertise in AI ethics and responsible integration.



