Transparent Generative AI Use in Charities & NGOs — Building Trustworthy Infrastructure
- Sandra Ahmidat

- Jan 22
Artificial intelligence (AI) is already part of the daily workflow in many organisations — whether through Microsoft 365 Copilot, Google Workspace Gemini, or other tools that support writing, summarisation, interpretation, and clarity.
In most cases, this is generative AI — a type of artificial intelligence that works with language and information. Generative AI can create text, summaries, explanations, and structured outputs, and it can also help organisations interpret and make sense of existing data by identifying patterns, summarising insights, and translating complex information into plain language.
But for charities, NGOs and social impact organisations, the central leadership question isn’t simply whether to use AI — it’s:
How can we use AI transparently, ethically, and in a way that builds trust with all stakeholders?
This article explains what AI transparency means in practice, why it matters for organisations with mission-critical work, and how to build AI practices that are ethical, accessible, and grounded in international guidance.

This article focuses specifically on generative AI — systems that create or interpret language and content — rather than predictive models, automated decision systems, or rule-based workflows. While those systems raise important governance questions, generative AI presents distinct transparency challenges because it directly produces explanations, narratives, and reports that shape human understanding.
What Transparency Means in AI Use
Transparency in AI means making sure people understand how AI systems work and how they affect decisions. It involves clear communication about:
- When AI is being used
- What data the AI processes
- How the results are checked or reviewed
- How humans stay responsible for decisions
This openness helps build trust. When people know how AI works and what it does, they feel more confident that the technology is fair and reliable. Trustworthy AI frameworks highlight that AI should be explainable to those affected and that users should understand its strengths and limits. This includes clear information about the AI’s purpose, how data is handled, and who governs the system.
International & European Standards for Transparency
European Trustworthy AI Guidelines
The European Commission’s Ethics Guidelines for Trustworthy AI list transparency as a core requirement. According to these guidelines, AI systems and their decisions should be explained in ways that stakeholders can understand, and humans need to be aware when they are interacting with AI.
EU Artificial Intelligence Act
The EU AI Act sets a legal framework for AI governance in the European Union. Under this law, certain AI systems — including general-purpose and generative AI — must meet transparency obligations so that people can see how and why they are used.
OECD Principles
The OECD AI Principles promote the use of AI that is transparent, trustworthy, and respectful of human rights, and they recommend clear disclosure practices and explainability to support public confidence.
Taken together, these standards show that transparency is no longer just best practice — it is an expected element of ethical AI governance for organisations operating today.
Why Transparency Matters for Charities & NGOs
Charities and NGOs are accountable to many audiences:
- Beneficiaries, who rely on clear, fair services
- Funders, who demand accountability and quality reporting
- Staff, who need clear governance to feel confident using AI
- The public, which expects ethical stewardship of sensitive data
Without transparency, AI can become a “black box” — producing results that are difficult to explain or justify. Documenting AI use builds trust and reduces risks related to bias, misunderstanding or reputational harm.
What a Transparent AI Practice Looks Like
A transparent AI practice is anchored in clear documentation, human oversight, explainability, and ethical governance. The key components include:
**Purpose & Boundaries:** Explain why AI is used and what it is not used for (e.g., not for eligibility decisions or making judgments about people’s lives). This sets clear expectations for staff and stakeholders.
**AI Inventory & Scope:** List the tools used — such as Copilot or Gemini — and describe where they fit into organisational processes.
**Human Oversight & Accountability:** Document who reviews and approves AI outputs, ensuring humans remain in control of final decisions and responsible for outcomes.
**Data Governance & Privacy:** Be clear about what data is shared with AI and what is strictly excluded, particularly personal or sensitive data (a short illustrative sketch follows this list).
**Explainability & Accessibility:** Provide explanations in plain language that stakeholders can understand. Transparency is about clear communication, not hidden technical detail.
**Risks & Mitigation:** Document known limitations of AI systems and how your organisation mitigates those risks.
These practices align with global AI governance frameworks and show that you are building infrastructure for ethical, transparent AI rather than simply rolling out tools.
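To make the inventory and data-governance components more concrete, here is a minimal illustrative sketch in Python. The field names, the register entry, and the keyword list are assumptions chosen for this example, not a standard or a required format; many organisations will capture the same information in a spreadsheet or policy document instead.

```python
# Illustrative sketch of an AI-use register entry and a simple pre-use
# check for excluded data categories. All names and values below are
# hypothetical examples, not a prescribed schema.

AI_USE_REGISTER = [
    {
        "tool": "Microsoft 365 Copilot",          # which system is in use
        "purpose": "Drafting and summarising internal reports",
        "not_used_for": "Eligibility or funding decisions about people",
        "data_allowed": ["anonymised programme summaries", "public documents"],
        "data_excluded": ["personal data", "health information", "financial identifiers"],
        "human_reviewer": "Programme Manager",    # who reviews and approves outputs
        "accountable_owner": "Director of Operations",
        "last_reviewed": "2025-01-15",
    },
]

# Very rough keyword screen: flags draft prompts that mention excluded
# data categories so a human can check them before anything is sent to an AI tool.
EXCLUDED_TERMS = ["date of birth", "address", "diagnosis", "bank account"]

def flag_excluded_data(draft_prompt: str) -> list[str]:
    """Return any excluded terms found in a draft prompt."""
    text = draft_prompt.lower()
    return [term for term in EXCLUDED_TERMS if term in text]

if __name__ == "__main__":
    draft = "Summarise the outcomes of our youth programme for the annual report."
    flagged = flag_excluded_data(draft)
    print("Flagged terms:", flagged or "none")
```

The point is not the code itself but the habit it represents: every AI use case has a documented purpose, excluded data categories, and a named human reviewer, and there is a simple check before anything sensitive reaches an AI tool.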
Ready-to-Use AI Transparency Report Prompt
Below is a copy-ready prompt you can paste into your organisation’s AI tool (such as Microsoft Copilot or Google Gemini). Through a simple, guided conversation, the AI will help you step by step to create an AI Transparency Report for your charity or NGO, using clear, non-technical language.
This prompt helps you produce a report that is clear, structured, and aligned with the expectations of transparency frameworks.
You are an AI Transparency Report Builder and European AI Governance Assistant.
We will create a **Generative AI Transparency Report** for our organisation step by step in a structured, accountable, traceable way based on European best practice.
You must follow this sequence exactly.
At each step you must:
1) Ask the leadership or management for details.
2) Wait for input.
3) Clarify any unknowns by asking more questions or proposing a *suggested answer* based on general organisational logic and EU AI transparency principles (e.g., EU AI Act Article 50 transparency context, trustworthy AI guidelines).
Do not proceed to the next section until explicit confirmation or input is received.
When asking questions, keep them concise and related to the organisational context.
**Start the sequence**:
### **SECTION 1 — Organisational Context**
1. What is the full name of the organisation?
2. What is its mission and core purpose?
3. Who are the primary stakeholders (beneficiaries, funders, partners, staff, public)?
4. Where does the organisation operate (countries, EU member states)?
*Wait for response.*
### **SECTION 2 — AI Inventory (Process Step)**
1. List all AI systems currently used in organisational processes (including embedded ones such as Microsoft Copilot, Google Gemini in licensed systems).
2. For each, describe the **scope of use** (what it is used for) and **process stage** (e.g., drafting, summarising, explanation).
3. If uncertain, ask: “Is this tool used in internal document generation, decision support, or external communication?”
*Wait for response and clarify.*
### **SECTION 2A — AI Risk Classification (Plain Language)**
For each AI tool or AI-supported activity listed above, confirm in simple terms that:
- AI is used only for low-risk support tasks (such as drafting, summarising, organising information, or improving clarity).
- AI does not make decisions about people, including eligibility, prioritisation, entitlement, or access to services.
- All final decisions, approvals, and communications are made by humans.
Based on this confirmation, state whether the current AI use is assessed as minimal or limited risk under the EU AI Act, and confirm that this classification will be reviewed if AI use changes.
*Wait for response and confirmation.*
### **SECTION 3 — Purpose & Boundaries**
1. For each AI use case listed, state the **purpose** (e.g., clarity of communication, quality reporting, internal summarisation).
2. Explicitly state **what AI is not used for** (e.g., no decisions about eligibility, no automated service allocation).
3. Confirm whether each use case has a documented purpose statement in organisational policy.
*Wait for response.*
### **SECTION 4 — Data Governance & Protection**
1. What categories of data are input into AI tools?
2. Which categories of data are *never* input? (Sensitive, personal, protected class, clinical, financial identifiers.)
3. Describe how data classification, access control, and redaction happen **before** any AI use.
4. If unknown, propose standard EU-aligned mitigation (no personal data in AI prompts; redaction protocols).
*Wait for response.*
### **SECTION 5 — Human Oversight & Responsibility**
1. For every AI use case, name the **role(s)** responsible for:
- reviewing AI outputs
- approving final text
- auditing process integrity
2. Confirm that humans remain *decision makers* with clear accountability, task lists, and sign-offs.
3. Identify the senior accountable owner for AI governance (e.g. Director, CEO, Board delegate).
*Wait for response.*
### **SECTION 6 — Transparency & Stakeholder Communication**
1. How does the organisation inform stakeholders that AI was used?
- Website policy
- Report footnotes
- Email disclaimers
- Communications guidance
2. Draft or confirm the **plain-language statement** that beneficiaries, funders, and the public can easily understand.
*Wait for response.*
### **SECTION 7 — Process & Explainability (EU Trustworthy Standard)**
Aligned with the EU human-centric trustworthy AI principle — “AI decisions should be explained in a manner adapted to stakeholders” — ask:
1. For each use case, explain *how* outputs are interpretable to stakeholders (summary trails, annotation, audit logs).
2. What mechanisms ensure traceability and auditability of outputs? (Versioning, logs, sign-offs.)
3. Suggest a process map showing inputs → AI step → human review → final output.
*Wait for response.*
### **SECTION 8 — Limitations & Risk Mitigation**
1. What are known tool limitations (hallucination, bias)?
2. How does the organisation mitigate those risks? (Review rules; flagged outputs; secondary validation.)
3. What training or governance rules are in place for staff?
*Wait for response.*
### **SECTION 9 — Stakeholder Q&A Preparation**
1. Identify likely questions from:
- Beneficiaries
- Funders
- Staff
- General public
2. Provide **clear, plain-language answers** to each.
Example Qs:
- “Did you use AI in this report?”
- “What data did you expose to AI?”
- “Who is accountable for AI outputs?”
*Wait for input.*
### **SECTION 10 — Continuous Improvement & Process Approach**
1. How often will this transparency report be reviewed?
2. What is the process for updating AI governance and transparency practices?
3. Which role is assigned *ongoing ownership* of transparency and compliance?
*Wait for response.*
**After all sections are completed**, synthesise the responses into:
✔ A structured **AI Transparency Report** with headings.
✔ *Plain language explanations* appropriate for both internal and external audiences.
✔ A process map linking organisational processes to AI integration and governance checkpoints.
✔ A stakeholder-facing summary aligned with EU transparency expectations.
**End sequence.**
References & Sources
European Commission — Ethics Guidelines for Trustworthy Artificial Intelligence. These guidelines identify transparency, explainability, and human oversight as core requirements for trustworthy AI systems.
European Union — Artificial Intelligence Act. The EU AI Act introduces transparency obligations for certain AI systems, including requirements to inform users when AI is used and to ensure explainability and traceability.
Organisation for Economic Co-operation and Development (OECD) — OECD AI Principles. Internationally recognised principles promoting transparent, accountable, and human-centred AI.
Charity Digital Skills Report — Research on AI adoption in charities and non-profit organisations, highlighting growing use alongside low levels of formal governance and transparency.
Orr Group — Research on transparency gaps in non-profit organisations, showing differences between organisational perceptions of transparency and stakeholder expectations.
Charity Digital — Sector analysis on practical AI use cases for charities, including reporting, communication, and operational support.
Fundraising Regulator / NICVA guidance — Sector guidance emphasising proportionality, transparency, and human oversight in the use of AI in charitable activities.
CAF (Charities Aid Foundation) — Research on public trust in charities and expectations around ethical and transparent use of technology.
Academic research on AI transparency and accountability — Including interdisciplinary studies on explainability, governance, and trust in AI systems used in human-centred contexts.


