With the rise of generative AI (GenAI) and its increasing accessibility, applying a human-centered lens to assess the impacts of these technologies has become essential. While some civil society organisations (CSOs) may find emerging tools such as AI-powered chatbots useful, capable of producing impressive outputs across a wide range of topics that could enhance aspects of their work, it’s important to recognise that the use of artificial intelligence (AI) is neither inevitable nor necessary. Organisations should not feel pressured to adopt these technologies simply because they are widespread or trending. For those who do choose to explore them, however, this blog post offers reflections on how to do so thoughtfully, based on our own learning experience here at ECNL.
As we explored how GenAI might fit into our own work, we found it helpful to establish clear conditions for how GenAI tools can be used for work-related activities in a safe and responsible manner—one that does not compromise our organisation, staff, mission or reputation. Large language models (LLMs), in particular, come with significant risks that cannot be overlooked. In a rapidly evolving tech landscape, it has become clear that carefully assessing these risks and developing a robust policy are essential to protecting the integrity of our work.
Our recommendation for organisations on a similar journey is to begin by adopting clear definitions of AI tools to support a more grounded and shared understanding of what they entail. Here are suggested definitions for AI and AI-powered chatbots:
- Artificial Intelligence (AI): Article 3(1) of the EU AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
- Generative AI (GenAI) systems: These systems create new content that resembles their training data, which can include text, images, or video. GenAI is an umbrella term that includes LLMs as one type of model. An AI-powered chatbot (such as OpenAI’s ChatGPT or Google’s Gemini) is a computer programme that uses AI and natural language processing (NLP) to respond to user questions across a wide range of subjects.
9 key considerations for creating an internal AI use policy
Purpose-driven, rights-based approach to AI
Any use of AI in the workplace should be grounded in a rights-based approach and accompanied by proactive measures to mitigate potential harms. Organisations should ensure that AI tools are used only when they are safe, strategic, and aligned with core values. AI should never be seen as a replacement for human workers, but rather as a tool to complement human expertise and enhance, not undermine, meaningful work. Efficiency should never come at the expense of labour rights, human rights, or organisational integrity.
Optional use, never required
The use of AI should never feel mandatory but rather be offered as an optional tool to enhance and elevate work. At the same time, organisations can support their workers in experimenting with these tools where they can save time and resources, provided that the risks and limitations associated with using these tools are thoroughly considered.
Human oversight and accountability
AI chatbots’ outputs are not always accurate or reliable. Therefore, it is essential to verify any AI-generated content against credible, independent sources before relying on it. Furthermore, AI chatbots often produce biased outputs that can be difficult to identify and navigate. The data used to train these models is predominantly Western-centric, which can result in outputs that are inaccurate, incomplete, or skewed, particularly concerning marginalised or underrepresented communities and those in the Global Majority. To address this, organisations should actively identify and correct bias in AI-generated outputs and ensure alignment with their diversity, equity, and inclusion (DEI) policies and human rights standards. This approach helps promote fairness, accountability, and responsible use of technology in line with organisational values.
Plagiarism and originality standards
AI-powered chatbots often fail to cite sources for the information they provide, and their outputs may inadvertently reproduce copyrighted material, potentially exposing users to significant legal liability for copyright infringement. To minimise these risks, organisations should prohibit copying or reproducing entire outputs, or substantial portions of them, generated by AI chatbots.
Privacy, data protection, and security
Data input into AI-powered chatbots may be used to train the AI model and could appear in responses provided to other users. Therefore, confidential organisational information must never be input into these systems. This includes, but is not limited to:
- Business relationships and personnel: Details about partners, contractors and other organisational contacts, as well as the finances, employees, managers or distributors of the organisation (whether current or prospective);
- Security information: Information relating to the security of the organisation’s premises, computers, telephone and communications systems;
- Organisational data: Commercial, financial, marketing, business development or business planning information; and
- Proprietary assets: Partner, contractor and supplier lists, technical information and know-how, including trade secrets and any information designated as confidential or reasonably expected to be confidential.
In addition, personal data relating to individuals within the organisation or external partners should not be entered into AI chatbots. This includes information that could identify individuals indirectly, as well as references to racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health information, sex life or sexual orientation, and genetic or biometric data. Organisations should ensure these guidelines align with their existing data protection policies and regulatory requirements.
Context-based use cases (original works, derivative materials, internal use)
Incorporating context-based use cases into internal policies can help staff visualise scenarios where AI can enhance work and provide practical guidance. Regardless of context, however, all AI-generated content intended for original work must be thoroughly fact-checked, carefully edited for tone, and reviewed to prevent issues such as plagiarism, factual errors, or copyright infringement. We suggest categorising use cases as follows:
Original Works:
- For key deliverables like reports, advocacy briefs, and public statements, AI should be limited to tasks like brainstorming and research assistance (such as literature reviews or summarising long articles).
- AI can also be used to improve phrasing and formatting or to handle repetitive tasks (such as citation formatting or adhering to style guides). Where strategic and harmless, generative AI can be useful in creating data visualisations and imagery, provided the outcome is clearly labelled as AI-generated.
Derivative Works:
- For derivative materials (such as social media posts, newsletters, summaries, grant writing assistance, translations, transcription of interviews and meetings, social media management or communications support), AI can be used more broadly, and the level of scrutiny need not be as rigorous as for original works. AI tools can assist organisations in summarising their original content, suggesting titles or headlines, or drafting preliminary versions of materials (slide decks, quiz questions based on their original work, infographics, and other format conversions). Final quality control and alignment with programmatic goals must always be ensured.
Internal Use:
- AI tools can assist with internal tasks like summarising meeting notes, organising documents, and analysing data. However, inputting sensitive information into these systems should only occur with appropriate approval.
All AI use in the workplace should involve clear and open communication with colleagues and managers regarding its purpose and application. This transparency becomes especially critical when questions arise about the suitability of content being input into these tools.
Staff training
Organisations should provide comprehensive and ongoing staff training on how to use AI tools effectively and ethically, including guidance on the specific challenges of working across multiple languages and diverse cultural contexts.
Continuous learning and adaptation
As AI tools are constantly changing, so should organisations’ approach to responsible experimentation. Organisations should remain committed to exploring new use cases and regularly updating their policies as the technology evolves, while ensuring alignment with their human rights and civic space missions and with worker rights.
Additionally, organisations should designate a policy owner or responsible officer to whom staff can turn with questions about generative AI use, and establish regular policy review cycles to maintain relevance and effectiveness.
Informed consent with external partners
When using AI, especially generative AI tools (such as AI-assisted transcription, note-taking, summarisation), in meetings or conversations involving external partners, explicit consent must be obtained in advance. Before using AI tools, partners should be clearly informed:
- That a generative AI tool will be utilised during the interaction;
- About the specific purpose and scope of AI use (transcription, summarisation, follow-up actions); and
- How the generated content or data will be stored, shared or processed.
To ensure complete transparency, we recommend documenting all consent in writing through formal channels such as email confirmations or meeting minutes, ensuring clear records of acknowledgment and acceptance from all parties involved. Where external partners do not provide explicit consent, alternative non-AI-based methods must be used.
Creating a living policy
We've come to view a GenAI usage policy not as a fixed rulebook, but as a living framework—open to evolution through ongoing dialogue and collective insight. At its heart is an unwavering commitment to protecting worker rights and well-being, upholding privacy, ensuring safety, and defending human rights. Ideally, AI, including GenAI, should be a tool that empowers teams, amplifying their abilities rather than replacing their roles. Its use should remain optional, supportive, and rooted in consent—not something imposed as a condition of work. Above all, we see this as a people-first policy: one that champions workers, supports their growth and helps the organisation thrive.