Digital Spaces for Action
Technology profoundly shapes people’s fundamental rights and the civic and democratic spaces they inhabit—both online and offline. Emerging technologies, especially artificial intelligence and neurotechnology, offer new opportunities for individuals and civil society to initiate and participate in collective action. These possibilities must be actively explored and expanded. However, the same technologies are also wielded by governments and non-state actors to restrict rights and freedoms. It is therefore essential to embed human rights principles into the development and deployment of technology. Yet civil society, particularly the marginalised groups most affected by these technologies, is often excluded from both policy-making processes and the development and deployment of technologies. To address this, we will work along four sub-thematic pathways:
- Research for stronger safeguards: Conduct rigorous, evidence-based research on the intersection of human rights, civic space, participatory democracy, justice, environmental defence and emerging technologies, in areas such as internet governance, artificial intelligence (AI) regulation, the impacts of AI (including generative AI) and the security-technology nexus.
- Modelling technology for civic action: Explore and co-develop civic technology solutions (for example, for digital fundraising, deliberative democracy and organising) that broaden inclusion, influence and civic engagement, and pilot them with partners in specific countries or cities.
- Protecting action in tech and AI regulation: Provide expertise and advocacy to shape AI-related regulations, develop best practices to support implementation of global and regional policies, and support strategic litigation addressing the impact of technologies on civic space. Advocate for robust procedural safeguards, such as human rights impact assessments, expose harms linked to emerging technologies, and engage the private sector to prevent, mitigate and remedy these impacts.
- Learning and engagement: Support human rights and civic actors in engaging with technology policy by enabling knowledge sharing and collaboration with digital rights groups. Promote best practices for the inclusive participation of civil society and marginalised communities in AI governance. Partner with AI developers to design tools that prioritise these groups, and form cross-sector alliances—including investors, journalists, trade unions and consumer organisations—to advance rights-based AI.