Source: Amnesty International
With the widespread use of Artificial Intelligence (AI) and automated decision-making systems (ADMs) that impact our everyday lives, it is crucial that rights defenders, activists and communities are equipped to shed light on the serious implications these systems have for our human rights, Amnesty International said ahead of the launch of its Algorithmic Accountability toolkit.
The toolkit draws on Amnesty International’s investigations, campaigns, media and advocacy in Denmark, Sweden, Serbia, France, India, the United Kingdom, the Occupied Palestinian Territory (OPT), the United States and the Netherlands. It provides a ‘how to’ guide for investigating, uncovering and seeking accountability for harms arising from algorithmic systems that are becoming increasingly embedded in our everyday lives, specifically in the public sector realms of welfare, policing, healthcare, and education.
Regardless of the jurisdiction in which these technologies are deployed, a common outcome of their rollout is not “efficiency” or “improved” societies, as many government officials and corporations claim, but rather bias, exclusion and human rights abuses.
“The toolkit is designed for anyone looking to investigate or challenge the use of algorithmic and AI systems in the public sector, including civil society organizations (CSOs), journalists, impacted people and community organizations. It is intended to be adaptable and versatile across multiple settings and contexts.
“Building our collective power to investigate and seek accountability for harmful AI systems is crucial to challenging abusive practices by states and companies and meeting this current moment of supercharged investment in AI, given how these systems can enable mass surveillance, undermine our right to social protection, restrict our right to peaceful protest and perpetuate exclusion, discrimination and bias across society,” said Damini Satija, Programme Director at Amnesty Tech.
The toolkit introduces a multi-pronged approach based on what Amnesty International has learned from its investigations in this area over the last three years and from collaborations with key partners. This approach not only provides tools and practical templates to research these opaque systems and the human rights violations they produce, but also lays out comprehensive tactics for those working to end these abusive systems by seeking change and accountability through campaigning, strategic communications, advocacy or strategic litigation.
One of the many case studies the toolkit draws on is Amnesty International’s investigation into Denmark’s welfare system, which exposed how the Danish welfare authority Udbetaling Danmark (UDK) uses AI tools to flag individuals for social benefits fraud investigations, fuelling mass surveillance and risking discrimination against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups. The investigation would not have been possible without collaboration with impacted communities, journalists and local civil society organizations, and in that spirit, the toolkit is premised on deep collaboration across disciplines.
The toolkit situates human rights law as a critically valuable component of algorithmic accountability work, especially given that this remains a gap in the ethical and responsible AI fields and in existing audit methods. Amnesty International’s method ultimately emphasizes collaborative work, harnessing the collective influence of a multi-method approach. Communities, and their agency to drive accountability, remain at the heart of the process.
“This issue is even more urgent today, given rampant unchecked claims and experimentation around the supposed benefits of using AI in public service delivery. State actors are backing enormous investments in AI development and infrastructure and giving corporations a free hand to pursue their lucrative interests, regardless of the human rights impacts now and further down the line,” said Damini Satija.
“Through this toolkit, we aim to democratize knowledge and enable civil society organizations, investigators, journalists, and impacted individuals to uncover these systems and the industries that produce them, demand accountability, and bring an end to the abuses enabled by these technologies.”
