Algorithmic Accountability Toolkit

Source: Amnesty International

Using the international human rights legal framework enables harms from digital systems to be situated within clear human rights language and opens up the possibility of challenging states on the basis of binding legal provisions to which they are party. When states deploy AI systems in the provision of services, they put a wide range of human rights at risk.

This chapter sets out a non-exhaustive list of human rights research methods that can be helpful when investigating algorithmic systems, and then discusses common human rights risks and violations caused by algorithmic systems, alongside case studies.

Human Rights Research Methods and Data Sources

Conducting human rights research on an algorithmic system requires an in-depth, multi-faceted and context-specific understanding of the complex issues surrounding the impact of public sector algorithms on people’s rights, including an understanding of the politics of the system. Some of the available methods and potential primary and secondary data sources include:

  • Testimonies and focus groups with impacted communities: testimonial evidence sits at the heart of any human rights research. Conducting interviews or focus groups with impacted individuals and communities forms the backbone of any evidence of algorithmic harms. This can be combined with participatory methods, or implemented in a way that enables communities to carry out their own peer research.
  • Legal analysis: analysing relevant international human rights law instruments and standards, relevant UN reports and studies, domestic interpretations of international standards, and the local laws that govern the public sector agencies deploying the algorithmic system (such as law enforcement or social protection agencies). Other data sources may include court transcripts and decisions by equality bodies and ombudspersons.
  • Discourse analysis: analysing the sociopolitical environment in which the algorithmic system is deployed. Consider looking at media reporting on the issue, media interviews conducted by government officials, government policy documents, and official statements. Interview those working on social justice issues locally to understand the context.
  • Survey data: consider running short surveys with people subject to the technology or system. In Amnesty International’s research into the UK government’s use of technology in social protection, surveys were used to understand welfare claimants’ experiences.

Sources of Human Rights Law

While human rights may have several bases in international law, most are reflected in international and regional treaties, which are binding on states that are party to them.

At the international level, these include the International Covenant on Civil and Political Rights (ICCPR), the International Covenant on Economic, Social and Cultural Rights (ICESCR), and other core UN human rights treaties such as the Convention on the Rights of the Child (CRC) and the Convention on the Rights of Persons with Disabilities (CRPD).

Information on whether a particular state is party to a specific treaty can be found online, for example through the United Nations Treaty Collection.

What these rights mean in practice evolves over time, and reference should be made to forms of “soft law” which may aid in this interpretation. Sources of soft law include resolutions and declarations of United Nations organs, and reports from experts, including the General Comments and other works of Treaty Bodies charged with interpretation of specific treaties, and the reports of UN thematic mandate holders (“Special Procedures”).

Many states are also bound by regional human rights treaties. These include the American Convention on Human Rights (in the Inter-American System), the African Charter on Human and Peoples’ Rights (in the African Human Rights System), as well as the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights (in the European Union and the Council of Europe systems, respectively). Regional courts and treaty bodies, such as the Inter-American Commission on Human Rights and the Inter-American Court of Human Rights, the African Court on Human and Peoples’ Rights, the Court of Justice of the European Union and the European Court of Human Rights, consider cases and issue judgments interpreting the implementation of these standards, and regional systems also often have their own thematic mandate holders. Sub-regional courts, such as the East African Court of Justice or the ECOWAS Court of Justice, may also issue judgments interpreting regional treaties.

Beyond human rights treaties, international or regional data protection treaties and regulations may also contain relevant safeguards. These include the Council of Europe’s Convention for the Protection of Individuals with regard to the Processing of Personal Data (“Convention 108+”, which is open to signatories outside the Council of Europe), the African Union Convention on Cyber Security and Personal Data Protection, and the General Data Protection Regulation (GDPR) of the EU.

In addition, human rights are – or should be – protected under domestic law, including in the decisions of domestic courts.

Right to Privacy

The Right to Privacy is a guaranteed right under the International Covenant on Civil and Political Rights, a core and binding human rights treaty that has been ratified by 174 of the 193 UN member states, as well as under regional treaties and the domestic law of many states. To comply with human rights law and standards, restrictions on the right to privacy must meet the principle of legality, serve a legitimate aim, and be necessary and proportionate to that aim.

Strategies used to detect fraud within digital welfare states can undermine the right to privacy. Digital welfare states often require the merging of multiple government databases in order to detect possible fraud within the welfare system. This often amounts to mass-scale extraction and processing of personal data, which undermines the right to privacy. Some welfare systems combine the processing of personal data with “analogue” forms of surveillance, including asking neighbours and friends to report people they suspect of welfare benefit fraud. This further exacerbates the violation of the right to privacy. This combination of analogue and digital surveillance demonstrates the importance of taking a holistic approach to human rights research.
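As an illustration of why this kind of fraud detection rests on mass-scale data processing, the short Python sketch below shows how records held in separate registers might be merged and screened with a simple consistency rule. Everything in it is hypothetical (the identifiers, registers, fields and flagging rule); it is a minimal sketch of the general technique, not a description of any real agency’s system.

    # Illustrative sketch only: hypothetical records and matching rules,
    # not any real agency's data model or fraud-detection logic.

    # Each "database" stands in for a separate government register,
    # keyed on a (hypothetical) national identifier.
    benefits_register = {
        "ID-001": {"benefit": "housing_support", "declared_household_size": 1},
        "ID-002": {"benefit": "unemployment", "declared_household_size": 3},
    }
    address_register = {
        "ID-001": {"registered_cohabitants": 2},   # mismatch with declaration
        "ID-002": {"registered_cohabitants": 3},
    }
    income_register = {
        "ID-001": {"reported_monthly_income": 0},
        "ID-002": {"reported_monthly_income": 2400},
    }

    def merge_records(person_id):
        """Join a person's entries across registers into one profile."""
        profile = {"person_id": person_id}
        for register in (benefits_register, address_register, income_register):
            profile.update(register.get(person_id, {}))
        return profile

    def flag_for_investigation(profile):
        """A crude rule: flag anyone whose registers disagree."""
        return profile.get("declared_household_size") != profile.get("registered_cohabitants")

    for person_id in benefits_register:
        profile = merge_records(person_id)    # mass-scale merging step
        if flag_for_investigation(profile):   # automated flagging step
            print(f"{person_id}: flagged for manual fraud investigation")

Even this toy rule requires joining every recipient’s records across registers before any suspicion exists, which is precisely the mass-scale extraction and merging of personal data that puts the right to privacy at risk.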

Case Study: Denmark

Amnesty International’s research on Denmark’s social benefits system, administered by the public authority Udbetaling Danmark (UDK, or Pay Out Denmark) and the company Arbejdsmarkedets Tillægspension (ATP), demonstrates how pervasive surveillance in the welfare system undermines the right to privacy.

The research found that the Danish government implemented legislation that allows mass-scale extraction and processing of the personal data of social benefits recipients for fraud detection purposes. This includes the merging of government databases, the use of fraud control algorithms on this data, the unregulated use of social media, and the reported use of geolocation data in fraud investigations. This data is collected from residents in receipt of benefits and their household members without their consent. This collection and merging of large amounts of personal data contained in government databases effectively forces social benefits recipients to give up their right to privacy and data protection. The collection and processing of large amounts of data, including sensitive data revealing characteristics such as race and ethnicity, health, disability and sexual orientation, together with the use of social media, are highly invasive and disproportionate methods of detecting fraud. Moreover, Amnesty International’s research showed that cases drawing on this data accounted for only 30% of fraud investigations, which raises concerns about whether processing it is necessary.

Benefits applicants and recipients are also subjected to “traditional” or “analogue” forms of surveillance and monitoring for the purposes of fraud detection. Such methods include the persistent reassessment of eligibility by municipalities, fraud control cases or reports from other public authorities, including tax authorities and the police, and anonymous reports from members of the public. These analogue forms of monitoring and surveillance, when coupled with overly broad methods of digital scrutiny, create a system of pernicious surveillance which is at odds with the right to privacy.

Right to Equality and Non-Discrimination

The Right to Non-Discrimination and the Right to Equality are both guaranteed under the International Covenant on Civil and Political Rights (ICCPR), as well as under most other international and regional treaties and the domestic law of most states. The UN Human Rights Committee (HRC) defines discrimination as “any distinction, exclusion, restriction or preference, which is based on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, and which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise by all persons, on an equal footing, of all rights and freedoms.” The ICCPR states that “all persons are equal before the law” and requires that the law “prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.”

Digitization and the introduction of automation and algorithmic decision-making can have a disproportionate negative impact on certain communities, resulting in a violation of the rights to equality and non-discrimination. As the UN Special Rapporteur on racism has noted, AI systems can lead to discrimination when they are used to classify, differentiate, rank and categorize because they “reproduce bias embedded in large-scale data sets capable of mimicking and reproducing implicit biases of humans, even in the absence of explicit algorithmic rules that stereotype”. The Special Rapporteur stated that “digital technologies can be combined intentionally and unintentionally to produce racially discriminatory structures that holistically or systematically undermine enjoyment of human rights for certain groups, on account of their race, ethnicity or national origin, in combination with other characteristics [and] digital technologies [are] capable of creating and sustaining racial and ethnic exclusion in systemic or structural terms”. The Special Rapporteur called on states to end “not only explicit racism and intolerance in the use and design of emerging digital technologies, but also, and just as seriously, indirect and structural forms of racial discrimination that result from the design and use of such technologies”.

Overall, the use of AI and automated decision-making systems within the distribution of social security can entrench discriminatory practices towards already marginalized groups.

In the context of AI and algorithmic decision-making, it is particularly important to note the distinction between direct and indirect discrimination.

  • Direct discrimination is when an explicit distinction is made between groups of people that results in individuals from some groups being less able than others to exercise their rights. For example, a law that requires women, and not men, to provide proof of a certain level of education as a prerequisite for voting would constitute direct discrimination.
  • Indirect discrimination is when a law, policy, or treatment is presented in neutral terms (i.e. no explicit distinctions made) but disproportionately disadvantages a specific group or groups. For example, a law that requires everyone to provide proof of a certain level of education as a prerequisite for voting has an indirectly discriminatory effect on any group that is less likely to have proof of education to that level (such as disadvantaged ethnic or other social groups, women, or others, as applicable).

Some algorithmic systems have included protected characteristics as inputs, causing the system to discriminate directly between groups of people. Others have been found to discriminate indirectly, often through the inclusion of proxy inputs.

A proxy is an input or variable, such as an attribute describing an individual, that is used by an AI system to make distinctions between individuals and/or social groups. A proxy may appear to be an innocuous piece of data to include in an algorithm. Yet where it directly or indirectly correlates with a protected characteristic such as gender, age, race or ethnicity, a proxy can lead to biased decisions being generated by the AI system. For example, when an input such as postcode is included within an algorithm, it is often correlated with, and becomes a proxy for, socioeconomic status and race. It may therefore indirectly discriminate against certain racial or ethnic groups due to historical residential segregation.
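A minimal sketch can make the mechanism concrete. In the hypothetical Python example below (the postcodes, group labels and numbers are synthetic, chosen only to demonstrate the effect), a scoring rule never sees a protected characteristic, yet because postcode correlates with membership of a marginalized group, it flags that group at a much higher rate.

    # Illustrative sketch only: synthetic numbers chosen to show the mechanism,
    # not real demographic or welfare data.

    # A facially neutral scoring rule that never sees group membership,
    # but raises risk scores for two (hypothetical) postcodes.
    HIGH_RISK_POSTCODES = {"1060", "1070"}

    def risk_score(applicant):
        return 0.8 if applicant["postcode"] in HIGH_RISK_POSTCODES else 0.2

    # Synthetic population: residential segregation means postcode
    # correlates strongly with membership of a marginalized group.
    applicants = (
        [{"group": "majority", "postcode": "2100"}] * 80
        + [{"group": "majority", "postcode": "1060"}] * 20
        + [{"group": "marginalized", "postcode": "1060"}] * 70
        + [{"group": "marginalized", "postcode": "2100"}] * 30
    )

    def selection_rate(group):
        """Share of a group flagged as high risk by the neutral rule."""
        members = [a for a in applicants if a["group"] == group]
        flagged = [a for a in members if risk_score(a) >= 0.5]
        return len(flagged) / len(members)

    for group in ("majority", "marginalized"):
        print(f"{group}: {selection_rate(group):.0%} flagged as high risk")
    # Output: majority 20% flagged, marginalized 70% flagged,
    # even though the rule never uses group membership directly.

Comparing selection rates across groups in this way is one simple check researchers can run when auditing a system for indirect discrimination, provided data disaggregated by the relevant characteristic is available.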


Case Study: Serbia

The Social Card Law entered into force in March 2022 and introduced automation into the process of determining people’s eligibility for various social assistance programmes. The backbone of the Social Card Law is the Social Card registry, a comprehensive, centralized information system which uses automation to consolidate the personal data of applicants for, and recipients of, social assistance from a range of official government databases.

The introduction of the Social Card Law and the Social Card registry cannot be isolated from the social and historical contexts into which they are introduced. Whilst laws in Serbia, including the Social Card Law, do guarantee formal equality for all individuals, the practical implementation of the Social Card Law and the Social Card registry does not provide substantive or de facto equality.

Gaps and imbalances in data processed by automated or semi-automated systems can lead to discrimination. A social worker told Amnesty International that before the Social Card registry was introduced, and especially when working with marginalized communities such as Roma, social workers knew that some data was inaccurate or out of date. For example, multiple cars registered to someone living in extreme poverty would not be considered important assets for social assistance eligibility, but rather, would be understood as vehicles sold for scrap metal or that otherwise no longer existed.

Serbia’s Ministry of Labour insisted that laws governing social security, including the Social Card Law, did not treat Roma or any other marginalized groups differently. The Ministry also claimed that it has a legitimate right to use “true and accurate data which are necessary for the enjoyment of social security rights”. The Ministry did not acknowledge that the seemingly innocuous and objective datasets being used as indicators of socio-economic status often ignored the specific context of a community’s marginalization, such as their living conditions, barriers to employment, and their particular needs.

Due to Serbia’s historical and structural context, many individuals from marginalized backgrounds have persistently low literacy and digital literacy levels. They therefore face challenges when interacting with administrative departments to keep their paperwork up to date or to appeal their removal from the social assistance system. In this way, the Social Card registry represents yet another barrier to accessing social assistance, which can amount to indirect discrimination.

Amnesty International’s research found that the Social Card registry is not designed to factor in the challenges and barriers faced by those communities most critically dependent on social assistance, including Roma, people with disabilities and women. Women, who are represented across all groups, are more likely to receive social protection and may also face additional intersectional barriers to accessing their rights.


Beyond the Rights to Equality, Privacy and Non-Discrimination

The use of automated tools within welfare states can have clear impacts on the right to privacy and the right to non-discrimination. However, moving our analysis beyond these rights can provide a deeper understanding of how these systems impact communities.

Right to Social Security and Adequate Standard of Living

The International Covenant on Economic, Social and Cultural Rights (ICESCR) requires states to respect, protect and fulfil a broader set of human rights which are focused on the need for states to provide for the welfare and well-being of their populations. These rights are also protected under numerous regional treaties and the domestic law of many states.

Key ICESCR provisions relevant to automated welfare systems are the Right to Social Security and the Right to an Adequate Standard of Living. The Right to an Adequate Standard of Living incorporates “adequate food, clothing and housing”; failure to provide social security payments puts people’s ability to access these basic needs at risk. For example, automated welfare systems can reduce access to health- or disability-related benefits, which has a direct impact on the right to an adequate standard of living and the right to health.

Right to Freedom of Peaceful Assembly and of Association

For years, civil society has warned that states are enjoying a “golden age of surveillance,” as more and more of our online and offline lives become accessible to a growing array of new tools designed to track us. Amnesty International has documented numerous types of technology whose use impacts human rights, notably the rights to freedom of peaceful assembly and of association, which are protected under Articles 21 and 22 of the ICCPR, under the CRC, the CRPD and numerous regional treaties, and under the domestic law of many states.

The use of facial recognition technology (FRT), which is fundamentally incompatible with human rights, is becoming worryingly commonplace. Amnesty International has documented abuses linked to FRT in the Occupied Palestinian Territories, Hyderabad, and New York City. In France, the authorities proposed a system of AI-powered video surveillance in the run-up to the Paris Olympics. In the Netherlands, the under-regulated use of cameras at peaceful protests and the accompanying lack of transparency have created chilling effects around the exercise of protest rights. In Hungary, legal changes allowing the use of FRT to target, among other things, Pride marches are a grave concern.

The use and abuse of these tools have particularly harmful impacts on marginalized communities. Migrants, in particular, are too often excluded from regulatory protections and treated as “testing grounds” for controversial technologies, including biometric identification technologies. The precarious status of migrants can lead to their being targeted for exercising their protected protest rights, including through the use of surveillance and social media monitoring software, as Amnesty International has highlighted in the United States.