IAEA and George Washington University Law School Launch Partnership to Educate Next Generation of Nuclear Law Students

Source: International Atomic Energy Agency (IAEA) –

Photo: The George Washington University Law School. 

The IAEA, in collaboration with George Washington University Law School, is launching a Summer School on the international legal frameworks for the safe, secure and peaceful uses of nuclear energy. 

This course will expand educational opportunities in nuclear law at a pivotal moment when more and more countries are turning to nuclear power to address energy security. Nuclear programmes require more than technology and infrastructure; they also require an advanced legal architecture and professionals to build and maintain it. 

The joint initiative builds upon the IAEA Partnership Programme on Nuclear Law launched by IAEA Director General Rafael Mariano Grossi to increase educational and professional development opportunities for students and aspiring professionals in international and national nuclear law. The Summer School will bring together world-class expertise from around the globe in a two-week virtual programme scheduled for 16 to 25 June 2026.

In announcing the initiative, Director General Rafael Mariano Grossi said that “the quality of nuclear law education today will directly affect the quality of our nuclear legal framework in the future. By strengthening legal education now, we are investing in the infrastructure that will support nuclear energy for decades to come.”

Dayna Bowen Matthew, Dean of the George Washington University Law School, noted, “GW Law is proud to contribute to this essential work, which is fundamentally tied to our institutional history. From the moment nuclear fission was announced on our campus, GW has played a pivotal role in teaching nuclear law since 1954.”

IAEA Director General Rafael Mariano Grossi at the signing of the collaboration with George Washington University Law School on 9 December 2025. (Photo: M. Magnaye/IAEA).

The Legal Foundation of Nuclear Power

Nuclear law often operates behind the scenes, yet it provides the foundation that makes nuclear power possible. It establishes the basis for safety and security measures, safeguards against misuse, and liability frameworks throughout the entire lifecycle of nuclear facilities. Without these legal frameworks, even the most advanced technology cannot be deployed safely and securely. The field bridges international governance, national legislation and highly technical standards, an intersection that makes nuclear law both essential and complex.

The Summer School: What to Expect 

Daily sessions are designed to transform how students understand nuclear energy from a legal perspective. The first week lays the groundwork by introducing the international legal architecture, key institutions and the core instruments that govern nuclear activities. The second week focuses on cutting-edge topics such as small modular reactors, fusion energy, space applications, maritime uses and the intricate legal considerations of financing and contracting nuclear projects.

Host Institution Background

GW’s foundational involvement in the nuclear field makes it a well-positioned partner for the Summer School, the first course of its kind.  In 1939, George Washington University hosted the Fifth Washington Conference on Theoretical Physics, where physicist Niels Bohr first publicly announced the discovery of nuclear fission on 26 January 1939. This pivotal event marked the beginning of the “atomic age” and was commemorated with a plaque at GW in 1945. Passage of the USA’s Atomic Energy Act in 1954 marked the transition of nuclear power from military to civilian uses, in part by breaking the government monopoly over the technology and enabling private ownership and innovation.  In response to this global shift, GW Law recognized the urgent need for a specialized legal discipline to govern this powerful new technology. Consequently, GW Law began teaching nuclear law in the 1954-55 academic year, becoming a pioneer in the field and establishing a legacy of expertise that continues today.

John Lach, Interim Provost and Executive Vice President for Academic Affairs, signs the partnership agreement. 

Eligibility

The programme targets graduate law students from IAEA Member States, with limited places reserved for students in related technical disciplines such as engineering and physics. 

Participants who complete the rigorous programme will earn a joint certificate from the IAEA and GW Law School. Applications will open in the New Year. Detailed information will be available on both institutions’ websites.


IAEA and Algeria Sign a Joint Statement to Reinforce Cooperation on Nuclear Science and Energy

Source: International Atomic Energy Agency (IAEA) –

Monika Shifotoka, IAEA Office of Public Information and Communication

IAEA Director General Rafael Mariano Grossi, Algeria’s Ambassador Larbi Latroch and Algeria’s Minister of State, Minister of Foreign Affairs, National Community Abroad and African Affairs Ahmed Attaf at the virtual signing ceremony on 8 December 2025. (Photo: H. Shaffer).

The IAEA and the People’s Democratic Republic of Algeria have agreed to strengthen their partnership in the peaceful uses of nuclear science and technology, focusing on energy security and water resource management. 

The agreement, signed virtually on 8 December by IAEA Director General Rafael Mariano Grossi and Algeria’s Minister of State, Minister of Foreign Affairs, National Community Abroad and African Affairs Ahmed Attaf, reinforces the growing partnership between the two sides and marks an important step in supporting the country’s national development goals. 

“This partnership reflects our shared commitment to harnessing nuclear innovation for sustainable development and to building a future where science serves people and progress,” said Mr Grossi. 

“The IAEA will support Algeria as it explores its nuclear energy options — including small modular reactors for electricity generation and water desalination — and expands the use of nuclear techniques to strengthen water resource management.”

Mr Attaf said:

“Today, we’re putting pen to paper on this Joint Declaration. Honestly, it feels like we’re opening a new chapter with the International Atomic Energy Agency: a bigger, bolder, more exciting one. It’s the door wide open to new areas of cooperation: small modular reactors for seawater desalination, smarter water management with nuclear tech, and game-changing applications in agriculture.”

The signing follows Director General Grossi’s visit to Algiers in October, during which he and Minister Attaf discussed ways to expand cooperation in several areas, including nuclear power, water management and food security. In a message following the visit, Mr Grossi noted that “this visit marks the beginning of a new dynamism in our partnership,” highlighting Algeria’s commitment to leveraging nuclear science for progress.

Algeria has expressed interest in developing nuclear power as part of its long-term energy strategy, including the use of small modular reactors (SMRs) for both electricity generation and water desalination. Nuclear energy provides continuous baseload power, enhancing grid stability and resilience, and can help the country meet growing energy demand while addressing water scarcity challenges.

The agreement signed today builds on the IAEA technical cooperation project “Pre-Feasibility Studies and Capacity Development for Introducing Nuclear Power”, which supports Algeria in developing the institutional, regulatory and technical infrastructure required under the IAEA Milestones Approach. 

The Director General offered to dispatch an expert mission to Algeria to support the country’s preparations for developing a nuclear power programme, particularly in assessing the feasibility of SMR applications, including their integration into national infrastructure and energy planning. 

A follow-up mission is planned for 2026 to expand collaboration on nuclear techniques for water resource management and agricultural applications, reinforcing Algeria’s efforts to improve food security and sustainable water use.

Algeria operates two research reactors: the NUR reactor, used for training and research, and the Es-Salam reactor, used for scientific research and the production of radioisotopes. 

The country is also an active partner in the IAEA’s efforts to expand access to cancer care. The University Hospital Centre of Bab El-Oued and the Pierre and Marie Curie Cancer Centre were among the first five IAEA Anchor Centres under the Rays of Hope Initiative, helping to strengthen and expand access to cancer care in Algeria and across the region. 

Mhamed Hali: “Despite the dangers, bringing smiles to the faces of forgotten victims makes it worth continuing”

Source: Amnesty International –

Mhamed Hali is a Sahrawi lawyer and human rights defender living and working in the occupied territories of Western Sahara. He is a Doctor of Law and International Humanitarian Law, and Secretary General of the Association for the Protection of Sahrawi Prisoners in Moroccan Jails. From a young age and despite the many challenges he continues to face, including being banned by the Moroccan state from practicing law as punishment for his human rights activism, he has never given up the fight for justice. On International Human Rights Defenders’ Day, he shares his story, his hopes for the future, and some advice for those thinking about joining the fight for human rights.  

I was born in Laayoune, the largest city in Western Sahara, in 1987. I spent my childhood hearing stories of the grave violations committed against the Sahrawi people, my people, after the Moroccan military invasion of the region in 1975.  

Since then, we have been fighting for our right to self-determination, as backed up by international law in the ruling of the International Court of Justice. But the Moroccan authorities do not tolerate any activity or movement that seeks to empower us or defend our rights. Over the years, they have targeted many human rights defenders, journalists and students by harassing, attacking and arresting them as punishment for their work.  

Global: Amnesty International launches an Algorithmic Accountability toolkit to enable investigators, rights defenders and activists to hold powerful actors accountable for AI-facilitated harms

Source: Amnesty International –

With the widespread use of Artificial Intelligence (AI) and automated decision-making systems (ADMs) that impact our everyday lives, it is crucial that rights defenders, activists and communities are equipped to shed light on the serious implications these systems have on our human rights, Amnesty International said ahead of the launch of its Algorithmic Accountability toolkit.  

The toolkit draws on Amnesty International’s investigations, campaigns, media work and advocacy in Denmark, Sweden, Serbia, France, India, the United Kingdom, the Occupied Palestinian Territory (OPT), the United States and the Netherlands. It provides a ‘how to’ guide for investigating, uncovering and seeking accountability for harms arising from algorithmic systems that are becoming increasingly embedded in our everyday lives, specifically in the public sector realms of welfare, policing, healthcare and education. 

Regardless of the jurisdiction in which these technologies are deployed, a common outcome from their rollout is not “efficiency” or “improving” societies—as many government officials and corporations claim—but rather bias, exclusion and human rights abuses. 

“The toolkit is designed for anyone looking to investigate or challenge the use of algorithmic and AI systems in the public sector, including civil society organizations (CSOs), journalists, impacted people or community organizations. It is designed to be adaptable and versatile to multiple settings and contexts.  

“Building our collective power to investigate and seek accountability for harmful AI systems is crucial to challenging abusive practices by states and companies and meeting this current moment of supercharged investments in AI. Given how these systems can enable mass surveillance, undermine our right to social protection, restrict our freedom to peaceful protest and perpetuate exclusion, discrimination and bias across society,” said Damini Satija, Programme Director at Amnesty Tech. 

The toolkit introduces a multi-pronged approach based on the learnings of Amnesty International’s investigations in this area over the last three years, as well as learnings from collaborations with key partners. This approach not only provides tools and practical templates to research these opaque systems and their resulting human rights violations, but it also lays out comprehensive tactics for those working to end these abusive systems by seeking change and accountability via campaigning, strategic communications, advocacy or strategic litigation.    

One of the many case studies the toolkit draws on is Amnesty International’s investigation into Denmark’s welfare system, exposing how the Danish welfare authority Udbetaling Danmark (UDK)’s AI-powered welfare system fuels mass surveillance and risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups through its use of AI tools to flag individuals for social benefits fraud investigations. The investigation would not have been possible without collaboration with impacted communities, journalists and local civil society organizations; in that spirit, the toolkit is premised on deep collaboration between different disciplinary groups. 

The toolkit situates human rights law as a critically valuable component of algorithmic accountability work, especially given that this is a gap in the ethical and responsible AI fields and their audit methods. Amnesty International’s method ultimately emphasises collaborative work, while harnessing the collective influence of a multi-method approach. Communities, and their agency to drive accountability, remain at the heart of the process. 

“This issue is even more urgent today, given rampant unchecked claims and experimentation around the supposed benefits of using AI in public service delivery. State actors are backing enormous investments in AI development and infrastructure and giving corporations a free hand to pursue their lucrative interests, regardless of the human rights impacts now and further down the line,” said Damini Satija. 

“Through this toolkit, we aim to democratize knowledge and enable civil society organizations, investigators, journalists, and impacted individuals to uncover these systems and the industries that produce them, demand accountability, and bring an end to the abuses enabled by these technologies.” 

Indonesia: Police beat protesters and unlawfully used tear gas to crush protests – new investigation

Source: Amnesty International –

Indonesian police used unlawful force against protesters, including beatings and the improper use of water cannon and tear gas grenades, during mass demonstrations that swept the country earlier this year, according to a new investigation released today by Amnesty International.

Thirty-six videos authenticated by Amnesty International’s Evidence Lab, along with interviews with five victims and witnesses, detailed the police’s use of unlawful force during rallies between 25 August and 1 September 2025. This included firing water cannon at protesters at close range, beating people with batons and using a dangerous model of tear gas grenade known to cause serious injuries, including loss of limb.

“Video evidence, alongside victims and eyewitnesses’ testimonies, reveal that Indonesian police ruthlessly and violently cracked down on a movement that began with peaceful marches against low wages, tax hikes and lawmakers’ pay. The authorities’ excessive and unlawful use of force lays bare a policing culture that treats dissent as a threat rather than a right,” said Erika Guevara-Rosas, Amnesty International’s Senior Director for Research, Advocacy, Policy and Campaigns.

According to information aggregated from various NGOs and legal aid organizations, at least 4,194 protesters were arrested between 25 August and 1 September, a figure confirmed to Amnesty International by local and national police. As of 27 September, the police had charged 959 of these individuals, while the rest were released without charge.

At least 12 of those charged are activists or human rights defenders who, according to the police, are “accused of inciting people to take part in violent protests”. The police confirmed media reports that 295 of those charged were children at the time of arrest.

NGOs and legal aid groups also documented that at least 1,036 people were victims of violence during the protests, recorded in 69 separate incidents in 19 cities. While some protesters were involved in violent acts, the majority of these cases involved police use of unnecessary and excessive force.

Despite calls from civil society organizations, President Prabowo Subianto’s government has failed to establish an independent team to investigate the violent crackdown on the protests.

Algorithmic Accountability Toolkit

Source: Amnesty International –

Utilizing the international human rights legal framework enables harms from digital systems to be situated within clear human rights language and opens up the possibility of challenging states on the basis of binding legal provisions to which they are party. When states deploy AI systems in the provision of services, they put a wide range of human rights at risk. This chapter describes some of them alongside case study examples.

This chapter sets out a non-exhaustive list of human rights research methods that can be helpful when investigating algorithmic systems. It then discusses the common human rights risks and violations that algorithmic systems cause, alongside case studies.

Human Rights Research Methods and Data Sources

Conducting human rights research on an algorithmic system requires an in-depth, multi-faceted and context-specific understanding of the complex issues that surround the impact of public sector algorithms on people’s rights, including an understanding of the politics of the system. Some of the available methods and potential primary and secondary data sources include:

  • Testimonies and focus groups with impacted communities: testimonial evidence sits at the heart of any human rights research, and interviews or focus groups with impacted individuals and communities form the backbone of any evidence of algorithmic harms. This can be combined with participatory methods, or implemented in a way that enables communities to carry out their own peer research.
  • Legal analysis: analysing relevant international human rights law instruments and standards, relevant reports and studies by the UN, domestic interpretations of international standards, and analysing the local laws that govern the public sector agencies deploying the algorithmic system (such as law enforcement, social protection). Other data sources may include court transcripts and transcripts of decisions by equality bodies and ombudspersons.   
  • Discourse analysis: analysing the sociopolitical environment in which the algorithmic system is deployed. Consider looking at media reporting on the issue, media interviews conducted by government officials, government policy documents, and official statements. Interview those working on social justice issues locally to understand the context.
  • Survey data: consider running short surveys with people subject to the technology or system. In Amnesty International’s research into the UK government’s use of technology in social protection, surveys were used to understand welfare claimants’ experiences.

Sources of Human Rights Law

While human rights may have several bases in international law, most are reflected in international and regional treaties, which are binding on states that are party to them.

At the international level, these include

Information on whether a particular state is party to a specific treaty can be found online.

What these rights mean in practice evolves over time, and reference should be made to forms of “soft law” which may aid in this interpretation. Sources of soft law include resolutions and declarations of United Nations organs, and reports from experts, including the General Comments and other works of Treaty Bodies charged with interpretation of specific treaties, and the reports of UN thematic mandate holders (“Special Procedures”).

Many states are also bound by regional human rights treaties. These include the American Convention on Human Rights (in the Inter-American System), the African Charter on Human and Peoples’ Rights (in the African Human Rights System), as well as the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights (in the European Union and Council of Europe systems, respectively). Regional courts and treaty bodies, such as the Inter-American Commission and Court of Human Rights, the African Court on Human and Peoples’ Rights, the European Court of Justice and the European Court of Human Rights, consider cases and issue judgments interpreting the implementation of these standards, and regional systems also often have their own thematic mandate holders. Sub-regional courts, such as the East African Court of Justice or the ECOWAS Court of Justice, may also issue judgments interpreting regional treaties.

Beyond human rights treaties, international or regional data protection treaties and regulations may also contain relevant safeguards. These include the Convention for the protection of individuals with regard to the processing of personal data (“Convention 108+”, which is open to signatories outside the Council of Europe), the African Convention on Cyber-Security and Personal Data Protection, and the General Data Protection Regulation (GDPR) of the EU.

In addition, human rights are – or should be – protected under domestic law, including in the decisions of domestic courts.

Right to Privacy

The Right to Privacy is guaranteed under the International Covenant on Civil and Political Rights (ICCPR), a core and binding human rights treaty ratified by 174 of the 193 UN member states, as well as under regional treaties and the domestic law of many states. To comply with human rights law and standards, restrictions on the right to privacy must meet the principle of legality, serve a legitimate aim, and be necessary and proportionate to that aim.

Strategies used to detect fraud within digital welfare states can undermine the right to privacy. Digital welfare states often require the merging of multiple government databases in order to detect possible fraud within the welfare system. This often amounts to mass-scale extraction and processing of personal data, which undermines the right to privacy. Some welfare systems utilize both the processing of personal data alongside “analogue” forms of surveillance, including asking neighbours and friends to report on people they suspect of welfare benefit fraud. This further exacerbates the violation of the right to privacy. This combined analogue and digital surveillance demonstrates the importance of taking a holistic approach to human rights research.

Case Study: Denmark

Amnesty International’s research on Denmark’s social benefits system, administered by the public authority Udbetaling Danmark (UDK, or Pay Out Denmark) and the company Arbejdsmarkedets Tillægspension (ATP), demonstrates how pervasive surveillance in the welfare system undermines the right to privacy.

The research found that the Danish government implemented legislation that allows mass-scale extraction and processing of personal data of social benefits recipients for fraud detection purposes. This includes allowing the merging of government databases and the use of fraud control algorithms on this data, and the unregulated use of social media and the reported use of geolocation data for fraud investigations. This data is collected from residents in receipt of benefits and their household members without their consent. This collecting and merging of large amounts of personal data contained in government databases effectively forces social benefits recipients to give up their right to privacy and data protection. The collection and processing of large amounts of data – including sensitive data which contains characteristics that could reveal race and ethnicity, health, disability, sexual orientation – and the use of social media, are highly invasive and disproportionate methods to detect fraud. Moreover, Amnesty International’s research showed that the use of this data only amounted to 30% of fraud investigations, which raises concerns regarding the necessity of processing this data.

Benefits applicants and recipients are also subjected to “traditional” or “analogue” forms of surveillance and monitoring for the purposes of fraud detection. Such methods include the persistent reassessment of eligibility by municipalities, fraud control cases or reports from other public authorities, including tax authorities and the police, and anonymous reports from members of the public. These analogue forms of monitoring and surveillance, when coupled with overly broad methods of digital scrutiny, create a system of pernicious surveillance which is at odds with the right to privacy.

Right to Equality and Non-Discrimination

The Right to Non-Discrimination and the Right to Equality are both guaranteed under the International Covenant on Civil and Political Rights (ICCPR), as well as under most other international and regional treaties and the domestic law of most states. The UN Human Rights Committee (HRC) defines discrimination as “any distinction, exclusion, restriction or preference, which is based on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status, and which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise by all persons, on an equal footing, of all rights and freedoms.” The ICCPR states that “all persons are equal before the law” and requires that the law “prohibit any discrimination and guarantee to all persons equal and effective protection against discrimination on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.”

Digitization and the introduction of automation and algorithmic decision-making can have a disproportionate negative impact on certain communities, resulting in a violation of the rights to equality and non-discrimination. As the UN Special Rapporteur on racism has noted, AI systems can lead to discrimination when they are used to classify, differentiate, rank and categorize because they “reproduce bias embedded in large-scale data sets capable of mimicking and reproducing implicit biases of humans, even in the absence of explicit algorithmic rules that stereotype”. The Special Rapporteur stated that “digital technologies can be combined intentionally and unintentionally to produce racially discriminatory structures that holistically or systematically undermine enjoyment of human rights for certain groups, on account of their race, ethnicity or national origin, in combination with other characteristics [and] digital technologies [are] capable of creating and sustaining racial and ethnic exclusion in systemic or structural terms”. The Special Rapporteur called on states to end “not only explicit racism and intolerance in the use and design of emerging digital technologies, but also, and just as seriously, indirect and structural forms of racial discrimination that result from the design and use of such technologies”.

Overall, the use of AI and automated decision-making systems within the distribution of social security can entrench discriminatory practices towards already marginalized groups.

In the context of AI and algorithmic decision-making, it is particularly important to note the distinction between direct and indirect discrimination.

  • Direct discrimination is when an explicit distinction is made between groups of people that results in individuals from some groups being less able than others to exercise their rights. For example, a law that requires women, and not men, to provide proof of a certain level of education as a prerequisite for voting would constitute direct discrimination.
  • Indirect discrimination is when a law, policy, or treatment is presented in neutral terms (i.e. no explicit distinctions made) but disproportionately disadvantages a specific group or groups. For example, a law that requires everyone to provide proof of a certain level of education as a prerequisite for voting has an indirectly discriminatory effect on any group that is less likely to have proof of education to that level (such as disadvantaged ethnic or other social groups, women, or others, as applicable).

Some algorithmic systems have included protected characteristics as inputs, causing the system to discriminate directly between groups of people. Others have been found to discriminate indirectly, often through the inclusion of proxy inputs.

A proxy is an input or variable, such as an attribute describing an individual, that is used by an AI system to make distinctions between individuals and/or social groups. A proxy may appear to be an innocuous piece of data to include in an algorithm. Yet where it directly or indirectly correlates with a protected characteristic such as gender, age, race or ethnicity, a proxy leads to biased decisions being generated by the AI system. For example, when an input such as postcode is included in an algorithm, it is often correlated with, and becomes a proxy for, socioeconomic status and race. It may therefore indirectly discriminate against certain racial or ethnic groups due to historical residential segregation.
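The proxy mechanism described above can be illustrated with a small sketch. Everything in it is a synthetic assumption made for illustration: the population, the group labels, the postcode rule and the proportions are invented, and do not come from any Amnesty International investigation. The point is only that a rule which never looks at group membership can still flag one group far more often when a "neutral" input correlates with that group.

```python
# Illustrative sketch of proxy discrimination, using synthetic data.
import random

random.seed(0)

# Synthetic population: group membership correlates with postcode,
# standing in for (simulated) historical residential segregation.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Group B residents are assumed far more likely to live in postcode "X".
    postcode = "X" if random.random() < (0.8 if group == "B" else 0.2) else "Y"
    people.append({"group": group, "postcode": postcode})

# A "neutral" scoring rule: it never sees group membership,
# but flags residents of postcode "X" for extra fraud checks.
def flagged(person):
    return person["postcode"] == "X"

def flag_rate(group):
    members = [p for p in people if p["group"] == group]
    return sum(flagged(p) for p in members) / len(members)

print(f"flag rate, group A: {flag_rate('A'):.2f}")  # roughly 0.20
print(f"flag rate, group B: {flag_rate('B'):.2f}")  # roughly 0.80
```

Under these assumptions the rule flags group B about four times as often as group A, despite containing no explicit reference to group membership, which is the pattern investigators look for when testing whether an input acts as a proxy.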


Case Study: Serbia

The Social Card law entered into force in March 2022 and introduced automation into the process of determining people’s eligibility for various social assistance programmes. A backbone of the Social Card law is the Social Card registry, a comprehensive, centralized information system which uses automation to consolidate the personal data of applicants and recipients of social assistance from a range of official government databases.

The introduction of the Social Card Law and the Social Card registry cannot be isolated from the social and historical contexts into which they are introduced. Whilst laws in Serbia, including the Social Card Law, do guarantee formal equality for all individuals, the practical implementation of the Social Card Law and the Social Card registry does not provide substantive or de facto equality.

Gaps and imbalances in data processed by automated or semi-automated systems can lead to discrimination. A social worker told Amnesty International that before the Social Card registry was introduced, and especially when working with marginalized communities such as Roma, social workers knew that some data was inaccurate or out of date. For example, multiple cars registered to someone living in extreme poverty would not be considered important assets for social assistance eligibility, but rather, would be understood as vehicles sold for scrap metal or that otherwise no longer existed.

Serbia’s Ministry of Labour insisted that laws governing social security, including the Social Card Law, did not treat Roma or any other marginalized groups differently. The Ministry also claimed that it has the legitimate right to use “true and accurate data which are necessary for the enjoyment of social security rights”. It did not recognize that the seemingly innocuous and objective datasets being used as indicators of socio-economic status often ignored the specific context of a community’s marginalization, such as their living conditions, barriers to employment and particular needs.

Due to Serbia’s historical and structural context, many individuals from marginalized backgrounds have persistently low literacy and digital literacy levels. They therefore face challenges when interacting with administrative departments to keep their paperwork up to date or to appeal their removal from the social assistance system. In this way, the Social Card registry represents yet another barrier to accessing social assistance, which can amount to indirect discrimination.

Amnesty International’s research found that the Social Card registry is not designed to factor in the challenges and barriers faced by those communities most critically dependent on social assistance, including Roma, people with disabilities and women. Women, who are represented across all groups, are more likely to receive social protection and may also face additional intersectional barriers to accessing their rights.


Beyond the rights to equality, privacy and non-discrimination

The use of automated tools within welfare states can have clear impacts on the right to privacy and the right to non-discrimination. However, moving our analysis beyond these rights can provide a deeper understanding of how these systems impact communities.

Right to Social Security and Adequate Standard of Living

The International Covenant on Economic, Social and Cultural Rights (ICESCR) requires states to respect, protect and fulfil a broader set of human rights which are focused on the need for states to provide for the welfare and well-being of their populations. These rights are also protected under numerous regional treaties and the domestic law of many states.

Key ICESCR provisions relevant to automated welfare systems are the Right to Social Security and the Right to an Adequate Standard of Living. The Right to an Adequate Standard of Living incorporates “adequate food, clothing and housing”; failure to provide social security payments puts the ability to access these basic needs at risk. For example, automated welfare systems can also reduce access to health- or disability-related benefits, which can have a direct impact on the right to an adequate standard of living and the right to health.

Right to freedom of peaceful assembly and of association

For years, civil society has warned that states are enjoying a “golden age of surveillance,” as more and more of our online and offline lives become accessible to a growing array of new tools designed to track us. Amnesty International has documented numerous types of technology whose use impacts human rights, notably the rights to freedom of peaceful assembly and of association, which are protected under Articles 21 and 22 of the ICCPR, as well as under the CRC, the CRPD, numerous regional treaties and the domestic law of many states.

The use of facial recognition technology (FRT), which is fundamentally incompatible with human rights, is becoming worryingly commonplace. Amnesty International has documented abuses linked to FRT in the Occupied Palestinian Territories, Hyderabad, and New York City. In France, authorities proposed a system of AI-powered video surveillance in the run-up to the Paris Olympics. In the Netherlands, the under-regulated use of cameras at peaceful protests and the accompanying lack of transparency have created chilling effects around the exercise of protest rights. In Hungary, legal changes allowing the use of FRT to target, among other things, Pride marches, are a grave concern.

The use and abuse of these tools have particularly harmful impacts on marginalized communities. Migrants, in particular, are too often excluded from regulatory protections and treated as “testing grounds” for controversial technologies, including biometric identification technologies. The precarious status of migrants can lead to their being targeted for exercising their protected protest rights, including through the use of surveillance and social media monitoring software, as Amnesty International highlighted in the United States.

Global: New Amnesty toolkit arms activists to hold states and tech giants accountable for harmful AI

Source: Amnesty International –

The toolkit is a practical guide for uncovering AI harms in welfare, policing, healthcare, and education

‘Building our collective power to investigate and seek accountability for harmful AI systems is crucial to challenging abusive practices by states and companies’ – Damini Satija, Amnesty Tech 

Amnesty International is launching its Algorithmic Accountability toolkit, aiming to equip rights defenders, activists and communities to shed light on the serious implications that Artificial Intelligence (AI) and automated decision-making systems (ADMs) have on our human rights.

The toolkit draws on Amnesty’s investigations, campaigns, media and advocacy in the United Kingdom, Denmark, Sweden, Serbia, France, India, the Occupied Palestinian Territory (OPT), the United States and the Netherlands. It provides a ‘how to’ guide for investigating, uncovering and seeking accountability for harms arising from algorithmic systems that are becoming increasingly embedded in our everyday lives, specifically in the public-sector realms of welfare, policing, healthcare, and education.

Regardless of the jurisdiction in which these technologies are deployed, a common outcome of their rollout is not “efficiency” or “improving” societies, as many government officials and corporations claim, but rather bias, exclusion and human rights abuses.

Damini Satija, Programme Director at Amnesty Tech, said:

“The toolkit is designed for anyone looking to investigate or challenge the use of algorithmic and AI systems in the public sector, including civil society organisations (CSOs), journalists, impacted people or community organisations. It is designed to be adaptable and versatile to multiple settings and contexts. 

“Building our collective power to investigate and seek accountability for harmful AI systems is crucial to challenging abusive practices by states and companies and meeting this current moment of supercharged investments in AI, given how these systems can enable mass surveillance, undermine our right to social protection, restrict our freedom to peaceful protest and perpetuate exclusion, discrimination and bias across society.”

The toolkit introduces a multi-pronged approach based on the learnings of Amnesty’s investigations in this area over the last three years, as well as learnings from collaborations with key partners. This approach not only provides tools and practical templates to research these opaque systems and their resulting human rights violations, but it also lays out comprehensive tactics for those working to end these abusive systems by seeking change and accountability via campaigning, strategic communications, advocacy or strategic litigation.   

One of the many case studies the toolkit draws on is Amnesty’s investigation into Denmark’s welfare system, which exposed how the Danish welfare authority Udbetaling Danmark (UDK) uses AI tools to flag individuals for social benefits fraud investigations, fuelling mass surveillance and risking discrimination against people with disabilities, low-income individuals, migrants, refugees, and marginalised racial groups.

The investigation would not have been possible without collaboration with impacted communities, journalists and local civil society organisations, and in that spirit, the toolkit is premised on deep collaboration between different disciplinary groups. The toolkit situates human rights law as a critically valuable component of algorithmic accountability work, especially as this is a gap in the ethical and responsible AI fields and their audit methods. Amnesty’s method ultimately emphasises collaborative work, while harnessing the collective influence of a multi-method approach. Communities and their agency to drive accountability remain at the heart of the process.

“This issue is even more urgent today, given rampant unchecked claims and experimentation around the supposed benefits of using AI in public service delivery. State actors are backing enormous investments in AI development and infrastructure and giving corporations a free hand to pursue their lucrative interests, regardless of the human rights impacts now and further down the line,” said Damini Satija.

“Through this toolkit, we aim to democratise knowledge and enable civil society organisations, investigators, journalists, and impacted individuals to uncover these systems and the industries that produce them, demand accountability, and bring an end to the abuses enabled by these technologies.”

Highlighting how these systems are already harming people in the UK, Alba Kapoor, racial justice lead at Amnesty International UK, said:

“Increasingly, we’re seeing the UK state rely on AI as a silver bullet to improve ‘efficiency’ and cut costs. Yet time and time again, these technologies prove to be flawed and harmful, violating our rights to privacy and to equality and non-discrimination.

“This is happening across the board – from the rise of so-called ‘predictive policing’ used by UK police forces with little regard for people’s rights, to the DWP’s automated welfare systems that exclude people from accessing the support they need. Add to this the police’s use of facial recognition technology, which has been shown to misidentify Black people at dramatically higher rates than white people. Scrutiny of these technologies is more vital than ever.”

Tanzania: Authorities must protect right to protest ahead of nationwide demonstrations

Source: Amnesty International –

© Reuters

Protests are expected following post-election violence last month

The Tanzanian authorities must respect and protect the rights to freedom of peaceful assembly and expression during planned nationwide protests set for tomorrow (9 December), and guarantee that the protests are facilitated and protected, Amnesty International said today.

Tigere Chagutah, Amnesty International’s Director for East and Southern Africa, said:

“The police must refrain from violating protesters’ rights, including through unnecessary and excessive use of force.

“The authorities must also refrain from blanket internet shutdowns, as witnessed during the electoral period, which violate the right to access information and obstruct crucial monitoring and reporting of human rights violations.

“The Tanzanian authorities must ensure an independent, thorough, and impartial investigation into allegations of human rights violations committed by state security officers during the post-elections protests, with those suspected of responsibility brought to account in fair proceedings.”

Election protests

Amnesty has documented how state security officers used unlawful force against protesters after the 29 October elections. Between 29 October and 3 November, the Tanzanian authorities imposed a nationwide internet shutdown during which security forces committed various human rights violations, including unlawful killings and enforced disappearances. The shutdown made it difficult to monitor and document those violations.


Cambodia/Thailand: Both sides must prevent further risk to civilians from renewed hostilities

Source: Amnesty International –

Responding to reports of renewed armed clashes along the border of Cambodia and Thailand on Monday, Amnesty International’s Regional Research Director Montse Ferrer said:

“The resumption of hostilities around the Thailand/Cambodia border risks civilian lives, mass displacement and the destruction of essential civilian infrastructure.

“The Cambodian and Thai governments must take all the necessary steps to protect civilians in line with international humanitarian law and prevent any further risks to civilians.

“Amid concerning reports of civilian casualties on Monday, we urge the international community to pressure both governments to adhere to their obligations to minimize the impact of the conflict on civilians and civilian objects.”

EU: Ramping up surveillance, raids, detentions and deportations will cause ‘deep harm’

Source: Amnesty International –

EU Return Regulation policy discussed by home affairs ministers from across the EU today 

Ministers proposed making detention of up to two-and-a-half years the default for people issued deportation decisions  

The authorities would be allowed to raid a person’s home or ‘other relevant premises’, paving the way for vast surveillance, discriminatory policing and racial profiling 

‘Today, the Council has… opted to introduce new punitive measures, dismantling safeguards and weakening rights further, rather than advancing policies that promote dignity, safety and health for all’ – Olivia Sundberg Diez 

Responding to EU home affairs ministers’ position on the EU Return Regulation agreed in Brussels today, Olivia Sundberg Diez, EU Advocate on Migration and Asylum at Amnesty International, said: 

“EU ministers’ position on the Return Regulation reveals the EU’s dogged and misguided insistence on ramping up deportations, raids, surveillance, and detention at any cost. These punitive measures amount to an unprecedented stripping of rights based on migration status and will leave more people in precarious situations and legal limbo. 

“In addition, EU member states continue to push for cruel and unworkable ‘return hubs’, or offshore deportation centres outside of the EU – forcibly transferring people to countries where they have no connection and may be detained for long periods, violating protections in international law. This approach mirrors the harrowing, dehumanising and unlawful mass arrests, detention and deportations in the US, which are tearing families apart and devastating communities. 

“Today, the Council has taken an already deeply flawed and restrictive Commission proposal and opted to introduce new punitive measures, dismantling safeguards and weakening rights further, rather than advancing policies that promote dignity, safety and health for all. They will inflict deep harm on migrants and the communities that welcome them. 

“Amnesty International urges the European Parliament, which is yet to adopt its final position on the proposal, to reverse this approach and place human rights firmly at the centre of upcoming negotiations.” 

Dangerous and undignified 

At today’s Justice and Home Affairs Council meeting, ministers from EU member states agreed on a negotiating position for the Council on new rules on returns or deportations at EU level, which the European Commission proposed in March 2025.  

Amnesty said at the time that this proposal marked a “new low” for Europe’s treatment of migrants, and joined over 250 organisations in calling for its rejection in September. 

Ministers now propose making detention of up to two-and-a-half years the default for people issued deportation decisions. The proposal would also expand obligations, surveillance and sanctions on people subject to deportation, including unreasonable requirements with which many will be unable to comply, for example if they lack identity documents or a fixed residence. New measures would allow the authorities to raid the person’s home or “other relevant premises” and to seize their belongings, paving the way for vast surveillance, discriminatory policing and racial profiling practices. 

It would allow for the indefinite detention of people posing a so-called threat to “public policy” or “public security”, circumventing criminal justice, as well as limiting challenges to deportation orders and independent monitoring of respect for human rights in deportation procedures. Meanwhile, countries have insisted on leaving the door open for further sanctions, obligations, and grounds for detention in national law. 

The European Parliament is also negotiating its position on the proposal, paving the way for interinstitutional negotiations in the coming months. 

Home affairs ministers also reached agreement on two proposals under negotiation relating to the ‘safe third country’ concept in EU asylum law and to an EU list of ‘safe countries of origin’. Amnesty has warned that these three proposals would seriously undermine territorial asylum in Europe as well as human dignity.