From “AI will save us” to AI apocalypse, recent narratives around AI safety and regulation raise questions about who can and should participate in the discussion. Interest in generative AI techniques and large language models (LLMs) has shifted the discussion from real-world harm to so-called “existential risks,” mirroring the tech industry’s narrative about the “existential threat” that AI poses to humanity. In May 2024, a paper in Science magazine highlighted the extreme risks of AI and the shortcomings of new governance initiatives. An open letter in 2023 called on all AI labs to “immediately pause the training of AI systems more powerful than GPT-4 for at least six months.” Ironically, many of the letter’s signatories are the architects of these AI technologies and continue to develop and invest in them.
However, this discussion of AI risks does not take into account the perspectives and experiences of those most affected by AI systems, particularly marginalized groups: people of color, women, non-binary and LGBTQIA+ people, immigrants and refugees, people with disabilities, children, older people, and those with lower socio-economic status. When talking about AI risks, we also need to consider the broader harms that AI poses to society, the environment, and human rights, including the concentration of power in the hands of a few corporations.
In 1990, Chellis Glendinning published “Notes Toward a Neo-Luddite Manifesto,” in which she praised the Luddite movement for its condemnation of laissez-faire capitalism for enabling “a growing fusion of power, resources, and wealth justified by an emphasis on ‘progress.’” We argue that a technology-agnostic Neo-Luddite approach is paramount to countering the power accumulated by AI’s designers.
Reclaiming stories about neglected perspectives
As human rights activists, we have viewed AI systems as potentially harmful emerging technologies since the mid-2010s, for example in response to the spread of algorithm-driven risk assessment tools in criminal justice and the expansion of facial recognition. Many of us are committed to researching, exposing, and preventing AI harms, including by working toward outright bans on tools that disproportionately impact marginalized groups, such as AI-driven surveillance technologies like facial recognition. We have also repeatedly pushed back against AI hype and “techno-solutionism,” the belief that technology can solve every social, political, and economic problem.
Fast forward to the present day, and we aim to reclaim the narrative around AI safety and regulation, imagining a future where social justice and human rights take precedence over technological goals. We believe we need to shape that future together with civil society and affected communities, and to fully reclaim the space that hyped-up AI discourse has captured from the dominant AI narrative.
For example, is climate justice even possible given the enormous energy consumed by LLMs? Can public-good tech companies survive the domination of a handful of giants in the US, Western Europe, and China that monopolize computing power, AI chips, and infrastructure?
There is nothing fundamentally inevitable about the labor relations that underpin AI today, which is why scholars, activists, and practitioners like Joana Varon, Sasha Costanza-Chock, and Timnit Gebru have called for a shift toward a common, federated AI ecosystem:
“It is characterised by consensus-based community and public stewardship of data; decentralised, local and federated development of small-scale, task-specific AI models; worker cooperatives for properly remunerated and dignified data labelling and content moderation work; and ongoing attention to minimising the environmental footprint and socio-economic and environmental damage of AI systems.”
Reaching alternative human futures requires a political imagination, and centering human rights is part of protecting our ability to define such futures on our terms.
AI-related human rights violations
Until recently, AI governance efforts have focused on the harms that civil society and experts have documented over the past few years, with calls for greater transparency and accountability and for bans or moratoriums on algorithmic biometric surveillance. Examples of such real-world harms include the use of AI tools in social welfare systems that threaten people’s rights to essential public services, and facial recognition technology in public places that restricts civic space and violates human rights. Today, Israel is reportedly using facial recognition and AI-enabled systems for target identification in Gaza, accelerating the pace of killings.
Recent legislative initiatives (e.g., the EU AI Act, the Council of Europe AI Treaty, the US Executive Order on AI, and the G7 Hiroshima AI Process) are either ineffective or carve out overly broad exemptions for the uses that will cause the most egregious harm. These exceptions are driven in part by national security concerns and by a sense of inevitability about the widespread use of AI (what some scholars call periodic bouts of “automation fever”). Rather than stepping in to interrupt the trajectory that AI is carving out for us, these initiatives merely look for ways to make our coexistence with AI more tolerable.
These harms have accumulated over time, setting us on a path of steady but opaque violations of human rights. Our everyday acceptance of the specific deployments and uses of AI described above has created a permissive operating environment and has normalized the core logic on which AI systems are designed. There are several reasons to resist this trend. First, surveillance is an essential feature of many AI models, meaning that some degree of privacy violation will always occur. Second, the output of AI systems, including generative AI, reflects the highly biased, colonialist, patriarchal, misogynistic, and disinformation-filled hierarchies of knowledge on which those systems are trained in the first place. Finally, while the output of an AI system is often presented as a prediction, it is frequently a problematic “estimate” at best, built on unreliable and inappropriate logic and datasets (as the computer science proverb goes, “garbage in, garbage out”).
Towards a human rights-based approach
A human rights-based approach to AI governance would provide the most widely accepted, applicable, and comprehensive set of tools to address these global harms. Algorithm-driven systems do not necessarily warrant new rights or entirely new approaches to governance, especially when the only change is that data-driven technologies are being deployed with unprecedented speed and scale, amplifying existing human rights risks.
There is no need to reinvent the wheel when regulating the design, development, and use of AI. Policymakers should apply existing international human rights standards in the context of AI and respect democratic processes. In March 2024, the UN General Assembly adopted non-binding resolution A/78/L.49, calling on states to “prevent harm to individuals caused by artificial intelligence systems and to refrain from or cease using artificial intelligence applications that are unable to operate in compliance with international human rights law.”
Substantive human rights as enshrined in the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights are particularly important in assessing the positive and negative human rights impacts of AI systems. These include rights to privacy, dignity, non-discrimination, freedom of expression and information, freedom of assembly and association (including the right to protest), economic and social rights, and many more. Nine core international human rights instruments focus on the specific needs and rights of marginalized groups, including women, people of color, migrants and refugees, children, and persons with disabilities. These should be used as a blueprint for putting these groups at the center of AI governance.
Moreover, we view procedural rights as a foundation for effective AI governance. These are non-negotiable first principles. For example, any restriction of human rights in the development and use of AI must have a legal basis, pursue a legitimate purpose, and be necessary and proportionate. In accordance with the UN Guiding Principles on Business and Human Rights, AI developers and adopters must further conduct human rights due diligence (including human rights impact assessments). This responsibility applies throughout the entire lifecycle of an AI system: from the design phase to post-deployment monitoring and potential discontinuation.
Mandatory transparency is no longer a matter of debate; it is fundamental to enable effective access to redress and accountability. What is still missing is an outline of the contours of such transparency, which should be crafted with input from civil society and affected communities to ensure it is meaningful in practice. An appropriate AI governance framework should further include provisions for accountability and redress, enforced through oversight bodies, judicial mechanisms and dispute resolution processes. Finally, engagement of external stakeholders, especially civil society and marginalized groups, should be mandatory throughout the AI lifecycle. Meaningful engagement requires capacity building, access to information and adequate resources.
***
“AI safety” can only be a truly worthy goal if it prioritizes the safety of all groups. Currently, this concept distracts from the fact that AI will not affect everyone equally, and will have the greatest impact on already marginalized groups.
We must collectively understand and examine how the narrative of AI safety, under the guise of efficiency, convenience, and security, promotes, conceals, and reinforces violence. We must also draw the line at areas where the development and deployment of AI is entirely unjust and unwelcome, drawing on Glendinning’s manifesto.
Our work must be centered on imagining how things might change. What if the enthusiasm and resources spent on AI were redirected towards health and social programs? How would communities’ lives be improved if the funds used for automated policing were invested in justice and reparations? What if the water consumed by data centers were returned to Indigenous communities? What if we had the right to opt out of rapid datafication, and meaningful choices about which aspects of our daily lives we want to engage with digitally?
Civil society activism often forces us to respond to immediate problems, leaving little space to imagine and build alternative futures that are not dominated by technology. We urgently need more space to dream and create these visions.
Image: A side-by-side comparison of stylized representations of humans and artificial intelligence. (Photo by Ecole Polytechnique via Flickr, CC BY-SA 2.0)