Susie Alegre is an international human rights lawyer and author from the Isle of Man who in recent years has focused on technology and its impact on human rights. As a legal expert, she has advised Amnesty International, the United Nations and other organisations on issues such as counter-terrorism and anti-corruption. Her first book, Freedom to Think, published in 2022 and shortlisted for the Christopher Bland Prize, covers the history of the legal right to freedom of thought. Her new book, Human Rights, Robot Wrongs, looks at how AI threatens our rights in areas such as war, sex and creativity, and what we can do to fight back.
What inspired you to write this book?
It started with two things. The first was the sudden explosion of ChatGPT and the claims that now everyone can be a novelist, that AI will do everything for them and there will be no need for human creators. I found that totally depressing. The second was the story of a Belgian man who took his own life after six weeks of an intense relationship with an AI chatbot. His widow felt that he would still be here for her and their children if it hadn't been for this relationship that distorted his worldview. That started me thinking that this is really about the right to life, to family life, to freedom of thought, to freedom from manipulation. And how do we think about AI and its very serious impact on our human rights?
You don’t really believe in the threat of an AI apocalypse.
I think that’s a distraction. What we should be focusing on is putting limits on how AI is developed, sold and used by people. Ultimately, behind the technology are people who make choices: at the design stage, in the marketing, and in how it’s used.
Everything we hear about AI suggests that it is advancing at an incredible speed, and that the models operate at a level of complexity that even their creators cannot comprehend. How can regulators keep up?
I think there’s a whole lot of smoke and mirrors. It’s like in The Wizard of Oz, when Toto pulls back the curtain and you can see what’s really going on behind it. You don’t have to believe that everything is inevitable and perfect. You can make choices, you can ask questions, and if something is too complicated to explain, there are areas where you simply shouldn’t use it.
Do you think existing legal systems and human rights charters are capable of handling AI, or do you think new frameworks need to be created?
I don’t think we need a new framework; what we really need is access to justice. There may be some legal instruments that need to be developed, but one of the fundamental challenges is enforcement. This is what we’ve seen with the big tech companies: their activities have been found illegal, they’ve been hit with huge fines, and they’re still operating.
There is a very interesting chapter on sex robots and chatbots. What are the main concerns?
This is an area I hadn’t really thought about before, and it scared me to realise how widespread the use of AI bots to replace human companionship is. The reason I’m concerned is that this is a consumer technology replacing human relationships, being introduced into people’s lives, and it’s very dangerous in terms of social control. This isn’t a question of morality, but of what it means for human society and our ability to cooperate and connect.
Isn’t AI good news for people who can’t afford to hire a lawyer?
It depends. If it’s a very basic dispute where you just need to know the rules, technology can improve access. But when you look at more complex issues, the problem is that generative AI doesn’t actually know what the law is and may serve you up nonsense. And if a machine tells you something in an authoritative tone, it’s very hard for people to doubt it.
What happened when you asked ChatGPT “Who is Susie Alegre?”
It said that Susie Alegre doesn’t exist, or at least doesn’t appear on the internet. I was a bit annoyed, considering that my first book had been published a year earlier. I then asked who the author of Freedom to Think was, and the first person to come up was a male biologist. After asking multiple times, I got 20 names, all but one of them men. It seemed completely unthinkable to ChatGPT that a woman could write a book about thinking.
What do you think about the “fair use” defense that chatbot companies are using to justify collecting words and images to feed into their AI?
I’m not an American copyright lawyer, so I have no expertise on that, but I think it will be very interesting to see how the litigation plays out in different jurisdictions. The United States has a very different approach from most of the world when it comes to freedom of expression and how it is invoked to help the tech industry develop. Regardless of the legality of the “fair use” defence, it raises big questions about the future of human creativity, journalism and the information space. And at the root of it all is the fundamental issue of creators not being properly compensated. The overall trend is towards stripping economic incentives away from creators.
If everything proposed in the book in terms of regulation were to come to fruition, wouldn’t there be a risk that it would stifle innovation?
The idea that regulation stifles innovation is a bit of a stretch. Regulation encourages innovation to develop in certain directions and blocks those that are extremely harmful. In fact, I think there is a risk of the opposite: that we will lose our capacity to innovate as AI becomes so dominant that it impairs our ability to think for ourselves and to reclaim our attention.
You point out that AI is not a harmless cloud floating above our heads. It is built on material extraction and the exploitation of workers, mostly in the global south, and it creates an incredible amount of pollution when it operates. But much of this is invisible. How do we address these impacts?
This is a big question, and one way to address it is to look at AI adoption from an ESG [environmental, social and governance] perspective, and to change our own perspective. The devices we use, the phones we talk on, are all made from minerals extracted from conflict zones, sometimes with child labour. Recognising that could change societal demands and consumer habits. We can use generative AI to make funny memes, but how much water and energy does that consume? Wouldn’t picking up a pencil actually give you more satisfaction?
Do you sometimes wish you could put AI back on the shelf?
It’s not a question of banning AI or of incorporating it into every aspect of our lives. It’s a question of choosing what we want to use AI for. Being critical and asking questions doesn’t mean we’re against AI; we’re just against the hype around it.
Human Rights, Robot Wrongs by Susie Alegre is published by Atlantic Books (£12.99). To support the Guardian and Observer, order your copy at guardianbookshop.com. Delivery charges may apply.