On a dank Friday night in Oxford, an ambassador from the Vatican addresses a gathering of philosophers. The location is St Luke’s Chapel, its walls crisp and whitewashed, its windows stained glass. The father ponders Aristotle’s insights on the good life.
But for the invitation for audience members to tweet about the event, this might be a scene from the 15th century. Yet the subject matter under discussion could not be more contemporary: robots and what to do about them.
Father Paolo Benanti, the speaker, is the Pope’s adviser on artificial intelligence. His urgent message to the assembled boffins, fellows of the newly created Institute for Ethics in AI at Oxford University, is that we are in danger of becoming an “algocracy”, a society ruled by algorithms. “We must recognise how order is made, how power is shifted by these new tools,” Benanti pleads. “What rights do we want to sacrifice in the name of efficiency?”
These questions posed by Benanti are anything but academic. Much of our life, from what books we buy to what news we consume, is already heavily influenced by algorithms. But the past two years have seen a quantum leap in the quality and sophistication of artificial intelligence. We have entered what many experts are now describing as a golden age of AI.
In recent months OpenAI, one of the leading companies in the field, has twice lit up social media by giving us gorgeous, terrifying glimpses of the future. Over the summer it released DALL-E 2, which takes simple instructions — “show me Henry VIII eating a hot dog” — and generates startlingly good images. Then in November it released ChatGPT, a chatbot whose ability to give instant, human-seeming responses to a dizzying array of questions left many wondering whether it had just killed off the undergraduate essay. Universities are still scrambling to respond.
In 2020 DeepMind, perhaps the most influential AI company of all, used its AlphaFold system to upend the field of molecular biology. AlphaFold effectively solved the so-called “protein folding” problem that had stumped scientists for 50 years, predicting the 3D structure of a protein from its one-dimensional amino acid sequence alone. This could transform medical research and the development of drugs and vaccines.
The future is here. Possibilities that once seemed a decade or more away have arrived. The new buzzword is “generative” AI, which can create instant content from art to music to (dare I say it) newspaper articles with a set of simple instructions. These bots are still in their adolescence and have plenty of limitations — they are very bad at being funny, for instance — but for the first time the general public is gaining a sense of how good they might become. If you’re not paying attention, you should be, because it will soon be paying attention to you.
The dawning of AI’s golden age poses all manner of tricky questions. If we allow machine intelligence to do our jobs and clean our houses, pick our music and television, generate our art and essays, judge our legal cases and diagnose our illnesses, what will be left for us to do? What’s so special about being human in the age of advanced artificial intelligence? What’s so special about being human at all?
Which brings us back to Father Benanti and the Oxford boffins. Because these are weighty questions. To build AI we need engineers, but to interpret it, to decide how it should be applied and what kind of world we want to live in, we need philosophers, historians and lawyers. If we have learnt anything from the internet age so far, we surely know by now that it is a terrible idea to let a few hundred unregulated Silicon Valley programming dorks decide what the human future should look like. And yet they are ploughing ahead at warp speed, coding the 21st century and beyond as we speak. It is time the rest of us had a say.
Fortunately, somewhat belatedly, a philosophical fightback is under way, much of it happening at Oxford. In 2019 the university was given a thumping £150 million by the billionaire Blackstone financier Stephen Schwarzman to spend on the humanities and — in particular — a new Institute for Ethics in AI. He topped it up with another £25 million last year, so none of this institute’s fellows have to worry about compromising themselves by taking “soft” funding from big tech.
Schwarzman gave the donation — the largest gift to the university since the Renaissance — after he was approached by Oxford’s outgoing vice-chancellor, Louise Richardson. He’d already given a mammoth $350 million to the Massachusetts Institute of Technology to fund the study of computing there, but had become concerned about how rapidly AI was developing and wanted to donate to the humanities to balance out his previous gift. He chose Oxford because of its venerable 800-year-old philosophical tradition.
“We’re going to get very quickly to questioning what the value of humans is,” Schwarzman tells me. “The potential on the negative side, for machines replacing humans faster than humans can adapt, is pretty worrying.”
Fifteen years ago, Oxford seemed like a rather backwards-looking place, good at producing glib prime ministers but a bit lost in its oak-panelled history. Under the leadership of Richardson, however, it has reinvented itself as the epicentre of intellectual futurism, home to a cluster of institutes concerned with how technology is transforming the planet.
The director of the new Institute for Ethics in AI is John Tasioulas, a Greek-Australian philosopher who has become a robust, thoughtful presence around the university. He’s particularly concerned with the decline of the humanities, which has seen the number of students studying history, literature or philosophy plummet in recent years. What do these archaic traditions have to tell us about the digital future?
“Humanities are in a precarious condition, philosophy is in a precarious condition,” says Tasioulas, 58. “But they are going to be critical for the 21st century.”
Here’s an example of why. We might soon be able to build an AI that could assess a legal case more efficiently than a human judge. It would have instant recall of case history. It would be able rationally to weigh up a mountain of evidence. It would not tire or become irritated with counsel or make grumpy decisions when it is hungry before lunch, as judges have been known to do.
This AI judge could give us “fairer” results and would be much quicker at clearing the vast case backlog currently clogging Britain’s legal system. But would we be comfortable submitting ourselves to algorithmic judgment?
That’s really a question for Tasioulas, a legal thinker, and not for Luminance or Lex Machina or whichever tech company ends up building the best robot judge. “It’s about the process, not just the outcome,” Tasioulas says. “The fact that another entity is going to take the decision matters.”
Using a lawbot throws up an array of ethical issues: a human judge may have the intuition to spot a lying witness, whereas a machine may not. Judges are also obliged to explain and account for their decisions, something a machine could struggle to do. In this country we have juries for criminal cases, because we value the civic act of participating in the justice system. Robots cannot replicate this. Yet given their likely capabilities, the case for at least partially integrating them into the justice system remains quite strong. One suggestion is that litigants might have the option to submit to an AI mediator before entering a human courtroom.
In the past few years there has been a conspicuous attempt by the large AI companies to get ahead of these questions. The likes of Microsoft, Google (which owns DeepMind) and OpenAI have all appointed in-house ethicists. If you ask the latest GPT chatbot to say something racist or homophobic, it will generally refuse. This was not the case with some earlier chatbots.
But there’s an enduring sense that ethics are mostly an afterthought for these fast-moving and profit-driven organisations, a compliance hoop they must jump through. Tasioulas and the crew of philosophers he has assembled are arguing that ethics should be foundational. So rather than just assessing how to ensure the chatbot isn’t racist, we might wonder whether it’s actually a good idea to try to make sentient chatbots in the first place.
“It’s not just about can we build it, but also should we build it,” says Seth Lazar, an Australian philosopher who is a fellow at the Institute for Ethics in AI.
OpenAI’s goal is to develop artificial general intelligence — a robot that can match the human brain and outperform humans at most jobs. “It’s bonkers,” says Lazar. “They want to develop systems [to whom] they can hand over the governance of human life. They look around the world and see inefficiencies everywhere and think, ‘A computer system wouldn’t have those inefficiencies.’ They’re trying to build a benevolent dictator. But we do actually care about governing ourselves.”
Much recent coverage of AI has focused on humanity sowing the seeds of its own destruction. The likes of Stephen Hawking and Elon Musk have fretted over what will happen if we create superintelligence, an AI that becomes infinitely smarter than us and might choose to destroy humanity on a whim. This has become an obsession of the effective altruism movement, which is based on a utilitarian philosophy that seeks to use evidence and reason to deliver the biggest possible benefit to humanity.
A pall has been cast over effective altruism, however, since the downfall of its biggest backer, the alleged crypto swindler Sam Bankman-Fried. Meanwhile Nick Bostrom, the Swedish superbrain who is director of Oxford’s Future of Humanity Institute, which does lots of big thinking about the challenges humans will face in the next century, has recently become embroiled in a racism scandal over an email he sent in the 1990s. Bostrom’s famous example concerns stationery: if a superintelligent AI were told to make as many paper clips as possible, might it decide to liquidate humanity in order to have more space and material with which to make paper clips?
In true Oxford tradition, Tasioulas and company are at philosophical loggerheads with the effective altruists. The ethics crowd is much more focused on the shorter-term impacts: never mind whether we’re going to die at the hands of robots, how are we supposed to live with them?
“I worry that some of those existential issues become a way of distracting us from the other questions we need to address,” Tasioulas says. “There’s quite a lot of overblown hype — fear-mongering on one side, utopian bullshit on the other.”
What keeps Tasioulas up at night is not visions of The Matrix or Blade Runner, but the steady erosion of our humanity by AI that is focused on maximising Silicon Valley profits. “The scenario that really worries me is that you live in a dehumanised world where decisions that affect you are taken by automated systems,” he says. “That you don’t play an active role in this decision-making and this is sold to you as a way of getting your preferences fulfilled. That we become demoralised into thinking that human action is futile or superseded by the existence of these structures. That there is this sense of alienation and passivity and people are not able to be autonomous rational agents shaping their own future.”
There are some days, if I’m furiously failing to download a QR code, or looking mournfully at what forgettable shows Amazon wants me to watch, when I feel as if this alienated world has already arrived. If the big philosophical questions raised by AI can sometimes feel overwhelming, these everyday frustrations — being manipulated by shopping algorithms, or watching hours of bland but addictive television — are the moments when AI can flatten and demoralise our daily lives.
How to stay human, then? I found one answer by drinking sauvignon blanc. Earlier this month, the Institute for Ethics in AI held a day-long seminar that culminated in a wine-tasting session conducted by the philosopher Barry Smith.
The goal of the session was to demonstrate how popular wine apps, such as Delectable or Vivino, distort our tastes, and not for the better. We blind-tasted three sauvignon blancs — the first cheap, French and average; the second pricier, French and pretty good; the third pricier still, from New Zealand and fairly middling.
Most of the thirsty philosophers present agreed that the second wine was the best. But, Smith informed us, most wine apps would have guided us from the first wine to the third, a Kiwi fruit bomb. Why? Because they are biased towards bigger brands, higher prices and obvious choices. They play things safe and profitable.
“What we really long for is that unusual, undiscovered wine that surprises and delights us,” Smith says. “But you rarely get that from the algorithm.” One might say something similar about Netflix or Spotify, which tend to feed us comforting mediocrity over challenging novelty.
One interesting aspect of AI ethics is how much it varies from one domain to another. Some domains offer huge potential benefits. Take cancer diagnosis. Few of us would object to robots becoming exceptionally good at spotting cancers and mapping the impact of their removal, which they should be able to do over the coming years. But using similar technology on the battlefield has very different moral implications.
Linda Eggert, 32, is a research fellow with the Institute for Ethics in AI. Her background is in the philosophy of war but she now focuses on the ethical morass of killer drones and autonomous weapons systems. If a fully automated weapons system reduces civilian deaths in war, she wonders, ought we to use it? Or does it matter that a human being is making the ultimate decision over life and death? Modern wars already feature machines and computers, but there are still usually people behind them. It feels somehow undignified to just be eradicated by an algorithm.
“There’s something about being seen and recognised by another person, and figuring in their decision-making process, that feels like it matters,” Eggert says. “Do we have a right for decisions, even the decision to kill us, to be made by people not machines?”
The new institute’s key challenge is to be more than an ivory-tower talking shop, where chins are stroked and debates left unresolved. There are plans afoot for AI ethics boot camps to educate politicians and the general public about how it all works. Institute fellows have given presentations to parliamentary committees and big tech companies.
Will anyone in politics or the tech industry listen? Given how long it has taken us to get to grips with the ethics and implications of social media, don’t expect too many rapid developments. Tasioulas hopes that on top of smart regulation there will also be a cultural change on AI, as there has been on climate. “People in business are human beings too,” he says. “They want to believe they are doing the right thing.” To this end, the institute at Oxford has joined forces with other universities including Harvard and Stanford, which are also doing important research in this field.
Whether all these sages can actually keep up with rapidly accelerating AI will be another challenge. But their task is a critical one. Disciplines such as philosophy have spent millennia building up an idea of what it means to be a human being. What makes us different, they have often concluded, is our ability to stand back from inclination and assess what it means to live a good life; our capacity for rational insight and deep, creative thought. But if the machines we are making can do all this too, what is so unique about humanity? Perhaps nothing. Should the machines then have similar rights and protections to humans? Possibly.
At the moment, the philosophers of Oxford have more questions than answers. But at least someone is finally giving the matter some proper thought.
Our AI art experiment
Mat Gill, deputy art director of this magazine, on using DALL-E 2 to help illustrate this story
For the illustration at the beginning of this feature, we road-tested OpenAI’s DALL-E 2 generator. First, you input a description of the elements you want the image to contain. For example:
Metal cyborg arm wrestling with old professor in tweed jacket, photo realistic, classroom background, digital art, hand in hand
You can also specify details such as styling and lighting, or even ask the AI to produce it in the style of your favourite artist. DALL-E 2 then generates four images based on the description. If unsatisfied, you can tweak the description and generate four more (you get 50 attempts free when you sign up).
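If you would rather drive the generator from code than from the website, OpenAI also exposes it through its API. Below is a minimal sketch, assuming the company’s Python library as documented at the time of our experiment (the pre-1.0 interface) and an API key of your own; the prompt is the one above, and asking for four 1024x1024 images simply mirrors what the web tool gives you.

```python
# Minimal sketch: generating DALL-E 2 images via the openai Python
# package (pre-1.0 interface). Assumes OPENAI_API_KEY is set in your shell.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Metal cyborg arm wrestling with old professor in tweed jacket, "
    "photo realistic, classroom background, digital art, hand in hand"
)

# Request four candidate images, as the web interface does.
response = openai.Image.create(prompt=prompt, n=4, size="1024x1024")

# Each result comes back as a temporary URL to download and review.
for i, item in enumerate(response["data"], start=1):
    print(f"Image {i}: {item['url']}")
```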
Is it any good? Our verdict
The possibilities at first seem endless — the only barrier to entry is your grasp of the English language. What might take a week of back-and-forth communication with a human illustrator takes minutes with DALL-E 2. Yet the cons quickly outweigh the pros: a person in the image might have extra or missing limbs; unnatural textures and unrelated objects appear without explanation. There’s a clichéd, emotionless clunkiness to the whole thing. And there’s a deeper problem: in scouring the internet for millions of source images the AI is at the mercy of visual and quantitative bias. After numerous iterations and without specifying otherwise, our “professor” was always presented as a white man — no women or ethnic minorities. For a progressive technology it’s unnervingly swayed by historic representation. Will AI replace illustrators? Not yet. There is no substitute for a person who can truly understand a problem, communicate a solution and is accountable for the result.
… and here is DALL-E 2’s attempt at a “photographic Henry VIII eating a hot dog”. Would you recognise him?