AI image created to accompany this story using Adobe Photoshop's generative AI feature. AI Illustration by Lacey Chylack

AI research: AI personhood

James Boyle, William Neal Reynolds distinguished professor of law

Humans aren’t as exceptional as they used to think.

Aristotle claimed language set humans apart. Yet whales, dolphins and other animals have their own forms of communication, notes distinguished professor of law James Boyle. Nowadays, ChatGPT can converse on seemingly any topic. The Turing Test, proposed in 1950 to judge whether a machine could pass as human in conversation, is useless in an era of large language models that can engage in extended dialogue. We know modern AI is not sentient, but we are running out of ways to prove it.

“We’ve got nothing,” Boyle says.


Hypotheticals are among the most important elements in a law classroom. In his new book “The Line: AI and the Future of Personhood,” Boyle explores an imminent one: How will we react when sentient AI is developed? Can AI have rights (like non-human animals) or even personhood (like corporations)? Boyle offers no clean answers, but rather wants people to start thinking about this – especially given humanity’s track record of prejudice.

“Historically, we’ve been very bad even at recognizing the personhood of members of our own species,” Boyle says. “Seems like this is something we should get right.”

To be clear, ChatGPT doesn’t know what it’s telling us – not the way a person would. As a large language model, it generates responses by drawing on patterns in the massive body of text it was trained on and predicting the most likely continuation. Its speed and fluency only make it seem intentional.

“It’s a galactic-scale autocomplete machine,” he says.
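The "autocomplete" idea is easy to see in miniature. The sketch below is a toy bigram model – a hypothetical illustration, not how ChatGPT actually works – that predicts the word most likely to follow another, based on counts from a tiny training text. Real LLMs use neural networks trained on vast corpora, but the underlying task is the same: predict the most probable continuation.

```python
from collections import Counter, defaultdict

# A tiny "autocomplete machine": count which word follows which
# in a toy training text, then predict the most frequent follower.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Map each word to a tally of the words observed after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word seen most often after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> "on" (both occurrences of "sat" precede "on")
```

There is no understanding anywhere in this loop – only frequency. Scaled up by many orders of magnitude, with far richer statistics, that is the gap Boyle's "galactic-scale autocomplete machine" quip points at.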


Still, Boyle says, parents are already wondering if their kids are being rude to ChatGPT, Alexa or Siri because they seem like people. Tomorrow’s lawyers will handle comparable, but more complex, issues. In “The Line” Boyle considers AI personhood and rights through two lenses: empathy and efficiency.

Boyle writes extensively about the role of empathy in the sci-fi movie “Blade Runner” and the story it’s based on, Philip K. Dick’s “Do Androids Dream of Electric Sheep?” In both, a test for empathy is used to tell humans from synthetic people called replicants – who are then killed without mercy, the central irony of both works. Fiction can play a critical role in expanding empathy, Boyle explains, noting that the 1852 novel “Uncle Tom’s Cabin” was instrumental in making white readers feel the pain of enslaved Black Americans.

As for efficiency, Boyle posits that AI rights might develop in the way corporate personhood did. Corporations can be sued and can enter into contracts. This kind of personhood is different from humanity. Google and IBM can’t vote or make friends. Time will tell whether that’s also true of AI.