Will AI lead us into a bright new future? Or will it be the end of civilization? We’re about to find out.
Artificial intelligence isn’t on the horizon anymore – it’s here, and it’s everywhere. Some people are thrilled. Some are terrified. And some are legitimately surprised that something out of science fiction is no longer fictional, even if they aren’t 100 percent sure what it actually is.
Simply put, artificial intelligence is technology that can mimic the way the human mind learns, makes decisions and processes the world. It tends to think faster than we do. Like us, it makes mistakes. As of press time, no AI was sentient. In fact, ChatGPT, the popular generative AI application, doesn’t even “know” what it’s saying.
A Sampling of Duke Research
- Syntactical structure in journalism
- Barriers to diversity in education
- AI in health care
- AI personhood
Duke is meeting the AI explosion by investing a $30 million award from the Duke Endowment into new AI research and faculty. Across its campuses, Duke researchers are designing or applying algorithms in bold new ways. At the same time, professors and students are tackling the thorny questions that arise when computers try to emulate the human mind.
In this complicated, shifting field, Duke is all in. AI is here to stay. Navigating this strange new world takes advanced science, interdisciplinary cooperation and open minds. And it takes computer scientists who have been working with AI since the era of the floppy disk.
“I think a lot of us chose [AI] for our graduate work because we were told this is really long-term,” says Bruce Donald, the James B. Duke distinguished professor of computer science. “It was going to happen between 50 and 500 years [in the future].”
In 1951, computer scientist Marvin Minsky, who would go on to co-found the Massachusetts Institute of Technology AI Laboratory, created a device the size of a grand piano that could learn a maze. While computationally simple by modern standards, it was an early example of AI. By the time Donald landed in the MIT AI Lab in the 1980s, the field had been developing for decades. In fact, he took his first course on AI in 1978, and he programmed an epic poetry generator long before the world had heard of generative AI.
During the personal computer boom of the ’80s and ’90s, the “easy” parts of computer science – programs such as those for word processing and basic mathematical computation – entered the mainstream, Donald says, leaving his field to focus on trickier challenges like AI.
As director of research computing, Mark DeLong A.M.’81, Ph.D.’87 played a major role in the radical increase in computational power at Duke. Parallel computing, in which a large job is split among numerous computers working at once and their results are then compiled and analyzed, enabled AI research at Pratt and the School of Medicine. The use of graphics processing units – GPU computing – exploded shortly before DeLong’s retirement in 2021. These faster, more efficient chips, the same kind found in gaming computers, put AI research within reach of Duke researchers and students. Interest was high and continues to rise.
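For the curious, here is a minimal sketch of the parallel-computing idea: a big job is split into chunks, worker processes handle them simultaneously, and the partial results are compiled at the end. The chunk sizes, worker count and workload below are illustrative choices, not details of Duke’s actual research clusters.

```python
# A minimal sketch of parallel computing: split a job into chunks,
# process them in parallel, then compile the partial results.
from multiprocessing import Pool

def analyze_chunk(chunk):
    """Stand-in for one unit of scientific work, e.g. scoring one batch of data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool(processes=4) as pool:           # four workers run at the same time
        partial_results = pool.map(analyze_chunk, chunks)
    total = sum(partial_results)              # compile the partial results
    print(total)
```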
“We’re the largest major at Duke,” Donald says, proudly. “The demand is off the charts.”
Meanwhile, a type of AI called a large language model was becoming increasingly popular in everyday life. LLMs scan enormous amounts of data and can make predictions based on the patterns they find. ChatGPT, perhaps the best-known LLM, was trained on a vast swath of the internet. When it responds to a person’s query, it’s guessing, based on the patterns it has seen. Its answers might be factual, or it might recommend putting glue on pizza.
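A toy illustration of that pattern-based guessing, vastly simplified from how real LLMs work: count which word tends to follow which in a small text, then predict the most common follower. The tiny corpus and code here are invented for illustration; actual models use neural networks trained on billions of examples.

```python
# A toy "predict the next word from observed patterns" model (not a real LLM).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1            # record each observed word pair

def predict_next(word):
    """Return the most frequently observed follower of `word`, if any."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" – the pattern seen most often
```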
“There are certain things that LLMs can do really well. It’s great at fixing your grammar,” says Cynthia Rudin, the Gilbert, Louis, and Edward Lehrman distinguished professor of computer science and an AI luminary. “There are certain things you can trust it for and certain things you can’t, and you just have to be very careful to know which one is which.”
The key – and this is true for all AI – is for humans to always make the final decision. This means never accepting ChatGPT’s responses at face value. It also means researchers and doctors double-check AI results before acting on them. In short, we can let AI help us, but we should not let it think for us.
Rudin is a proponent of transparent AI, which explains where its results come from. LLMs, by their nature, can’t do that. Even computer scientists don’t exactly know what’s happening inside them. It’s just fine to trust publicly available LLMs with concrete, low-stakes tasks. The more complex a task, the more careful users have to be with an LLM’s results.
“Even though these tools continue to impress us in many ways, they’re bad at the things we need them to be good at, or we’d like them to be good at, and they’re good at the things we don’t really want them to be good at,” says Chris Bail, professor of sociology, computer science, political science and public policy. “It struggles to do rudimentary human things, like ordering a plane ticket.”
Rudin and Bail are among the Duke researchers working toward more trustworthy AI and healthier relationships with it. Bail directs the new Society-Centered AI Initiative at Duke, focused on interdisciplinary research on how AI and human behavior influence one another.
“Instead of careening between doomerism and naive optimism, we need to think about how the good and bad go hand in hand, and how AI and society will inevitably co-evolve together,” Bail says. “The more that we have research that guides that process, the better.”
In one collaboration with Brigham Young University researchers, Bail helped create a conflict mediation tool that uses AI. Its first test: the gun control debate. The team created an AI chat assistant that offered users rephrased versions of their messages, making them more civil without changing their meaning. Users could choose to post the edited version or keep their original. The idea wasn’t to change minds, but to make conversations more productive.
“I worked with Nextdoor to implement this technique across all of Nextdoor,” Bail says. “It resulted in a 15 percent decrease in the use of toxic language on the platform, which, by the way, is one of the most toxic social media platforms.”
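A minimal sketch of how such a rephrasing assistant could be wired together, assuming access to any chat-model API; `call_llm`, the prompt wording and the canned reply are hypothetical placeholders of mine, not code from Bail’s or BYU’s actual tool.

```python
# Hypothetical sketch of a civility-rephrasing assistant.
def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model request; returns a canned string
    so the sketch runs on its own."""
    return "I understand this issue matters deeply to you; here is why I see it differently..."

def suggest_rephrasing(message: str) -> str:
    """Ask the model to make the message more civil without changing its meaning."""
    prompt = (
        "Rewrite this message so it is more civil and respectful, "
        "but keep the writer's meaning and position unchanged:\n\n" + message
    )
    return call_llm(prompt)

def choose_version(original: str) -> str:
    """The user always has the final say: post the suggestion or the original."""
    suggestion = suggest_rephrasing(original)
    answer = input(f"Suggested rewrite:\n{suggestion}\n\nUse it? [y/n] ")
    return suggestion if answer.lower().startswith("y") else original
```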
LLMs are only one type of AI, and not all AI research at Duke comes from the algorithm side. Boyuan Chen, assistant professor of mechanical engineering and materials science, works toward human-robot cooperation. Could robots assist nurses and wildland firefighters? He’s on it. Emily Wenger, assistant professor of electrical and computer engineering, designs invisible watermarks that keep original artwork from being used to train generative AI models, protecting intellectual property rights and artistic integrity. Also at Pratt, professor of electrical and computer engineering Larry Carin and Helen Li, the Marie Foote Reel E’46 distinguished professor and chair of the Electrical and Computer Engineering Department, use AI to scour public health records for useful patterns without violating patient privacy.
In medicine, AI’s roles run as deep as the early development of new drugs. In addition to his role within Duke, Donald is a co-founder of Ten63 Therapeutics, which uses AI to accelerate the design of new cancer treatments. Algorithms produce treatment options faster than people can, but humans still have the final say in what will be developed and tested.
“My lab is in a different position because in the last 10 years we’ve been using a combination of various kinds of AI,” Donald says. “We have had 30 drugs enter clinical trials, designed in the lab.”
At Duke Health, radiologist Joseph Lo ’88, Ph.D.’93 has developed AI-assisted breast cancer screening technology that is nearing FDA approval. Like other cancer-identifying algorithms, it highlights possibly cancerous lesions on mammograms. Unlike others, it is transparent, meaning it shows radiologists how it reaches its conclusions. It doesn’t diagnose – that and other critical decisions are still in the hands of doctors.
Unlike LLMs, which scan broad sets of data, medical algorithms are trained exclusively on research data – smaller sets, but vetted and scientific. Jian Pei, chair of the Department of Computer Science and the Arthur S. Pearse distinguished professor of computer science, is working to create incentives for sharing data and machine learning models among parties that don’t usually collaborate. That kind of sharing could improve the already accelerated processes in Donald’s lab – and everywhere.
“[AI] is absolutely changing the process of research,” Bail says. He’s wowed by its social simulations and automated text analysis. Yet when he’s asked whether he trusts it after years of use, his tone shifts.
“Trust is a strong word,” he says.
DeLong agrees. Trusting your car to get you home is completely different from trusting another person. Figuring out how to talk and think about AI goes hand in hand with using it.
“I would argue we don’t even have an adequate vocabulary to talk about AI and its impact on human beings,” he says.
After retirement, DeLong returned to teach the Our Complex Relationships With Technology course in the Duke Initiative for Science and Society. Although he had a career in research computing, DeLong’s academic background is in theology, philosophy and medieval and Renaissance literature. In the classroom, he views AI through an inherently interdisciplinary lens.
Rather than have his Complex Relationships students avoid the ever-present ChatGPT, he assigned them several controlled ways to use it in their writing. In one lesson, half the class uses it after writing an assignment, while the other half uses it while writing. In another, students treat the AI as a co-author but keep track of every deletion and substitution. The exercise makes them more aware of where their ideas end and the AI’s begin. “I think that [these assignments] move people toward a more responsible way of integrating these new technologies,” DeLong says. “I don’t think anybody really knows what the magical answer is.”