As a young composer, she wrote in that style, too, and evidently excelled. If not for the old box from her parents, Rudin would have forgotten about the 1993 letter from Al Gore, congratulating her on placing second in the popular music composition category of that year’s Young Inventors’ and Creators’ Competition.
“As you continue your education, I encourage you to continue to strive for creativity and excellence,” the vice president wrote. “As an individual with imagination and talent, you have the ability to make a significant impact on our society.”
Today, Rudin is a professor of computer science and engineering and director of the Interpretable Machine Learning Lab at Duke. She’s also the recipient of two major 2022 honors as of press time: the Association for the Advancement of Artificial Intelligence (AAAI) Squirrel AI Award and a Guggenheim Fellowship. Her friends and colleagues describe an uncommonly capable person, someone who destroys unsolved mathematical problems with elegant solutions and could achieve even greater fame if she worked in theory.
But Rudin doesn’t want to. She’s driven to apply her considerable gifts — even if she’s too humble to call them that — to the benefit of humanity.
“I want to work with doctors,” Rudin says. “I want to work with power grid engineers and people who really can impact the world.”
Growing up outside of Buffalo, New York, Rudin didn’t consider herself particularly good at anything. She was inspired by her dad, a medical physicist, and was always attracted to math and physics. She loved piano, to the extent that she also took composition lessons, but eventually had to admit that there wasn’t an audience for her favorite music — maybe in the 1890s, but not in the 1990s.
“I was still very goal-focused,” Rudin recalls. “The strange thing was, a bunch of the goals I set for myself as a kid were not natural for me.”
Rudin double-majored in mathematical physics and music theory at the University at Buffalo, but then applied mathematics caught her attention. Simply put, it would allow her to play in everyone’s backyard. While pursuing a Ph.D. in the field at Princeton, Rudin sought an advisor, but found her calling. She calls it luck. First Gary Flake, then Rob Schapire and Ingrid Daubechies, all helped to introduce Rudin to the nascent AI field.
“These guys were kind of young, enthusiastic machine learning scientists. It was like they could predict the future using data,” Rudin says. “I think Gary was building porn filters at the time — he was doing something really weird — and just his energy made me really excited about learning about the field. Rob had designed this machine learning method called boosting. It’s the most reliable machine learning method to date if you know nothing about the data set.”
At the time, Schapire was curious about the AdaBoost algorithm: It worked well, but why did it work? For Rudin’s Ph.D., he assigned her that question — a problem that had been unsolved for close to a decade. It was a tough one, Rudin admits, but she knocked it out. Schapire, now a partner researcher at Microsoft Research, was impressed.
“Until Cynthia’s work, it was not known, for instance, if AdaBoost finds the very best margins, or if it only finds approximately best margins — a difficult problem,” Schapire says.
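For the curious, the flavor of boosting fits in a few lines. What follows is a sketch invented for this article, not Rudin’s or Schapire’s code: AdaBoost repeatedly trains a simple “weak” rule, upweights the training examples that rule got wrong, and lets each round cast a weighted vote. The “margins” Schapire mentions measure how decisively that vote classifies each example.

```python
import numpy as np

def train_stump(X, y, w):
    """Find the one-feature threshold rule with the lowest weighted error."""
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, rounds=20):
    """y must be labeled +1/-1."""
    w = np.full(len(y), 1.0 / len(y))        # start with uniform weights
    ensemble = []
    for _ in range(rounds):
        err, j, thr, pol = train_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # vote weight
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)       # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def margin(ensemble, x, label):
    """Normalized confidence of the vote on one example; Rudin's thesis
    work concerned whether AdaBoost drives these margins to their maximum."""
    total = sum(a * (1 if p * (x[j] - t) >= 0 else -1)
                for a, j, t, p in ensemble)
    return label * total / sum(a for a, _, _, _ in ensemble)
```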
Rudin could have stayed in the world of theory, but she was more drawn to work that directly, immediately improves the lives of real people. She went on to collaborate with New York City power company Con Edison on energy grid reliability and later with Cambridge, Massachusetts, detectives to build an algorithm that analyzed crime patterns, a version of which the NYPD later adopted. This is where her mission — interpretable machine learning — and her nemesis — the black box — come into clear focus.
There’s a fundamental difference between the two: An interpretable AI has been programmed to explain how it reaches its conclusion; a black box just spits out a prediction.
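A toy illustration of the difference (the rules and thresholds here are invented for this article, not drawn from any real model): an interpretable classifier can be as plain as a short rule list, where the model is its own explanation.

```python
def assess_outage_risk(transformer_age_years: float,
                       faults_last_year: int) -> str:
    """A toy rule list with invented thresholds. Every prediction names
    the exact rule that fired, so the 'why' comes built in."""
    if faults_last_year >= 3:
        return "high risk: repeated recent faults"
    if transformer_age_years > 40 and faults_last_year >= 1:
        return "high risk: aging equipment with a recent fault"
    return "low risk: no triggering condition"
```

A black box trained on the same data would return only the label, with the reasoning locked inside.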
“I personally believe you can’t have fairness without interpretability,” Rudin says. In energy grid reliability, criminal justice and health care, you have to have a why.
In this century’s first decade, interpretable machine learning was a tiny field, with Rudin a lone voice calling in the wilderness. There wasn’t even a section in the machine learning journals for the papers she wrote. The focus was on making algorithms more accurate — not making them show their work.
But she believed in what she was doing and defended her work tenaciously. Friend Nick Street, the Henry B. Tippie professor of business analytics and associate dean for research and Ph.D. programs at the University of Iowa, recalls a keynote Rudin gave at a major machine learning conference. Maybe halfway through her talk, a guy Street knows — someone who’s infamous for bullying junior faculty and graduate students, or anyone he perceives as vulnerable — stood and lined up at the microphone.
“He asks a pointless question... and she gave a one-sentence answer and said, ‘I’ll be happy to talk offline,’” Street recalls, cracking up. “He tried to continue, and before he could get his next sentence out, she said, ‘I’ll be happy to talk offline. What’s the next question?’
“I was so scared for her, and she completely annihilated the guy and sent him back to his seat with his tail between his legs,” Street continues, warmed by the memory. “She knows her stuff and she’s not to be messed with.”
Rudin came to Duke from MIT in 2017, and her work since has continued in the realm of comprehensible machine learning for the benefit of humanity. One creation of her team, called 2HELPS2B, is used in hospitals to predict seizure risk in patients after a stroke or traumatic brain injury, all with a numerical score card that’s comprehensible at a glance.
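The mechanics of such a score card are simple enough to sketch. The findings and point values below are placeholders invented for illustration, not the published 2HELPS2B criteria; what the sketch preserves is the shape: a handful of yes/no findings, small integer points and a lookup from total score to risk.

```python
# Hypothetical findings and points, NOT the published 2HELPS2B model.
POINTS = {
    "fast_rhythmic_pattern": 1,
    "epileptiform_discharges": 1,
    "prior_seizure": 2,
}
# Illustrative risk estimates keyed by total score.
RISK = {0: 0.05, 1: 0.12, 2: 0.27, 3: 0.50, 4: 0.70}

def seizure_risk(findings: set[str]) -> float:
    """Add up points for the findings present, then look up the risk."""
    score = sum(pts for name, pts in POINTS.items() if name in findings)
    return RISK[score]

# e.g. a prior seizure (2) plus epileptiform discharges (1) scores 3 -> 0.50
```

Every number on the card can be audited at a glance, which is the point.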
Another project, still in the works, uses interpretable machine learning to diagnose breast cancer. It makes its diagnosis by comparing a patient’s scan to a bank of other scans, which it then shares with the radiologist. Rather than simply accept a black box diagnosis, the radiologist can double-check the algorithm’s conclusion.
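Stripped of the imaging details, the retrieval step behind a system like that can be sketched as nearest-neighbor search, an assumption-laden simplification rather than the lab’s actual code: represent each scan as a feature vector, find the most similar labeled cases and hand them back with the prediction.

```python
import numpy as np

def diagnose_with_evidence(query_features: np.ndarray,
                           case_bank: np.ndarray,
                           labels: np.ndarray,
                           k: int = 3):
    """Predict via majority vote of the k most similar past cases (labels
    are 0/1 integers), returning those cases for a radiologist to inspect."""
    dists = np.linalg.norm(case_bank - query_features, axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the closest scans
    prediction = int(np.bincount(labels[nearest]).argmax())
    return prediction, nearest
```

The returned indices are what make the double-check possible: the radiologist sees exactly which past cases the prediction leans on.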
“People didn’t think this whole field was important,” Rudin says. “I kept saying, ‘Look, if you want to do anything high-stakes, you need this stuff.’ The last couple of years, people were like, ‘Hold on a second, we actually need this stuff!’ I’ve been doing this all this time.” It’s why she won the Squirrel AI Award and a Guggenheim Fellowship back to back.
And here Rudin is in 2022, at the top of a field that has finally caught up with her, burning the candle at both ends. Upon hearing how frenzied she seems in interviews, Street smiles a knowing smile. That’s not because of her recent awards, he says. That’s just her. There’s so much work to do, and besides, her nemesis is still out there. Her Guggenheim, her Squirrel AI Award — if these honors are punctuation, they’re commas, because Rudin doesn’t stop. She keeps moving, keeps programming, keeps pushing for new publication avenues for interpretable AI papers and improved rigor in existing journals.
The engine at the center of this frenzy is the same mind that, decades ago, won national composition awards. Today’s even more prestigious honors stem from Rudin’s career programming and advocating for interpretable AI, though she can see the thread connecting music to machine learning.
“In both cases there’s some beauty in it,” Rudin says. “Everything’s made of patterns.”