AI image created to accompany this story using Adobe Photoshop's generative AI feature. Illustration by Lacey Chylack

AI research: Barriers to diversity in education

Shani B. Daily, Cue Family professor of the practice, electrical and computer engineering

If you love someone, you give them a hard time.

It’s how Shani B. Daily, professor of the practice of electrical and computer engineering, was raised. It’s how young Black men tend to talk to their friends online. Yet a University of Chicago study showed that this playful teasing can be mistaken for bullying or threats by someone who doesn’t understand Black culture, Daily says. Without a diverse group of people programming and training AI, affection can be mistaken for its opposite.

“If you leave those kinds of cultural understandings outside of the design of technologies, then you’re either harming people or you’re further marginalizing people or you’re creating situations where you’re fabricating threats that aren’t there,” Daily says.


So she works at the junction of equity, tech, education and policy. Daily co-founded the Alliance for Identity-Inclusive Computing Education, or AiiCE, to open computer science to broader, more equitable participation. Diversity, she says, can mean race and ethnicity, socioeconomic status, disability, neurodiversity, sexuality and gender identity, or any intersection of these. As education and workforce director of the Athena AI Institute, whose goal is to transform the design, operation and services of mobile devices and networks, Daily works to demystify AI for schools and the general public. She has led AI roundtables with members of Congress, briefed congressional staffers and been recognized by North Carolina Gov. Roy Cooper for her STEM leadership.

AI affects us all. Daily believes that through participation and literacy we can affect it back.

“Exposing people in ways where they feel like they can participate and be a part of the conversation is success to me,” says Daily.


Even if someone isn’t going to be a software engineer, she says, they can be literate enough in advanced technology to know how AI will impact their lives. And they can be literate enough to spot human error in an algorithm’s design rather than just accepting an AI’s results at face value.

As for those errors, Daily’s solution is diversity, including diversity of disciplines. Social scientists, for instance, know that lower rates of home lending to minority borrowers stem from the racist practice of redlining. If those experts are in the room when a mortgage algorithm is designed, it will be less biased.

“If you don’t know history, you don’t understand why your data is so skewed,” she says.