With sensors around their wrists and on their fingers, a cohort of pregnant women in underserved Orange County communities will soon begin transmitting their vital signs via smartphones to researchers at UCI who aim to stem a 30-year upward trend in the U.S. maternal mortality rate.

The research, funded by the National Science Foundation, is led by UCI’s Donald Bren School of Information & Computer Sciences, where moms and babies may seem an odd fit with algorithms and machine learning. But Nikil Dutt, Distinguished Professor of computer science and principal researcher on the Unite project, heads a multidisciplinary team – spanning the schools of nursing, social ecology and education as well as nonprofit agencies, hospitals and local support organizations – that is testing the efficacy of community-based, self-managed health monitoring among expectant women.

The work is one example of how UCI’s artificial intelligence research is revolutionizing many aspects of how we live.

When UCI faculty began delving into nascent AI in 1968, the work was largely theoretical, constrained by the primitive computers of the day. The field has since exploded into an enterprise encompassing everything from financial services to healthcare, from shopping to education. It has also spurred an examination of the legal, ethical and social justice quandaries posed by AI. This fall, for example, the UCI School of Law is launching a first-of-its-kind policy institute devoted to AI issues.

The campus has the only school of information and computer sciences in the University of California system, and its expertise was validated earlier this year when the Hasso Plattner Institute in Germany announced that it would open its newest research school, the HPI Research Center in Machine Learning and Data Science, at UCI.

“This international collaboration creates an unparalleled research environment for exploring artificial intelligence technologies that have a positive impact on our world,” says ICS Dean Marios Papaefthymiou.

Closer to home, Dutt and his Unite team are supplying expectant mothers in underserved local communities with devices that will monitor and report their physical activity, stress levels, sleep habits and more. This data will then prompt AI-generated suggestions on their smartphones: “Could you take more steps today?” or “Try to get more sleep.”
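
In rough outline, such a system compares each day’s readings with the wearer’s own baseline and turns deviations into gentle prompts. Here is a minimal sketch of that kind of logic – the field names, thresholds and messages are hypothetical illustrations, not the Unite project’s actual rules:

```python
# Minimal sketch of a rule-based nudge engine. All thresholds, field
# names and messages are hypothetical, not the Unite project's logic.

def suggest_nudges(today: dict, baseline: dict) -> list[str]:
    """Compare today's wearable readings against the user's own
    baseline and return gentle, personalized suggestions."""
    nudges = []
    if today["steps"] < 0.8 * baseline["steps"]:
        nudges.append("Could you take more steps today?")
    if today["sleep_hours"] < baseline["sleep_hours"] - 1:
        nudges.append("Try to get more sleep.")
    if today["stress_score"] > baseline["stress_score"] + 10:
        nudges.append("A few minutes of mindful breathing may help.")
    return nudges

print(suggest_nudges(
    {"steps": 3200, "sleep_hours": 5.5, "stress_score": 62},
    {"steps": 5000, "sleep_hours": 7.0, "stress_score": 45},
))
```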

“The challenge is building a personalized model that incorporates all the biological and physical signals along with the changing contextual parameters that influence pregnancy directly,” says Unite member Marco Levorato, associate professor of computer science.

Amir Rahmani, an assistant professor of nursing and computer science who’s also on the team, ran a similar study in his native Finland on obesity management during pregnancy. “This model is a lot more sophisticated,” he says.

The project touches on key emerging issues in AI research: community involvement and social justice. Yuqing Guo, associate professor of nursing, has involved potential study subjects in the development of the technology itself.

She and fellow researchers interviewed 11 women and surveyed 114 more about their preferences – whether they liked the technology, what kinds of questions should be asked, etc. “In August, we launched the first part of our study, the Two Happy Hearts app, which suggests mindful breathing and safe exercise,” Guo says.

Making Machines Smarter

Not far from where the Unite team conducts its work, in the warren of computer science labs encompassing three buildings and 12 floors on campus, Annie Qu, Chancellor’s Professor of statistics, who joined the UCI faculty earlier this year, studies how to make these kinds of recommender systems more efficient.

“Machines are a little bit dumb,” she says. “Right now, they have to look at everything – all the data. We want them to be quicker and more relevant, informative and accurate, which is particularly important in medical diagnosis and prognostics.”

Qu is working to enable data integration from multiple sources – for example, allowing AI technology to generalize or borrow data from a patient with more medical information and apply it to a patient with less information but certain correlations or dependencies. In addition to precision medicine and health, this individualized modeling could have applications in social media, entertainment, shopping, marketing and sales, she says.
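
One simple way to “borrow” information across patients is partial pooling: a data-poor patient’s estimate is shrunk toward a related, data-rich patient’s, weighted by how much is known about each. A toy sketch of the idea – not Qu’s actual method:

```python
import numpy as np

# Toy illustration of borrowing information across patients via
# partial pooling -- not Annie Qu's actual methodology.

def pooled_estimate(own_values, related_values):
    """Shrink a data-poor patient's estimate toward a related,
    data-rich patient's mean, weighted by how much data each has."""
    own = np.asarray(own_values, dtype=float)
    related = np.asarray(related_values, dtype=float)
    w_own = len(own)                 # confidence in the patient's own data
    w_rel = 0.3 * len(related)       # discounted: related, not identical
    return (w_own * own.mean() + w_rel * related.mean()) / (w_own + w_rel)

sparse_patient = [120, 135]                    # two blood-pressure readings
rich_patient = [128, 131, 126, 133, 129, 130]  # a well-characterized, similar patient
print(pooled_estimate(sparse_patient, rich_patient))
```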

Jeffrey Krichmar, professor of cognitive sciences, is investigating whether AI can “think” more like animals. One project aims to enhance the “vision” of self-driving cars by copying the way primates perceive and predict motion. Current AI has trouble traveling through an environment in which surrounding objects are also moving, says Krichmar, who has partnered with machine vision expert Charless Fowlkes, professor of computer science and cognitive sciences.

Another Krichmar endeavor, in collaboration with Emre Neftci, assistant professor of cognitive sciences, and Xiangmin Xu, professor of anatomy & neurobiology, seeks to design a robot navigation system that mimics how rodents create mental maps to find their way around.

Apes and rats “can achieve these feats with minimal energy usage,” Krichmar says. “The goal is to push AI systems beyond the state of the art … and the common thread is inspiration from biology.”
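
Very loosely, the rodent-inspired navigation idea can be caricatured in code: remember which places connect, then search that internal map for a route. A toy illustration – a simple graph search, with none of the neuroscience:

```python
from collections import deque

# Toy "cognitive map": remember which places connect, then plan a
# route with breadth-first search. A loose caricature only -- the
# actual research models rodent place cells, not labeled graphs.

def shortest_route(edges, start, goal):
    graph = {}
    for a, b in edges:  # remembered transitions between visited places
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from what has been explored

explored = [("nest", "hall"), ("hall", "fork"), ("fork", "food"), ("hall", "water")]
print(shortest_route(explored, "nest", "food"))  # ['nest', 'hall', 'fork', 'food']
```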


Padhraic Smyth, Chancellor’s Professor of computer science, agrees that AI could be smarter. Its main problem, he says, is that it doesn’t have common sense. But AI is much better in some applications than others, he adds.

“It’s operating behind the scenes in multiple fields,” Smyth says, citing criminal justice (to match fingerprints and determine bail and parole), banking (to detect fraud), insurance (to assess risk), marketing (to decipher consumer data) and medicine (“It can learn to be a radiologist pretty quickly,” he notes).

Not satisfied with the status quo, UCI is producing a new generation of researchers who strive to incorporate community desires, accountability, accessibility and social justice into AI design.

Designing Fair, Ethical AI

One of UCI’s most prolific AI innovators is Pierre Baldi, Distinguished Professor of computer science, who employs electronic brains to discover drugs and predict chemical reactions, decode circadian rhythms, detect heart disease with mammograms, identify polyps in colonoscopy videos, track climate change and even solve the Rubik’s Cube.

Many UCI students will end up working as entrepreneurs, designers and programmers for self-driving cars, drones, loan application systems and more, he says, and “it’s very important that they be aware of and sensitive to bias and other issues.”

Roderic Crooks, assistant professor of informatics, investigates how skewed technology affects Black, Latino and working-class communities, specifically in the deployment of AI by government and civic institutions. Bias is most egregious, he says, in predictive policing, which has resulted in discriminatory over-policing.

“Putting technology in place and assuming it will work is harmful when you haven’t involved the impacted community in the development and scoping of that technology,” he says. “And that’s difficult and time-consuming because minoritized communities have good reason to distrust academic research and technology. There’s a long history of the working class being poorly served by researchers.”

The fact that it’s difficult, however, is irrelevant, Crooks says. He hosted a conference last year on datafication and community activism that drew researchers from Data for Black Lives, The Bronx Defenders, the Stop LAPD Spying Coalition, Our Data Bodies, the Urban Institute, Measure and IRISE – groups dedicated to the fair and just development and deployment of AI technology.

“The community,” Crooks says, “should be able to say ‘I don’t want this’ and ‘We shouldn’t use it in this way.’”

That’s exactly what parents with visual impairments are saying to Kevin Storer, a Ph.D. candidate in informatics, about AI voice assistants such as Amazon’s Alexa reading to their children.

“My research has shown a strong desire by blind parents to be able to read to their children,” he says. “Current technology automates them out of that process, which is antithetical to their goals.”

With a Graduate Assistance in Areas of National Need fellowship from the U.S. Department of Education, Storer is working to develop a voice-based application that supports such parents in reading their children’s favorite stories from memory by prompting page turns, describing illustrations and forecasting what’s coming next. Co-designing the app with blind parents ensures that their voices are part of the process and that they won’t be handed another technology that’s useless or even harmful.
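
In outline, such an assistant would step through a familiar book page by page, cueing the parent rather than reading aloud itself. A hypothetical sketch of that flow – invented data and structure, not Storer’s actual app:

```python
# Hypothetical sketch of a voice assistant that cues a parent reading
# from memory rather than reading aloud itself. The data and structure
# are invented for illustration; this is not Storer's application.

BOOK = [
    {"page": 1, "illustration": "A rabbit in a blue coat",
     "preview": "Next, the rabbit sneaks into the garden."},
    {"page": 2, "illustration": "Rows of carrots under the moon",
     "preview": "Next, a noise startles the rabbit."},
]

def cue_reader(book):
    """Yield spoken prompts: page turns, picture descriptions, previews."""
    for page in book:
        yield f"Turn to page {page['page']}."
        yield f"The picture shows: {page['illustration']}"
        yield f"Coming up: {page['preview']}"

for prompt in cue_reader(BOOK):
    print(prompt)  # in a real app these would be spoken, not printed
```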

Storer’s fellowship is part of an $895,000 GAANN award that, combined with $222,875 in cost-sharing funds from the UCI Graduate Division, supports seven students researching socially responsible AI. Paul Dourish, Chancellor’s Professor of informatics, who led the GAANN awards application efforts, says he’s optimistic that GAANN fellowships will lead to novel research.

“By examining people’s technology experiences through the lens of cultural values and individual experiences,” Dourish says, “we can understand the role that technology plays in people’s lives and what they might want it to do next.”

Similarly, the HPI Research Center in Machine Learning and Data Science at UCI, which has funded three-year fellowships for 15 graduate students, is dedicated to research addressing what center director Erik Sudderth, professor of computer science, calls the “black box” nature of AI and machine learning.

In loan applications, for example, data goes in and a recommendation on loan funding is produced, “but there’s nothing that says why a recommendation was reached, no transparency,” Sudderth says.
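
To see the gap, compare a deliberately transparent scorer, whose per-feature contributions supply exactly the “why” Sudderth says is missing. A sketch with invented feature names and weights:

```python
# Sketch of the transparency gap: a linear scorer can report *why* it
# decided (per-feature contributions); a black-box model typically
# cannot. Feature names and weights are invented for illustration.

weights = {"income": 0.62, "debt_ratio": -0.45, "years_employed": 0.21}
applicant = {"income": 1.4, "debt_ratio": 2.0, "years_employed": 0.5}

contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

print(f"approve: {score > 0}")
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")  # the 'why' a black box does not expose
```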


“So we need to improve the fairness of AI and machine learning systems, because even if we do a good job replicating decisions of the past that were made by humans, those humans may have been biased based on race, gender or other factors.”

The aim is not to eliminate mistakes – which are inevitable, he says, because we never have perfect information about the future – but to equalize the rate at which mistakes are made in different groups.
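
Concretely, that means auditing the error rate within each group, not just overall accuracy. A minimal check on synthetic labels and predictions:

```python
import numpy as np

# Minimal fairness check: compare error rates across groups rather
# than only overall. Labels, predictions and groups are synthetic.

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = group == g
    error_rate = np.mean(y_pred[mask] != y_true[mask])
    print(f"group {g}: error rate {error_rate:.0%}")
# Decent overall accuracy can still hide very different per-group rates:
# here group A is wrong 40% of the time, group B only 20%.
```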

Once COVID-19 travel restrictions are lifted, Sudderth says, he envisions a robust exchange of ideas with German research partners that will increase UCI’s global reach.

AI and the Law

UCI’s School of Law initiated its emphasis on AI fairness shortly after Dean Song Richardson took the reins in 2018. “I wanted to think about what a legal education for the 21st century should encompass,” she says. “All these new technologies were raising profound legal and ethical questions.”

A significant number of faculty members are conducting research on the topic, and first-year students are exposed to AI in every course, Richardson says. They confront such questions as: Who is liable if an autonomous vehicle causes a crash – or if AI radiology software overlooks a fatal tumor? When artificial intelligence composes music or invents new pharmaceuticals, who owns the copyright or patent?

“No matter what the application is, sooner or later it will be subject to legal oversight,” says Dan Burk, Chancellor’s Professor of law. This fall, he’s teaming up with law school lecturer Neil Sahota to lead the new AI Policy Laboratory at UCI, which will sponsor public talks; moot court competitions; and workshops for judges, legislators, legal firms, law enforcement agents and others.

Adds Richardson: “We want to make sure our students are prepared for whatever comes down the pike.”

Roy Rivenburg contributed to this article.