Artificial intelligence (AI) is no longer a shimmer in the distance. In public health, it’s already transforming the ways we predict and detect disease, deliver care, and make life-or-death decisions.
The promise is enormous: AI can scan massive datasets in seconds, spot patterns no human would catch, and deliver faster, more personalized interventions. But the risks are high. AI mistakes can lead to misdiagnosis, overlook red flags, delay treatment, widen health disparities, or harm patients.

In the College of Public Health, researchers are embracing AI’s potential while also interrogating it, testing it, and redesigning it to work better for real people. Faculty are building AI tools to detect cancer earlier, support dementia patients, guide students through biostatistics, document evidence of violence, and flag burnout in caregivers—targeting some of public health’s toughest challenges.
“AI can help humans achieve extraordinary things, but only when humans lead the way,” said Alison Evans Cuellar, associate dean of research at the college. “We want to push the boundaries while exploring what responsible and effective AI looks like in our public health research and education.”
One of HAP Professor Farrokh Alemi’s most ambitious efforts is an AI system designed to act like a digital psychiatrist, conversing with patients to gather their medical history and symptoms and recommend treatment for depression. To reduce the usual trial-and-error of prescribing antidepressants, the research team augmented ChatGPT with 2.4 billion connections among diagnoses, procedures, treatment, and outcomes, drawn from 354,400 medical records within a vast NIH database.
The results are promising: In a recent analysis of retrospective data, patients whose clinicians prescribed the same medications the AI would have recommended were 26% more likely to respond to treatment. “An AI visit is very similar to a visit to a clinician,” Alemi said. “But rather than guesswork at the doctor’s office, it’s grounded in real-world data. You get the antidepressant most likely to work for you, tailored to your situation, with no trial and error.”
Alemi is working on the research as co-principal investigator alongside Kevin Lybarger of the College of Engineering and Computing, funded by the Patient-Centered Outcomes Research Institute (PCORI).

For the project to be effective, the team needed to tackle some of AI’s greatest flaws, starting with its capacity for error. AI frequently hallucinates, producing plausible-sounding but inaccurate information. Mimicking language from the internet, language models can also generate inappropriate comments, which could be dangerous in medical advice. By using objective data straight from medical records, Alemi’s team cut out improvisation entirely for the advice portion of its system, which follows a script that tells the AI what to ask, how to ask it, and when to stop. “Since the number of possible recommendations is limited, it’s possible to prepare the script ahead of time,” Alemi said.
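The idea of a scripted, non-generative advice layer can be sketched in a few lines. This is a hypothetical illustration, not the team’s actual code: the script, answer keys, and placeholder recommendations are all invented. The point is the architecture the article describes, in which questions come from a fixed script and the recommendation is a lookup in a table prepared ahead of time, so the language model never free-generates a treatment.

```python
# Hypothetical sketch of a scripted intake flow. The model asks only
# scripted questions, and advice is a table lookup, never generated text.

# Fixed intake script: what to ask, and in what order to ask it.
INTAKE_SCRIPT = [
    ("prior_ssri", "Have you previously taken an SSRI antidepressant?"),
    ("sleep_problems", "Are you having trouble sleeping?"),
    ("weight_change", "Have you gained or lost significant weight recently?"),
]

# Precomputed advice table, prepared ahead of time. Keys are tuples of
# yes/no answers in script order; values are fixed, vetted strings.
ADVICE_TABLE = {
    ("yes", "yes", "no"): "Recommendation A (placeholder)",
    ("no", "no", "no"): "Recommendation B (placeholder)",
}

def run_intake(answer_fn):
    """Walk the script, collect answers, and return a scripted recommendation."""
    answers = tuple(answer_fn(question) for _, question in INTAKE_SCRIPT)
    # Advice is a lookup; anything outside the table defers to a human.
    return ADVICE_TABLE.get(answers, "Refer to a clinician")

# Example: a canned respondent standing in for the patient conversation.
canned = iter(["yes", "yes", "no"])
print(run_intake(lambda q: next(canned)))  # -> Recommendation A (placeholder)
```

Because every possible output exists in the table before any patient ever talks to the system, there is nothing for the model to hallucinate at the advice step.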
Furthermore, AI can be steeped in the historical biases and blind spots found online—for example, the underrepresentation of Black and Latino patients. Alemi’s team addressed this, too, by anchoring their language model in the strict objectivity of medical records and training the advice to reflect what works for underrepresented populations.
The AI tool is designed to rule out conditions such as bipolar disorder and substance-related depression, so that past misdiagnoses do not skew its recommendations. A remote human, blinded to patient identity, monitors the AI–patient interactions and intervenes at high-risk moments, such as when a patient expresses suicidal thoughts.
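The human-in-the-loop safeguard can be sketched as a screening step that runs on every patient message. This is an illustrative sketch only; the phrase list, function names, and escalation behavior are assumptions, and a real system would use far more sophisticated risk detection than substring matching.

```python
# Hypothetical sketch of the safeguard: screen each patient message, and
# on high-risk language pause the AI and alert the remote human monitor.
RISK_PHRASES = ("suicid", "end my life", "hurt myself")

def screen_message(message, alert_monitor):
    """Return True and alert a human if the message looks high-risk."""
    text = message.lower()
    if any(phrase in text for phrase in RISK_PHRASES):
        alert_monitor(message)  # hand the session over to the human monitor
        return True
    return False

alerts = []
print(screen_message("I have been sleeping badly", alerts.append))  # False
print(screen_message("I think about suicide", alerts.append))       # True
```

The design choice mirrors the article: the AI handles the routine script, while a human stays in the loop for exactly the moments where a scripted system should not act alone.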
Smarter tools for public good
Hong Xue, a HAP associate professor and expert in big data, is applying AI to another stubborn public health challenge: helping young people quit tobacco use.
His team is developing an AI-powered intervention that acts like a personalized coach for youth trying to stop using tobacco or vaping products—tracking when cravings tend to hit, spotting the patterns that lead to relapse, and stepping in with support at the right moments. In low-resource settings, he notes, the 24/7 access to this kind of help could be invaluable.
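The pattern-spotting idea behind such a coach can be sketched simply. This is a hypothetical toy example, not the team’s method: it logs when cravings are reported, finds the hour of day when they cluster, and flags that window for a proactive check-in. A real intervention would model far more than hour-of-day.

```python
from collections import Counter
from datetime import datetime

def peak_craving_hour(timestamps):
    """Return the hour of day when the most cravings were logged."""
    hours = Counter(t.hour for t in timestamps)
    hour, _count = hours.most_common(1)[0]
    return hour

# Illustrative craving log: three afternoon reports and one evening one.
logs = [
    datetime(2024, 5, 1, 15, 10),
    datetime(2024, 5, 2, 15, 40),
    datetime(2024, 5, 2, 21, 5),
    datetime(2024, 5, 3, 15, 25),
]
print(peak_craving_hour(logs))  # cravings cluster in the 3 p.m. hour -> 15
```

Knowing that cravings cluster around a particular window is what lets the coach "step in with support at the right moments" rather than messaging at random.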
Xue is also using AI and simulation modeling to analyze Virginia’s tobacco laws, generating data that could inform stronger protections for young people.
Responsible AI, Xue believes, must meet three essential tests: fairness across populations, transparency in decision-making, and strong privacy protections. “These systems can’t be black boxes,” he said. “Patients need to understand them. Clinicians need to trust them.”
For Alemi, AI offers a proving ground for ideas that once felt out of reach, from reimagining clinical care to rethinking the classroom. Currently, he’s piloting a memory-based companion for people with early dementia, and an AI tutor that guides students through his statistics course.
“I bring AI a problem, and it comes back with something I never imagined. It keeps me learning,” he said. “It’s not replacing my work, but it's opening doors to public health solutions that I could never reach on my own.”
AI for Good: Faculty Drive Public Health Innovation
At George Mason, researchers are using artificial intelligence to take on real public health challenges, building smarter solutions and rethinking old systems. Here’s a look at some of their latest work.
Predicting Cancer
HAP Professor Farrokh Alemi is showing that AI can flag cancer risk more accurately than age alone.
Improving Depression Care
Alemi is building an AI tool that mimics clinical intake, matching patients with the right antidepressant faster.
Quitting Vaping
Hong Xue, HAP associate professor, developed a personalized, AI-powered intervention to help young people break the habit of e-cigarettes.
Identifying and Tracking Bruising
An interprofessional team led by SON Associate Professor Katherine Scafide and HAP Professor Janusz Wojtusiak has created AI tools to detect and document bruises more accurately across all skin tones.
Predicting Caregiver Burnout
Wojtusiak is harnessing machine learning to spot early signs of social isolation in Alzheimer’s caregivers, and to build smarter tools to help them.
Testing Coursework
HAP associate professor Sanja Avramovic led a study that had ChatGPT take health policy exams, showing where AI learning excels and where it falls short.