Neha Srivathsa
I am a third year PhD candidate in Computer Science at Stanford University. I am advised by Professor Sherri Rose and co-advised by Professor Sanmi Koyejo, and I am a member of the Health Policy Data Science Lab, led by Professor Rose.
My research interests are at the intersection of machine learning algorithms and health equity. This includes using statistical, machine learning, and mixed-methods approaches to understand how social factors impact health disparities, as well as exploring how the use of algorithms in healthcare settings can amplify or reduce health inequities. My current academic side quest examines how critical perspectives are marginalized in AI research, and how dominant research norms might be reshaped.
Previously, I received my B.S. in Computer Science from Stanford, and worked at the Broad Institute of MIT and Harvard. Outside of research, I love martial arts, dancing, and ceramics!
Email /
LinkedIn /
Google Scholar
|
Causal modeling of chronic kidney disease in a participatory framework for informing the inclusion of social drivers in health algorithms
Agata Foryciarz*, Neha Srivathsa*, Oshra Sedan, Lisa Goldman Rosas, Sherri Rose
Under revision; 2025
preprint
Using community-based system dynamics methodology, we worked with patients directly impacted by chronic kidney disease to co-create a formal causal graph representing the social factors affecting the condition and the relationships between them.
|
Perspective: Machine Learning for Health Should Consider Social Drivers of Health
Neha Srivathsa, Sherri Rose
Machine Learning for Health (ML4H), to appear; 2025
Presenting a correspondence between social drivers of health and algorithmic harm frameworks, we show how underemphasized algorithmic harms can amplify health inequities. We recommend the consideration of social factors throughout the pipeline of sociotechnical system development.
|
Evaluating anti-LGBTQIA+ medical bias in large language models
Crystal T Chang*, Neha Srivathsa*, Charbel Bou-Khalil, Akshay Swaminathan, Mitchell R Lunn, Kavita Mishra, Sanmi Koyejo^, Roxana Daneshjou^
PLOS Digital Health; 2025
paper
We evaluated the potential of LLMs to propagate anti-LGBTQIA+ medical bias and misinformation by testing four LLMs with prompts designed by LGBTQIA+ health experts, including prompts pertaining to historical and current biases. We assessed responses for appropriateness (including accuracy and bias) and clinical utility.
|
Teaching
- Course Assistant,
Ethics, Public Policy, and Technological Change (CS 182),
Stanford University, Winter 2026
- Teaching Assistant,
Biodesign for Digital Health (MED 273),
Stanford University, Fall 2021
- Head Teaching Assistant,
Biodesign Fundamentals (MED 275B),
Stanford University, Spring 2020
- Teaching Assistant,
Biodesign for Digital Health (MED 273),
Stanford University, Fall 2019
- Teaching Assistant,
Biodesign Fundamentals (MED 275B),
Stanford University, Spring 2019