
Ethics, Bias, and Transparency for People and Machines
About Ethics, Bias, and Transparency for People and Machines
Artificial intelligence and machine learning (AI/ML) are a collection of data-driven technologies with the potential to significantly advance scientific discovery in biomedical and behavioral research. Researchers employing these technologies must take steps to minimize the harms that could result from their research, including but not limited to addressing (1) biases in datasets, algorithms, and applications; (2) issues related to identifiability and privacy; (3) impacts on disadvantaged or marginalized groups; (4) health disparities; and (5) unintended, adverse social, individual, and community consequences of research and development. Some of the inherent characteristics of AI/ML, as well as its relative newness in the biomedical and behavioral sciences, have made it difficult for researchers to apply ethical principles in the development and use of AI/ML, particularly for basic research.
To address these issues, the National Institutes of Health (NIH) Office of Data Science Strategy (ODSS) announced “Administrative Supplements for Advancing the Ethical Development and Use of AI/ML in Biomedical and Behavioral Sciences” on February 3, 2022. The goal of this notice was to support NIH-funded researchers in advancing the ethical development and use of AI/ML technologies in biomedical and behavioral research.
Closed Funding Opportunities:
- 2022: (NOT-OD-22-065) Expired April 1, 2022. Frequently Asked Questions (FAQs)
Principal Investigator | Institution | Project Title | NIH IC |
---|---|---|---|
Bui, Alex | University of California Los Angeles | PREMIERE: A PREdictive Model Index and Exchange REpository | NIBIB |
Disis, Mary L | University of Washington | Developing Community-Responsive mHealth and AI/ML: Understanding Perspectives of Hispanic Community Members in Washington State | NCATS |
Federman, Alex D | Icahn School of Medicine at Mount Sinai | Natural Language Processing and Automated Speech Recognition to Identify Older Adults with Cognitive Impairment Supplement | NIA |
Finkbeiner, Steven M | J. David Gladstone Institutes | Cell and Network Disruptions and Associated Pathogenesis in Tauopathy and Down Syndrome | NIA |
Goldstein, Benjamin Alan | Duke University | Predictive Analytics in Hemodialysis: Enabling Precision Care for Patients with ESKD | NIDDK |
Herrington, John David | Children's Hospital of Philadelphia | Ethical Perspectives Towards Using Smart Contracts for Patient Consent and Data Protection of Digital Phenotype Data in Machine Learning Environments | NIMH |
Holder, Andre L | Emory University | Characterizing patients at risk for sepsis through Big Data | NIGMS |
Jha, Abhinav K | Washington University | A framework to quantify and incorporate uncertainty for ethical application of AI-based quantitative imaging in clinical decision making | NIBIB |
Jiang, Xiaoqian | University of Texas Health Science Center Houston | Finding combinatorial drug repositioning therapy for Alzheimer's disease and related dementias | NIA |
Kamaleswaran, Rishikesan | Emory University | EQuitable, Uniform and Intelligent Time-based conformal Inference (EQUITI) Framework | NIGMS |
Do, Richard Kinh Gian | Sloan-Kettering Institute Cancer Research | Development and Validation of Prognostic Radiomic Markers of Response and Recurrence for Patients with Colorectal Liver Metastases | NCI |
Langlotz, Curtis P | Stanford University | Population-level Pulmonary Embolism Outcome Prediction with Imaging and Clinical Data: A Multi-Center Study | NHLBI |
Naidech, Andrew M | Northwestern University at Chicago | Hemostasis, Hematoma Expansion, and Outcomes After Intracerebral Hemorrhage | NINDS |
Odero-Marah, Valerie | Morgan State University | Characterization of Health Disparities in African Ancestry and Reduction of Algorithmic Bias | NIMHD |
Ohno-Machado, Lucila | University of California, San Diego | Genetic & Social Determinants of Health: Center for Admixture Science and Technology | NHGRI |
Olatosi, Bankole | University of South Carolina at Columbia | An ethical framework-guided metric tool for assessing bias in EHR-based Big Data studies | NIAID |
Platt, Jodyn Elizabeth | University of Michigan at Ann Arbor | Public trust of artificial intelligence in the precision CDS health ecosystem | NIBIB |
Sabatello, Maya | Columbia University Health Sciences | Blind/Disability and Intersectional Biases in E-Health Records (EHRs) of Diabetes Patients: Building a Dialogue on Equity of AI/ML Models in Clinical Care | NHGRI |
Sjoding, Michael William | University of Michigan at Ann Arbor | Human-AI Collaborations to Improve Accuracy and Mitigate Bias in Acute Dyspnea Diagnosis | NHLBI |
Wolf, Risa Michelle | Johns Hopkins University | Autonomous AI to mitigate disparities for diabetic retinopathy screening in youth during and after COVID-19 | NEI |
Wun, Theodore | University of California at Davis | UC Davis Clinical and Translational Science Center | NCATS |
Zeng, Qing | George Washington University | Use Explainable AI to Improve the Trust of and Detect the Bias of AI Models | NIA |
Zhi, Degui | University of Texas Health Science Center Houston | Genetics of deep-learning-derived neuroimaging endophenotypes for Alzheimer's Disease (Parent grant) | NIA |
This page last reviewed on April 27, 2023