Ethics, Bias, and Transparency for People and Machines

About Ethics, Bias, and Transparency for People and Machines

Artificial intelligence and machine learning (AI/ML) are a collection of data-driven technologies with the potential to significantly advance scientific discovery in biomedical and behavioral research. Researchers employing these technologies must take steps to minimize the harms that could result from their research, including but not limited to addressing (1) biases in datasets, algorithms, and applications; (2) issues related to identifiability and privacy; (3) impacts on disadvantaged or marginalized groups; (4) health disparities; and (5) unintended, adverse social, individual, and community consequences of research and development. Some of the inherent characteristics of AI/ML, as well as its relative newness in the biomedical and behavioral sciences, have made it difficult for researchers to apply ethical principles in the development and use of AI/ML, particularly for basic research. 

To address these issues, the National Institutes of Health (NIH) Office of Data Science Strategy (ODSS) announced “Administrative Supplements for Advancing the Ethical Development and Use of AI/ML in Biomedical and Behavioral Sciences” on February 3, 2022. The goal of this notice was to support NIH-funded researchers in incorporating ethical considerations into the development and use of AI/ML in biomedical and behavioral research.

Meetings and Reports

Closed Funding Opportunities: 
