Ethics, Bias, and Transparency for People and Machines

Artificial intelligence and machine learning (AI/ML) comprise a collection of data-driven technologies with the potential to significantly advance scientific discovery in biomedical and behavioral research. Researchers employing these technologies must take steps to minimize the harms that could result from their work, including but not limited to addressing (1) biases in datasets, algorithms, and applications; (2) issues related to identifiability and privacy; (3) impacts on disadvantaged or marginalized groups; (4) health disparities; and (5) unintended, adverse social, individual, and community consequences of research and development. Some of the inherent characteristics of AI/ML, as well as its relative newness in the biomedical and behavioral sciences, have made it difficult for researchers to apply ethical principles to the development and use of AI/ML, particularly in basic research.

To address these issues, the National Institutes of Health (NIH) Office of Data Science Strategy (ODSS) announced “Administrative Supplements for Advancing the Ethical Development and Use of AI/ML in Biomedical and Behavioral Sciences (NOT-OD-22-065)” on February 3, 2022. Answers to Frequently Asked Questions about the NOT-OD-22-065 opportunity are also available. The goal of this Notice is to make the data generated through NIH-funded research AI/ML-ready and to share those data through repositories, knowledgebases, and other data-sharing resources.

This page last reviewed on February 7, 2022