South Asian Research Journal of Engineering and Technology (SARJET)
Volume-7 | Issue-01
Original Research Article
Developing Frameworks for Assessing and Mitigating Bias in AI Systems: A Case Study on Ensuring Fairness in AI Diagnostic Tools through Diverse Training Datasets to Prevent Misdiagnosis in Underrepresented Populations
Opeyemi Oluwagbenga Owolabi, Opeyemi Bilqees Adewusi, Funmilayo Arinola Ajayi, Ajoke A. Asunmonu, Ozoemezim Chukwurimazu, Jefferson Ederhion, Ohi Moses Ayeni
Published: Feb. 27, 2025
Abstract
The rise of AI in healthcare has opened the way for revolutionary changes in diagnostic capability. Alongside this progress, critical issues of bias and fairness arise, particularly concerning populations that are under-represented in training datasets. This study explores the challenge that bias in diagnostic artificial intelligence poses to healthcare, and the contribution of diverse, representative training datasets to reducing such bias and safeguarding health equity. The study is quantitative: survey data were collected through purposive sampling from 160 stakeholders, including clinicians, patients, and AI developers. Structured questionnaires captured perceptions of AI bias, fairness, and the effectiveness of mitigation strategies. The findings show strong concern over bias in AI systems, with 48.75% of respondents agreeing that bias in training datasets is a major hindrance to clinical decision-making. A majority (53.33%) said that dataset diversity helps promote fairness, while 76.88% supported frequent algorithmic audits. Other key strategies recommended were stakeholder collaboration and transparency. The study therefore concludes that bias in diagnostic AI tools must be addressed along the dimensions of data diversity, algorithmic auditing, collaboration among stakeholders, and transparency of processing. Robust AI-equity assurance frameworks are accordingly needed to guarantee fairness, reliability, and inclusiveness of diagnoses across diverse populations.