Navigating Algorithmic Equity: Uncovering Diversity and Inclusion Incidents in Artificial Intelligence
Keywords:
Algorithmic Fairness, Artificial Intelligence Ethics, Bias in AI, Diversity and Inclusion, Discrimination in Machine Learning, Responsible AI, Algorithmic Accountability, Equity in Technology
Abstract
As artificial intelligence (AI) systems increasingly shape decision-making in critical domains, from healthcare to criminal justice, their societal impact demands careful scrutiny. Despite advances in algorithmic performance, growing evidence points to systemic issues of bias, exclusion, and inequity embedded within AI models and datasets. This paper offers a comprehensive investigation into documented incidents in which AI systems have adversely affected marginalized populations because diversity and inclusion considerations were lacking. We examine the underlying causes, including biased data, non-representative training sets, and opaque algorithmic design. By analyzing real-world case studies and evaluating mitigation strategies, we assess the effectiveness of existing fairness frameworks and ethical guidelines. Our findings underscore the need for more robust socio-technical interventions, interdisciplinary collaboration, and proactive governance to ensure equitable AI development and deployment. This work contributes to the growing discourse on algorithmic accountability and provides practical recommendations for fostering inclusive and responsible AI systems.
License

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Authors retain the copyright of articles published in this journal; the license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly cited.