Risk Management Framework-Based Failure Mode and Effect Analysis for AI Risk Assessment

Authors

  • Yunarso Anang Department of Statistical Computing, Politeknik Statistika STIS, Jakarta, Indonesia
  • Lya Hulliyyatus Suadaa Department of Statistical Computing, Politeknik Statistika STIS, Jakarta, Indonesia
  • Lutfi Rahmatuti Maghfiroh Department of Statistical Computing, Politeknik Statistika STIS, Jakarta, Indonesia / Department of Computer Science and Engineering, University of Yamanashi, Kofu, Japan
  • Nori Wilantika Department of Statistical Computing, Politeknik Statistika STIS, Jakarta, Indonesia
  • Masakazu Takahashi Department of Computer Science and Engineering, University of Yamanashi, Kofu, Japan
  • Yoshimichi Watanabe Department of Computer Science and Engineering, University of Yamanashi, Kofu, Japan

DOI:

https://doi.org/10.46604/aiti.2025.14609

Keywords:

AI risk assessment, failure mode and effect analysis, FMEA, AI incident database, NIST’s AI RMF

Abstract

As artificial intelligence (AI) technologies continue to spread into everyday life, developers must ensure their benefits while minimizing the risk of adverse impacts. This study aims to evaluate risks in real-world AI applications using the AI Incident Database. It employs Failure Mode and Effect Analysis (FMEA) and the National Institute of Standards and Technology (NIST) AI Risk Management Framework to identify failures, their causes and effects, and to assess how current systems address them. A total of 100 incident reports were analyzed. The findings indicate frequent failures in autonomous systems and biased predictions. Seven cases were classified in the highest risk categories, including those involving physical harm and loss of life. Over 80% of failures originated from algorithmic flaws or poor data quality. The method successfully evaluates the risks in current AI applications, revealing critical gaps in risk management and emphasizing the urgent need for targeted safeguards and proactive mitigation strategies.
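For illustration, the sketch below shows one way an FMEA-style record for an AI incident report might be structured and scored. It assumes the classical Risk Priority Number (severity × occurrence × detection), and the field names, rating scales, and threshold are invented for demonstration; they are not the scoring scheme or taxonomy used in the study.

from dataclasses import dataclass

# Illustrative sketch only: a classical FMEA record and Risk Priority Number
# (RPN) computation applied to an AI incident entry. Field names, 1-10 rating
# scales, and the 200 threshold are assumptions, not the paper's actual scheme.

@dataclass
class AIIncidentFMEA:
    incident_id: int      # e.g., an AI Incident Database entry number
    failure_mode: str     # what went wrong (e.g., "biased prediction")
    cause: str            # e.g., "algorithmic flaw", "poor data quality"
    effect: str           # observed harm (e.g., "physical harm")
    severity: int         # 1 (negligible) .. 10 (loss of life)
    occurrence: int       # 1 (rare) .. 10 (frequent)
    detection: int        # 1 (easily detected) .. 10 (practically undetectable)

    def rpn(self) -> int:
        """Classical FMEA Risk Priority Number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection


def highest_risk(records: list[AIIncidentFMEA], threshold: int = 200) -> list[AIIncidentFMEA]:
    """Return records whose RPN meets or exceeds an assumed priority threshold, highest first."""
    return sorted(
        (r for r in records if r.rpn() >= threshold),
        key=lambda r: r.rpn(),
        reverse=True,
    )


if __name__ == "__main__":
    example = AIIncidentFMEA(
        incident_id=1,
        failure_mode="autonomous system misclassifies pedestrian",
        cause="algorithmic flaw",
        effect="physical harm",
        severity=10,
        occurrence=4,
        detection=6,
    )
    print(example.rpn())  # 240 -> exceeds the assumed threshold, so highest-risk here

In practice, the severity, occurrence, and detection ratings would be assigned from the incident narratives and the risk framework's categories rather than set by hand as in this toy example.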

References

A. Drapkin, “AI Gone Wrong: An Updated List of AI Errors, Mistakes and Failures,” https://tech.co/news/list-ai-failures-mistakes-errors, 2024.

“ISO 31000:2018 Risk Management — Guidelines,” ISO, 2018.

“Artificial Intelligence Risk Management Framework (AI RMF 1.0),” National Institute of Standards and Technology (U.S.), Gaithersburg, MD, NIST AI 100-1, 2023.

“ISO/IEC 23894:2023 Information Technology — Artificial Intelligence — Guidance on Risk Management,” ISO/IEC, 2023.

B. Xia, et al., “Towards Concrete and Connected AI Risk Assessment (C2AIRA): A Systematic Mapping Study,” Proceedings of the IEEE Conference on AI Engineering – Software Engineering for AI (CAIN 2023), IEEE Press, pp. 104-116, 2023.

N. R. Tague, The Quality Toolbox, 2nd Ed., Milwaukee, ASQ Quality Press, 2005.

P. Haapanen and A. Helminen, “Failure Mode and Effects Analysis of Software-Based Automation Systems,” Radiation and Nuclear Safety Authority (STUK), Technical Report STUK YTO TR 190, 2002.

M. Takahashi, R. Nanba, and Y. Fukue, “A Proposal of Operational Risk Management Method Using FMEA for Drug Manufacturing Computerized System,” Transactions of the Society of Instrument and Control Engineers, vol. 48, no. 5, pp. 285-294, 2012.

S. McGregor, “Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 17, pp. 15458-15463, 2021.

M. Wei and Z. Zhou, “AI Ethics Issues in Real World: Evidence from the AI Incident Database,” Proceedings of the 56th Hawaii International Conference on System Sciences (HICSS 56), IEEE Press, pp. 4923-4932, 2023.

A. Jobin, M. Ienca, and E. Vayena, “The Global Landscape of AI Ethics Guidelines,” Nature Machine Intelligence, vol. 1, no. 9, pp. 389-399, 2019.

A. Kaun, “Suing the Algorithm: The Mundanization of Automated Decision-Making in Public Services Through Litigation,” Information, Communication & Society, vol. 25, no. 14, pp. 2046-2062, 2022.

B. Heinrichs, “Discrimination in the Age of Artificial Intelligence,” AI & SOCIETY, vol. 37, pp. 143-154, 2022.

K. Brecker, S. Lins, and A. Sunyaev, “Why it Remains Challenging to Assess Artificial Intelligence,” Proceedings of the 56th Hawaii International Conference on System Sciences, pp. 5242-5251, 2023.

S. S. Chanda and D. N. Banerjee, “Omission and Commission Errors Underlying AI Failures,” AI & SOCIETY, vol. 37, pp. 937-960, 2024.

Y. Wen and M. Holweg, “A Phenomenological Perspective on AI Ethical Failures: The Case of Facial Recognition Technology,” AI & SOCIETY, vol. 39, pp. 1929-1946, 2024.

J. Li and M. Chignell, “FMEA-AI: AI Fairness Impact Assessment Using Failure Mode and Effects Analysis,” AI and Ethics, vol. 2, no. 4, pp. 837-850, 2022.

L. T. Ostrom and C. A. Wilhelmsen, Risk Assessment: Tools, Techniques, and Their Applications, 2nd Ed., Hoboken, NJ: John Wiley & Sons, 2019.

M. Feffer, N. Martelaro, and H. Heidari, “The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements,” Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO '23), no. 3, pp. 1-11, 2023.

N. Pittaras and S. McGregor, “A Taxonomic System for Failure Cause Analysis of Open Source AI Incidents,” Proceedings of the SafeAI 2023 Workshop, vol. 3381, pp. 17-28, 2023.

M. Hoffmann and H. Frase, “Adding Structure to AI Harm,” Center for Security and Emerging Technology (CSET), 2023.

OECD, “OECD Framework for the Classification of AI Systems,” OECD Digital Economy Papers, No. 323, 2022.

S. Ahmed, et al., “Examining the Potential Impact of Race Multiplier Utilization in Estimated Glomerular Filtration Rate Calculation on African-American Care Outcomes,” Journal of General Internal Medicine, vol. 36, no. 2, pp. 464-471, 2021.

S. Hamza-Cherif, L. F. Kazi Tani, and N. Settouti, “Improving Healthcare Communication: AI-Driven Emotion Classification in Imbalanced Patient Text Data with Explainable Models,” Advances in Technology Innovation, vol. 9, no. 2, pp. 129-142, 2024.

G. Kotonya and I. Sommerville, Requirements Engineering: Processes and Techniques, Chichester: John Wiley & Sons, 1998.

C. Carlson, Effective FMEAs: Achieving Safe, Reliable, and Economical Products and Processes Using Failure Mode and Effects Analysis, Hoboken, NJ: John Wiley & Sons, 2012.

A. Meriem and M. Abdelaziz, “Combining Model-Based Testing and Failure Modes and Effects Analysis for Test Case Prioritization: A Software Testing Approach,” Journal of Computer Science, vol. 15, no. 4, pp. 435-449, 2019.

C. Spreafico and A. Sutrisno, “Artificial Intelligence Assisted Social Failure Mode and Effect Analysis (FMEA) for Sustainable Product Design,” Sustainability, vol. 15, no. 11, article no. 8678, 2023.

A. Agarwal and M. J. Nene, “Addressing AI Risks in Critical Infrastructure: Formalising the AI Incident Reporting Process,” 2024 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, pp. 1-6, 2024.

Published

2025-09-25

How to Cite

[1]
Yunarso Anang, Lya Hulliyyatus Suadaa, Lutfi Rahmatuti Maghfiroh, Nori Wilantika, Masakazu Takahashi, and Yoshimichi Watanabe, “Risk Management Framework-Based Failure Mode and Effect Analysis for AI Risk Assessment”, Adv. technol. innov., vol. 10, no. 4, pp. 340–357, Sep. 2025.

Issue

Vol. 10, No. 4 (2025)

Section

Articles