South Asian Research Journal of Engineering and Technology (SARJET)
Volume-5 | Issue-06
Review Article
Securing the Black Box: A Systematic Review of the Critical Intersection Between Explainable AI and Trusted Hardware Implementations
Abigail Adeniran, Zaynab B. Bello, Temidayo J. Omotinugbon, Ayokunle Adeyemo
Published: Dec. 26, 2023
Abstract
Artificial intelligence (AI) models have gained widespread adoption across numerous fields. However, these models often require substantial computational power and memory resources, making their deployment on resource-constrained IoT devices challenging. The deployment of pre-trained AI models on IoT devices is therefore frequently outsourced to third parties who may not be fully trusted. In some scenarios, these third parties may act maliciously, potentially embedding harmful circuitry within the hardware design of the AI model. As AI models increasingly penetrate decision-critical and safety-critical domains that directly impact human lives, the development of Explainable AI (XAI) techniques has become essential. These techniques enhance our understanding of AI model operations and illuminate the rationale behind their decision-making processes. XAI methodologies help identify the specific features detected by individual neurons within the model architecture. In this work, we examine two XAI techniques, layer-wise relevance propagation and activation maximization, and explore how they can contribute to the secure deployment of AI models in hardware implementations. We analyze the application of these techniques from dual perspectives: that of an attacker seeking to compromise the accuracy of AI models deployed on IoT devices, and that of a defender working to preserve model accuracy and integrity. This dual analysis provides comprehensive insights into securing AI models within hardware environments.
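To make the first of the two techniques concrete: layer-wise relevance propagation (LRP) redistributes a model's output score backwards, layer by layer, so that each input feature receives a relevance value indicating its contribution to the decision. The sketch below is a minimal, hypothetical illustration of the widely used LRP-ε rule for fully connected ReLU layers in NumPy; it is not taken from the paper, and the function name and interface are assumptions for illustration only.

```python
import numpy as np

def lrp_epsilon(weights, biases, activations, relevance, eps=1e-6):
    """Minimal sketch of the LRP-epsilon rule for dense ReLU layers.

    weights[i], biases[i] : parameters of layer i (input @ W + b)
    activations[i]        : the INPUT to layer i (activations[0] = network input)
    relevance             : relevance at the output (typically the output scores)
    """
    R = relevance
    # Walk the layers from output back to input.
    for W, b, a in zip(reversed(weights), reversed(biases), reversed(activations)):
        z = a @ W + b                    # recompute pre-activations of this layer
        s = R / (z + eps * np.sign(z))   # stabilized division (the "epsilon" rule)
        c = s @ W.T                      # redistribute relevance to the layer input
        R = a * c                        # element-wise: relevance of each input unit
    return R
```

For a defender, input-level relevance maps like this can reveal whether a deployed model attends to the expected features; for an attacker, the same maps expose which neurons and weights are most decision-critical and thus attractive targets for hardware tampering. Note that with non-zero biases some relevance is absorbed by the bias terms, so exact conservation of total relevance holds only in the bias-free case.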