Abstract: |
Neural networks (NNs) are vital to diverse applications such as speech and image recognition, cyber defense, decision-making, and financial forecasting. Deep neural networks (DNNs) are widely adopted for their accuracy, but they demand substantial memory, power, and computational resources. Hardware attacks (e.g., piracy and hardware Trojans) threaten deep learning platforms, and securing the hardware is essential to safeguarding the integrity of the software that runs on it. This paper reviews attacks on deep learning devices and accelerators, summarizing their risks and consequences. Specific attacks, including backdoor insertion, model extraction, and spoofing, can be countered through secure hardware design, memory protection, code integrity checks, and secure boot processes. Hardware accelerators improve DNN performance but introduce their own security risks, so ensuring confidentiality and integrity against potential threats is essential. Hardware Trojans (HTs), along with fault-injection, reverse-engineering, and side-channel attacks, are significant concerns that demand effective mitigation strategies. Continued research and collaboration among hardware designers, AI/ML researchers, and cybersecurity experts are vital to building a secure foundation for global IC design flows, enabling the safe deployment of AI/ML technologies in real-world scenarios.