Overview on Deep Learning Hardware Attacks

Source: مجلة الدراسات المستدامة (Journal of Sustainable Studies)
Publisher: الجمعية العلمية للدراسات التربوية المستدامة (Scientific Association for Sustainable Educational Studies)
Main Author: Ismaeel, Maath F. (Author)
Co-Authors: Hammood, Maytham M. (Co-Author), Alasad, Qutaiba (Co-Author)
Volume/Issue: Vol. 5, No. 4
Peer-Reviewed: Yes
Country: Iraq
Gregorian Year: 2023
Hijri Year: 1445
Month: September
Pages: 183-195
ISSN: 2663-2284
MD Number: 1410001
Content Type: Research Papers and Articles
Language: English
Databases: EduSearch

Abstract: Neural networks (NNs) are vital to diverse applications such as speech and image recognition, cyber defense, decision-making, and financial forecasting. Deep neural networks (DNNs) are popular for their accuracy, but they demand substantial memory, power, and computational resources. Hardware attacks (e.g., piracy, Trojans) threaten deep learning platforms, and securing the hardware is crucial to safeguarding software integrity. This paper reviews attacks on deep learning devices and accelerators, briefly surveying their risks and consequences. Specific attacks (backdoor insertion, model extraction, spoofing, etc.) are addressed through secure hardware design, memory protection, code integrity checks, and secure boot processes. Hardware accelerators for DNNs enhance performance but face security risks, so ensuring their confidentiality and integrity against potential threats is crucial. Hardware Trojans (HTs) and other attacks (fault injection, reverse engineering, side-channel) are significant concerns, and effective strategies are necessary to mitigate these threats. Continued research and collaboration among hardware designers, AI/ML researchers, and cybersecurity experts are vital to building a secure foundation for global IC design flows, enabling the safe deployment of AI/ML technologies in real-world scenarios.
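The abstract names code integrity checks and secure boot among the defenses against tampered deep learning deployments. As a minimal illustrative sketch only (the function names and flow are assumptions of this summary, not a mechanism described in the paper), the Python snippet below verifies a model-weights file against a trusted SHA-256 digest before it is handed to a loader, the kind of check that can detect weights swapped out in the supply chain (e.g., backdoor insertion):

import hashlib
import hmac

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_model_if_trusted(path: str, trusted_digest: str) -> bytes:
    """Hypothetical integrity gate: refuse to load model weights whose
    digest does not match a trusted reference recorded at deployment."""
    actual = sha256_of_file(path)
    # Constant-time comparison to avoid leaking match position timing.
    if not hmac.compare_digest(actual, trusted_digest):
        raise RuntimeError(f"Integrity check failed for {path}")
    with open(path, "rb") as f:
        return f.read()  # hand the verified bytes to the real model loader

In a full secure-boot flow the reference digest would itself live in tamper-resistant storage (e.g., a TPM or on-die fuses) and the check would run before any untrusted code executes; this sketch models only the verification step.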


Similar Items