    Robust Training with Adversarial Examples on Industrial Data

    Julian Knaup, Christoph-Alexander Holst, Volker Lohweg

    Chapter/contribution from the book: Schulte, H et al. 2023. Proceedings – 33. Workshop Computational Intelligence: Berlin, 23.-24. November 2023.


    In an era where deep learning models are increasingly deployed in safety-critical
    domains, ensuring their reliability is paramount. The emergence of
    adversarial examples, which can lead to severe model misbehavior, underscores
    this need for robustness. Adversarial training, a technique aimed at
    fortifying models against such threats, is of particular interest. This paper
    presents an approach tailored to adversarial training on tabular data within
    industrial environments.
    The approach encompasses various components, including data preprocessing,
    techniques for stabilizing the training process, and an exploration of diverse
    adversarial training variants, such as Fast Gradient Sign Method (FGSM),
    Jacobian-based Saliency Map Attack (JSMA), DeepFool, Carlini & Wagner
    (C&W), and Projected Gradient Descent (PGD). Additionally, the paper delves
    into an extensive review and comparison of methods for generating adversarial
    examples, highlighting their impact on tabular data in adversarial settings.
    Furthermore, the paper identifies open research questions and hints at future
    developments, particularly in the realm of semantic adversarials. This work
    contributes to the ongoing effort to enhance the robustness of deep learning
    models, with a focus on their deployment in safety-critical industrial contexts.
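
    The abstract references gradient-based attacks such as FGSM among the adversarial training variants. As a rough illustration only, and not the authors' implementation, the following PyTorch sketch shows a single FGSM-based adversarial training step on a batch of tabular features; the names model, optimizer, loss_fn and the perturbation budget epsilon are assumed placeholders.

        import torch

        def fgsm_perturb(model, loss_fn, x, y, epsilon=0.05):
            # Craft FGSM examples: one signed-gradient step of size epsilon on the inputs.
            x_adv = x.clone().detach().requires_grad_(True)
            loss = loss_fn(model(x_adv), y)
            loss.backward()
            return (x_adv + epsilon * x_adv.grad.sign()).detach()

        def adversarial_training_step(model, optimizer, loss_fn, x, y, epsilon=0.05):
            # One training step that averages the loss on clean and perturbed inputs.
            x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
            optimizer.zero_grad()
            loss = 0.5 * (loss_fn(model(x), y) + loss_fn(model(x_adv), y))
            loss.backward()
            optimizer.step()
            return loss.item()

    In practice, epsilon would be chosen relative to the feature scales (for example after the data preprocessing the paper describes), since tabular features are not naturally bounded the way pixel intensities are.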


    Recommended citation for the chapter/contribution
    Knaup, J et al. 2023. Robust Training with Adversarial Examples on Industrial Data. In: Schulte, H et al (eds.), Proceedings – 33. Workshop Computational Intelligence: Berlin, 23.-24. November 2023. Karlsruhe: KIT Scientific Publishing. DOI: https://doi.org/10.58895/ksp/1000162754-9
    License

    This chapter is distributed under the terms of the Creative Commons Attribution-ShareAlike 4.0 license. Copyright is retained by the author(s).

    Peer Review Information

    This book is peer reviewed. More information on the scientific quality assurance of MAP publications can be found here.

    Further Information

    Published on 18 November 2023

    DOI
    https://doi.org/10.58895/ksp/1000162754-9