Security and Communication Networks

Adversarial Machine Learning in Secured Intelligent Systems


Publishing date
01 Jun 2023
Status
Published
Submission deadline
27 Jan 2023

Lead Editor

1University of Sydney, Sydney, Australia

2CSIRO, Sydney, Australia

3University of Southern Queensland, Toowoomba, Australia

4Southeast University, Nanjing, China



Description

Data-intensive tasks, such as processing images and video for computer vision, can now be performed at a high level thanks to advances in deep learning techniques. However, adversarial machine learning research has demonstrated that such intelligent systems are not yet as robust as humans. As an emerging field, adversarial machine learning encompasses the study of both the capabilities of machine learning models and the malicious behaviors they may encounter in adversarial scenarios.

The potential vulnerability of ML models to malicious attacks, for example via imperceptible perturbations to input images or videos, can have severe consequences in safety-critical systems. Researchers in machine learning and computer vision have a responsibility to preempt attacks and build safeguards, especially when a task is critical to information security or human life, as in autonomous driving; achieving this requires a deeper understanding of machine learning in adversarial environments. While the negative implications of this nascent technology for data-intensive tasks have been widely discussed, researchers in machine learning have yet to fully explore its numerous positive opportunities. The positive impacts of adversarial machine learning are not limited to boosting the robustness of ML models; they extend to several other domains, including privacy protection, reliability and safety testing, model understanding, and improved generalization on data-driven tasks. Since adversarial machine learning has both positive and negative applications in intelligent systems, ensuring that it is applied in the right scenarios requires the development of a positive framework.
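The small input perturbations mentioned above can be illustrated with the classic fast gradient sign method (FGSM); the sketch below applies it to a toy logistic-regression classifier. The model, its weights, and the step size epsilon are illustrative assumptions for this sketch, not part of this call; real attacks operate on deep networks with much smaller, visually imperceptible perturbations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Fast gradient sign method on a logistic-regression model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM steps epsilon in its sign direction,
    increasing the loss under an L-infinity budget of epsilon.
    """
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy classifier and a correctly classified input (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.1])   # model score w.x + b = 1.1 -> class 1
y = 1.0                    # true label

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.6)

print(sigmoid(np.dot(w, x) + b) > 0.5)      # original input: classified 1
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # perturbed input: prediction flips
```

Each coordinate of the adversarial input differs from the original by exactly epsilon, yet the predicted class changes, which is the phenomenon that motivates the robustness research solicited in this issue.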

This Special Issue aims to bring together researchers and practitioners from a variety of fields, including computer vision, machine learning, and computer security, to synthesize promising ideas and research directions, as well as foster and strengthen cross-community collaboration on both theoretical studies and practical applications for advanced intelligent systems. We welcome both original research and review articles.

Potential topics include but are not limited to the following:

  • Security in machine learning-based systems
  • Adversarial machine learning
  • Adversarial attacks and defenses
  • Adversarial ML in the real world
  • Theoretical studies of adversarial machine learning in advanced intelligent systems
  • Practical applications of adversarial machine learning in advanced intelligent systems
  • Development of frameworks to ensure the positive use of adversarial learning
Journal metrics
Acceptance rate: 10%
Submission to final decision: 143 days
Acceptance to publication: 35 days
CiteScore: 2.600
Journal Citation Indicator: -
Impact Factor: -