Gray-Box Adversarial Testing for Control Systems with Machine Learning Components
The investigation of Neural Networks (NN) in high-assurance systems has a long history, and the advantages of including an NN in the control loop can be substantial. For example, a system may include components with complex dynamics that cannot be modeled from first principles and must instead be learned. Most importantly, a high-assurance system requires the ability to adapt in catastrophic situations. NNs provide such an adaptation mechanism with only limited assumptions on the structure of what is to be learned. However, despite substantial progress in the stability analysis and verification of such systems, the problem of system-level verification of transient behaviors remains a major challenge.
Researchers at Arizona State University have developed a new framework that searches for adversarial tests through functional gradient descent. With system properties expressed in Signal Temporal Logic (STL), a local, optimal-control-based search is combined with a global optimizer to address the non-convex nature of the problem. This approach requires neither analytical information about the system model nor knowledge of the NN architecture. Further, the only information the framework needs is readily available from most model-based development tools for control systems, namely linearizations of the closed-loop system at given operating points. These linearizations approximate the gradient descent directions without the need for computing sensitivity matrices or numerical approximations of the descent directions.
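The search described above can be sketched in a few lines. This is a minimal illustration, not the researchers' implementation: the one-dimensional closed-loop dynamics in `step`, the safety bound, and all function names are hypothetical, and finite differences stand in for the operating-point linearizations that a model-based development tool would export. A local descent minimizes the STL robustness of a safety property over the input signal, and random restarts play the role of the global optimizer.

```python
import numpy as np

def step(x, u):
    # Hypothetical closed-loop dynamics (plant + learned feedback); the search
    # treats it as a black box, using only simulations and local linearizations.
    return 0.9 * x + 0.5 * np.tanh(x) + u

def simulate(x0, u_seq):
    xs = [x0]
    for u in u_seq:
        xs.append(step(xs[-1], u))
    return np.array(xs)

def robustness(xs, bound=4.0):
    # STL safety spec G(|x| < bound): robustness is the worst-case margin,
    # negative if and only if the trace violates the property.
    return bound - np.max(np.abs(xs))

def linearize(x, u, eps=1e-5):
    # Finite-difference Jacobians (A, B) of the closed-loop map, standing in
    # for the linearizations a model-based tool provides at operating points.
    a = (step(x + eps, u) - step(x - eps, u)) / (2 * eps)
    b = (step(x, u + eps) - step(x, u - eps)) / (2 * eps)
    return a, b

def descend(x0, u_seq, iters=100, lr=0.2, u_max=1.0):
    # Local search: gradient descent on the robustness of the input signal,
    # back-propagating through the chain of linearizations along the trace.
    u = np.array(u_seq, dtype=float)
    for _ in range(iters):
        xs = simulate(x0, u)
        k_star = int(np.argmax(np.abs(xs)))   # time of the critical margin
        grad = np.zeros_like(u)
        lam = np.sign(xs[k_star])             # adjoint: d|x_{k*}| / dx_{k*}
        for k in range(k_star - 1, -1, -1):
            a, b = linearize(xs[k], u[k])
            grad[k] = lam * b                 # d|x_{k*}| / du_k
            lam *= a                          # propagate adjoint backward
        # Push |x_{k*}| up (robustness down), respecting actuator limits.
        u = np.clip(u + lr * grad, -u_max, u_max)
    return u, robustness(simulate(x0, u))

# Global layer: restart the local descent from random input signals and keep
# the test with the lowest (most falsifying) robustness value.
rng = np.random.default_rng(0)
u_adv, rho = min((descend(0.0, rng.uniform(-1.0, 1.0, 20)) for _ in range(5)),
                 key=lambda pair: pair[1])
```

On this toy system the saturated inputs found by the descent drive the state well past the bound, so the best restart ends with negative robustness, i.e., a concrete falsifying input signal. The same loop applies to vector-valued states and inputs by replacing the scalar products with matrix-vector products against the exported A and B matrices.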
• Safety-critical systems
• Neural networks
• Adversarial testing
Benefits and Advantages
• Vastly outperforms black-box system testing methods in experiments
• Optimal-control approach searches directly in the infinite-dimensional space of input functions
• Compatible with Recurrent Neural Networks (RNNs), which cannot be handled by existing testing and verification methods