Machine learning (ML) algorithms are the core technology in artificial intelligence (AI) applications such as self-driving vehicles, where they make important decisions by performing a variety of data classification or prediction tasks. Attacks on the data or the algorithms can lead to misclassification or misprediction and thereby cause the applications to fail. For each dataset, the parameters of an ML algorithm must be tuned separately to reach a desirable classification or prediction accuracy. Typically, ML experts tune the parameters empirically, which can be time consuming and does not guarantee an optimal result. To this end, some research suggests analytical approaches that tune the ML parameters for maximum accuracy. However, none of these works consider the ML performance under attack in their tuning process.
Researchers at Arizona State University have developed a new analytical framework for tuning ML parameters to be secure against attacks while maintaining high accuracy. The framework finds the optimal set of parameters by defining a novel objective function that incorporates test results for both ML accuracy and security against attacks. To validate the framework, an AI application was implemented that recognizes whether a subject's eyes are open or closed by applying the k-Nearest Neighbors (kNN) algorithm to electroencephalogram (EEG) signals. In this application, two main parameters of kNN, the number of neighbors (k) and the distance metric type, are chosen for tuning. An input data perturbation attack, one of the most common attacks on ML algorithms, is used to test the security of the application, and an exhaustive search is used to solve the optimization problem. The experimental results show that k = 43 with the cosine distance metric is the optimal kNN configuration for the EEG dataset, yielding 83.75% classification accuracy and reducing the attack success rate to 5.21%.
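The tuning procedure described above can be sketched in a few lines of code. The sketch below is illustrative only, not the researchers' implementation: it uses synthetic two-class data in place of the EEG dataset, models the input perturbation attack as additive Gaussian noise on test samples, and assumes a simple objective of the form accuracy minus a weighted attack success rate. All function names and the weighting factor are assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k, metric):
    """Binary kNN prediction with a euclidean or cosine distance metric."""
    if metric == "cosine":
        a = X_test / np.linalg.norm(X_test, axis=1, keepdims=True)
        b = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)
        dist = 1.0 - a @ b.T                       # cosine distance matrix
    else:
        dist = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    neighbors = np.argsort(dist, axis=1)[:, :k]    # indices of k nearest points
    votes = y_train[neighbors]                      # labels in {0, 1}
    return (votes.mean(axis=1) >= 0.5).astype(int)  # majority vote

def attack_success_rate(X_train, y_train, X_test, y_test, k, metric, eps, rng):
    """Fraction of correctly classified samples flipped by a noise perturbation."""
    clean = knn_predict(X_train, y_train, X_test, k, metric)
    correct = clean == y_test
    X_adv = X_test + eps * rng.standard_normal(X_test.shape)  # perturbed inputs
    adv = knn_predict(X_train, y_train, X_adv, k, metric)
    flipped = correct & (adv != y_test)
    return flipped.sum() / max(correct.sum(), 1)

# Synthetic stand-in for the EEG data: two Gaussian classes in 4 dimensions.
rng = np.random.default_rng(0)
X0 = rng.standard_normal((150, 4)) + 1.5
X1 = rng.standard_normal((150, 4)) - 1.5
X = np.vstack([X0, X1])
y = np.array([0] * 150 + [1] * 150)
perm = rng.permutation(300)
X_train, y_train = X[perm[:200]], y[perm[:200]]
X_test, y_test = X[perm[200:]], y[perm[200:]]

# Exhaustive search over (k, metric), maximizing accuracy - lam * attack rate.
lam = 1.0  # assumed trade-off weight between accuracy and robustness
best = None
for k in range(1, 16, 2):
    for metric in ("euclidean", "cosine"):
        acc = (knn_predict(X_train, y_train, X_test, k, metric) == y_test).mean()
        asr = attack_success_rate(X_train, y_train, X_test, y_test,
                                  k, metric, eps=0.5, rng=rng)
        score = acc - lam * asr
        if best is None or score > best[0]:
            best = (score, k, metric, acc, asr)

best_score, best_k, best_metric, best_acc, best_asr = best
print(f"best k={best_k}, metric={best_metric}, "
      f"accuracy={best_acc:.3f}, attack success={best_asr:.3f}")
```

The key point the sketch illustrates is that the objective function scores each candidate configuration on both clean accuracy and attack success rate, so the search can prefer a slightly less accurate configuration that is substantially harder to attack.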
Potential Applications
• Machine learning
• Autonomous systems
Benefits and Advantages
• Delivers optimal ML configurations for accuracy and robustness against attacks