Are the sensitive data used to train deep learning algorithms safe from attacks?

Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning

Deep learning, a machine learning method inspired by the information processing of a biological brain, is being successfully applied to many types of potentially sensitive user data. Massive datasets of user speech and images, medical records, financial data, and location data points are fed to these systems to train their algorithms, with concerning privacy implications. How secure are these data records, especially in a federated learning setting where training data is distributed among multiple parties?

Nasr et al. present a comprehensive framework for the privacy analysis of deep neural networks, based on novel white-box membership inference attacks. Previous research had focused on black-box attacks, a scenario in which the attacker's observations are limited to the model's outputs and all intermediate computations remain hidden. White-box inference attacks, in which the attacker exploits knowledge of the model's architecture and parameters, are more accurate than their black-box counterparts. They also correspond better to many real-world federated learning scenarios, where the participants themselves can be potential attackers.
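To make the distinction concrete, the sketch below (a PyTorch-style illustration, not the authors' code) shows the extra signal a white-box attacker can collect for a candidate record: besides the output probabilities that a black-box attacker already sees, it can compute the loss and the gradient of that loss with respect to every layer's parameters, and feed all of these as features to a separate attack classifier. The model, the record (x, y), and the downstream attack classifier are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def whitebox_features(model, x, y):
    """Collect membership-inference features for one candidate record (x, y)."""
    model.zero_grad()
    logits = model(x)                   # visible to both black-box and white-box attackers
    probs = F.softmax(logits, dim=1)
    loss = F.cross_entropy(logits, y)
    loss.backward()                     # requires access to the parameters: white-box only

    # Per-layer gradient norms of the loss w.r.t. the model parameters:
    # the key white-box signal exploited by the attacks.
    grad_norms = torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]
    )
    return probs.detach(), loss.detach(), grad_norms.detach()

# These features would then be fed to a separate attack model (e.g. a small
# neural network) trained to output "member" or "non-member".
```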

Nasr et al. evaluated all attack types on three datasets consisting of color images, online shopping records, and hospital discharge records. In their attacks, the authors analyzed and exploited the privacy vulnerabilities of the stochastic gradient descent (SGD) algorithm, the de facto standard for training artificial neural networks.
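The intuition behind the SGD vulnerability is that training repeatedly pushes the loss gradient on training records toward zero, so a record that was part of the training set tends to produce a smaller gradient norm than a fresh one. The sketch below illustrates this with a naive threshold; the threshold value is a made-up placeholder, and the paper trains a dedicated attack model on gradient features rather than thresholding a single statistic.

```python
import torch
import torch.nn.functional as F

def gradient_norm(model, x, y):
    """L2 norm of the loss gradient over all model parameters for record (x, y)."""
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    squared = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
    return torch.sqrt(squared)

def naive_membership_guess(model, x, y, threshold=1.0):
    # A small gradient norm means SGD has already fitted this record,
    # which makes it more likely to be a member of the training set.
    return gradient_norm(model, x, y) < threshold
```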

Results show that deep learning models previously assessed as not very vulnerable to black-box inference attacks can be substantially more vulnerable to white-box attacks. The DenseNet model, for instance, which showed a black-box inference accuracy of only 54.5% (50% being the baseline for a random guess), yielded a white-box attack accuracy of 74.3%. In a federated learning setting, where the training data is distributed among multiple parties, adversarial participants were shown to be able to run active membership inference attacks against other participants, pushing the learning algorithm into leaking information about the other parties' data.
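The active attack in the federated setting can be sketched as follows (an assumed PyTorch-style client, not the authors' code): before submitting its update, the adversarial participant runs gradient ascent on a target record, increasing the shared model's loss on it. If the record belongs to another party's training data, that party's subsequent SGD steps pull the loss back down sharply, and this reaction is a strong membership signal in later rounds. The function name and the ascent_lr parameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_local_update(global_model, target_x, target_y, ascent_lr=0.1):
    """Craft a malicious 'local update' that raises the loss on a target record."""
    loss = F.cross_entropy(global_model(target_x), target_y)
    grads = torch.autograd.grad(loss, list(global_model.parameters()))
    with torch.no_grad():
        for p, g in zip(global_model.parameters(), grads):
            p.add_(ascent_lr * g)   # gradient *ascent*: increase the loss on the target
    # The modified parameters are uploaded as this participant's update; the
    # attacker then observes how future global models behave on the target record.
    return global_model.state_dict()
```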

These findings highlight the vulnerability of potentially sensitive training data used in deep neural networks. They suggest that additional precautionary measures are needed, especially in a federated learning context.

Sensitive data used to train deep learning algorithms, long considered safe, are in fact vulnerable to inference attacks. Additional precautionary measures are needed.