Beyond Accuracy: Adversarial Robustness of Deep Learning-Based Browser Fingerprinting Systems
Abstract
The paradigm of online user identification has increasingly shifted from stateful cookies to stateless browser fingerprinting, a technique significantly amplified in efficacy by deep learning. State-of-the-art methods, such as those employing Long Short-Term Memory (LSTM) networks, have demonstrated remarkable accuracy, surpassing 94% in identifying unique users across various platforms and conditions [1]. This advancement, however, has predominantly focused on optimizing classification accuracy, leaving a critical security dimension largely unexamined: the vulnerability of these sophisticated models to adversarial attacks.
This paper addresses this gap by investigating the adversarial robustness of AI-based browser fingerprinting systems. We introduce a novel hybrid deep learning architecture that combines 1D Convolutional Neural Networks (CNNs) for robust, localized feature extraction with LSTMs for sequential pattern analysis. To evaluate this architecture, we develop a domain-specific framework for generating adversarial browser fingerprints by adapting established gradient-based attack methods to the unique, heterogeneous feature space of browser data.
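To make the two ideas above concrete, the following is a minimal, illustrative sketch (not the paper's actual implementation) of a hybrid 1D-CNN + LSTM classifier together with an FGSM-style gradient-based perturbation, one of the established attacks such a framework might adapt. All layer sizes, the feature dimension (64), the user count (100), and the perturbation budget are illustrative assumptions; real browser-fingerprint features are heterogeneous and would require domain-specific constraints on the perturbation.

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """Hypothetical hybrid model: 1D convolutions extract localized
    feature patterns; an LSTM models the resulting sequence."""
    def __init__(self, n_features=64, n_users=100):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1),  # local feature extraction
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_users)

    def forward(self, x):        # x: (batch, n_features)
        x = x.unsqueeze(1)       # (batch, 1, n_features) for Conv1d
        x = self.conv(x)         # (batch, 32, n_features // 2)
        x = x.transpose(1, 2)    # (batch, seq_len, 32) for the LSTM
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])    # per-user logits

model = HybridCNNLSTM()
x = torch.randn(8, 64, requires_grad=True)   # 8 synthetic fingerprints
y = torch.randint(0, 100, (8,))              # synthetic user labels

# FGSM-style evasion: perturb features in the direction of the loss gradient.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
x_adv = (x + 0.1 * x.grad.sign()).detach()   # epsilon = 0.1 is an assumption
```

In practice, an adapted attack would additionally project `x_adv` back onto valid fingerprint values (e.g., categorical or bounded attributes), which is precisely where the domain-specific framework described above departs from the generic image-domain formulation.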
Through a comprehensive comparative analysis against a state-of-the-art LSTM model, our experimental results demonstrate that the proposed hybrid model not only maintains competitive accuracy on benign data but also exhibits significantly superior resilience to adversarial evasion attacks. These findings establish adversarial robustness as an essential, co-equal metric alongside accuracy for the evaluation and deployment of next-generation user identification systems, highlighting the need for a paradigm shift from a singular pursuit of accuracy to a more holistic, security-conscious approach.