The Illusion of Security: How AI-Powered Surveillance Erodes Privacy, Amplifies Inequality, and Redefines Democracy in the Digital Age
DOI: https://doi.org/10.70670/sra.v3i4.1249

Abstract
AI-powered surveillance tools—facial recognition, biometric tracking, and social media monitoring—are reshaping national security at the cost of individual privacy. This study examines how these technologies, often justified as crime deterrents, exploit legal ambiguities to normalize mass privacy intrusions. Drawing on case studies from 2018–2024, including leaked government contracts and grassroots resistance campaigns, the research reveals systemic flaws: facial recognition systems misidentify darker-skinned individuals at rates 34% higher than for lighter-skinned counterparts, while predictive policing algorithms funnel police into low-income neighborhoods based on biased historical data. The study also uncovers how governments and corporations collude to bypass regulations. These systems disproportionately target marginalized groups, such as asylum seekers whose therapy sessions are transcribed and shared with immigration authorities. Despite these harms, communities are fighting back: Indigenous groups in Australia use traditional face-painting to confuse biometric scanners, while Tunisian developers create open-source apps that blur protesters’ faces in real time. The myth that “more surveillance means more safety” collapses under scrutiny: in cities such as London and Jakarta, violent crime rose 19% under dense AI surveillance while trust in police hit historic lows. The study calls for prohibitions on live facial recognition in public spaces, independent bias audits of algorithms, and Barcelona-inspired models in which citizens govern surveillance tools. By elevating the voices of those misidentified by flawed systems (protesters, migrants, informal traders), the research positions privacy not as a privilege but as democracy’s shield against automated oppression.
