The Digital Traces of Surveillance: Balancing Privacy and Security
Lucia Moxey
- The Core Issue
- Feb 2

Every day, a commute through a city can leave a detailed digital trace: an automated license plate reader captures a photo in a split second, a Wi-Fi “handshake” tracks a phone’s journey through a subway station, and a smart doorbell logs a face on the sidewalk. These data-collecting technologies offer real benefits to society, such as allowing emergency services to locate a 911 caller or police to identify a suspect in a crowded public area. Yet even without any suspicion of a crime, an individual accumulates a record of activity. The truth is, modern surveillance thrives through accumulation, not confrontation. Once a moment is captured, the digital footprint it leaves remains accessible long after the moment has passed.
On one hand, these technologies promise efficiency and security, attributes prized by a society that values liberty. On the other hand, that promise is paradoxical: as data-tracking and surveillance systems face questions about fairness and the scope of government power, the promise of efficiency and security fractures. Efficiency is frequently treated as a proxy for public safety, yet this assumption remains disputed.
For the last few decades, the early phases of the legal process, identification and the preliminary assessment before formal investigation, have been influenced by surveillance technologies. With the development of AI, this influence has accelerated. Data aggregation tools can compile records of a person’s movements and interactions, while facial recognition systems compare images across large databases within seconds. The Department of Homeland Security claims the automation of identity verification during travel helps “travelers get through checkpoints more efficiently and securely” (Department of Homeland Security). However, an observational study complicates this claim. Patricia Haley, an academic researcher studying biometric surveillance technologies, notes that while some models report accuracy over 95%, “these findings often stem from controlled environments and do not necessarily translate into tangible reductions in violent crime rates” (Haley). Ultimately, if the “efficiency” of these tools produces little increase in safety, privacy is surrendered for a perceived sense of security that does not exist.
Still, efficiency has never been the only criterion the American legal system uses to define justice: the system rests on the idea that governmental authority must be exercised carefully and within well-defined bounds. In 1967, the United States Supreme Court rejected a purely physical definition of privacy in Katz v. United States, ruling that “the Fourth Amendment protects people, not places” (Katz v. United States). In United States v. Jones (2012), the Court held that the government’s long-term installation of a GPS device on a vehicle to monitor its movements constituted a search because the government “physically occupied private property for the purpose of obtaining information” (United States v. Jones). With these limitations, the Court made clear that technological development does not exempt the state from the warrant requirement.
The Supreme Court, in Carpenter v. United States (2018), held that law enforcement must obtain a warrant to access historical cell site location data because such information provides “an intimate window into a person’s life” (Carpenter v. United States). The Court distinguished digital surveillance from traditional human observation by noting that while physical stalking is limited by “the frailties of recollection” and the high cost of a surveillance team, digital tracking provides a “tireless and absolute” record (Carpenter v. United States). Legal scholars argue that facial recognition intensifies this problem. According to Andrew Ferguson, biometric identification systems create a “ubiquitous identification scheme” that erodes “practical obscurity” (Ferguson). These rulings stress that in order to maintain the core principles of a free society, the legal barriers to spying on citizens must remain deliberately high as the “cost” of doing so approaches zero.
Surveillance technologies raise ongoing questions about equality, fairness, and legality. Scholarly research shows that the accuracy of facial recognition systems varies across demographic groups. When these systems are used in law enforcement, their errors are not distributed evenly; rather, they often reinforce patterns of heightened surveillance in areas where policing is already more prevalent (“Facial Recognition Algorithms”). In State v. Loomis (2016), the Wisconsin Supreme Court acknowledged that the proprietary nature of the COMPAS algorithm prevents a defendant from “assessing its accuracy” or challenging “how the risk scores are determined or how the factors are weighed” (State v. Loomis). The lack of transparency in algorithmic risk-assessment tools, when incorporated into judicial decision making, constitutes a major structural flaw in a legal system grounded in due process.
Nevertheless, proponents of surveillance technologies make a legitimate point: modern threats call for modern tools. Observers argue that biometric surveillance can aid investigations, provided it is applied sparingly and under human supervision (Haley). Still, the lack of clarity about how data is used, who regulates it, and how long it is stored raises serious moral and constitutional concerns even when surveillance may prevent harm (“Algorithmic Fairness in Predictive Policing”).
Nonetheless, a core feature of constitutional governance is skepticism toward concentrated power, whether exercised by the state or by private organizations acting on its behalf. Surveillance systems increasingly rely on private firms to build and manage proprietary algorithms, transferring important decision-making from courts to opaque technologies (Gentzel). Striking a balance between individual rights and public safety requires moderation rather than technological zeal.
Our judicial and legislative bodies need to shift from a reactive to a proactive stance, given that technology is evolving more quickly than case law. Legislators must create explicit legal restrictions on surveillance, such as warrant requirements for prolonged or aggregated monitoring, stringent data retention regulations, and impartial oversight procedures. Rather than waiting for the latest technology to be misused before taking action, the law must develop a structure that presumes privacy as the default. Surveillance can serve justice if it is constrained by accountability and limitation. When it is not, justice is hampered, and the meaning of freedom under the law is altered.
Works Cited
“Algorithmic Fairness in Predictive Policing.” AI and Ethics, 2024.
“Carpenter v. United States, 585 U.S. 285 (2018).” Justia.
Department of Homeland Security. “2024 Update on DHS’s Use of Facial Recognition and Face Capture Technologies.” DHS Archive, 16 Jan. 2025.
“Facial Recognition Algorithms: A Systematic Literature Review.” Journal of Imaging, vol. 11, no. 2, 2025, p. 58, https://www.mdpi.com/2313-433X/11/2/58.
Ferguson, Andrew Guthrie. “Facial Recognition and the Fourth Amendment.” Minnesota Law Review, 2021.
Gentzel, Michael. “Biased Face Recognition Technology Used by Government.” Philosophy & Technology, 2021, https://link.springer.com/article/10.1007/s13347-021-00478-z.
Haley, Patricia. “The Impact of Biometric Surveillance on Reducing Violent Crime.” Sensors, vol. 25, no. 10, 2025, p. 3160, https://www.mdpi.com/1424-8220/25/10/3160.
“Katz v. United States, 389 U.S. 347 (1967).” Justia.
“State v. Loomis, 881 N.W.2d 749 (Wis. 2016).” California Courts.
“United States v. Jones, 565 U.S. 400 (2012).” Justia.