Facial Recognition: Security Tool or Threat Vector?
June 6, 2019
Joe Meadows, Kurtis Minder and Nikolay Danev

Security systems have evolved quite a bit since the first locks and keys, which date back to around 4000 BC. These days it’s keycards, secret passwords, fingerprint logins, and two-factor authentication.

Enter facial recognition as the latest, cutting-edge technology in the security game. Like fingerprint logins, facial recognition can be used to access electronic security systems.

Facial recognition converts an image of a person’s face into a unique digital “faceprint.” The faceprint may be composed of the relative positions, patterns, and distances of facial features (e.g., temples, eyes, nose, mouth, jaw). The faceprint is then compared to the person’s stored faceprint in a database. If the faceprints match, the person’s identity is verified and access to the electronic security system is granted.
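The match step described above can be sketched in a few lines of code. This is a minimal illustration only: it assumes faceprints have already been extracted as numeric vectors (real systems use specialized models producing vectors of 128 or more dimensions), and the sample vectors and threshold below are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two faceprint vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(candidate: np.ndarray, enrolled: np.ndarray, threshold: float = 0.9) -> bool:
    """Grant access only if the candidate faceprint matches the enrolled one."""
    return cosine_similarity(candidate, enrolled) >= threshold

# Hypothetical 4-dimensional faceprints for illustration.
enrolled = np.array([0.12, 0.80, 0.33, 0.45])
same_person = np.array([0.13, 0.79, 0.35, 0.44])  # small capture-to-capture variation
impostor = np.array([0.90, 0.10, 0.70, 0.05])

print(verify(same_person, enrolled))  # True
print(verify(impostor, enrolled))     # False
```

The threshold is the security-relevant knob: set it too low and an impostor’s faceprint may be falsely accepted; set it too high and the legitimate person may be locked out.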

Privacy Concerns

By comparing a faceprint against a database of other known faceprints, facial recognition can also be used for identification purposes. Governments (think mug shots and airport check-in photos), commercial enterprises, schools, and social media operations maintain databases, sometimes large ones, of known faceprints. Many have written about data privacy concerns surrounding facial recognition, usually within the context of government or commercial misuse of facial recognition technology.

Already, we’ve seen several cases where facial recognition sits squarely in the center of a controversy. For example, questions have arisen as to whether the over-representation of African Americans within facial recognition databases, coupled with a lack of regulation over the use of facial recognition technology, could open the door for racial profiling by law enforcement.

A number of groups have also raised privacy concerns about the broadening use of facial recognition. These concerns include a reduction in anonymity, people tracking, knowledge and consent, and the use and security of personal data. The United States Government Accountability Office (GAO) published a report delving into these and other issues.

Privacy Protections

Some may find it frightening that the odds are greater than 50 percent that your face is already in a facial recognition database somewhere. As a result, some may want to avoid facial recognition systems altogether. But avoidance will be difficult: digital camera technology advances every year, possibly allowing a neighbor to collect high-resolution, zoomed-in images of you or your family from afar.

Still, facial recognition is just another channel for authentication and verification. The three vectors of authentication (something you know, something you have, something you are) are held as the pillars of a robust authentication system when two or more vectors are combined. The “something you are” component is often regarded as the strongest vector. Yet sometimes it is neither unique to the person nor difficult to copy or replicate. We are slowly retracting into an ever-smaller circle of privacy while more of what we hold private becomes part of the public realm.
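The “two or more vectors” principle can be sketched as a simple rule: no single factor, including a faceprint, is sufficient on its own. The factor checks below are hypothetical stand-ins, not a real authentication API.

```python
def multi_factor_ok(knows_password: bool, has_token: bool, face_matches: bool) -> bool:
    """Require at least two of the three authentication vectors to pass:
    something you know, something you have, something you are."""
    factors = [knows_password, has_token, face_matches]
    return sum(factors) >= 2

print(multi_factor_ok(True, False, True))   # True: password + faceprint
print(multi_factor_ok(False, False, True))  # False: a faceprint alone is not enough
```

Under this rule, even a stolen or replicated faceprint does not grant access by itself, which is precisely why combining vectors matters.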

The inevitability of a leak of your or your employees’ data (whether passwords, faceprints, or otherwise) drives the need for early notification of a breach. Monitoring for the monetization or exposure of that data after a breach is also critical to any security and intelligence program.

Cyber Attack Concerns

While the privacy debate continues, a few experts and interested parties have written about facial recognition cyber attacks.

These cyber attacks could take many forms, provided the attacker has facial recognition technology (easy) and ready access to a faceprint database (less easy, but not all that hard):

  • Security Access - An attacker could gain access to or duplicate (using a 3D printer) a person’s faceprint to access electronic security systems built into computers, phones, offices, or someday, even vehicles. An attacker could also hack back-end verification systems to cause a person’s faceprint to generate a false acceptance or rejection, allowing the bad guys in and keeping the good guys out.
  • Identification - An attacker could simply discover a person’s identity. It wouldn’t take long to snap someone’s picture, search it against a facial recognition database, and find out who the person is, where they live or work, and what car they drive.
  • Defamation - An attacker could defame or disparage a person online. The attacker could even drop the person’s photo into other photos or videos on the web.
  • Extortion - An attacker could use a person’s faceprint or results of faceprint matching to demand money, property or services. A person might be willing to pay big dollars to keep their security systems locked or photos/videos on the web private.
  • Harassment - An attacker could track a person’s movements and engage with them or their companions remotely – or worse, physically.
  • Dark Web Sales - An attacker could sell a person’s faceprint or results of faceprint matching to others on the dark web, enabling the buyer to engage in any of the things above.

Cyber Attack Protections

So, what can victims of a facial recognition cyber attack do?

Law enforcement would be the natural first place to go. But their interest level may depend on the severity of the cyber attack and their investigatory resources.

A digital forensic expert would be a next logical stop. These experts investigate data breaches, analyze the sources and methods of cyber attacks, identify attackers, and isolate and preserve electronic evidence.

An attorney may be the last hope for some measure of relief. Federal and state laws (except perhaps Illinois, Texas and Washington) have not caught up with facial recognition technology, much less newer technologies involving voice, gait, and body shape recognition. Nonetheless, existing statutory and case law on computer fraud and abuse, revenge porn, cyber bullying, stalking, trespassing, impersonation, defamation, disparagement, false light, false advertising, infliction of emotional distress, and tortious interference may apply in facial recognition cyber attacks.

It’s only a matter of time before these cyber attacks become more prevalent. Alexander Kabakov, co-founder of the popular Russian facial recognition app Findface, casually told The Guardian, “If you see someone you like, you can photograph them, find their identity, and then send them a friend request.” He added, “It also looks for similar people. So you could just upload a photo of a movie star you like, or your ex, and then find 10 girls who look similar to her and send them messages.”

This obviously could go very badly.

Our world is evolving, and as our images become a threat vector, services like VIPRecon from GroupSense can help individuals stay on top of facial recognition and other types of digital threats.

Joe Meadows is a trial attorney, internet defamation lawyer, and partner with the law firm Bean Kinney & Korman in Arlington, Virginia. Kurtis Minder is the CEO of GroupSense, a cyber intelligence firm based in Arlington, Virginia. Nikolay Danev is the Vice President, EMEA for GroupSense.

This article is for informational purposes only and does not contain or convey legal advice. Consult a lawyer. Any views or opinions expressed herein are those of the authors, and are not necessarily the views of any client.
