Your Digital Self: Facial-recognition technology is one of the biggest threats to our privacy


If you used Facebook between 2010 and November 2021, unlocked a smartphone with your face, entered a secured office building or a bank, or walked the streets of cities dotted with surveillance cameras, a photo or video of your face has likely been stored, analyzed and used to create a set of unique identifiers that help various algorithms recognize you and act on that recognition.

That data is then used for a wide range of applications, from unlocking your phone and tagging you in a photo on your favorite social network to authentication schemes, including those run by law enforcement, other government agencies and even private businesses. And beyond police departments, security services and other government organizations, your photos can also fall into the hands of hackers and AI researchers.
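To make that concrete, here is a minimal sketch, in Python, of how a single photo becomes a reusable numeric identifier. It uses the open-source face_recognition library; the photo file names are hypothetical placeholders:

```python
# A minimal sketch of how a face becomes a "faceprint": the open-source
# face_recognition library reduces each detected face to a 128-number
# embedding, and recognition is just a distance check between embeddings.
import face_recognition

# Hypothetical photos: one known, one captured by some camera.
known_image = face_recognition.load_image_file("you.jpg")
unknown_image = face_recognition.load_image_file("camera_frame.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Whoever holds the stored encoding can run this comparison at scale.
matches = face_recognition.compare_faces([known_encoding], unknown_encoding)
print("Same person?", matches[0])
```

The point is that once that small block of numbers exists, it, not the photo itself, is what gets stored, shared and matched against you.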

As you can see, once it gets digitized and analyzed, your face, your immutable and unique identifier, is being tossed around and shared every which way, without you having much say in the matter.

Needless to say, this is detrimental to privacy, which is an inalienable human right:

Article 12 of the United Nations’ Universal Declaration of Human Rights states: “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence. … Everyone has the right to the protection of the law against such interference or attacks.”

I’m sure you would agree that having cameras record your every move isn’t quite in the spirit of the declaration. But government institutions are unwilling to forsake this powerful surveillance tool.

In China, for example, Huawei tested a face-scanning system that can trigger an “Uighur alarm” when it detects members of the Uighur ethnic minority. (Chinese authorities have arbitrarily detained as many as one million Uighurs and other minorities in some 400 facilities in Xinjiang, the largest internment of an ethno-religious minority since World War II.) Such a system would let the Chinese government track and persecute Uighurs at will.

‘Racist’ AI

China’s example notwithstanding, even if we allow facial recognition to be used in law enforcement, the question remains: Is the tool itself reliable enough for such applications? Sadly, the answer is no.

Darker-skinned people have presented, and still present, a considerable challenge for these algorithms. Because dark skin reflects less light and produces less contrast in poorly optimized conditions, some photos used for facial recognition don’t give the algorithm enough data points to work with, leading to misidentification.
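Here is a rough illustration of that contrast problem, again in Python, using the OpenCV library. It shows a crude contrast measurement and one commonly proposed preprocessing step, adaptive histogram equalization; the file name is a placeholder:

```python
# A rough illustration of the contrast problem with OpenCV. The standard
# deviation of pixel intensity is a crude proxy for contrast: low values
# mean the detector has fewer usable gradients ("data points") to work with.
import cv2

img = cv2.imread("underexposed_photo.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder
print("contrast before (std of intensity):", img.std())

# CLAHE (contrast-limited adaptive histogram equalization) boosts local
# contrast without blowing out the rest of the image -- one of the
# preprocessing fixes researchers have suggested.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)
print("contrast after (std of intensity):", enhanced.std())
```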

In their 2018 research paper Gender Shades, Joy Buolamwini and Timnit Gebru tested gender-classification algorithms that rely on facial recognition, built (among others) by Microsoft, IBM and Amazon. According to the paper, the programs showed substantial error rates (between 20% and 34%) in facial detection, identification and verification for darker-skinned men and, even more so, darker-skinned women.

An independent assessment by the National Institute of Standards and Technology (NIST) made an even more thorough sweep, covering as many as 189 facial-recognition programs, and came to the same conclusion: They were least accurate when analyzing the faces of dark-skinned women.

High error rates alone should have been reason enough to outlaw the use of such unreliable technology. But facial-recognition programs remain in use, mainly due to greed and disregard for human rights. Some researchers gloss over the real danger behind the issue and, instead of pushing to remove the technology from the legal system, suggest optimizing photo parameters to account for darker skin tones and making “consent” a prerequisite for creating facial-recognition datasets.

Hacker threats

By now we’ve all seen that when the law demands consent, consent can be coerced. If you need your photo taken in order to go to work, buy food or access health care, you have no choice but to provide it. And improving this technology serves no one but the governments and big companies that make substantial profits selling licenses for their software.

Someone else also profits from your data: Hackers and hacker groups.

In 2019, at the annual Black Hat security conference, hackers bypassed Apple’s iPhone FaceID authentication system in just two minutes.

In February 2020, Clearview AI, a company that scrapes the internet and siphons billions of online photos for facial-recognition use, had its entire client list stolen. That breach has most likely aided further hacking attempts against the company and its clients, most of which are law-enforcement agencies and banks.

In 2020, a McAfee cybersecurity team demonstrated a flaw in facial-recognition systems. Using a specially manipulated photo, they tricked a system similar to those used for passport identification at airports into concluding that the individual on the passport was the same person recorded by the system’s camera. This could let someone on a no-fly list, for example, board a plane.
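McAfee’s team used a more elaborate image-generation technique, but the underlying idea, that tiny pixel-level changes invisible to a human can flip a model’s decision, can be sketched with the classic fast-gradient-sign method. This is a toy illustration, not McAfee’s actual method; the model and image are placeholders:

```python
# Toy sketch of an adversarial "model hack" (fast-gradient-sign method).
# The model and image are placeholders; the point is that a barely visible
# perturbation can push a classifier toward an attacker-chosen identity.
import torch
import torch.nn.functional as F

def fgsm_targeted(model, image, target_class, epsilon=0.03):
    """Return a copy of `image` nudged toward `target_class`."""
    image = image.clone().requires_grad_(True)
    logits = model(image)                      # shape: (1, num_classes)
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient so the target class becomes more likely,
    # then clamp so the result is still a valid image.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```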

In March 2021, a criminal group used photos bought on the online black market to dupe a government-run Chinese facial-recognition service, stealing $76.2 million in the process.

The list goes on.

Remember: In all these hacks, the final victim is always you. Even if your private data doesn’t end up being used for phishing and identity theft, it’s sold to the highest bidder on the dark web for other nefarious purposes.

What you can do

If you’re like me, you’re already wondering: What can be done? What is being done? As a matter of fact, a lot.

The Electronic Frontier Foundation (EFF) is pushing for a ban on government use of facial recognition. Fight for the Future, a nonprofit advocacy group, has started an online campaign, Ban Facial Recognition. You can track who in Congress supports or opposes the technology, tweet at them, or share information and help promote the cause at the local level. One of the most notable victories is BIPA, the Illinois Biometric Information Privacy Act.

BIPA is a step in the right direction: It requires consent before businesses can record a photo of a person’s face for facial-recognition purposes, and those photos, once taken, must be deleted after a fixed period of time. If a business fails to adhere to these rules, a wronged individual is entitled to a private right of action, meaning private persons (you, not just a government regulator) have the right to sue.

The new German government seems to understand the gravity of the situation and plans to ban facial recognition and restrict usage of mass surveillance tools.

Even Meta Platforms (aka Facebook) cracked under pressure and largely shut down its facial-recognition feature in November. It can now be used only in special cases, such as verifying identities and unlocking hacked accounts. The full list is longer and vague, which suggests the company isn’t keen on letting go of its face-recognition toy DeepFace, Facebook’s algorithm trained on 1 billion face scans. Most important, Meta hasn’t ruled out returning to facial recognition in the future.

As you can see, the battle for privacy and human rights continues, and is likely to escalate. Hopefully, now you understand the concept and know the stakes.

What comes next is up to you.
