Privacy and security issues associated with facial recognition software



The market for facial recognition technology is growing fast as organizations employ the technology for a variety of purposes, including verifying or identifying individuals to grant them access to online accounts, authorizing payments, tracking and monitoring employee attendance, targeting specific advertisements to shoppers and much more.

In fact, the global facial recognition market size is forecast to reach $12.67 billion by 2028, up from $5.01 billion in 2021, according to The Insight Partners. This increase is also driven by the growing demand from governments and law enforcement agencies, which use the technology to assist in criminal investigations, conduct surveillance or other security efforts.

But as with any technology, there are potential disadvantages to using facial recognition, including privacy and security issues.

Privacy concerns around facial recognition technology 

The most significant privacy implication of facial recognition technology is its use to identify individuals without their consent. This includes applications such as real-time public surveillance, or the aggregation of databases that are not lawfully constructed, said Joey Pritikin, chief product officer at Paravision, a computer vision company specializing in facial recognition technology.

Tracy Hulver, senior director of digital identity products at Aware Inc., concurred that it’s important for organizations to let users know what biometric data they’re collecting and then get their consent.

“You have to clearly state to the user what you’re doing and why you’re doing it,” he said. “And ask if they consent.”

Stephen Ritter, CTO at Mitek Systems Inc., a provider of mobile capture and digital identity verification products, agreed that consumer notification and consent are critically important.

“Whether we are delivering an app or a user experience directly to a consumer or whether we’re providing technology to a bank, or a marketplace or any company that’s providing an app to the end user, we require appropriate notification, meaning that the consumer is made very aware of the data that we’re going to collect and is able to consent to that,” Ritter said.


In surveillance applications, the main concern for citizens is privacy, said Matt Lewis, commercial research director at security consultancy NCC Group.

Facial recognition technology in surveillance has improved dramatically in recent years, meaning it is quite easy to track a person as they move about a city, he said. One of the privacy concerns about the power of such technology is who has access to that information and for what purpose.

Ajay Mohan, principal, AI & analytics at Capgemini Americas, agreed with that assessment.

“The big issue is that companies already collect a tremendous amount of personal and financial information about us [for profit-driven applications] that basically just follows you around, even if you don’t actively approve or authorize it,” Mohan said. “I can go from here to the grocery store, and then all of a sudden, they have a scan of my face, and they’re able to track it to see where I’m going.”

In addition, artificial intelligence (AI) continues to push the performance of facial recognition systems. From an attacker’s perspective, there is emerging research leveraging AI to create facial “master keys”: AI-generated faces that each match many different people, produced using Generative Adversarial Network (GAN) techniques, according to Lewis.

“AI is also enabling additional feature detection on faces beyond just recognition—that is, being able to determine the mood of a face (happy or sad) and also a good approximation of the age and gender of an individual based purely on their facial imagery,” Lewis said. “These developments certainly compound the privacy concerns in this space.”

Overall, facial recognition captures a great deal of information, depending on the amount and sources of data, and that is what the future needs to concern itself with, said Doug Barbin, managing principal at Schellman, a global cybersecurity assessor.

“If I perform a Google image search on myself, is it returning images tagged with my name or are the images previously identified as me recognizable without any text or context? That creates privacy concerns,” he said. “What about medical records? A huge application of machine learning is to be able to identify health conditions via scans. But what of the cost of disclosing an individual’s condition?”

Security issues related to facial recognition technology

No biometric, including a face, is private, which also leads to security concerns, Lewis said.

“This is a property rather than a vulnerability, but in essence it means that biometrics can be copied and that does present security challenges,” he said. “With facial recognition, it may be possible to ‘spoof’ a system (masquerade as a victim) by using pictures or 3D masks created from imagery taken of a victim.”

Another property of all biometrics is that the matching process is statistical—a user never presents their face to a camera in exactly the same way, and the user’s features might be different depending on the time of day, use of cosmetics, etc., Lewis said.

Consequently, a facial recognition system has to determine how likely it is that a face presented to it belongs to an authorized person, he said.

“This means that some people may resemble others in sufficient ways that they can authenticate as other people due to similarities in features,” Lewis said. “This is referred to as the false accept rate in biometrics.”
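The statistical matching Lewis describes can be sketched in a few lines. The snippet below is a toy illustration, not any vendor’s actual pipeline: the three-dimensional “embeddings,” the enrolled identities and the 0.80 threshold are all invented for demonstration. Real systems compare high-dimensional face embeddings the same way, and a stranger whose features land close enough to an enrolled template clears the threshold, producing a false accept.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

MATCH_THRESHOLD = 0.80  # hypothetical; tuned per deployment to trade off
                        # false accepts against false rejects

# Hypothetical enrolled templates (toy 3-dimensional embeddings).
enrolled = {
    "alice": [0.9, 0.1, 0.4],
    "bob":   [0.1, 0.9, 0.2],
}

def authenticate(probe):
    """Return the first enrolled identity whose similarity clears the threshold."""
    for name, template in enrolled.items():
        if cosine_similarity(probe, template) >= MATCH_THRESHOLD:
            return name
    return None

# A probe close to Alice's enrolled template matches her, as intended.
print(authenticate([0.88, 0.12, 0.41]))  # alice

# A different person whose embedding merely resembles Alice's also clears
# the threshold: a false accept.
print(authenticate([0.7, 0.3, 0.5]))     # alice
```

Raising the threshold lowers the false accept rate but rejects more legitimate users whose appearance varies between captures, which is exactly the trade-off Lewis points to.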

Because facial recognition involves the storage of face images or templates (mathematical representations of face images used for matching), its security implications are similar to those of any personally identifiable information, and accredited encryption approaches along with policy and process safeguards must be put in place, he said.
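As a minimal sketch of the encryption-at-rest point above, the snippet below encrypts a serialized template before storage using Fernet from the third-party Python `cryptography` library. The template bytes are a made-up placeholder, and no specific vendor's approach is implied; in production the key would come from a managed key store rather than being generated inline.

```python
from cryptography.fernet import Fernet

# Hypothetical serialized face template (in practice, a vector of floats
# produced by the matching engine, serialized to bytes).
template = b"\x01\x02\x03\x04 example-face-template"

key = Fernet.generate_key()   # in production: fetched from a key management service
fernet = Fernet(key)

encrypted = fernet.encrypt(template)   # this ciphertext is what lands on disk
restored = fernet.decrypt(encrypted)   # recoverable only with the key

assert restored == template
assert encrypted != template
```

A breach of the template store then yields only ciphertext, so the attacker also needs the separately managed key, which is the layered safeguard the policy and process controls are meant to enforce.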


“In addition, facial recognition can be prone to what we call ‘presentation attacks,’ or the use of physical or digital spoofs, such as masks or deepfakes, respectively,” Pritikin said. “So proper technology to detect these attacks is critical in many use cases.”

People’s faces are key to their identities, said John Ryan, partner at the law firm Hinshaw & Culbertson LLP. People who use facial recognition technology place themselves at risk of identity theft. Unlike a password, people cannot simply change their faces. As a result, companies using facial recognition technology are targets for hackers.

As such, companies usually enact storage and destruction policies to protect this data, Ryan said. In addition, facial recognition technology usually uses algorithms that cannot be reverse engineered.

“These barriers have been useful so far,” he said. “However, governments at the state and federal levels are concerned. Some states, such as Illinois, have already enacted laws to regulate the use of facial recognition technology. There is also pending legislation at the federal level.”

Pritikin said that his company uses advanced technologies, such as Presentation Attack Detection, that protect against the use of spoofed data.

“We are also currently developing advanced technologies to detect deepfakes or other digital face manipulations,” he said. “In a world where we rely on faces to confirm identity, whether it be in person or on a video call, understanding what is real and what is fake is a critical aspect of security and privacy—even if facial recognition technology is not used.”
