Relying on the Unreliable
The Hidden Costs of Facial Recognition Technology
By Tinglong Dai
Bernard T. Ferrari Professor of Business
The Carey Business School at Johns Hopkins University
While generative AI dominates the tech world’s attention, facial recognition casts a long, frightening shadow. In both how it is trained and how it is used, it is a force redrawing the boundaries of personal freedom and privacy. Training draws on a wide range of data sources, including images users post online, to build a detailed yet often unreliable picture of who we are and what we do. How that unreliable technology is then deployed is even more questionable.
This technology does more than try to recognize us. Because it is trained on data collected by bots that scour the internet, find every photo we post online, and match our faces to every other representation of them they can find, it can easily infiltrate our lives and peer into their most private corners. A simple walk past a security camera or a single Instagram photo can be stitched together with our digital footprints, sometimes mixed with other people’s, into a dossier of our personal, legal, and financial records. The consequences can be just as swift and frightening.
One can only hope the technology is as reliable as it is advertised to public- and private-sector decision makers. In reality, it rests on a probabilistic model that matches patterns and is prone to error. As more data sources are added, these errors can be magnified, with disastrous consequences, all the more so when decision makers rely on the technology without questioning its reliability.
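To see why scale magnifies these errors, consider the base-rate arithmetic: a matcher that errs only once in a thousand comparisons will, when a probe image is run against a gallery of millions of faces, almost surely flag innocent people. The sketch below is my own illustration, not drawn from any particular system; the 0.1% false-match rate and the gallery sizes are assumptions, and the comparisons are treated as independent.

```python
# Minimal sketch: how a small per-comparison error rate compounds
# when one probe image is searched against ever-larger galleries.
# The false-match rate and gallery sizes below are illustrative assumptions.

def expected_false_matches(false_match_rate: float, gallery_size: int) -> float:
    """Expected number of wrong people returned as 'matches' for one probe."""
    return false_match_rate * gallery_size

def prob_at_least_one_false_match(false_match_rate: float, gallery_size: int) -> float:
    """Probability that at least one wrong person is flagged,
    assuming independent comparisons."""
    return 1 - (1 - false_match_rate) ** gallery_size

FALSE_MATCH_RATE = 0.001  # hypothetical: 1 error per 1,000 comparisons

for gallery_size in (1_000, 100_000, 10_000_000):
    fm = expected_false_matches(FALSE_MATCH_RATE, gallery_size)
    p = prob_at_least_one_false_match(FALSE_MATCH_RATE, gallery_size)
    print(f"gallery={gallery_size:>10,}  "
          f"expected false matches={fm:>8.1f}  "
          f"P(>=1 false match)={p:.4f}")
```

Under these assumptions, a 99.9% accurate matcher searched against ten million faces is expected to produce about 10,000 false matches per probe: impressive-sounding accuracy, combined with scale, still yields a flood of wrong answers.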
These errors are not just hypothetical. The story of Alonzo Sawyer, a Maryland man, makes them vivid, as Eyal Press recounted in The New Yorker. After a facial recognition system misidentified Sawyer, a web of legal and emotional troubles engulfed him. The match was upheld despite obvious differences in age and build between him and the actual perpetrator, differences anyone could see with the naked eye. A probation officer’s hasty, erroneous confirmation, coupled with a refusal to entertain the possibility that the system could be wrong, made matters worse. For Sawyer and his family, the criminal justice system became a terrifying ordeal: confused and frightened, they were forced to defend his innocence, victims of reliance on technology that is not always reliable.
Sawyer’s story is a tragedy, but it is also a rallying cry for those in positions of authority, a call for swift introspection and decisive action. We must confront the frightening questions that loom in the shadow of this technology. What moral limits should be placed on a technology this ubiquitous? What rights should people have over their digital identities, and what limits should govern their use? To what extent should we, the people, allow our governments to use unreliable technology to make decisions that can destroy citizens’ lives without apology? These questions demand answers now, as AI rapidly permeates our lives and work.
At this crossroads, how we answer these questions will shape our lives and those of generations to come. We must confront the ethical issues facial recognition technology raises and find a workable balance between technological progress and the demands of social justice and human rights. At a minimum, there must be honest disclosure of how models are trained and where their data comes from, along with independent, ongoing testing and monitoring of their performance.
The pace of technological development will not wait for us to reflect. Every second that passes without clear moral standards and strong systems of governance is a step toward a future in which our lives are mere data points in a vast surveillance network, one as unchecked as it is flawed.
We cannot put our future on autopilot. We can, and should, steer technology toward a future that values and protects people’s privacy and freedom of choice. We must ensure that the legacy we leave is one of clarity, honesty, and a firm commitment to the fundamental values that make us human. The story of Alonzo Sawyer, and of the many others who could be caught in facial recognition’s web, is a stark warning of the stakes. Now is the time to act.
About the Author
Tinglong Dai is the Bernard T. Ferrari Professor of Business at the Johns Hopkins University Carey Business School. He serves on the core faculty of the Hopkins Business of Health Initiative and on the executive committee of the Institute for Data-Intensive Engineering and Science. He is also the Vice President of Marketing, Communication and Outreach at INFORMS. Tinglong can be reached online at Tinglong Dai, PhD (jhu.edu).