Facial Recognition Under Fire in the U.S. Congress

Use Cases & Projects | Claire Carroll

The age of unsupervised tech deployment may be coming to an end. While Amazon shareholders recently voted not to impede the sale of facial recognition tech over privacy concerns, the U.S. House of Representatives is stepping in to evaluate antitrust and civil liberties governance in the age of tech giants.


Privacy Infringement as a Bipartisan Issue

The House Oversight and Reform Committee held its second hearing yesterday on law enforcement usage of facial recognition algorithms. Increased governance of the tech appears to have bipartisan support. The Chairman of the Committee, Elijah Cummings, described facial recognition tech as "evolving extremely rapidly, without any real safeguards. [...] There are real concerns about the risks that this technology poses to our civil rights and liberties and our right to privacy." The ranking Republican member of the committee, Jim Jordan, raised First Amendment, Fourth Amendment, and due process concerns at massive scale: "All this happens in a country with 50 million surveillance cameras."

The data sources for the FBI's facial recognition programs are inherently controversial. The Interstate Photo System is built on "criminal" mugshots associated with an arrest, but images are not removed from the database unless the arrest record is expunged, leaving several million innocent people in this "criminal" dataset. Additionally, the database pulls information from state driver's license databases without driver consent, following opt-in from "appropriate state officials," according to Kimberly Del Greco, the FBI Deputy Assistant Director of Criminal Justice Information Services. She testified that the FBI needs this type of biometric data to "investigate, identify, apprehend, and prosecute terrorists."


Historical Data Bias

While normalized government facial recognition programs impact everyone, they disproportionately fail people of color, with dire consequences. ProPublica's analysis of AI-based recidivism scores demonstrated how the racial biases inherent in historical criminal data impact the way defendants are sentenced, and facial recognition models trained on similar criminal data would likely perpetuate the same biases.

In the first part of the House hearing on May 22, Joy Buolamwini, founder of the Algorithmic Justice League, testified about her research, which has uncovered skin type and gender biases in AI services from tech giants including Microsoft, IBM, and Amazon. She described how "facial analysis software [failed] to detect my dark-skinned face until I put on a white mask." This was likely unintentional, but it underscores the critical importance of diverse teams in building tools that can support equally diverse users.
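The kind of audit Buolamwini describes comes down to disaggregating an error metric by subgroup instead of reporting a single headline number. Here is a minimal sketch of that idea in Python; the records and subgroup labels below are entirely hypothetical, not results from any of the audited services:

```python
from collections import defaultdict

# Hypothetical audit records: each entry is (subgroup, face_detected).
# In a real audit, subgroups might combine skin type and gender, and
# the detection results would come from calling a vendor's API.
results = [
    ("darker-skinned female", False),
    ("darker-skinned female", True),
    ("darker-skinned male", True),
    ("lighter-skinned female", True),
    ("lighter-skinned male", True),
    ("lighter-skinned male", True),
]

# Tally detections per subgroup rather than one aggregate rate.
counts = defaultdict(lambda: [0, 0])  # subgroup -> [hits, total]
for subgroup, detected in results:
    counts[subgroup][0] += int(detected)
    counts[subgroup][1] += 1

for subgroup, (hits, total) in sorted(counts.items()):
    print(f"{subgroup}: {hits}/{total} detected ({hits / total:.0%})")
```

A single aggregate accuracy can look excellent while one subgroup fails badly; it is the per-group breakdown that surfaces the disparity.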


The accuracy of the FBI's facial recognition algorithms at such a large scale is unsubstantiated. The FBI claims over 99% accuracy; however, Dr. Gretta Goodwin, Director of Homeland Security & Justice at the U.S. Government Accountability Office (GAO), explained that this figure was generated on tests that returned lists of 50 candidates, while many states request much smaller lists, a setting in which the accuracy has not been validated.
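To see why the size of the returned candidate list matters, consider rank-k identification accuracy: a search counts as a "hit" if the true identity appears anywhere in the top k candidates, so shrinking the list can only lower the hit rate. A toy simulation makes the gap concrete; the score distributions and gallery size below are assumptions chosen purely for illustration, not measurements of the FBI's system:

```python
import random

random.seed(0)

GALLERY_SIZE = 5_000  # hypothetical number of enrolled identities
N_SEARCHES = 500      # simulated probe searches

def rank_of_true_match() -> int:
    """Simulate one search and return the true identity's rank.

    Impostor scores are drawn from N(0, 1) and the genuine match from
    N(3.8, 0.6). These distributions are assumptions, picked only to
    show how rank-k accuracy behaves as k shrinks.
    """
    genuine = random.gauss(3.8, 0.6)
    better = sum(
        random.gauss(0.0, 1.0) > genuine for _ in range(GALLERY_SIZE - 1)
    )
    return better + 1

ranks = [rank_of_true_match() for _ in range(N_SEARCHES)]
for k in (50, 20, 5, 1):
    hit_rate = sum(r <= k for r in ranks) / len(ranks)
    print(f"rank-{k:>2} accuracy: {hit_rate:.1%}")
```

In this toy setup, accuracy is well above 99% when the system may return 50 candidates but drops sharply when only the top few matches count, which is exactly the gap Dr. Goodwin's testimony pointed to.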

Dr. Goodwin also affirmed that the FBI has not complied with several recommendations made by the GAO in 2017, which suggests that even the minimal oversight currently governing the programs has little tangible impact. Carolyn Maloney, a representative from New York, suggested additional transparency and audit capabilities, stating that “it’s important to understand whether this technology is helping people.”

TSA Assistant Administrator Austin Gould testified that when facial recognition is used in public spaces like airports, no real-time data is stored long term. Signs notify travelers that facial recognition is in use, and Mr. Gould affirmed that they "will always have the opportunity to opt out of the program." The hearing did not make clear how travelers could navigate the airport without being automatically analyzed.
