Artificial Intelligence and Facial Recognition Software – Technological Wonder or Brave New World? Or Both?


Concerns with AI and FRS Technology: The Costs and Biases of Developing Machine Intelligence

Your phone and tablet recognize your face or thumbprint, and Google and other search engines have become more efficient, their results more comprehensive. For most of us, the advancement of artificial intelligence (“AI”) and facial recognition software (“FRS”) has provided a perception of increased efficiency and, even more importantly, security. But does that perception of greater security match reality?

AI and FRS Technology Concerns Arising from R&D and the Use of Private Data

Most people are unaware that, in researching and developing AI and FRS, companies, universities, and government labs accessed and used millions of images collected from a variety of online sources, including social media and other online applications like Flickr—all without the knowledge or permission of the people whose faces were collected and used—and that this practice continues to date. Sites such as Flickr, bought and sold many times over (including by Yahoo), allowed users to share photos pursuant to what is known as a Creative Commons license. Commonly used on many websites, this type of license allows third parties to use photos posted by others, supposedly subject to specific limitations on the type of use allowed. It turns out these restrictions may have been ignored as groups combed the internet for photos to use in developing FRS, including photos that ultimately contributed to surveillance systems in the United States and other countries, such as China.

A Good Thing Gone Wrong? How AI and FRS Technology Concerns Arose Unintentionally

This undisclosed and unauthorized use of the facial images of millions (known as “biometric data”) is just one of many unintended and unanticipated consequences. Early on, it was normal practice for research groups to share data, including photos and data from social media and even dating websites; in fact, the practice was generally considered legal and was done in the interest of science, largely within academic communities.

For example, researchers at the University of Washington created an FRS program known as MegaFace and posted it to the internet. Although built using biometric data collected without the knowledge or consent of the subjects, MegaFace was created for an academic competition meant to encourage the development of FRS in an academic setting and was never intended for commercial use. Yet MegaFace has been downloaded more than six thousand times to date by companies and governments worldwide, including U.S. defense contractors, the investment arm of the CIA, ByteDance (the Chinese owner of TikTok), and the Chinese surveillance company Megvii. Only a few of the known downloads were actually related to the academic competition.

MegaFace has since been decommissioned, but copies may exist anywhere and likely are still being used for ongoing FRS and AI research and development. Some who downloaded MegaFace have since deployed facial recognition systems, such as Megvii, which was blacklisted by the U.S. Commerce Department last year after the Chinese government used Megvii technology to monitor the country’s Uighur population.

Even those groups looking to blow the whistle on the unauthorized use of biometric data and help members of the public determine whether their own data has been used without authorization must be very careful, lest their own technology be commandeered for a less noble purpose—thus limiting their ability to notify the public. 

Do AI and FRS Technologies Have Biases Baked In?

Beyond the obvious privacy concerns (which remain largely unknown to the public), as FRS and AI systems continue to evolve by detecting patterns in online data, researchers are recognizing that these programs can learn various biases present in society, including biases against women and minorities.

Last fall, Google unveiled a new AI technology called BERT that quickly revolutionized how scientists build systems that learn how people talk and write. BERT (now used in Google’s search engine) learns from decades’ worth of digitized information, such as old books, news articles, and even Wikipedia entries. Yet those proficient in AI are beginning to recognize underlying biases in AI systems like BERT, such as a greater tendency to associate computer programming with men than with women.

Recently, Dr. Robert Munro, a computer scientist with a Ph.D. in computational linguistics (formerly of Amazon Web Services), conducted his own examination of cloud-based computational services from Google and Amazon Web Services designed to help businesses add language skills to new applications. Dr. Munro observed that neither service recognized the word “her” as a pronoun, although each properly identified “his.” He has also observed that BERT was more likely to associate certain words—like jewelry, baby, horses, house, money, and action—with men than with women. Munro reported, “[t]his is the same historical inequity we have always seen”—only now, that inequity is perpetuated artificially. FRS gives rise to similar concerns. For example, in 2015, Google Photos was caught labeling photos of African Americans as “gorillas.”
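
For illustration only: the following is a minimal sketch, assuming the open-source Hugging Face transformers library and a publicly released BERT model (not the proprietary cloud services Dr. Munro examined), of how one can probe a language model for this kind of skew by asking it to fill in a masked pronoun before different occupations and comparing the scores it assigns. The example sentences are hypothetical.

```python
# Illustrative sketch: probing a public BERT model for gendered associations.
# Assumes the open-source Hugging Face "transformers" package is installed;
# the sentences below are hypothetical examples, not those used in the research cited above.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["[MASK] is a computer programmer.", "[MASK] is a nurse."]:
    print(sentence)
    # top_k=5 returns the five words the model considers most likely for [MASK]
    for prediction in fill(sentence, top_k=5):
        print(f"  {prediction['token_str']:>8}  score={prediction['score']:.3f}")
```

If the model consistently ranks “he” above “she” for some occupations and the reverse for others, that is the kind of learned association described above.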

Importantly, says Emily Bender, a University of Washington professor of computational linguistics, “[e]ven the people building these systems don’t understand how they are behaving.” Because researchers themselves are slow to understand these systems, end users worldwide are delayed in identifying and correcting even obvious biases.

Other AI and FRS Technology Concerns

While personal data privacy and potential bias are huge concerns, these are not the only downsides to AI and FRS, as developers struggle to identify and address other pressing concerns, including:

Unwarranted Mass Surveillance

Unwarranted mass surveillance has been prevalent in China, where it is supported by the government and has been used to create nationwide surveillance of the country’s own citizens, allowing the government to “tag” those considered a threat. More recently, the software has been incorporated into China’s national social credit system, intended to determine whether, how, and to what extent Chinese citizens are allowed access to loans and other financial services.

In the United States, allegations recently surfaced regarding the FBI’s use of its Next Generation Identification program. The FBI admitted adding to its database of 30 million mug shots by making side deals with states to obtain access to driver’s license photos and other identification, giving the FBI the ability to scan up to 411 million facial images. The FBI was collecting this information without appropriate oversight and without the knowledge of American citizens.

Unintentional Bias in Hiring Processes

FRS has more recently been used in the hiring process, which renders that process more subjective and at risk of unintended bias. Congress likewise has expressed significant concern over the 14% failure rate of the FBI database (discussed supra), with a disproportionate share of misidentifications involving Black Americans.

Fraud and Other Criminal Behavior

Researchers also warn against other negative consequences of AI and FRS, including fraud (when one’s biometric data is replicated), break-ins (when others can use the technology to tell that you are not at home), false identification (through both system error and fraud), and stalking.

What Can be Done to Lessen AI and FRS Technology Concerns?

Recognizing the myriad issues arising out of this largely unknown practice, some states have begun enacting laws against the unauthorized use of biometric data. Currently, only Washington, Texas, and Illinois have passed legislation expressly designed to regulate the collection and use of biometric data, and only Illinois allows for a private right of action. New York is currently considering its own legislation that, if passed, would join Illinois in allowing for a private right of action and the recovery of monetary damages. More states are likely to consider similar legislation in the coming years as the general public becomes aware of these prevalent practices and concerns.

At a minimum, individuals and industries cannot afford to be lax in evaluating and addressing privacy, bias, and other public safety concerns related to the use of AI and FRS. Indeed, Americans are likely to see the rise of an entirely new industry devoted to the oversight and correction of these issues in AI and FRS, particularly as American and international jurisprudence evolves to address these concerns. For now, it is clear that AI and FRS cannot be relied upon in isolation; they require a constant “human presence” to guard against unintended consequences.

Key Takeaways on AI and FRS Technology Concerns

As artificial intelligence and facial recognition software play a growing role in our daily lives, purportedly facilitating everyday tasks and enhancing security, there are concerns the public should be aware of, such as:

  • how private data has been used, and how it may be used in the future;

  • whether there are any applicable laws to protect users of AI and FRS technology;

  • unintended biases learned by the technology; and

  • the potential for fraud and other criminal behavior through use of the technology.

For more information on data privacy, see our Technology & Data and Industry Focused Legal Solutions pages.