March 1, 2021
First gaining widespread recognition in early 2020, Clearview AI offers law enforcement agencies around the world technology that can match images of faces against a database of more than three billion images the company scraped from social media and other online platforms. Though touted as potentially revolutionizing law enforcement, the company's activities have sparked privacy concerns from numerous NGOs and other civil society groups. These concerns were amplified after data breaches at the company revealed that its customer lists included private companies as well.
The tumult of 2020 meant that reporting on privacy concerns surrounding Clearview AI's facial recognition software largely faded from view. However, as TechCrunch reports, the Canadian Privacy Commissioner recently released a statement finding that the company had violated federal and provincial privacy laws.
The strong statement follows the Privacy Commissioner's calls to modernize Canada's privacy laws and the introduction, in late 2020, of a bill aimed at doing just that.
The Commissioner recommended that Clearview AI stop offering its facial recognition services to Canadian clients, stop collecting images of Canadian individuals, and delete all previously collected images and biometric facial arrays of individuals in Canada. While the company discontinued services to its only remaining Canadian subscriber (the RCMP), it has not committed to following the other two recommendations.
Strong statements from the Privacy Commissioner, growing public concern about privacy, and moves by other big tech companies to protect user data mean that companies like Clearview AI may need to pivot. At the very least, this may involve developing strategies to improve their public image; at most, it could mean changing their business model. In October 2020, the Wall Street Journal reported that Clearview was adding new compliance features aimed at ensuring the ethical use of its technology by law enforcement.
Author: Emma Baumann