Belgium’s federal police have formally denied using highly controversial facial recognition software, despite leaked data from the software’s maker suggesting otherwise.
The information comes from an investigation by the online magazine BuzzFeed, which broke the story based on a list of log-in details stolen by hackers in 2020 and later admitted by the software’s owner, Clearview AI, to be genuine.
It now emerges that the software was demonstrated at a conference of Europol, the EU’s law enforcement agency, in 2019, and interested forces were given demonstration copies to test for themselves.
According to the BuzzFeed investigation, 88 police forces in 24 countries outside the US took up the offer, among them the Belgian Federal Police (which is a unitary force). According to BuzzFeed, the Belgian force used the tool between 100 and 500 times. When contacted by the magazine, the police declined to comment.
Now, according to Data News, the police have formally denied using the software, while saying that the various services within the force are being asked to comment on the question. In any case, there are no current plans to bring the tool formally into use.
The problem for any force in Europe thinking of doing so is that the software is generally considered to breach the GDPR, the data protection regulation we encounter many times a day when agreeing to the use of cookies on a website.
Clearview, in fact, effectively hoovers up photos from websites like Facebook, Twitter and Instagram, without the permission of the people who are pictured or those who uploaded them. The photos then form Clearview’s database of facial images, which the company boasts now numbers three billion images.
Those photos, privacy authorities are likely to conclude, were obtained illegally and may not be used by police forces in the EU.