Failed Scoring: Facial Recognition, Potential for False Positives, & Importance of Design

Last week I wrote a piece on the practice of derived inference, aka scoring: using computer algorithms to infer insights from data.

A key undercurrent of that piece was the issue of false positives. A false positive occurs when a derived result is treated as accurate (a "positive") when it is in fact inaccurate, wrong, or biased in some way.

False positives can happen for many reasons: the initial data fed to an algorithm may be inaccurate or otherwise compromised; the confidence tolerances for the analysis may be improperly set, misunderstood, or ignored; or any number of other technical, policy, or human factors may intervene.

Policing with Facial Recognition

Today I read an article in Business Insider, “A US police force is running suspect sketches through Amazon’s facial recognition tech and it could lead to wrongful arrests,” which describes how police are testing Amazon Rekognition to identify suspects from hand-drawn sketches.

Amazon Rekognition is Amazon’s intelligent image and video analysis service. Organizations use it to add Amazon’s AI-powered image and facial recognition capabilities to their business practices.

The Importance of User Design and Proper System Use

This article is a great example of the importance of proper user experience and design, of solution training, and of the potential for technology to be misused.

Proper user experience, design, and training

The article points out that Amazon recommends a 99% confidence tolerance for this use case, i.e. where drawn pictures are run through facial recognition in a policing exercise. What this means is that the application should only return a positive result, aka a potential facial recognition match, if it is at least 99% confident that the returned image actually matches.

The article points out that the officers interviewed 1) don’t set a confidence tolerance, and 2) use a law enforcement application that returns the top five closest matches without displaying a confidence score anyway. In other words, we have a failure in user experience, design, and training that may lead to false positives, which in this situation could have extremely adverse consequences for all parties involved.
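To make the contrast concrete, here is a minimal Python sketch of the two behaviors described above. The candidate names and similarity scores are entirely made up for illustration; this is not Amazon's API or the police application's code, just the filtering logic the recommended threshold implies.

```python
# Amazon's recommended confidence tolerance for policing use cases.
RECOMMENDED_THRESHOLD = 99.0

def filter_matches(candidates, threshold=RECOMMENDED_THRESHOLD):
    """Recommended behavior: return only candidates at or above the threshold."""
    return [c for c in candidates if c["similarity"] >= threshold]

def top_five_regardless(candidates):
    """Behavior the article describes: top five matches, no threshold applied."""
    return sorted(candidates, key=lambda c: c["similarity"], reverse=True)[:5]

# Hypothetical results from a facial recognition search.
candidates = [
    {"name": "person_a", "similarity": 99.4},
    {"name": "person_b", "similarity": 87.2},
    {"name": "person_c", "similarity": 72.5},
    {"name": "person_d", "similarity": 65.1},
    {"name": "person_e", "similarity": 51.0},
    {"name": "person_f", "similarity": 43.8},
]

# With the 99% threshold, only the one strong match survives.
print(filter_matches(candidates))
# Without it, five weak matches are presented as if they were viable leads.
print(top_five_regardless(candidates))
```

The difference is stark: the thresholded search returns a single high-confidence match, while the top-five design surfaces four candidates the system itself considers unlikely, each a potential false positive.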

Key Lesson

Technology is a cornerstone of modern society, but it is important that we learn to deploy it and use it appropriately and wisely.

Image by teguhjati pras from Pixabay
