Failed Scoring: Facial Recognition, Potential for False Positives, & Importance of Design

Last week I wrote a piece on the practice of derived inference, aka scoring: the use of computer algorithms to infer insights from data.

A key undercurrent of that piece was the issue of false positives. A false positive occurs when a derived result is reported as a match, i.e. a "positive," but is in fact inaccurate, wrong, or biased in some way.

False positives can happen for many reasons: the data fed to an algorithm may be inaccurate or otherwise compromised; the confidence tolerances for the analysis may be improperly set, misunderstood, or ignored; or any number of other technical, policy, or human factors may intervene.

Policing with Facial Recognition

Today I read an article in Business Insider, "A US police force is running suspect sketches through Amazon's facial recognition tech and it could lead to wrongful arrests," which reports that police are testing Amazon Rekognition to identify suspects from hand-drawn sketches.

Amazon Rekognition is Amazon's intelligent image and video analysis service. Organizations use it to add Amazon's AI-powered image and facial recognition capabilities to their business practices.

The Importance of User Design and Proper System Use

This article is a great illustration of why user experience, design, and solution training matter, and of how easily technology can be misused.

Proper user experience, design, and training

The article points out that, for this use case (running drawn pictures through facial recognition in a policing context), Amazon recommends a confidence tolerance of 99%. In other words, the application should return a positive result, i.e. a potential facial match, only if it is at least 99% confident that the images returned by the search actually match.

The article also points out that 1) the officers interviewed don't set a confidence tolerance at all, and 2) the law enforcement application simply returns the top five closest matches and does not display a confidence score anyway. In other words, we have a failure of user experience, design, and training that may lead to false positives, which in this situation could have extremely adverse consequences for everyone involved.
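The gap between the two behaviors described above can be sketched in a few lines of code. This is a minimal illustration, not Amazon's actual API: the match data and function names are hypothetical, and real Rekognition calls (e.g. `SearchFacesByImage`) accept a threshold parameter for exactly this purpose.

```python
# Hypothetical search results: (candidate_id, confidence percentage).
# In a real system these would come from the recognition service.
matches = [
    ("candidate_a", 71.2),
    ("candidate_b", 64.8),
    ("candidate_c", 58.3),
    ("candidate_d", 55.1),
    ("candidate_e", 52.7),
    ("candidate_f", 40.0),
]

def top_n(results, n=5):
    """What the article says the tool does: show the N closest
    matches, with no minimum confidence required."""
    return sorted(results, key=lambda m: m[1], reverse=True)[:n]

def above_threshold(results, threshold=99.0):
    """What Amazon recommends for this use case: report only
    matches at or above the confidence threshold."""
    return [m for m in results if m[1] >= threshold]

print(top_n(matches))           # five weak matches are still presented
print(above_threshold(matches)) # empty -- nothing is confident enough
```

The first function always hands an officer five names, even when no candidate clears the recommended bar; the second returns nothing here, which is the correct answer when confidence is this low.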

Key Lesson

Technology is a cornerstone of modern society, but it is important that we learn to deploy it and use it appropriately and wisely.



Michael Becker is an internationally recognized identity & personal information management solutions strategic advisor, speaker, entrepreneur, and academic. He advises companies on personal information economy business strategy, product development, business development, and sales & marketing strategies. He also represents them at leading trade groups, including the Mobile Ecosystem Forum. Michael is an advisor to Assurant, Predii, Privowny, and Phoji. He is the co-author of Mobile Marketing for Dummies and a number of other books and articles related to mobile marketing, identity, and personal information management. He is on the faculty of marketing of the Association of National Advertisers and National University. A serial entrepreneur, Michael founded Identity Praxis, co-founded mCordis and The Connected Marketer Institute, was a founding member of the Mobile Marketing Association (MMA), served on the MMA board of directors for ten years, and was MMA's North American Managing Director for three years. In 2004, Michael co-founded iLoop Mobile, a leading messaging solutions provider. In 2014, Michael was awarded the Marketing EDGE Edward Mayer Education Leadership Award for his commitment to marketing education.
