
Preventing Bias in AI Retail Experiences

From Amazon Go to Sephora’s Virtual Artist app, some of the biggest advancements in computer vision are taking shape in the retail sector, and many of them are revolutionizing how consumers shop. As with every major shift, they bring both exciting new experiences and potential pitfalls.

But what is perhaps the most significant danger to be mindful of in a world where cameras manage in-store security and sensors map and categorize the human body? The negative consequences of bias in AI systems.

Purpose-driven systems

Think about it: two people walk into a cashierless grocery store, and both attempt to swipe a bottle of water from the counter. One of them gets away with it; the other does not. Why the variance? It could be for the simple fact that the person who got caught was wearing a hoodie. In this scenario, the cameras used to monitor the physical space have absorbed the uniquely human assumption that hoodies must be associated with thieves. The system is inherently flawed because engineers trained its computer vision models on historical human data in which bias is present, and it will continue to project that bias until it is retrained on clean datasets.

To prevent bias, it is imperative that the data driving AI systems be purpose-driven, that is, data generated for the sole purpose of the task at hand. In the scenario above, if the data had been generated specifically to track products rather than people, this security flaw would have been avoided altogether.
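
To make the idea concrete, a purpose-driven annotation schema for a cashierless store could describe only products and shelf interactions, with no fields that capture a shopper’s appearance. The sketch below is a minimal, hypothetical illustration in Python; the ShelfEvent structure and its field names are assumptions, not a description of any real system.

```python
# Minimal sketch of a purpose-driven annotation schema for a cashierless store:
# events describe products and shelf interactions only, with no fields that
# record a shopper's appearance. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ShelfEvent:
    event_id: str
    sku: str            # which product was picked up or returned
    shelf_id: str       # where the interaction happened
    action: str         # "pick" or "return"
    timestamp_ms: int
    session_token: str  # anonymous basket/session reference, not an identity

# Example event: a bottle of water picked from a shelf, tied to an anonymous session.
event = ShelfEvent("evt-001", "WATER-500ML", "shelf-12", "pick", 1_700_000_000_000, "sess-8f3a")
print(event)
```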

Diversity of data

Another disturbing form of bias AI systems can adopt from bad datasets is racial bias. Consider an autonomous application that relies on computer vision to help people sample and buy makeup. Now imagine the system’s creators have trained it overwhelmingly on images of light-skinned faces. When the system encounters darker complexions, how well do you think it will perform? Will its recommendations be as strong? Likely not.
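
One common safeguard against this failure mode is disaggregated evaluation: measuring the model’s performance separately for each skin-tone group instead of relying on a single aggregate accuracy figure. The Python sketch below is a minimal illustration with toy data; the column names (skin_tone, label, prediction) are assumptions, not part of any particular product.

```python
# Minimal sketch: disaggregated evaluation of a shade-recommendation model,
# reporting accuracy per skin-tone group rather than one overall number.
# Column names and data are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "skin_tone":  ["light", "light", "medium", "medium", "dark", "dark"],
    "label":      ["shade_a", "shade_b", "shade_c", "shade_a", "shade_d", "shade_b"],
    "prediction": ["shade_a", "shade_b", "shade_c", "shade_b", "shade_a", "shade_b"],
})

results["correct"] = results["label"] == results["prediction"]

# Large gaps between groups signal the kind of bias described above,
# even when overall accuracy looks acceptable.
per_group = results.groupby("skin_tone")["correct"].mean()
overall = results["correct"].mean()

print(per_group)
print(f"overall accuracy: {overall:.2f}")
```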

Even more alarming, now imagine the system is designed not for makeup recommendations but to autonomously alert human security personnel to shoppers it believes may be likely to shoplift. What if that system were trained on biased data that flags people of color as posing a greater threat to store security, leading store employees to inappropriately monitor those shoppers more closely? AI systems trained on bad data can have serious real-world consequences for innocent people.

This is why, in addition to making AI systems purpose-driven, it’s important to train them on diverse datasets that are representative of the population as a whole—not representative of the biases humans may have about what shoplifters look like.
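
In practice, that can start with a simple audit: compare the training set’s group composition against a reference population distribution, then resample under-represented groups toward balance. The sketch below assumes a hypothetical group annotation and target shares; it illustrates the idea rather than prescribing any specific pipeline.

```python
# Minimal sketch: audit a training set's group composition against target
# population shares and oversample under-represented groups toward balance.
# The "group" field and the target shares are hypothetical.
import pandas as pd

train = pd.DataFrame({
    "image_id": range(10),
    "group": ["a"] * 7 + ["b"] * 2 + ["c"] * 1,  # heavily skewed toward group "a"
})

target_share = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}  # e.g., population-level shares

print("observed composition:")
print(train["group"].value_counts(normalize=True))

# Oversample each group so its share of the dataset approaches the target.
n_total = len(train)
balanced_parts = []
for group, share in target_share.items():
    subset = train[train["group"] == group]
    n_needed = max(int(round(share * n_total)), 1)
    balanced_parts.append(subset.sample(n=n_needed, replace=True, random_state=0))

balanced = pd.concat(balanced_parts, ignore_index=True)
print("rebalanced composition:")
print(balanced["group"].value_counts(normalize=True))
```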

The same case for diversity of data can also be made for gender. In a world where gender and gender identity are increasingly fluid, retailers must be mindful of autonomous systems trained on biased historical data about what it means to present as a particular gender. Computer vision applications that detect shoppers, assess their physical appearance, and recommend clothing and accessories are especially vulnerable to projecting gender bias and making false assumptions about the outfits people may want to wear based on how the system perceives their gender. Training such a system on a massively diverse set of images of people who are gender binary and gender nonconforming can help prevent bias and avoid bad experiences for consumers.

Photo by Victor Xok on Unsplash