In early June 2020, tech giant IBM announced that it would stop offering facial recognition software until further anti-bias testing could be conducted.
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, [or] violations of basic human rights and freedoms,” wrote IBM chief executive Arvind Krishna in a letter to Congress.
A few days later, Amazon and Microsoft followed suit, citing similar concerns about bias and vowing to reexamine the use of their facial recognition software by law enforcement.
These announcements came in the midst of anti-racist protests that sparked a global dialogue about police use of force and racial discrimination by law enforcement and the criminal justice system. The protests also set off a wave of concern about the ways bias and discrimination creep into facets of everyday life and disproportionately impact marginalized people.
In light of these events and the public response to them, many companies have publicly committed to fighting discrimination by reexamining their hiring practices, donating to organizations that aid communities of color, and promoting diversity, inclusivity, and open dialogue internally. But IBM has taken it a step further by examining the ways its technology could lead to harmful, even life-threatening, oversights for lack of thorough testing.
Is Technology Biased?
In theory, no — technology itself is a blank slate, incapable of bias or discrimination.
But in practice, tech inherits the biases of its creators, whether or not they intend it. Without proactive methods to protect against this, its application in the real world can alienate customers and exacerbate existing challenges.
Facial recognition software's failure to reliably distinguish between people with non-European facial features is not the only problem area. Similar issues have been documented in other technology from even the best-resourced companies; most famously, Amazon scrapped its automated recruiting engine in 2018 after it showed bias against female candidates.
At the end of the day, surfacing and resolving these biases is a quality issue of critical importance. As the audience for sophisticated technology grows ever more mainstream, your product will likely land in the hands and homes of customers far more diverse than your product development team.
Why risk negative reviews, a falling star rating, and your company’s reputation on something that is completely avoidable? Fortunately, there is a tangible way to make your product work for people of different ages, genders, races, and abilities: test it with a real and diverse group of your customers.
Combating Bias with Real-World Product Testing
With a highly stressed population emerging into a post-pandemic landscape, there’s never been a more important time to make sure your product gets it right. For one, stress and uncertainty make people less tolerant of minor setbacks. For another, consumers have more time — to research options, to read reviews, and to leave their own detailed reviews of their experiences. Finally, the economic toll of COVID-19 raises the stakes for releasing products. Many companies literally can’t afford a flop.
When a segment of diverse needs is overlooked, it creates usability and satisfaction gaps that compromise an otherwise great product. By intentionally diversifying your tester pool to reflect more of your target market, you open the doors to uncovering those deep-seated issues — and create opportunities for product evolution.
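To make "uncovering those deep-seated issues" concrete, one common starting point in quality analysis is to disaggregate test results by demographic group and compare error rates across groups. The sketch below is illustrative only: the group labels, toy data, and 10-point disparity tolerance are assumptions for the example, not figures from this article.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Compute the per-group error rate from (group, passed) test records."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, passed in results:
        totals[group] += 1
        if not passed:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates, tolerance=0.10):
    """Flag groups whose error rate exceeds the best-performing
    group's rate by more than the given tolerance."""
    best = min(rates.values())
    return [g for g, r in rates.items() if r - best > tolerance]

# Hypothetical tester results: (demographic group, did the test pass?)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = error_rates_by_group(results)
print(rates)                    # group_a: 0.25, group_b: 0.75
print(flag_disparities(rates))  # ['group_b']
```

A gap like the one flagged here is exactly the kind of signal a homogeneous tester pool would never produce, since every record would fall into a single group.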
In addition to providing firsthand data on product performance across a much wider range of devices and use cases, real-world testing highlights usability challenges that are difficult to anticipate and replicate in a lab. Deciphering different accents, for instance, is an all-too-common challenge in the voice technology industry. Real-world product testing lets you surface and prioritize those issues, not only by introducing a variety of accents but also by capturing in-depth feedback on how usability changes with the environment. Think of moving from a car into a house where the A/C is running. Situations like that can be studied and corrected before launch.
While product makers must actively challenge systemic practices to keep bias from taking root in new technologies in the first place, real-world testing goes a long way toward protecting your company from catastrophic oversights right now. More importantly, it protects your customers from frustrating experiences that diminish loyalty and shake the foundations of trust.
Get Started with Our Tester Network
For many product testing teams, recruiting testers who reflect a diverse target market is a barrier to collecting the product insights they need to succeed. If you are struggling to build a tester team that reflects the diversity of your target audience, consider running your next recruitment with our network of 250,000 pre-profiled users.