
Stopping AI Bias Starts With Diverse Product Teams

Radhika Krishnan
Chief Product Officer, Hitachi Vantara

September 28, 2021


One of the most potent, but often unseen, hurdles to effective product design is under-representation, which can lead to inherent bias.

Consider the automated dispenser that doles out liquid cleanser for lighter-skinned hands but doesn't respond consistently to darker-skinned ones. Or how about the airbags that are now a staple of every modern vehicle? It turns out that even with that safety gear, women are 80% more likely than men to be injured in a crash.

Beyond the ethical debate over gender and racial bias, these and many other product design decisions carry serious practical ramifications. In the case of airbags and seat belts, according to a blog post from Expedia, "As recently as 2011, vehicle safety ratings were based solely on how they performed in crashes with male test dummies." Given that women, on average, differ from men in both height and weight, it's no wonder the standard design and test procedures produced dramatically flawed results.

The impact of bias and under-representation in product design carries over into the digital world, which leans heavily on artificial intelligence (AI). AI-enabled robots are augmenting human labor on the factory floor, chatbots are improving customer service, and recommendation engines are enriching the retail experience by suggesting what products to buy. While these use cases center on convenience and efficiency, what happens when AI is tasked with health care or college admissions decisions? In those instances, under-representation and bias can have lasting and consequential ramifications.

The use of AI in an experimental recruiting application is a telling example of what can go wrong. A design team built an AI program to review job applicants' resumes and automatically rank talent, streamlining the search. As it turned out, though, the program didn't operate in a gender-neutral way. The computer models had been trained on patterns in resumes submitted over a 10-year period, a time when the most successful candidates were men. As a result, the model penalized resumes that included the word "women's," as in "women's chess club captain," and downgraded graduates of all-women schools. In the end, the recruiting program was scrapped.
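To make the mechanism concrete, here is a minimal, synthetic sketch (in Python with scikit-learn; the resumes, labels, and output are invented for illustration and are not the actual system's code or data). A linear classifier trained on historical outcomes that skew male ends up assigning a negative weight to a gendered token:

```python
# Synthetic illustration only: train a linear resume classifier on
# "historical" hiring outcomes that skew male, then inspect what it learned.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# label 1 = hired in the past, 0 = rejected; the gendered token correlates
# with rejection purely because past hiring skewed male
resumes = [
    ("chess club captain, python developer", 1),
    ("led engineering team, java developer", 1),
    ("women's chess club captain, python developer", 0),
    ("women's coding society lead, java developer", 0),
] * 25  # repeated so the model has enough samples to fit

texts, labels = zip(*resumes)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# a negative weight means resumes containing "women" get penalized
idx = vectorizer.vocabulary_["women"]
print(f"learned weight for 'women': {model.coef_[0][idx]:+.2f}")
```

Nothing in the pipeline is malicious; the bias is inherited entirely from the training data, which is why it so easily goes unnoticed.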

There are reports of similar design failures related to algorithmic bias and under-representation in autonomous vehicle development. A study released by the Georgia Institute of Technology found that automated vehicles detect pedestrians with lighter skin more reliably. Using a sample grouped according to the Fitzpatrick skin-type scale, the researchers found the AI models were approximately five percentage points less accurate at detecting people with darker skin tones than their lighter-skinned counterparts.
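Detecting this kind of gap doesn't require anything exotic: once each prediction is tagged with a group, per-group accuracy can be compared directly. Here is a hedged sketch with invented toy data (the helper function and grouping are illustrative, not the study's code), bucketing by Fitzpatrick type as the researchers did:

```python
# Toy fairness audit: compare detection accuracy across skin-tone groups,
# here bucketed as Fitzpatrick types I-III vs. IV-VI as in the study.
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Return detection accuracy for each group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# every example contains a pedestrian; 1 = detected, 0 = missed
y_true = [1] * 8
y_pred = [1, 1, 1, 1, 1, 1, 0, 1]          # one miss, in the darker group
groups = ["I-III"] * 4 + ["IV-VI"] * 4
print(accuracy_by_group(y_true, y_pred, groups))
# {'I-III': 1.0, 'IV-VI': 0.75} -- a persistent gap is the warning sign
```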

Field A Diverse Team

In each of these cases, it's fair to suggest that a more representative design team would have been better equipped to catch these critical differences in the early design and test requirements. Given how closely representation correlates with innovation and product safety, well-timed diversity talking points and recruitment tactics are no substitute for making representation a pillar of business and design strategy.

Here are three recommendations for promoting diverse design thinking:

1. Recruit For Diversity

Turning that best practice into a mandate is a challenge, given the tech industry's well-documented diversity problem. Nearly every ethnic group, with the exception of Asian Americans, is under-represented in the sector. According to 2019 U.S. Census Bureau estimates, Black Americans make up 13% of the U.S. population but only 7% of tech industry employees, while Hispanics account for 18% of the total population and just 8% of the high-tech workforce. The problem is even more acute in AI, where only 22% of professionals worldwide are women.

Nevertheless, it's important to seek out and hire not only diverse candidates but also people with a proven track record of challenging the status quo and speaking up in group settings. Spotlighting diverse role models will, in turn, attract more candidates like them.

2. Enforce Ethical Guiding Principles As Part Of Product Design

Principles such as mutual respect and acceptance of different ideas, backgrounds, and working styles should be part of the product design mandate. Moreover, principles that guide the ethical use of AI help ensure the technology is applied appropriately to complex societal issues.

3. Strive For Transparency

Don't design black-box products; instead, let users understand how decisions are made. For example, my company created novel AI technology to predict the risk of hospital readmission within 30 days for patients with heart failure. The system identified the patients most likely to be readmitted, which was valuable on its own.

Built around a concept called explainable AI, the system also laid out the reasons behind each readmission risk, building trust and letting patients make changes that would decrease the likelihood of rehospitalization.
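As a rough illustration of the idea (a generic sketch, not our actual system; the features, data, and attribution scheme are invented for the example), a linear risk model can report signed per-feature contributions alongside each score:

```python
# Generic explainable-risk sketch: a linear model whose prediction can be
# decomposed into per-feature contributions (coefficient * feature value).
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["prior_admissions", "ejection_fraction", "missed_medications"]
# invented training data: each row is a patient, 1 = readmitted in 30 days
X = np.array([[3, 25, 4], [0, 55, 0], [2, 30, 3], [1, 50, 1],
              [4, 20, 5], [0, 60, 0], [3, 28, 2], [1, 45, 0]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])
model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(patient):
    """Return the risk score plus each feature's contribution to it."""
    risk = model.predict_proba([patient])[0, 1]
    contributions = dict(zip(FEATURES, model.coef_[0] * np.asarray(patient)))
    return risk, contributions

risk, reasons = explain([3, 26, 4])
print(f"30-day readmission risk: {risk:.0%}")
for name, value in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")   # signed 'reason codes' for the score
```

The point is the output format, not the model: every score ships with the factors that drove it, which is what gives patients something actionable.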

In a separate example, a Japanese bank leveraged explainable AI to build accurate risk models, and it used the same transparency to show loan applicants what steps they could take to improve their credit scores.

Clearly, the lack of diversity among those creating and training AI models has resulted in technology that works only for some people. But with a focus on diverse design thinking, we can create innovative products that work for everyone.

This article originally appeared in Forbes.


Radhika Krishnan

As Hitachi Vantara's global head of products, Radhika leads vision, strategy, delivery, and business performance across data storage, data operations and analytics, and IoT. Prior to Hitachi Vantara, she was GM, Software at 3D Systems and held senior roles at HP, Cisco, and NetApp.