AI bias is personal for me. It should be for you, too.

Mitra Best

Technology Impact Leader, PwC US

I was first drawn to computer science because of the promise of artificial intelligence (AI). I thought it would change the world by offering neutral, precise, scientific and objective outcomes, free from the inaccuracies and prejudices of humans.

Precision aligned with my personality and workstyle, but the idea of operating in a universe free of prejudice was even more exciting. As a young computer scientist, I was hyperaware of the potential for bias creep. I was often the only woman in the room. I experienced firsthand not being heard, counted or included. I hoped and believed that a mathematical approach to reasoning would neutralize the effect of people’s unconscious biases and hear, count and include all perspectives equally. By eliminating bias, AI would help reduce discrimination and inequity.

It hasn’t worked out that way. When it comes to bias, AI hasn’t lived up to its potential.

AI: when good technology goes bad

Across industries and geographies, I have seen many examples of AI gone bad. Studies have found mortgage algorithms charging Black and Latinx borrowers higher interest rates, and egregious cases of recruiting algorithms exacerbating bias against hiring women. A series of studies of facial recognition software found that most products misidentified darker-skinned women 37% more often than people with lighter skin tones. A widely used application for predicting clinical risk has led to inconsistent referrals to specialists by race, perpetuating racial bias in healthcare. Natural language processing (NLP) models built to detect undesirable language online have erroneously censored comments mentioning disabilities, depriving people with disabilities of the opportunity to participate equally in discourse.

The list goes on — and attention to such problems is rightfully growing. Many leaders are now more aware of hidden, unfair processes in their systems. They realize that bias can cost their companies in brand and reputation damage, lost revenue, undermined employee recruitment and engagement, and regulatory fines. They want to reduce these risks and try to make their AI a force for good.

AI bias: how the problem begins

How can mathematical models or algorithms be biased? The answer is people, whose biases and assumptions can creep into algorithms through three main avenues: in decisions about what data to use to train models to find patterns and insights, in how the model or algorithm is structured, and in how the output is visualized.

Consider hiring software, where AI-enabled decision making is so common that 75% of resumes are never reviewed by human eyes. If, for example, an AI model for hiring software engineers is trained purely on historical data — and historically, most candidates were men — the model may assign precedence to male applicants and automatically reject qualified women. In this case, even an accurate dataset may inadvertently perpetuate historical bias.
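To make the data avenue concrete, here is a minimal sketch with invented numbers (this is illustrative, not any real hiring system) of how the four-fifths rule, a common rule-of-thumb check for disparate impact, would flag a model trained to imitate skewed historical hiring decisions:

```python
# Hypothetical illustration only: invented numbers, not real hiring data.
# A model that learns to imitate skewed historical decisions inherits the skew.

history = (
    [("M", 1)] * 80 + [("M", 0)] * 20 +   # 80% of male applicants were hired
    [("F", 1)] * 10 + [("F", 0)] * 40     # 20% of female applicants were hired
)

def selection_rate(records, group):
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_m = selection_rate(history, "M")   # 0.80
rate_f = selection_rate(history, "F")   # 0.20

# Four-fifths rule: a selection-rate ratio below 0.8 is a common red flag
# for disparate impact. A model mimicking these labels would reproduce
# roughly the same ratio on new applicants.
impact_ratio = rate_f / rate_m
print(f"men: {rate_m:.2f}, women: {rate_f:.2f}, ratio: {impact_ratio:.2f}")
print("FLAG: possible disparate impact" if impact_ratio < 0.8 else "ok")
```

Here the ratio is 0.25, far below the 0.8 threshold, even though the dataset accurately records what happened historically. Accuracy about the past is not fairness in the future.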

Humans write the algorithms to make certain choices: what insights to value, what conclusions to draw and what actions to take. Since the AI research community suffers from a dearth of diversity, the biases of the majority, who tend to share certain dominant perspectives, assumptions and stereotypes, can seep into AI models, inadvertently discriminating against certain groups.

The decisions humans make about how to present insights from AI models can also bias users’ interpretation. Consider how internet search engines powered by AI models display top-ranked results, implying that they are more important or even more likely to be true. Users, in turn, can misinterpret the ranked results as the “best” results, and lower-ranked results can sink further in priority with each non-click, even when they are more accurate. These biases can shape our understanding of truth and facts. Through all of these routes, bias entering AI models can cause automated decision applications to become systematically unfair to certain groups of people.
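A toy simulation, with made-up numbers, shows how such a feedback loop can compound: when a ranker feeds clicks back into its ordering and users favor the top position, a trivial initial advantage becomes self-reinforcing.

```python
# Toy simulation of a ranking feedback loop; all numbers are invented.
import random

random.seed(0)
clicks = {"result_a": 11, "result_b": 10}   # result_a starts one click ahead

for _ in range(1000):
    # Re-rank by accumulated clicks, then model strong position bias:
    # users click the top result ~90% of the time regardless of relevance.
    ranked = sorted(clicks, key=clicks.get, reverse=True)
    chosen = ranked[0] if random.random() < 0.9 else ranked[1]
    clicks[chosen] += 1

print(clicks)   # the one-click head start snowballs into near-total dominance
```

Nothing in this loop measures which result is actually better; the ranking rewards itself.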

AI bias and your bottom line

Failing to address the risk of AI bias not only amplifies societal inequities, it could also cost your company dearly with regulators, consumers, employees and investors.

Lawmakers and regulators are sharpening their focus on AI bias. The Federal Trade Commission, for example, recently warned companies that they may face enforcement actions and penalties if their AI reflects racial or gender bias. Studies show that consumers prefer to buy from companies that reflect their values — and some of the world’s most iconic companies have built campaigns around societal justice. Many employees prefer to work at companies that visibly uphold standards of equity and inclusion. Even the capital markets are paying attention. Environmental, social and governance (ESG) funds, which typically include diversity and inclusion benchmarks among their requirements, captured more than $51 billion in new money last year.

The remedies for AI bias

Many of the companies I work with encounter a common problem as they seek to reduce AI bias: AI models are incredibly complicated and designed to evolve — making them not only a difficult target, but a moving one. They are also proprietary, containing information that can’t be revealed, which further increases the challenge of identifying and mitigating potential bias.

These companies needed a solution, and finding one soon became a passion for me. With the brainpower of our incredible teams, we found a way to bypass the main challenges: instead of evaluating the code, we opted to scrutinize the outcomes with Bias Analyzer, a cloud-based application.

Bias Analyzer scans the output of an AI model, assessing it against tested bias metrics and flagging “out of range” results that may indicate bias.

Users can then simulate the impact of different mitigation strategies — quantifying expected results and comparing them against fairness metrics — before they go in and tweak their models.

Bias Analyzer helps companies proactively identify, monitor and mitigate potential bias risks in automated decision-making systems. It allows them to self-regulate and continuously improve fairness in their AI models without slowing down business imperatives. 
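I can’t share Bias Analyzer’s internals here, but a minimal sketch of outcome-level scanning in the same spirit might look like the following. The metrics, group labels and tolerance are illustrative assumptions, not the product’s actual logic:

```python
# Sketch of outcome-level bias scanning; NOT Bias Analyzer's implementation.
# Metric choices, group labels and the tolerance are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Outcome:
    group: str       # protected attribute value, e.g. "A" or "B"
    predicted: int   # model decision (1 = positive outcome, e.g. approve)
    actual: int      # ground-truth label, where available

def rate(outcomes, group, key):
    rows = [key(o) for o in outcomes if o.group == group]
    return sum(rows) / len(rows) if rows else 0.0

def scan(outcomes, groups=("A", "B"), tolerance=0.1):
    """Flag metrics whose between-group gap exceeds the tolerance."""
    flags = {}
    # Demographic parity: do groups receive positive decisions at similar rates?
    dp_gap = abs(rate(outcomes, groups[0], lambda o: o.predicted)
                 - rate(outcomes, groups[1], lambda o: o.predicted))
    flags["demographic_parity_gap"] = (dp_gap, dp_gap > tolerance)
    # Equal opportunity: among truly qualified people, are rates similar?
    qualified = [o for o in outcomes if o.actual == 1]
    eo_gap = abs(rate(qualified, groups[0], lambda o: o.predicted)
                 - rate(qualified, groups[1], lambda o: o.predicted))
    flags["equal_opportunity_gap"] = (eo_gap, eo_gap > tolerance)
    return flags

# Scan a batch of model decisions (numbers invented for the example).
batch = ([Outcome("A", 1, 1)] * 50 + [Outcome("A", 1, 0)] * 10 +
         [Outcome("A", 0, 1)] * 10 +
         [Outcome("B", 1, 1)] * 25 + [Outcome("B", 1, 0)] * 5 +
         [Outcome("B", 0, 1)] * 40)
for metric, (gap, flagged) in scan(batch).items():
    print(f"{metric}: {gap:.2f} {'FLAG' if flagged else 'ok'}")
```

In this spirit, simulating a mitigation strategy would mean re-running the scan on the outcomes a candidate fix would produce (say, a recalibrated decision threshold) and comparing the gaps before touching the production model.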

I’m passionate about innovations that technology can provide. I’m also passionate about innovations that creative minds can dream up. Just as we hold each other accountable for our actions, in the digital world we need to hold algorithms accountable for the outcomes they produce. 

With solutions like Bias Analyzer, and by ensuring diversity in our teams, we will soon have a better chance of realizing the promise of AI: data-driven, objective and equitable outcomes.

Get help mitigating bias in your AI models

Bias Analyzer, A PwC Product


Unlock the full potential of analytics and artificial intelligence

PwC’s Analytics & AI Transformation Solution

 
