Humans are predisposed to make questionable decisions. We have biases, are driven by emotional responses, have poor predictive capabilities and struggle to factor in more than a few data points at once. That gives AI a decision-making edge over humans, one with the potential not only to make us smarter but also to keep us impartial. The problem is that AI learns from data generated by human actions (with the assistance of explicit domain knowledge) and therefore mimics our biases. That leads to algorithms that reflect our racism, sexism and other prejudices. Many organizations are focused on finding and fighting human bias. This work is necessary, and I applaud such valuable efforts.


One way of reducing bias in your AI is to hire inclusively. Build AI teams that achieve diversity across race, gender, sexual orientation, age, economic background and more. I would argue that if you do this, you’ll make better AI, because your teams will be better at spotting bias, different backgrounds will drive more creative thinking, and more diverse teams will improve your ability to scale your solutions across the enterprise.

Diverse teams are better at avoiding bias in programming because they’re better at anticipating potential problems. For example, healthcare providers recently used an algorithm created for major insurance companies to steer high-need patients toward programs that would provide them with extra care. The problem was that the algorithm based its care decisions on how often patients had sought medical care in the past, which favored patients with better insurance, more time and more money. This favored group of patients was disproportionately white. The algorithm failed to recommend extra care for patients who were sicker but had less time and money, and that included a significant cohort of Black patients. The result was an accidental racial sorting in which white patients were steered toward better care than Black patients. We don’t know which firm created this technology, but I would guess that it wasn’t built by a diverse team. Black team members and those from less affluent backgrounds might have spotted the potential for this kind of bias before the algorithm was released.
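The proxy problem in that story can be sketched in a few lines. The following is a hypothetical simulation, not the actual insurance algorithm: both groups have the same distribution of true medical need, but one group’s past utilization (the proxy the model ranks on) is suppressed by poorer access to care, so ranking by the proxy quietly excludes them.

```python
import random

random.seed(0)

# Hypothetical simulation: true medical need is identically distributed
# across groups, but observed past visits also depend on access to care.
def simulate(group_access, n=1000):
    patients = []
    for _ in range(n):
        need = random.gauss(5, 1)                            # true severity
        visits = need * group_access + random.gauss(0, 0.5)  # observed proxy
        patients.append((visits, need))
    return patients

favored = simulate(group_access=1.0)      # better insurance, more time/money
underserved = simulate(group_access=0.6)  # same need, less access

# Rank everyone by the proxy and select the top 10% for extra care.
combined = [(v, n, "favored") for v, n in favored] + \
           [(v, n, "underserved") for v, n in underserved]
combined.sort(reverse=True)
top = combined[:200]

share = sum(1 for _, _, g in top if g == "underserved") / len(top)
print(f"underserved share of extra-care slots: {share:.0%}")
```

Even though both groups are equally sick by construction, almost every extra-care slot goes to the favored group, because the ranking signal measures access rather than need.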


Diversity in thought also drives creative thinking. In 2006, Netflix announced a competition with a grand prize of $1 million for the team that could create a successful prototype of its now-famous recommendation engine. The winning team was BellKor’s Pragmatic Chaos. How did they do it? First, they competed against 20,000 other teams from 150 countries through multiple rounds. Then, during the later phases of the contest, teams began to merge. The winning team was a combination of three teams: BellKor, Pragmatic Theory and BigChaos. Accuracy, the ability to combine models and the ability to generate insights from human behavior were all critical to the winning prototype, and each team brought one of these strengths. This illustrates how diversity of thought drives superior results. Team members with diverse backgrounds bring diverse thought to projects, which will drive superior AI.
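The model-combination piece of that story is easy to demonstrate in miniature. The sketch below is a toy illustration, not the Prize-winning blend: three simulated models make different kinds of errors, and a simple average beats each individual model because their biases partly cancel and their noise averages out.

```python
import random

random.seed(1)

# Toy example of model blending: three models with different,
# partly independent error patterns, combined by simple averaging.
true_ratings = [random.uniform(1, 5) for _ in range(1000)]

def predict(true_vals, bias, noise):
    return [t + bias + random.gauss(0, noise) for t in true_vals]

model_a = predict(true_ratings, bias=0.3, noise=0.5)   # precise but biased high
model_b = predict(true_ratings, bias=-0.3, noise=0.5)  # precise but biased low
model_c = predict(true_ratings, bias=0.0, noise=0.8)   # unbiased but noisy

def rmse(preds):
    return (sum((p - t) ** 2 for p, t in zip(preds, true_ratings))
            / len(true_ratings)) ** 0.5

# Blend by averaging the three predictions for each item.
blend = [(a + b + c) / 3 for a, b, c in zip(model_a, model_b, model_c)]

for name, preds in [("A", model_a), ("B", model_b),
                    ("C", model_c), ("blend", blend)]:
    print(f"{name}: RMSE {rmse(preds):.3f}")
```

No single model has to be best at everything; the blend wins because the models are wrong in different ways, which is exactly the value a team of people with different strengths brings.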


Finally, deploying and scaling AI requires more than just technical know-how. The ability to design intuitive interfaces, collaborate with users and leaders, communicate changes and empathize with users is critical to successfully scaling AI. Having empathy for the end user is key to elements like training, conveying the potential for positive impact and more. Collaboration and communication across functional groups (IT, data science, business) are also critical to ensure adoption of AI solutions. Studies suggest that women may perform better than men in these skills. Having team members with emotional intelligence, whether they are women, men or nonbinary, is the key here. At the very least, the team that develops the AI should resemble the people who adopt it. A predominantly white or male team may have a harder time connecting with a broader organization that isn’t as homogeneous.


We can and must do more to enhance team diversity in the development of AI. I am committed to enhanced diversity on my teams, and my journey is only beginning. Of course, we can’t stop at hiring a diverse mix of people. We also need to build a culture of anti-racism, awareness of unconscious bias, and continual dialogue and education, which you should urge your HR team to pursue if it hasn’t already. I know that data scientists are hard to come by, and I’m not suggesting that building such teams will be easy. But it’s worth the investment, even if that means growing talent or reskilling and upskilling team members. You may not be able to snap your fingers and hire a diverse team for your next AI project, but if you want your team to make smarter, better, less harmful AI, then do whatever it takes to build a diverse team.


A diverse group of colleagues contributed ideas to this article.

