Member Article
3 million employees now using Workhuman’s AI-powered bias detector
- Workhuman’s Inclusion Advisor flags bias in everyday workplace language and recommends ways to communicate more inclusively
- It has been adopted in employee recognition programmes by global enterprises including Merck, LinkedIn, Cisco, and others
- Built by Workhuman’s team of engineers in Dublin, the proprietary AI tool takes just a fraction of a second to scan a message and supply an alternative bias-mitigating suggestion
- In the last nine months, Inclusion Advisor flagged 69,000 instances of bias in 40,000 recognition messages
Workhuman®, the company revolutionising the way employees celebrate, connect with, and appreciate each other in the workplace, is today announcing that its proprietary AI-powered bias detector, Inclusion Advisor, is now being used by 3 million employees worldwide. Global enterprises including Merck, LinkedIn, Cisco, and others are using the tool as part of their employee recognition programmes to provide team members with continuous coaching that enables measurable changes in attitudes over time.
Workhuman’s first-of-its-kind tool analyses everyday workplace language to detect unconscious bias. With a click of a button, employees can use Inclusion Advisor to scan the message they have written in Workhuman’s employee recognition platform. The tool was built in Dublin by Workhuman’s technology and engineering teams. It has been developed over a four-year period and harnesses the expertise of the Workhuman iQ team of linguists, data scientists, engineers, psychologists, and experts in natural language processing (NLP) and machine learning (ML).
Workhuman research shows that implicit bias is found in 20-30% of all written workplace communications, even in the most positive settings. The Inclusion Advisor model can detect 18 forms of bias; in the last nine months it flagged 69,000 instances of bias in 40,000 recognition messages. Among the most prevalent bias categories it flags are:
- “Me, Myself and I” – found in 14% of bias instances, this language conveys the writer’s sense of self-importance and distracts from the awardee’s accomplishments.
- “No Work-Life Balance” – appearing in 13% of bias instances, this category praises work that is detrimental to a healthy work-life balance.
- “Gender Stereotyping” – representing 5% of all bias detected, this is praise that conveys unconscious gender bias, uses gender-biased language, and/or refers to stereotypical gender roles.
Inclusion Advisor, which is continuously updated with real-life examples of implicit bias, can also detect biases including age bias, microaggressions, and negative comparisons.
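To illustrate the general idea of category-based bias flagging (Workhuman's proprietary model is not public, and a production system would use trained NLP models rather than hand-written rules), a tool of this kind can be thought of as mapping a message to zero or more bias categories, each paired with a rewriting suggestion. A minimal sketch, with hypothetical category names and phrase lists:

```python
# Toy sketch of a bias-flagging pass: scan a recognition message for
# phrases associated with hypothetical bias categories and return a
# suggestion for each category detected. Illustrative only.
import re

# Hypothetical phrase lists; a real model would be learned, not hand-written.
BIAS_PATTERNS = {
    "me_myself_and_i": [r"\bI taught\b", r"\bthanks to my\b"],
    "no_work_life_balance": [r"\bworked all weekend\b", r"\blate nights\b"],
}

SUGGESTIONS = {
    "me_myself_and_i": "Centre the awardee's accomplishment rather than your own role.",
    "no_work_life_balance": "Praise the outcome without celebrating overwork.",
}

def flag_bias(message: str) -> list[tuple[str, str]]:
    """Return (category, suggestion) pairs for each bias category detected."""
    hits = []
    for category, patterns in BIAS_PATTERNS.items():
        if any(re.search(p, message, re.IGNORECASE) for p in patterns):
            hits.append((category, SUGGESTIONS[category]))
    return hits

print(flag_bias("Huge thanks for the launch - she worked all weekend to get it done."))
```

The per-category suggestion mirrors the article's description of an "in-the-moment learning opportunity": the flag is paired with a concrete alternative rather than a bare warning.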
Jonathan Hyland, Workhuman CTO, said: “We wanted to create a tool that provides an in-the-moment learning opportunity to better understand how language can be perceived by others. Even the most well-intentioned messages can carry unconscious bias. The right words and the right message, delivered in a timely manner, can have a positive and even lifelong impact on others. Inclusion Advisor helps users write inclusive, meaningful recognition messages that will make the recipient feel seen, heard, and valued for their contributions.
“With the latest advancements in AI and ML, we are able to develop more sophisticated technology that can better understand and interpret recognition messages, identify trends and patterns, and provide more accurate and personalised recommendations. Inclusion Advisor reacts to unconscious bias on a continuous, sustainable basis, which is the only way to see real behavioural changes. In doing so, we are providing businesses with the most advanced and effective tools to foster a truly inclusive workplace environment.”
This was posted in Bdaily's Members' News section by P Adams.