Guarding Against Bias in AI

By Tom Huntington

Jun 15, 2022

The use of Artificial Intelligence (AI) has become commonplace across a myriad of applications and technologies that affect our everyday lives. While there is no question that AI and machine learning have, in many circumstances, led to powerful advancements that have measurably improved people's lives, it is also true that, if left unchecked, the use of AI can reinforce and in some cases introduce bias.

As Dr. Sanjiv M. Narayan, co-director of the Stanford Arrhythmia Center, put it, “...bias in AI occurs when results cannot be generalized widely. We often think of bias resulting from preferences or exclusions in training data, but bias can also be introduced by how data is obtained, how algorithms are designed, and how AI outputs are interpreted.”

The Risk of Disability Bias 

At Arena Analytics, we use AI to help our customers identify unconscious bias and create more equitable workforces. At the same time, we are acutely aware of the risks of unintentionally introducing bias into the hiring process, despite the best of intentions, and we are constantly monitoring our technology to help ensure that doesn’t happen. 

Recently, the U.S. Department of Justice and the Equal Employment Opportunity Commission (EEOC) published a document about the dangers of disability discrimination when employers use AI and other software tools to make employment decisions. Specifically, the document suggests that employers “...should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools” in order to prevent unfairly “screening out” candidates with disabilities.

The federal government has a very important role to play in identifying and regulating these issues, and actions like the recent notice from the DOJ are, in fact, overdue. Disability bias is one of the many dangers we need to guard against in the hiring process, and providing detailed information on best practices will make it easier for organizations to do the right thing while shedding light on bad actors in the space. 

Prioritizing Ethics

Prioritizing ethics and fairness is the key to continuous, intentional improvement around these issues. We take ethics very seriously and have assembled an AI Ethics Advisory Board of distinguished industry experts, academics, and thought leaders who provide guidance and strategic direction to help ensure that we act responsibly, fairly, and ethically when deploying our technology to the market.

The bottom line is that any company that uses AI or machine learning needs to embrace the challenge of identifying and mitigating bias as a key part of its mission. Together, we can ensure that we are deploying exciting innovations in technology in fair and responsible ways.

Tom Huntington

Senior Director of Communications