EECS 649 Blog 3: Bias in AI Models

Author: Megan Rajagopal

As technology advances, models that represent and predict outcomes are becoming a large part of our society. Cathy O’Neil examines how these technological models can be harmful in her book Weapons of Math Destruction. In Chapter 1, she describes what a model is and how human bias can produce harmful models.

According to O’Neil, models are nothing more than abstract representations of some process. This means a model can be built to represent almost anything: a baseball game, a supply chain, sales forecasts, choosing the best restaurant, or predicting the best dinner option for a family. These models are created by humans, who draw on their previous experience and knowledge to predict the outcomes of various situations. However, because humans create the models, human biases inevitably find their way into them.
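
To make the idea concrete, here is a tiny, hypothetical sketch of a "model" in this informal sense: a family dinner predictor built from one person's remembered ratings. The dishes and scores are made up, but they show how the builder's experience is baked into the model from the start.

```python
# A hypothetical "model" in O'Neil's informal sense: an abstract
# representation of a process (a family picking dinner), built entirely
# from one person's past experience of what the family liked.
liked_before = {"tacos": 9, "pizza": 8, "salmon": 5, "casserole": 3}

def predict_dinner(options):
    """Predict the family's favorite among tonight's options, using the
    builder's remembered ratings. Anything unfamiliar defaults to 0 --
    one small way the builder's experience biases the model."""
    return max(options, key=lambda dish: liked_before.get(dish, 0))

print(predict_dinner(["pizza", "salmon", "pho"]))  # -> "pizza"
```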

Bias enters a model because humans must decide what data and information are important and necessary to represent the world accurately. As a result, a human's previous experiences and knowledge play a big role in shaping the biases of a technological model. Furthermore, the success of a model is also determined by humans. A model takes in data and outputs a prediction; it is then up to a human to look at that output and decide whether the model succeeded. The definition of success is another place where human bias comes into play, since one person's definition of success for a model can differ from another's.
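
A small sketch can show how much that definition matters. In the hypothetical example below (made-up labels and predictions for a loan-approval model), the very same predictions look acceptable under one evaluator's metric and poor under another's.

```python
# Hypothetical ground truth and model predictions for 10 loan applicants
# (1 = should be approved, 0 = should be denied).
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
predicted = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

# Evaluator A defines success as overall accuracy.
accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)

# Evaluator B defines success as recall: how many deserving applicants
# the model actually approved.
true_positives = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
recall = true_positives / sum(actual)

print(f"Accuracy: {accuracy:.0%}")  # 70% -- looks fine to Evaluator A
print(f"Recall:   {recall:.0%}")    # 40% -- looks unacceptable to B
```

Neither evaluator is wrong; they simply chose different definitions of success, and that choice is a human one.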

Since humans create these models and influence them with their biases, there are many examples of bias in AI technology. One well-known example is Amazon's recruiting tool. In 2014, Amazon developed an AI tool that reviewed applicants' resumes and assigned each a rank, so that Amazon recruiters would not have to screen resumes and applicants manually. However, the data used to train the model reflected a technology industry dominated by men, who made up well over half of the workforce, so most of the training resumes came from men. As a result, the algorithm learned a bias against women and ranked them lower. Amazon has since stopped using the algorithm for recruiting.

Another example is Facebook ads. Facebook allowed advertisers to target ads to users based on gender, race, and religion. Consequently, women were targeted with job ads for nursing or secretarial work, while minority men were targeted with job ads for janitorial or taxi-driving work. This bias was created by the advertisers, who used their previous experience and assumptions to tailor their ads. In response, Facebook stopped letting employers specify age, gender, or race when placing job ads.
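
The mechanism behind the Amazon case can be illustrated with a deliberately simplified sketch (not Amazon's actual system): a "ranker" that scores resumes by how much they resemble past hires. Because the hypothetical hiring history below is dominated by men, two resumes with identical qualifications score differently when a single word signals gender.

```python
from collections import Counter

# Hypothetical training data: resumes of past hires, mostly men.
past_hires = [
    "software engineer java men's rugby team",
    "backend developer java men's chess club",
    "software engineer python men's soccer team",
    "systems programmer java",
    "software engineer women's chess club captain",
]

# The "model": how often each word appeared among past hires.
word_scores = Counter(word for resume in past_hires for word in resume.split())

def rank_score(resume: str) -> int:
    """Score a resume by how much it resembles past (mostly male) hires."""
    return sum(word_scores[word] for word in resume.split())

# Identical qualifications; only the gendered word differs.
print(rank_score("software engineer java men's rugby team"))    # 15
print(rank_score("software engineer java women's rugby team"))  # 13
```

No one told this toy model to penalize women; it simply learned that "women's" rarely appears in the history of people who were hired, which is exactly how historical bias becomes algorithmic bias.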

Bias in AI can be mitigated if humans ensure that a complete, representative set of data is used to train the model. A complete data set is one that has been audited so that historical biases are not baked into it, which reduces the chance that the resulting AI system simply reproduces the human biases behind the data.
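
One concrete (and very simplified) step in that direction is auditing the training data before any model sees it. The sketch below uses hypothetical records, field names, and an arbitrary 40% threshold, and simply flags groups that are underrepresented in the data.

```python
from collections import Counter

# Hypothetical training records for a hiring model.
training_data = [
    {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 0},
    {"gender": "female", "hired": 1},
    {"gender": "female", "hired": 0},
    {"gender": "male", "hired": 1},
]

# Count how each group is represented in the data set.
counts = Counter(row["gender"] for row in training_data)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    flag = "  <-- underrepresented" if share < 0.4 else ""
    print(f"{group}: {n}/{total} ({share:.0%}){flag}")
```

An audit like this will not remove bias on its own, but it forces the humans building the model to see the skew in their data before the model quietly learns from it.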

Sources:

Cathy O’Neil, Weapons of Math Destruction (2016), Chapter 1

“Bias in AI: What it is, Types & Examples, How & Tools to fix it,” AIMultiple (aimultiple.com)