(Human + Bias) x Technology = ?
We are naturally biased about a variety of topics. Some of these biases are trivial and not really important. For example, I prefer baked goods to pickles. If my friend were organizing a party, I might try to persuade them to serve baked goods instead of pickles because of that bias. Also, have you ever noticed that people either love or hate pickles? There never seems to be any in-between. Team HATE PICKLES all the way!
But, of course, bias can be much more serious. People can be biased on the basis of gender, race, or sexual orientation. “I’m going to be friends with this person because they are a girl.” Or “I’m going to give this job to this guy because he’s straight like me.” Suddenly bias plays a much more crucial role in our lives than deciding between baked goods and pickles.
I never really considered how deeply our biases are ingrained in the things we create. It makes perfect, logical sense, but it’s not something I usually think about critically, though I should. When it comes to technology, it is extremely important to think about.
Google published a video on YouTube last year explaining what machine learning is and how human bias can substantially influence it. I have to give Google credit; it’s a pretty good description of what machine learning is and how they aim to counteract that bias to make “technology for everyone.”
Still, it gets me thinking a lot about how human bias influences the way we use and interact with technology. Does it influence statistics? Does it cause machines to be sexist? Racist? Homophobic? It’s an odd concept to wrap our heads around, the idea that a machine can have these negative “personality traits,” but if machines are built or trained in ways that reflect such tendencies, they will reproduce them.
I can easily see this getting out of hand without the tech designers even realizing it. The accumulation of human bias can create substantial problems in how certain technologies run their algorithms.
I recently spoke at an event hosted by VulnerableWin in New York City. This idea was a big topic of discussion, particularly around automated staffing software that hires freelance workers to perform routine tasks. The idea, I believe, is smart in many ways. It frees up the time of other employees, saves time, and increases efficiency. But at the same time, humans are not points of data. And whoever programs this software is responsible for making sure that it does not discriminate against a certain group of people.
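To make this concrete, here is a minimal sketch, with entirely hypothetical data, of how a hiring system can inherit bias without anyone writing a discriminatory rule. The "model" below simply learns hiring rates from past decisions; if those past decisions favored one group, the model reproduces that favoritism, qualifications aside.

```python
# Hypothetical example: a naive "model" trained on biased historical
# hiring records reproduces the bias in those records.
from collections import defaultdict

# Made-up history of (group, qualified, hired) decisions,
# where past decisions favored group "A" over group "B".
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Training": estimate P(hired | group) from the biased history.
counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total]
for group, _qualified, hired in history:
    counts[group][1] += 1
    counts[group][0] += int(hired)

def predicted_hire_rate(group):
    """What the naive model would recommend for a new applicant."""
    hired, total = counts[group]
    return hired / total

print(predicted_hire_rate("A"))  # 1.0  -- the model always favors A
print(predicted_hire_rate("B"))  # 0.33 -- and penalizes B, even when qualified
```

Nothing in this code mentions discrimination, yet the output discriminates, because the data does. That is the responsibility I mean: whoever builds software like this has to audit both the data and the outcomes, not just the code.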
These are tricky waters, indeed, for we have no way of controlling our human bias in the moment. I will say that the more we learn, and the more we expose ourselves to people and cultures different from our own, the less likely those biases are to interfere with the technology we create. This takes a lot of self-discipline on the part of the creator, however. Are the majority of people in the world willing to educate themselves about other people and develop better awareness of themselves and the world around them? These are the big questions that must be answered moving forward.