Algorithms: love them or hate them, they cannot be avoided in today’s technology-powered world. They are present in almost every aspect of our digital lives, from the computer programs you use to do your job, to the social media platforms that adapt to the posts you like and share.

But what is an algorithm? An algorithm is a step-by-step procedure designed to perform an operation which, like a flowchart, will produce the desired outcome if followed correctly. Algorithms have a definite beginning, middle, and end, and a finite number of steps. They are well suited to solving structured problems, but not to problems that require value judgements.
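To make that definition concrete, here is a minimal sketch in Python. The task and the function name are our own illustration, not drawn from any particular product; the point is simply that an algorithm is a finite sequence of unambiguous steps with a clear start and finish.

```python
def largest(numbers):
    """Find the largest value in a list: a definite beginning, middle, and end."""
    if not numbers:            # beginning: check the input
        raise ValueError("the list must not be empty")
    best = numbers[0]          # start with the first value
    for n in numbers[1:]:      # middle: compare each remaining value in turn
        if n > best:
            best = n
    return best                # end: the desired outcome

print(largest([3, 17, 8]))     # prints 17
```

Followed correctly, the steps always terminate and always give the same answer for the same input; nowhere does the procedure have to make a value judgement.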


It is essential to understand that algorithms are not, in themselves, a form of artificial intelligence. However, artificial intelligence and machine learning systems are made up of various algorithms that detect patterns and ‘learn’ by understanding them and adapting in response.

The advantages of algorithms

The press has been focussing on the negative aspects of algorithms recently, especially regarding social media and the way machine learning has created echo chambers of opinion and fuelled the rise of fake news. Even so, it is important to remember the positive outcomes that algorithms provide when used well.

According to the respondents of a Pew Research Center survey, we can expect algorithms to help us improve the environment and reduce pollution, improve human health, and reduce economic waste. Algorithms and machine learning applications have the potential to equalize access to information. Self-driving cars could dramatically reduce the number of accidents, and we can improve evidence-based social science by using algorithms to collect data from social media and click trails.

The possibilities are endless and can positively impact our lives, from subtle improvements, such as algorithms taking on repetitive tasks and freeing us up for more human work like creative thinking, to vast improvements, such as quicker and more accurate disease diagnosis, potentially saving lives.

When algorithms reflect a bias

While algorithms have the potential to improve the quality of life, they can also lead to harm and frustration. Because algorithms are not intelligent and lack value systems, they simply carry out whatever commands are programmed into them, without nuance. Therefore, any biases, intended or not, will automatically become baked into the programming. This can lead to PR disasters and potential legal issues if your product discriminates against a specific group. More importantly, you risk marginalizing certain people further and potentially damaging the lives of those who rely on your product.

A seemingly trivial example of unintentional bias in technology is an automatic hand soap dispenser located at one of Facebook’s offices. The dispenser is a straightforward machine that works by following these steps:

  • Machine is in idle mode
  • User places hand below pump
  • Sensor detects the hand via light reflected back from it (this is where the bias was introduced)
  • Pump dispenses soap
  • Machine returns to idle mode

The programmers had failed to test their product with a variety of skin tones: darker skin absorbs more light, so less was reflected back than the machine required to activate the pump. The sensor was seemingly calibrated only for the white hands of the people who, we can assume, tested it.
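To make the failure mode concrete, here is a hypothetical Python sketch of the detection step. We do not know the dispenser’s actual firmware; the threshold constant, the 0-to-1 reflectance scale, and the example readings are all assumptions for illustration.

```python
# Hypothetical sketch -- not the dispenser's real firmware.
# The threshold value and reflectance scale are assumptions for illustration.

REFLECTANCE_THRESHOLD = 0.6  # calibrated only against the testers' lighter skin tones

def should_dispense(reflected_light: float) -> bool:
    """Decide whether a hand is present, given a sensor reading from 0.0 to 1.0."""
    # Darker skin absorbs more light, so its reading can fall below a threshold
    # that was never tested against it -- this single constant is where the bias lives.
    return reflected_light >= REFLECTANCE_THRESHOLD

print(should_dispense(0.8))  # strong reflection  -> True, soap dispensed
print(should_dispense(0.4))  # weaker reflection  -> False, no soap
```

Nothing in the logic is malicious; the harm comes entirely from a calibration choice that was validated against too narrow a group of users. Testing against a wider range of readings would have exposed the problem before release.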

“It works for me, therefore it must work for everybody” is the wrong attitude to have.

Imagine now if the above example wasn’t merely a soap dispenser, but a system to access something more critical, or a method of opening a door to enter or exit a building. A seemingly small oversight could potentially have broader consequences.

How to tackle this

When we look at the diversity of staff in the biggest tech companies, it is easy to see how this mindset becomes entrenched and how products come to be created with built-in biases. While it’s doubtful that anybody would intentionally create a product to marginalize a group of people, we must understand that humans have inherent biases and that oversight can lead to these being built into the things we make.

Steps can be taken to avoid this situation:

  • Don’t live in a bubble – work with a broad team and hire people unlike yourself
  • Expand your testing audience – bring in people with different lived experiences from your own; their perspectives will be invaluable
  • Be inclusive – work to identify and eliminate any barriers
  • Check yourself – hold yourself, and your team, to account and continuously ask if you are being considerate and aware when developing each aspect of your product

Conclusion

Technology is made by people, and people are imperfect. We must acknowledge this, embrace it, and learn from it.

Without self-imposed checks and a basic awareness, sensitivity, and compassion throughout the various stages of development, our unconscious biases can creep into our products, causing frustration and harm to users as well as damaging your company’s reputation. Brendan O’Connor, an assistant professor at the University of Massachusetts, has published a paper on bias in natural language processing which raises the issue of some AI programs learning to exclude African-American voices. He warns, “We need to be aware this is happening, and not close our eyes to it and act like it’s not happening.”