Author: Tristan Greene / Source: The Next Web

Machines don’t actually have bias. AI doesn’t ‘want’ something to be true or false for reasons that can’t be explained through logic. Unfortunately, human bias pervades machine learning, from the design of an algorithm to the interpretation of its data, and until now hardly anyone has tried to solve this huge problem.
A team of scientists from the Czech Republic and Germany recently conducted research to determine how human cognitive bias affects the interpretation of the output used to create machine learning rules.
The team’s white paper explains how 20 different cognitive biases could alter the development of machine learning rules and proposes methods for “debiasing” them.
Biases such as “confirmation bias” (when a person accepts a result because it confirms a previous belief) or “availability bias” (placing greater weight on information that is familiar or easily recalled than on equally valuable but less familiar information) can render the interpretation of machine learning data pointless.
When these types of human mistakes become baked into an AI, meaning our bias is responsible for selecting the training rule that shapes the creation of a machine learning model, we’re not creating artificial intelligence: we’re just obfuscating our own flawed observations inside a black box.
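To make the mechanism concrete, here is a minimal sketch, not taken from the team’s paper, of how confirmation bias can bake itself into rule selection: a reviewer accepts whichever candidate rule confirms a prior belief, while a “debiased” step would rank the same candidates on their data-driven metrics alone. The rules, metric values, and prior belief below are all hypothetical.

```python
# A minimal sketch (hypothetical rules and numbers) of confirmation bias
# in the selection of machine learning rules, and a naive "debiased" pick.

from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: str    # "IF" part of a learned rule
    consequent: str    # "THEN" part of a learned rule
    confidence: float  # fraction of matching records where the rule holds
    support: float     # fraction of all records the rule covers

# Candidate rules produced by some rule-learning step (made-up metrics).
candidates = [
    Rule("age > 60", "defaults_on_loan", confidence=0.52, support=0.30),
    Rule("income < 20k AND no_savings", "defaults_on_loan",
         confidence=0.81, support=0.25),
]

# Biased selection: accept the rule that confirms a prior belief,
# regardless of how well the data supports it.
prior_belief = "age > 60"
biased_choice = next(r for r in candidates if r.antecedent == prior_belief)

# Debiased selection: rank purely on the data-driven metrics.
debiased_choice = max(candidates, key=lambda r: (r.confidence, r.support))

print("biased pick:  ", biased_choice.antecedent)    # age > 60
print("debiased pick:", debiased_choice.antecedent)  # income < 20k AND no_savings
```

In this toy example the biased reviewer keeps a weak rule because it matches what they already expect, and that flawed choice would silently shape every model built on top of it.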
According to the paper, this is all new territory:
Due to lack of previous research, our review transfers general results obtained in cognitive psychology to the domain of machine learning. It needs to be succeeded…