As covered in our last article, we've most definitely passed the point of no return when it comes to automation. But before we go completely overboard into an automated society, one crucial factor needs to be sorted out: bias in data and algorithms. Because algorithms are written by humans, they can reflect human biases. For example, you search for a particular term and get no results because "it's not allowed." Or the results that are returned are politically slanted in some way.
For the most part, humans tend not to question algorithms and are too quick to trust them. At the beginning of the proliferation of AI, the tendency was to believe that approaches such as deep learning carried no bias. We're now realizing this isn't true, and AI providers such as IBM are having to face the fact that their algorithms are not flawless, and might actually amplify social biases. The assumption most of us made at the outset of the digital age was that artificial intelligence wouldn't suffer from gender, racial, cultural or other prejudice. But we're realizing now that this isn't the case, because artificial intelligence is created by humans.
Effects of Wrong Data
History is rife with incorrect decisions made from wrong data. A catastrophic example is World War I, which began after Austro-Hungarian heir Archduke Franz Ferdinand was assassinated by a Bosnian Serb Yugoslav nationalist in Sarajevo. Emperor Franz Joseph of Austria-Hungary issued an ultimatum to Serbia; when Serbia's reply failed to satisfy Austria-Hungary, the Empire declared war on Serbia.
In my opinion, this was a wrong decision on the Emperor's part; I don't believe he had enough facts. If Emperor Franz Joseph had been in possession of all the facts, from all sides, about Archduke Ferdinand's assassination, he very well might not have made such a catastrophic decision.
So we need to make sure that our algorithms at least provide us the information needed to make a correct decision. There is currently a big debate in the US about exactly this with regard to social media: the claim that social media has a political bias and cannot be relied upon for correct data.
Data Bias in Sales
In our own careers, data bias is just as dangerous in sales as it is anywhere else. When you're going in to make a presentation to a prospective client, you certainly don't want incorrect data about that prospect; it could totally derail your presentation and cost you the deal.
The data that gets input into CRM—and the algorithms that gather it—all need to be completely bias-free if you’re going to truly succeed.
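As a concrete illustration of catching wrong data before it reaches a pitch, here is a minimal sketch of a pre-presentation sanity check. The record structure and field names (`company`, `industry`, `revenue`, `last_contact`) are hypothetical assumptions, not from any particular CRM:

```python
# A minimal sketch of a pre-presentation CRM data check.
# The field names below are hypothetical, not from a real CRM product.
REQUIRED_FIELDS = ["company", "industry", "revenue", "last_contact"]

def audit_record(record):
    """Return a list of problems found in a single CRM record (dict)."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    # Implausible values are a common source of "wrong data" in a pitch.
    revenue = record.get("revenue")
    if isinstance(revenue, (int, float)) and revenue < 0:
        problems.append("implausible revenue")
    return problems

record = {"company": "Acme", "industry": "", "revenue": -5, "last_contact": "2023-01-10"}
print(audit_record(record))  # flags the empty industry field and the negative revenue
```

A check like this catches only mechanical errors; systematic bias in how the data was gathered still has to be questioned by the humans using it.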
Questioning Automation
It now becomes a question of how we can avoid such biases in the future.
Going back to the question posed in the last article, I truly believe that we’re currently in the midst of the biggest transformation the world has ever seen. In the global economy, we’re dependent on automation, and for many industries, there’s absolutely no turning back. So in which direction will we go: human over machine, machine over human, or human and machine working together in harmony? We’re still at a place where the human is above the machine—so solving this question of bias is very, very important before we proceed much further.
Who is programming AI, and who is feeding it data? How can we be sure it treats every gender, race, and culture equally? I actually think that's the "magic sauce" for the future. The more we discuss this issue and bring it to the surface, the more we can make corrections and come to rely on these systems. Possibly the solution is to ensure that the teams working on AI are very diverse, composed equally of different ethnicities, sexes, and cultures.
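Surfacing bias can also be done numerically. One widely used fairness check (my example, not something proposed in this article) is demographic parity: comparing the rate of positive outcomes an algorithm gives each group. A minimal sketch, assuming decisions are simple (group, outcome) pairs:

```python
# A minimal demographic-parity sketch: compare positive-outcome rates by group.
# The group labels and decisions below are made-up illustrative data.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome is True/False.
    Returns the positive-outcome rate per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("a", True), ("a", True), ("a", False),
             ("b", True), ("b", False), ("b", False)]
print(parity_gap(decisions))  # group a selected 2/3 vs group b 1/3: a gap of 1/3
```

A gap near zero doesn't prove an algorithm is fair, but a large gap is exactly the kind of signal that forces the discussion into the open.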
In any case, I do believe this is the final and most important piece of AI and automation—the removal of data and algorithm bias. Let’s all take responsibility to ensure it is wiped out of existence.