In our last article we discussed the fact that automation and algorithms are created by humans and can therefore be biased. For our final article on the subject of automation and where it’s taking us, we want to examine the human’s role in the automated society, and how important it is for us to fully understand that role.
Different Interpretations of Humans
The Industrial Revolution brought machines to common use in the world. Where you might have had 50 people engaged in a certain task, those 50 people were then replaced by 1 machine. When all was said and done, though, you still had a human being controlling what the machine was doing. Advancements have been consistently made since then, and increasingly more human functions have been replaced. Today machines can compile and analyze data, and suggest strategies and directions to us.
But as we said in the last article, humans are the ones designing and programming the algorithms behind all this. They can be influenced by gender, age, race, culture, nationality, upbringing and mindset. Also, a very important part of all this is the world view held by that human: how is this person viewing the world?
The future will depend on how we interpret the directions we are getting from automation. It is therefore very important that we have a realistic view of human beings.
There are many different descriptions of the human being. For the purposes of economics, we can narrow them down to 2 distinct views:
1. As stated in the Austrian School of Economics, the human being, the individual, is the active agent behind anything that takes place in the world. The founder of the Austrian School, Carl Menger (1840-1921), described the human individual as the final source of reality, as opposed to society as a group. Another of the Austrian School’s leading lights, Joseph Schumpeter (1883-1950), coined a term for this concept: methodological individualism.
2. Another very different (and highly unrealistic) concept of the human being, known as Homo Economicus, evolved within mainstream economics. The concept was first advanced by British economist John Stuart Mill (1806-1873). Italian economist Vilfredo Pareto also wrote about it.
This theory basically states that all humans have the same reactions and can therefore fit very neatly into economic equations. Homo Economicus is typically described with the following habits:
1. Rational in action.
2. Maximizing benefits.
3. Reacting to environmental conditions.
4. Equipped with established preferences.
5. Fully informed.
Comparing the Concepts
The Austrian School has, from the beginning, been completely opposed to the concept of Homo Economicus. Why? Very simply, because it’s impossible for individuals to fit into these few “pigeonholes.”
Let’s take the example of two computer programmers: one quite young, the other past middle age. Without looking very hard, you’re going to discover that they have very different preferences. The younger one might be after a new car, the latest smartphone, or the latest model appliances. The older person would normally not have these same preferences, because they want completely different things from life.
The Austrian School tells us that there are short-term and long-term preferences. The older a person becomes, the shorter-term their preferences become, simply because their remaining life expectancy is shorter. For example, an 80-year-old is normally not going to purchase a home.
Preferences can be broken down even further. Someone in their twenties, when it comes to automobiles, will be more likely to want a sports car. In their mid-30s when they might have children, preferences usually go in a different direction—they’ll want a vehicle more suited to a family, such as an SUV.
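The pigeonholing the Austrian School objects to is easy to make concrete. Here is a deliberately crude sketch in Python (entirely hypothetical, not drawn from any real system): a preference model that hard-codes its programmer’s assumptions about what buyers of a given age want, exactly the kind of fixed “established preferences” that a real person defeats the moment their circumstances change.

```python
# A deliberately crude "Homo Economicus" preference model.
# The age brackets and choices below are the programmer's own assumptions;
# the point of the article is that real humans won't reliably fit them.
def predict_car_preference(age: int) -> str:
    """Predict what car a buyer wants, based on nothing but age."""
    if age < 30:
        return "sports car"     # assumes every twenty-something wants speed
    elif age < 50:
        return "family SUV"     # assumes mid-life always means children
    else:
        return "comfort sedan"  # assumes older buyers only want comfort
```

A sudden event (a new child, an accident, a lottery win) changes none of the inputs this function sees, so its prediction stays the same while the real person’s preferences do not.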
Taking a look at the “fully informed” habit above: are we really all fully informed? The answer is, of course, no. The Austrian School tells us that we’re never fully informed about anything, and evidence of this can be seen on a daily basis. If people were truly fully informed, they wouldn’t make many of the wrong decisions that they make.
This also strikes right at the claim that our actions are based on rational decisions. Just look around at the world and ask yourself how true that is. How many people really make rational decisions?
According to the Austrian School, there are three factors involved in decision-making:
Egoism, dealing with the person themselves
Altruism, dealing with what a person does for other people
Mutualism, dealing with cooperation with others
Friedrich Hayek (1899-1992), another of the Austrian School’s great thinkers, takes all of this even further: the rationality of behavior is limited by a person’s capacity to perceive and by their own principles of perception. In other words, we cannot possibly have all of the knowledge that we need. So no, we’re not “fully informed,” and therefore we cannot make totally accurate predictions about the future.
Objective Versus Subjective View
Another leading economist from the Austrian School, Ludwig von Mises (1881-1973), developed praxeology, the theory of human action, most fully in his treatise Human Action. Its basic point is that human action cannot be predicted; we cannot know how a human being will behave. He also points out that most reality is subjective, not objective.
This can be seen in history. The history of humankind is totally subjective; it’s an interpretation by the historian. This becomes obvious when you read 3 different accounts of the same historical event by 3 different people. You can discover the truth of subjective reality for yourself through the old game of “telephone,” in which you line up 10 people, whisper a sentence in the first person’s ear, and have them pass it on by whispering it to the next person, and so on down the line. Then have the 10th person relate what was passed on to them. It will usually be totally different from the sentence the first person started with.
Subjective Views and AI
Given that subjective realities vary so greatly, how can we depend on artificial intelligence programmed by humans with such differing views and ways of thinking? (Note that I don’t have an answer to this question; I am simply raising it.)
Once more we return to the Austrian School’s Joseph Schumpeter, who pointed out that race, sex, religion, class, and nationality make no fundamental difference: every human being is the same, but each operates from different motivations rooted in egoism, altruism, and mutualism. We cannot predict behavior because these motivations are subjective. So when someone is programming an algorithm, who knows what that person is thinking and how they’re interpreting the world?
Another question might be: how many people will be needed to fine-tune the decisions we make based on the outputs of artificial intelligence?
Today, after 140 years, people are increasingly realizing that there is a real problem with the theory behind Homo Economicus. Through the tremendous number of studies on human behavior, we know that human behavior is unpredictable. Why? Because a human can, in a single instant, change their preferences.
A sudden event can change everything with regard to a human’s perception and preferences. Someone could have an accident, or lose a partner, and it would change much. Or on the positive side, what about winning the lottery? I’m quite sure that the life preferences of the 24-year-old millennial who recently won 768 million dollars have radically changed!
Trying to predict human behavior was already a complex undertaking at the beginning of the 20th century. But we now have almost 4 times more people on Earth than we had then. When Menger and Schumpeter were writing, let’s say in the year 1920, we didn’t even have 2 billion people. Today we’ve got nearly 8 billion, and trying to predict behavior is a mathematical impossibility. In fact, we cannot predict what even one human being will do, and now we have nearly 8 billion of them.
The Danger Human Thinking Poses to AI
This being the case, how accurate can our computer algorithms, or the data they generate, actually be? Who will correct the analysis?
I’m not saying we should cease creating artificial intelligence or these algorithms, not by any means. I’m just bringing up the risk factors for the future. We should be aware of them.
The “computer” in our mind is very different. Every word that you’re reading automatically brings an association to another text you have read at some time, an experience you have had, or something from your education. It is all this contextual information that artificial intelligence and algorithms don’t have.
These particular human weaknesses are what bring danger, as I see it, to automation and to artificial intelligence.
With all our faults, though, we’re still in a position superior to machines; we create them, after all. One question becomes: will the machines eventually take over? A computer, essentially, doesn’t make mistakes, while humans do. That’s what makes it possible to fire rockets perfectly into space and to have them guided. For this and for many other areas of life, we rely (and will increasingly rely) on technology that makes no mistakes.
In a previous article we took up the example of the self-driving car. It will one day make very much sense, simply because it doesn’t make mistakes. A human driver might be on the freeway when someone calls and yells, “Your mother died!” In that moment the driver irrationally stomps on the brake or the accelerator, or just stops paying attention to the road. The driver causes an accident where the self-driving car wouldn’t.
I believe it might be necessary to remove human decision-making where, as in the case of the car above, it is a risk factor. More broadly, we can see that the computer might be fully informed in comparison with the human being who, as we’ve demonstrated, never is.
Application to Sales
Which brings us, finally and once again, to sales. How many salespeople are upset because they reach out to prospects with calls and emails and get no response? Well, have you ever seen a computer fail to respond when you asked it a question? A computer runs 24/7, has all the information, and is totally informed.
It might behoove companies to actually leave the buying choices for big purchases to a computer. A computer, with no emotional bias whatsoever, could compare product to product, price to price, feature to feature, benefit to benefit, and make highly reliable comparisons. There is a consumer app now marketed for this purpose called Honey that is all the rage. This concept could potentially extend to more complex purchases.
I should note, however, that there have been a few predictions—one by Forrester comes to mind—that sales jobs would be drastically reduced over time because of automation. I don’t agree with this prediction at all, and in fact, I see sales jobs increasing. Yes, automation has already replaced some lower-level B2C sales positions, but with more complex B2B sales, live reps will always be needed. And even in B2C, we still see an increase in sales reps at places like Apple stores, don’t we? In an increasingly automated world, that human touch becomes more needed than ever.
Who Is This Human?
So once again, it all comes back to: who is this human being? How do we interpret the human being, and how realistic is the world view of that human being?
It is in light of this question that we will have to make all our decisions for the future. I do realize this is very philosophical, but in my opinion AI is very much like philosophy, because, as we will see very soon, it will affect every part of our lives.