Technology is growing exponentially, faster than we can even comprehend. When my grandfather was a boy, most people were still riding horses. In 1919, just after World War I, he visited his mother's relatives in Chicago, a voyage he had to make first by ship from Rotterdam to New York and then by train from New York to Chicago. By the time he passed away in 1978, such trips could be made in a matter of hours by air, and he was driving a brand-new luxury automobile. His lifetime saw the introduction of the car, the telephone, television, air travel, computers and much more.
But the rate at which technology is adopted today is even more mind-boggling. Just look at the smartphone. The iPhone, which began the smartphone revolution, was introduced in 2007, and a 2013 survey across nine emerging economies found that one in four adults ages 18 to 34 owned a smartphone. By 2018, that figure had grown to two-thirds in most of those countries.
For artificial intelligence, we can expect adoption to be ten times as fast. Why? The majority of phone users worldwide already have a smartphone, and once you have the device, you simply need to download and activate an AI app. I feel this is one of the dangers of unregulated AI: it spreads everywhere, almost instantly and worldwide, without any control or any awareness of the risk to personal data.
Regulations
With this speed of innovation and development, we must reflect on the direction we want to take with AI and what kind of regulations should be placed upon it.
An example from another area is skeet shooting, a sport I personally enjoy. Many regulations are associated with it, governing how the shotgun is carried, how ammunition is handled and stored, and much more. Everyone participating expects and respects these regulations because they care about their own safety and that of others. Without these rules and regulations, there would be many accidents, some most likely fatal.
We can take another example from the atomic bomb. It has been with us for around 80 years, and today it remains the biggest threat to our existence. Nuclear weapons technology exists in many countries and, just as with AI, if it were to fall into the wrong hands it would spell disaster. There are many national and international regulations, as well as arms-control agreements, governing nuclear weapons. So far, these restrictions have prevented all-out global destruction.
In a similar way, we need to regulate artificial intelligence.
AI Threats
There are three areas in which AI could be a threat.
Government
The first is the government. There is a term, "statism," which means that the government has substantial centralized control over social and economic affairs. Control is the number one issue with AI, because at some point politicians will realize they can use it to steer society in any direction they like. This is the kind of control we have seen with Twitter and other social media over the last decade.
Politicians and private enterprises live in two different worlds. Show me one company that could get away with lying the way members of Congress lie. These politicians tell lies, and we can do nothing about it. And it isn't just one political party or the other; it's both. Both serve one master: their own interests rather than the people's.
Private Companies
The second threat comes from private corporations that make AI technology proprietary and use it to manipulate people. Right now, AI is largely under the control of just two corporations, Microsoft and Google. Microsoft took control of OpenAI (originally a non-profit that has since become for-profit) and with it ChatGPT. The other company, Google, operates on the belief that AI could one day exceed humanity and eventually replace it.
As I explore in more detail below, I believe the only protection from this threat is to make AI completely open source.
Criminals
The third threat comes from criminals. AI in criminal hands could mean, quite literally, the wrecking of entire economic systems, not to mention fraud against private citizens, their data and their finances, and against private enterprise.
AI Regulations and Control
We need experts to define AI regulations. I would go so far as to say that not only should the government not be responsible for regulating AI, it should not even be responsible for choosing the experts who would regulate it. If the government chooses the experts, it pays their salaries, and the experts will tell the government what it wants to hear.
Independent experts to define these regulations must be chosen from every field, experts with no allegiance to any government or government agency. As covered more fully below, control of AI should come from the community, not from the government or private companies.
Introduction of Bias
If the experts for AI are not completely non-partisan, the door is open to introducing bias into AI code.
An example of bias could be (as is happening in several sectors of society currently) defining a woman not biologically but only through subjective gender identification. But how is a human being actually identified? When it is born, it is identified as human because it has two arms and two legs. The same applies to other animals: a lion is a lion, not a tiger; an elephant is an elephant. And like 99 percent of other animals in nature, humans are born either male or female, identified as such by physical characteristics. Programming any other type of identification into AI would be folly.
Open Source
In addition to choosing independent experts, the other crucial step is to make AI completely open source so that access is available to everyone. When AI becomes a closed box, completely proprietary, it can fall under the control of governments or corporations.
There are currently over 100 million open source programmers contributing to the open source repository GitHub, with millions more graduating from schools every year. With this growing number of open source programmers, we can leave control of AI where it belongs: in the hands of the community.
I have been a very vocal advocate for open source for many years. In 2001, my company Uptime and I put together and presented a report on how vital open source would become to programming and technology. I presented this report at the famous Cafe Landtmann in Vienna. Executives from Microsoft were on the stage with me, and their position was the complete opposite: that open source would never have a place in computing. Of course, today Microsoft has totally reversed its stance, having purchased the open source repository GitHub and taken control of OpenAI.
As covered earlier, choosing the wrong experts for AI could mean the introduction of bias. Bias could also be introduced if AI code is proprietary rather than open source. It may not happen in the first generation of the code, but it will happen. We have seen this occur through algorithms in social media, and for it to happen in AI would spell disaster.
The Right Direction
With all the confusion in the world today, it’s impossible to predict where events will take us. If the wrong people—meaning governments, private corporate interests or criminals—take charge and control of artificial intelligence technology, we are doomed.
It is, therefore, our responsibility to assume control and define regulations for AI and keep it from going in the wrong direction.