In the last couple of weeks, the internet has been frantically buzzing over a warning issued by Elon Musk—and supported by hundreds of industry tech leaders—that we should pause the development of AI because of its threat to humanity. Musk states that all companies fervently working on AI should cease development for six months and reflect on AI’s potential consequences and dangers.
Is this a warning we should heed? And would such a “pause” even be possible?
Bill Gates also recently published his views on AI, showing its practical direction and possibilities and formally stating that the age of AI has begun. Gates states, “Artificial intelligence is as revolutionary as mobile phones or the internet.”
I would go even further and say that AI is even more revolutionary. In the 1970s, when Gates co-founded Microsoft, only a handful of minds could influence the computer industry. Today, this group is gigantic. Five years ago, I gave a speech at DePaul University in which I stated that 24 million programmers were contributing to open-source repositories on GitHub; today there are over 100 million. Additionally, millions of people are currently finishing high school and heading for science and technology careers. So the amount of brainpower being applied to computing, and therefore to AI’s development, is far greater than it was 40 years ago.
The other significant change in the industry is that not only are all these people programming, but because of today’s interconnection, there is much greater collaboration than in the past, when most innovation took place in isolated labs.
Two Different Models
In his article, Gates points out that there are two different models on the AI front. First, the technical term “artificial intelligence” refers to the model created to solve a specific problem or provide a particular service, such as the AI powering ChatGPT.
The second model is referred to as AGI—Artificial General Intelligence—which is AI that would be capable of learning any task or subject. There is currently a debate in the industry about whether AGI is even feasible, because it points toward computing becoming more capable than the humans who create it.
Pause Even Possible?
While Elon Musk is urging everyone to cease AI development for six months, the question should be asked if this is even possible. Let us say that the United States, Australia, New Zealand, Europe and allied countries all agree to stop AI development. Will China and Russia then agree? It is doubtful.
In addition to countries, some companies may even agree to cease development, but then carry on in secret to gain development advantage.
From the anthropology employed in the Austrian School of Economics, we know that humans, in general, are selfish and operate mainly in their own self-interest. There would, therefore, be some who cooperate with this “pause” and many more who would not. While Elon Musk’s urging is understandable, it cannot be practically executed.
Is Pausing Correct?
A second question to ask is this: is it actually intelligent to engage in such a pause?
I compare today’s scenario to when I was young, in 1967, when Dr. Christiaan Barnard successfully performed the first heart transplant in South Africa. He faced considerable opposition—the press in Europe and elsewhere in the world cried out that such an operation was unethical and immoral. Today, a heart transplant is relatively common. While it is still miraculous, hundreds of surgeons today can carry it out. Fortunately, we didn’t listen to the naysayers and carried on with innovation.
If we pause development on AI, we won’t know what could actually be accomplished with it. ChatGPT, powered by OpenAI’s GPT models, was introduced in November of 2022, and GPT-4, the most advanced version to date, followed only months later. Solutions are being developed at an incredible rate, and it’s a race to see who will win in this market. There is a tremendous amount of investment and competition. It might therefore be very worthwhile to carry on.
AI’s Potential Beneficence
The word “beneficence” is defined as “an act of charity, mercy, and kindness with a strong connotation of doing good to others, including moral obligation.” Instead of pausing AI development, we should examine the core principles underlying AI development.
Instead of operating on the core principles we’ve had for thousands of years, some advocate throwing them out and establishing new ones. For example, in America, some claim that our constitution is outmoded and should be done away with and a new one drafted.
But the core principles we have operated under for thousands of years still hold true today. Perhaps we should proceed with AI development under these principles.
How about utilizing AI to provide better lives to the poor and to heal the sick? How about making it possible for deaf people to hear and for people who are blind to see?
How about utilizing AI to feed the hungry? Although the overall scene is better today than it was, say, 20 years ago, 45 percent of all child deaths worldwide are from causes related to undernutrition. That’s 3.1 million children per year. Let’s utilize artificial intelligence to figure out how to distribute the tons of food wasted daily, from supermarkets to restaurants, and even from more affluent homes. Much of the time, the obstacles are logistical—AI could undoubtedly help solve them.
Healthcare is another area. AI could be utilized to select the proper medication for a disorder or illness by instantly comparing a patient’s case with countless others.
Education is certainly another sector that AI will profoundly impact, for it can instantly provide all possible data on one topic. This is, of course, assuming the data has been verified as correct.
Another area that AI could help tackle would be employment. People need to be able to work, and AI could be used to figure out where people could be best employed.
Regulations Needed
We can clearly see at this stage that the AI industry requires regulations. Recently, ChatGPT was banned in Italy over privacy concerns. It has also been reported that Samsung workers unwittingly leaked top-secret data while using ChatGPT. Elsewhere, ChatGPT has been restricted by Amazon and other companies because workers paste confidential information into it.
Compare another industry: 75 years ago, if you wanted to fly a plane somewhere, you just started up your aircraft and took off. Today, because of the sheer number of aircraft and many other reasons, there are countless regulations on the airline industry. An aircraft cannot even leave the ground without a fully qualified flight crew, full mechanical clearance to prevent air disasters, a filed flight plan, and clearance from the control tower. Travelers are security checked and heavily regulated. There are many other regulations within that industry.
Elon Musk is saying we need to pause AI innovation. When aircraft were first being pioneered, it was a similar scene—they were considered dangerous, and people were warned off them. But think of what it would be like if we had paused or ceased the development of aircraft. How many weeks would it take to go from New York to Europe by train and ship? Or to send a package from San Francisco to Tokyo? Today’s world is as fast-paced as it is because we can fly.
At the same time, imagine what the airline industry would be without regulations. It would be a chaotic nightmare to travel anywhere.
I do feel that the ChatGPT ban in Italy is a bit of an overreaction, and as I said earlier, I don’t believe that Elon Musk’s suggestion of pausing is correct. But the issues brought to light by these events are real. There are legitimate security concerns, and the real answer is to regulate the technology so that it is safe—just as with the airline industry. We need to be protected as the human race, and prevent anyone from committing reckless acts with the technology.
Even nations normally in opposition to each other have agreed on regulations for the airline industry. It is therefore obvious they can cooperate and coordinate in vital areas. Many of these same nations are currently working on AI development, so they should cooperate on AI as they have on air travel regulations.
The next question becomes, who should devise these regulations and put them in place? I’m not here to suggest that I, or we at Pipeliner, know how to do this. But it is vital that it be done.
AI at Pipeliner
As I have pointed out many times, AI will work best in a supportive role, when it is our “wingperson.” This is how it is being used here at Pipeliner.
AI will never replace B2B salespeople, because B2B sales are too complex and variable to move forward without human intervention and control. An interesting article by George Bronten, one of our competitors, makes a very important point about AI being used in complex sales. Bronten writes, “the more complex the problem, the more difficult it is for AI to understand intent. For instance, imagine you have multiple people in the car for a road trip, and they each have different priorities: One wants a scenic route, one needs to take frequent breaks, another wants to drive past their old high school, and the driver has anxiety about road construction. A future version of an AI maps app might be able to incorporate all of these user intents as data points and create an optimal route—but only if it understands everyone’s intent.”
Similarly, Bronten points out, a complex B2B sale is not a simple journey. It involves multiple stakeholders, differing priorities, complex problems, and often undefined risks and needs. These can only be addressed if everyone’s intent is fully understood. That is why I believe AI is, and will continue to be, a strong supportive agent—a good “wingperson”—not any kind of replacement for B2B salespeople.
As of this moment, we have examined 20 different AI tools, and are using 14 different kinds of AI just in our marketing. We use AI-generated music for our videos, so we no longer have to pay royalties. The voiceover in videos is also generated by AI, as well as the video transcript. We even have a human-appearing agent in our videos.
Using Fear as a Guide
In my opinion, fear is never good counsel, so I am hesitant to agree with Elon Musk’s warning and suggestion. Respect is a much better counsel. Let us respect life and utilize AI to help us bring life to all new heights.
We should examine the core principles that stand behind our AI development. As Austrian economist Ludwig von Mises said, the better idea will always prevail.
At Pipeliner, we want to use those better ideas for innovation. That is my approach.
AI is in our hands as the human race, and it needs our responsibility. Everyone wants freedom to develop AI as they wish—but as Thomas Mann said, responsibility is the other side of the freedom coin. Responsibility should be taken because we can, with this technology, do incredible things for ourselves and the planet.