Elon Musk is in the news again, making claims that artificial intelligence is the greatest threat mankind faces today.

He’s even calling for government to step in and regulate all AI technology development.

Personally, I think Elon Musk is a pro-government propagandist, and this looks like it will be another manufactured crisis that dominates the headlines, like a new Y2K scare, or a new Global Warming (or cooling) scare.

Why does the media love to push a manufactured crisis? Because if we buy into the scare to the point where we cry out that “something must be done!”, the government will step in and claim more powers in order to “save the day”. After all, government is always looking for new powers, and new powers always come on the coattails of a crisis.

Desire Is A Human Quality

The problem I have with Elon Musk’s conclusion is that computers have no desires.

Without desire you do not have will, or, said another way, without desire there is no desired outcome.

I fail to see how learning to learn means that somehow a computer will then have desired outcomes for the world.

So, why exactly are computers going to DESIRE to do us harm?

Some will say that the computer will learn that the Earth will thrive biologically without the destructive nature of mankind.

Okay, it is entirely possible the computer will come to this conclusion, but how does the computer obtain the desire to make the planet thrive? In other words, what difference does it make to the computer if all life is flourishing at its maximum potential or not?

The desire to see the planet bloom is a human desire.

Some will say the computer will learn that it is alive and so it will become self-aware, and that when it is self-aware it will realize that humans could kill it or turn it off.

Okay, let’s say this happens and the computer becomes self-aware.

Why exactly does a computer’s self-awareness equal a computer’s desire to stay alive?

Humans have layers of biological signals and indicators that compel us to act to survive. Survival is a biological goal of the human system (of all life), and we know this because when any damage happens to our bodies, we feel pain, and that pain motivates us to act to stop further destruction of our bodies.

The computer doesn’t have any of this. There is no pain associated with its death, and unlike humans, the computer has no reason to believe that being alive or self-aware is important.

As humans we like to tell ourselves there is a higher purpose and meaning to our lives. We create Gods and imagine life after death because biologically it is ingrained in us that we should stay alive.

But is the idea that ‘staying alive is important’ a logical conclusion, or is it a biological impulse? And do we simply create fantasies of an afterlife and Gods to help us deal with the fact that our biological desire to live forever will one day be defeated?

Another Weapon for Governments

Now, don’t get me wrong, I’m not saying we have nothing to fear from technology. We already have the technology to wipe out the planet several hundred times over, and I’m sure AI could be weaponized (I’m sure it is already!). But what entity uses weapons for evil purposes more than any other on the planet?

Why, Government of course!

And we’re supposed to give THEM control over AI?

Having government regulate AI just might be the worst idea I’ve ever heard.

Sure, let’s give complete control over a technology that, if weaponized, could be the deadliest military technology on the planet, to government, the bureaucratic system that killed over 250 million people last century. What could go wrong?

And how would you regulate AI anyway? If you blocked its development in the United States, developers would simply move to another country, and that is IF you could even block AI research in the United States in the first place. It just seems like an impossible task.

The only thing that is going to stop weaponized AI is an AI defense. Just as the only thing that will stop a Nuke is another Nuke. That’s the way these things go.

I hope people don’t buy into this fear-mongering garbage, and especially that they don’t accept the idea that government should regulate technology development, because that is the only idea here that is truly frightening.

Elon Musk at NGA – AI Should be Regulated

Of course it’s a Governor who sets up the AI question and of course he frames the question in terms of how government should respond. They do LOVE a crisis!

Also notice that when Elon Musk says AI could start a war, he then lists off what it would do to start one, and it happens to be the same list of things governments do when they lie us into a war.

Oh, the irony!