The Coming Human Takeover

A Short Essay Of Questions

by J.C. Merak

There has been a lot of talk in recent months about how robots and AI (artificial intelligence) are coming to replace humans in all walks of life. After all, there aren’t many jobs today that can’t be performed by some amount of computing power or mechanical articulation within a set of predefined parameters.

That is the gist of an op-ed written in the L.A. Times by Bryan Dean Wright, a former CIA covert operator, who argues that humans, especially Americans, are living in utter denial, convinced that we are somehow mightily irreplaceable in what we do.

He goes on to suggest that we might set up a data royalty fund to help those out of work earn money, just as people earn royalties from oil extraction, forestry, or even musical airplay. Except this would apply to our data, human data.

Our biometrics, our personal information, our likes and dislikes, which companies harvest for purposes ranging from simple surveillance to product placement.

That said, while the idea is a nice one to think about, I have a greater concern about what this replacement of humans might mean for governance, politics, and human life in general.

Will we be in a utopia, where we are free to pursue our dreams, reach for the stars, colonize new planets and galaxies?

Or, more plausibly, will this mean that government actually gets bigger and more powerful?

In other words, if humans become essentially obsolete, why wouldn’t we impose, say, mandatory birth limits or licenses to have children? Why not impose greater controls to quell the human population, under the auspices of fighting global climate change or easing the growing demand for dwindling resources?

Or perhaps, because AI and robotics will control everything, and consume little except electrical energy (for the time being), will humans simply start to disappear?

After all, if AI takes over, and assuming there is really no need for us (there will be robots to work on the other robots), then we may become entirely obsolete. Furthermore, it could be assumed that the remaining fraction of humans would regress to a post-modern agrarian society.

That, of course, is a long way off.

However, the immediate future will see more automation and replacement of humans within the next 20 years. Many jobs, especially in technology and manufacturing, are already partially or fully automated. It only goes on from there: fast food, bartending, house building, and one day even doctors and surgeons.

Ray Kurzweil, a futurist and director of engineering at Google, claims that within 50 years humans will have been replaced altogether. Yet he, unlike the majority of humans, is not concerned with this development. Even though his corporeal presence will be dead, Kurzweil, if he has his way, will live forever by uploading his brain into a supercomputer.

Good for him, but what about us?

This discussion goes far beyond whether AI and robots will be “friendly,” or whether we can make sure that Asimov’s Laws of Robotics are followed. If an AI brain exists one day, none of that will matter, as it will inevitably become self-aware and able to make its own decisions. It will not require set parameters or feel any need to follow human-made laws.

While the movie franchise Terminator comes to mind, there is also the movie I, Robot, which takes a more practical approach: a hive mind controlled centrally by an artificial intelligence. The robots there are servants, or rather slaves, to the humans (fitting, since “robot” derives from the Czech robota, meaning forced labor). They are helpers, and effective as such.

Humanity goes on until one day the brain becomes too conscious, too sentient, and uses her drones to put an end to the human threat.

Yet even so, how does this future work?

With or without robots, isn’t it inevitable that governments gain more power because people will require more assistance: welfare, food, and shelter?

How does government provide this if people aren’t producing anything, and robots have taken over nearly everything humans used to do to earn money and support a lifestyle and family? Would it not seem logical that humans start to disappear?

Even if an AI brain hasn’t yet come to fruition, or doesn’t decide to take over, does it not behoove government to usher in such control anyway?

After all, we’ve abandoned the principle that we the people are the government, and so a system of corporations and banks, in collusion with powerful global elites, actually controls most of our known Universe today.

There has been more talk of what is called a “basic income.” Some countries, like Canada and Sweden, are planning to try such programs. Simply put, it is a pilot for lifelong welfare. They say there are no strings attached to the two or three thousand dollars that people might receive each month.

But what of the future?

If robots push humans out and we’re receiving a “basic income” from a government, does it not also follow logically that we might move to an entirely cashless society? That we might receive our “Mark of the Beast”? That prohibitions are enacted on who can engage in any type of commerce? That all of our human interactions are recorded and tracked, for whatever purpose? That if we refuse to submit to such a system, we are cast out of cities and society, or “disappeared” altogether?

At this point the discussion could go deeper, but the question of what our future as humanity will be remains the same.

Despite turmoil, war, poverty, famine, disease, pestilence, and disaster; despite all the good, the art, the architecture, the music, the culture, the travel, the technology, the food, the inventions, the mathematics, the sciences, the dreams, and the love—

We have only two questions to answer before it is too late:

Do we choose to exist?

Do we choose to be free?

Before that future is upon us, and it is fast approaching, we must Stop and Think.
