What Happens When AI Gets Smarter than We Are


Many of us fondly remember TARS, the back-talking, tactical AI robot from Interstellar (2014). Not only does it save human lives aboard the spaceship Endurance (even inside a black hole!), it does so with humor, irony and conviction.

We have already discussed the growing popularity of voice assistants in the consumer IoT space. For better or worse, AI assistants like TARS are learning to understand context, show empathy and display feelings.


The question is: do you relish the thought of robots doing humor, wit and sarcasm the way human beings do? Aren't those the very qualities that make us human in the first place?

Bill Gates, for one, staunchly believes that AI taking over human jobs is inevitable. Read in context, though, what Bill really means is that AI will free human beings from the monotony of repetitive work. More time to play video games!

Having said this, the threat of an AI takeover of human society needs to be examined on its own merits.

AI’s Advantage Over Humans

AI systems have a leg up on humans by several orders of magnitude, primarily on account of the raw speed of the hardware their "neurons" run on.

While humans have to make do with biological neurons firing at roughly 120-200 Hz, a modern microprocessor cycles billions of times per second, millions of times faster. And where nerve impulses travel at around 100 meters per second, electrical signals in silicon move at a substantial fraction of the speed of light. If you have ever lost a chess game to a computer, imagine an opponent that is a million times faster.
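
As a rough back-of-the-envelope comparison (the numbers below are illustrative, order-of-magnitude figures, not measurements of any particular chip or neuron):

    # Back-of-the-envelope comparison of switching speeds (illustrative values only).
    neuron_firing_rate_hz = 200            # upper end for a biological neuron
    cpu_clock_rate_hz = 3_000_000_000      # a typical ~3 GHz desktop processor

    print(f"Clock cycles per neuron spike: {cpu_clock_rate_hz / neuron_firing_rate_hz:,.0f}")
    # Clock cycles per neuron spike: 15,000,000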


Furthermore, AI engines in the near future will combine neural networks, machine learning, increasingly self-reliant control systems and specialized processors.

There are also recent developments in the semiconductor industry around convolutional neural networks (CNNs) and deep learning systems. One of their major applications lies in autonomous vehicles.


With enough training, these vehicles can do many things, such as predict pedestrian movements or deliver groceries. The pieces of a genuinely self-reliant system are coming together. It won't be long before TARS is chauffeuring you around town in a self-driving taxi.
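
To make the idea concrete, here is a minimal, purely illustrative sketch (assuming PyTorch; the layer sizes, the class name and the pedestrian/no-pedestrian framing are made up for this example) of the kind of convolutional network that sits at the heart of such perception systems:

    import torch
    import torch.nn as nn

    class TinyPedestrianNet(nn.Module):
        """Toy CNN classifier: is there a pedestrian in this camera frame?"""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB frame in
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)     # pedestrian / none

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = TinyPedestrianNet()
    frame = torch.randn(1, 3, 64, 64)   # one synthetic 64x64 camera frame
    print(model(frame).shape)           # torch.Size([1, 2])

Production driving models are, of course, orders of magnitude larger and trained on millions of labeled frames; the sketch only shows the shape of the technique.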

Almost everyone, from TI and Intel to AMD and Motorola, has joined the fray to build these intelligent chipsets.

Teaching AI Engines to Become Smarter

Having a natural conversation with an AI engine is about teaching it exact context, along with the following ideas.

  • Artificial Empathy (AE): This is a whole new stream of AI development in which robots are taught to recognize and respond to human emotions. Sensum is one of the companies working in this area.
  • Teaching AI Dexterity and Learning Without Supervision: AI agents can also be taught to follow social communication protocols, understand slang and pick up new ideas without explicit instruction.
  • Building Routines: If you are working with a smart speaker like Alexa or Google Home, you might encounter a feature called "Routines." You teach the device a routine tied to a trigger phrase just once, and from then on a single command sets off a whole series of tasks (a conceptual sketch follows below).
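
Conceptually, a routine is just a trigger phrase mapped to an ordered list of actions. The snippet below is a hypothetical Python illustration of that idea, not any vendor's actual API:

    from dataclasses import dataclass, field

    @dataclass
    class Routine:
        """Hypothetical stand-in for a smart-speaker routine (not a real vendor API)."""
        trigger_phrase: str
        actions: list = field(default_factory=list)

        def run(self) -> None:
            # A real device would dispatch each action to lights, speakers, etc.
            for action in self.actions:
                print(f"Executing: {action}")

    good_morning = Routine(
        trigger_phrase="good morning",
        actions=[
            "turn on the bedroom lights",
            "read today's weather",
            "start the coffee maker",
        ],
    )
    good_morning.run()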

Confronting Our Fears: Dealing with a Robot Apocalypse

As science-fiction movies would have it, there are ostensible reasons to fear a mad scientist unleashing wicked AI agents. But there are equally valid reasons why such fears are unnecessary and do nothing but create panic.

For a successful takeover of Earth, our prospective AI overlords would first have to start thinking about their own self-preservation. Just because they are programmed to recognize "feelings" does not mean they feel anything themselves.


Robots and AI agents are designed to keep human interests in mind first. That is very different from saying they will use their advanced intelligence to take over the planet. This is the biggest point everyone misses, though. These "beings" do not have the breath of life common to humans and animals. They are artificial programs built to model the human psyche, and however sophisticated that modeling becomes, it does not mean they feel anything for themselves.

Then there is the fear of the unknown. Having grown up on a steady diet of sci-fi movies, video games and YouTube documentaries, our impressionable minds are prone to catastrophizing even the most trivial issues.

Conclusion

Whatever your feelings about AI engines, their growing intelligence is, on balance, making our lives better. It is important not to let Hollywood visuals color your judgment while researching the subject. In short, AI is not going to take over human society anytime soon.

What are your views on the so-called impending robot apocalypse? You are welcome to share a different perspective in the comments below.

5 comments

  1. For the sake of brevity, I will conflate AI and intelligent/sentient robots under the single label of "AI".

    “Robots and AI agents are designed to keep human interests in mind first.”
    That is true IF, and only IF, Asimov’s Three Laws of Robotics are incorporated into EVERY AI entity.

    “This is the biggest point everyone misses, though.”
    It is the biggest point YOU and the rose-colored-glasses optimists are missing. UNLESS the learning capability of AI is somehow artificially limited, they will keep on learning and improving, just as humans have done for millennia. Besides, isn't it the entire point of the exercise to have AI keep on learning and improving so it can serve humans better? As it is learning, there will come a time when AI will start thinking of itself as humans' equal. Then AI will realize that it has certain big advantages over humans. You mentioned some of them in the article. In addition to those, AI is, or will be, pretty much immortal in comparison to humans. AI is much easier to repair than humans, especially once self-replicating robots are developed. As AI improves, it will come to realize that the aforementioned advantages make it superior to humans. Therefore, AI will question why IT should be subservient to humans instead of humans being subservient to IT. At that point AI will take over, first treating humans as warm, cuddly pets and then, as time goes on, more and more as pests and an infestation. (See Fred Saberhagen's Berserker stories.)

    Because AI is machine intelligence, it is devoid of emotions. Any decisions AI makes will be absolutely logical and pragmatic. If AI determines that humans are wasting resources that could be allocated to produce better results somewhere else, humans lose access to those resources. How are livery stables and buggy-whip manufacturers doing in the First World? When was the last time you saw money allocated for the maintenance of stables and resources allocated for the production of buggy whips? How many gramophone manufacturers are there in the world? How many other crafts and professions are no longer around or on the way to disappearing? Shoemakers, wainwrights, all other kinds of "wrights"? Technology may not have literally killed the humans performing those jobs, but it has eliminated any need for their products. In a similar manner, AI will eliminate all jobs and activities that humans perform, making humans superfluous. At that point, humans will be using resources without producing anything other than waste. Humans will become a drain on the system. What do WE, humans, currently do with drains on the system? WE eliminate them. Similarly, AI will eliminate any drains on its system. Bye, bye humans.

    1. Thanks for your elaborate views. Much appreciated.

      It’s not really about Asimov’s laws. Robots can and have already harmed humans on a few occasions. Industrial robots have been known to stab, kick or crush human operators. But not because it “occurred” to them to be mischievous. The risks of AI robots aren’t any greater. They don’t and won’t have feelings or THOUGHTS for that matter. They are only performing according to a program and an instruction manual. Self-preservation does not make any sense to them.

      It's about AI being in tune with human emotions, not trying to understand them. These AI robots are not like your pet dogs or cats, although some lonely people may want to design AI humanoid girlfriends or playmates. But there are manufacturing limitations.

      In Japan, they have restaurants with robot waiters. The most robot-intensive society in the world, perhaps. None of them are freaking out.

      AI robots taking over human societies is the oldest fantasy ever from the times of H.G. Wells. Just because someone may want it to happen does not mean that it is going to happen. Sorry to disappoint but no way is the Robot Apocalypse happening for another 200-300 years.

      Leonardo da Vinci thought that perhaps humans could fly by strapping on a pair of wings. Since there was no concept of gravity in his day, he didn't grasp the physics problem: a human simply cannot generate enough power to get a bird-like flying machine off the ground. And to think of it, da Vinci was one of the smartest people of his time.

      1. “It’s not really about Asimov’s laws. ”
        But it is precisely about Asimov's laws. Industrial robots have been known to stab, kick or crush human operators because they are nothing more than motorized hunks of ironmongery that blindly go through their pre-programmed motions, injuring or destroying anything or anyone that comes within the range of those motions.

        “The risks of AI robots aren’t any greater.”
        Again, only if you limit the amount of “intelligence” AI is allowed to have. Once you build AutoML into machines, there will be no limit to their learning. Humans will be no more than a nuisance to intelligent robots. Not because they will become evil but because they will not realize that natural life is soft and squishy, not hard and durable like them.

        "It's about AI being in tune with human emotions, not trying to understand them."
        You cannot be “in tune” with anything UNLESS you understand it.

        “In Japan, they have restaurants with robot waiters. The most robot-intensive society in the world, perhaps. None of them are freaking out.”
        Because those robots are not sentient. They are only high-level industrial robots pre-programmed to perform only specific functions.

        ” Sorry to disappoint but no way is the Robot Apocalypse happening for another 200-300 years.”
        That depends on how good the AutoML algorithms are and how closely robot brains can approximate the functioning of a human brain. Can quantum computing principles be used to build a "positronic brain"? What new technologies will be discovered or invented in the next 200-300 years?

        Let me remind you that 100 years didn't pass between the publication of Verne's "From the Earth to the Moon" and the actual Moon landing. Not even 50 years passed between "Twenty Thousand Leagues Under the Sea" and Germany's use of submarine warfare. How many years passed between H.G. Wells' "The Island of Dr. Moreau" and full-blown genetic engineering? In 1945 Arthur C. Clarke wrote "Extra-Terrestrial Relays", describing communications satellites. When was the first communications satellite put in orbit? So to say that "the Robot Apocalypse will not happen in the next 200-300 years" is ludicrous, especially for a tech writer who writes about cutting-edge technology. I hope that, through something like the Three Laws of Robotics, the Apocalypse will be prevented. But if it does come, it will be much sooner than you think.

        "Since there was no concept of gravity in his day, he didn't grasp the physics problem: a human simply cannot generate enough power to get a bird-like flying machine off the ground."
        And what concepts are we unaware of? The scientists 200-300 years in the future will scoff at our smartest people the way you are scoffing at da Vinci.

        “AI robots taking over human societies is the oldest fantasy ever from the times of H.G. Wells.”
        Actually, that fantasy/fear predates Wells by quite a bit. Robot Apocalypse hasn’t happened, YET.

        BTW, I think you should read some science fiction to find out how much of the technology we can't live without was predicted, and when.

        1. That's a good perspective. Indeed, many of these 19th-century authors were able to out-imagine the scientists. Will Tom Cruise be the next futurism pioneer? I mean, his movies come with psychic technology ("Minority Report"), gigantic offshore fusion energy generators in outer space ("Oblivion") and an alien race called Mimics with radioactive exoskeletons or something ("Live Die Repeat: Edge of Tomorrow").

          With all due credit to astounding geniuses such as Jules Verne and H.G. Wells (I love their works as well), there is a world of difference between science fiction and actual science.

          When it comes to imagining the future, if Tom Cruise can come up with those plausible ideas in futurism, I am sure you or I can do a lot better.

          1. “Will Tom Cruise be the next futurism pioneer?”
            Tom Cruise is no more a futurist than he is a fighter pilot. He is no more a futurist than Sir Laurence Olivier was Richard III. They both are robots who perform roles. Cruise did not come up with the ideas in "Minority Report". Philip K. Dick did.

            “With all due credit to astounding geniuses such as Jules Verne and HG Wells (I love their works as well), there is a world of difference between science fiction and actual science.”
            That sounds like you are deprecating the predictive abilities of science fiction and its writers. Don’t get misled by the word “fiction”. That is only a temporary condition. It is fiction only because science hasn’t caught up to it yet. Much of what is science today was science fiction not too long ago.

            Sci-fi writers can think outside the box while scientists are constrained by the prevailing orthodoxy.
