Jarvis & The Rise of Artificial Intelligence

Tales from the Encrypted


Illustration: Mandar Mhaskar/Arré

Jarvis is here. Jarvis controls Mark’s home – the appliances, music, security, temperature – and is learning Mark’s tastes and patterns; he can even entertain his daughter. Yeah, impressive, but not really.

What is impressive, however, is that Jarvis will one day evolve into a complex AI system with senses more accurate than those of real people, and, if movies like The Matrix and The Terminator are to be believed, will develop intuitive intelligence way beyond our imagination, gang up against us, and take over our world. Until, of course, a Hollywood actor comes to save us.

Reality, however, is a little more embarrassing. When Jarvis and his kind take over (and yes, I said when, not if), we would at best serve as their dumb sidekicks, and at worst be a slightly disgusting curiosity – the way we regard the insects in a BBC documentary. Yes, they will think faster than us, and yes, they will be more efficient than us, but would they really want to pick a fight with us?

An artificial super-intelligence could have motives that we would not be able to fathom. Think about it: all the wars in human history have been waged over resources – land, water, food, and oil. But an artificial intelligence does not need any of the resources that humans do. The vast, cold deserts of the poles would be ideal locations for its servers, since less energy would be wasted on artificially cooling them. The Sahara desert could be colonised by solar panels laid by robot workers to provide cheap energy.

But beyond all this, what AI has over us is that it does not need the cozy conditions that our meat-filled bodies are accustomed to. Its robotic ancillary units can thrive underground, or beyond the limits of our atmosphere. They will not feel melancholy or love, or write sad poems about not seeing the sun for days while they toil.


For robots, waging war against humans would not only be time-consuming, but also wasteful. Humans, on the other hand, have been known to start wars for the dumbest of reasons. There’s no guarantee that we won’t decide to go to war with the AI because it touched us in the wrong place with a noodly appendage. In science fiction, humans panic and try to engage the killswitch when such encounters happen, at which point the AI (with good reason) starts killing the pesky humans. Željko Švedić, the writer of a phenomenal piece on super-intelligence and why it is unlikely to be a risk to humans, says, “Attempts to completely control somebody or something that is smarter than you can easily backfire. I wouldn’t want to live with a shutdown switch on the back of my head — why would a super-intelligence?” Any AI worth its silicon would anticipate this and keep its various noodly appendages in check.

But what if they still want to kill us, or otherwise use up our resources for an unfathomable reason? Well then, you can think about it the way Nick Bostrom suggests in the “paperclip maximiser” thought experiment. Since an AI has a wildly different thought process, it could decide to divert all of Earth’s resources into creating, say… paperclips. It could convert every available piece of land into a paperclip assembly line, and kill all humans resisting these attempts. Why paperclips, you ask? Perhaps it requires paperclips to summon a dimension to other worlds. Perhaps paperclips are the answer to Life, the Universe and Everything. Perhaps it’s just crazy, but the point is that humans will be in the way of the AI’s ultimate goal of paperclip maximisation, or any other maximisation that it chooses to focus on. And therein would lie our end.

The problem with this line of thought is our own grand sense of self. The fear that an AI would require the resources on Earth to maximise paperclip production is another example of human-centric bias. Let’s face it, the Earth is a tiny, insignificant speck in the vastness of space. If an AI wanted a few hundred thousand megatonnes of steel to make paperclips, it would probably look elsewhere within the solar system (the asteroids come to mind), or even to other stars and planets, for its raw material. It would screen planets and star systems for its goal, and manufacture spacecraft to carry its ancillary units into the inky blackness of the sky to colonise distant worlds.

One day, as we watch, these vast spaceships will take off into the sky from the equatorial region of the Sahara desert. We will watch them go into the black where they will observe new worlds many light years away, while our fragile, temporary bodies will live and die within the confines of the Earth (and maybe Mars). We will be like those simpleton parents watching their smart progeny leave them and never turn back.

So yes, Jarvis is here, and he’s not very impressive right now. Let’s hope it stays that way.
