Cisco UK & Ireland Blog

TechTalk: Artificial Intelligence – the computers are getting a whole lot smarter


November 4, 2015


Welcome to our inaugural Tech Talk blog – where each month I’ll be dissecting some of the biggest technology trends in the industry today. To kick things off, I’m starting with Artificial Intelligence…

It was acclaimed futurist Vernor Vinge who said: “We are on the edge of change comparable to the rise of human life on earth.”

Artificial Intelligence is no longer confined to the realms of science fiction; we are on the verge of a new reality that is going to totally transform technology, culture, and society – for better, or for worse.

It’s a fascinating (and vastly complex) topic, so here I’ve broken it down into what you need to know. First off, there are three forms (think of them as levels) of AI:

Artificial Narrow Intelligence (ANI) – or Weak AI – specialises in one area, and is pretty commonplace today. Think of a computer playing a game of chess. Google Maps is also classed as ANI.

There’s also Artificial General Intelligence (AGI) – or Strong AI – which is a computer that’s as smart as a human across the board. Reaching this step is much more difficult, but this is where we’re headed.

And then there’s Artificial Superintelligence. I’ll let philosopher Nick Bostrom explain that one: “An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” In simple terms, it’s a bit like comparing the brilliant human brains of Einstein and Hawking to the brain of God, the creator of the Universe.

So where are we currently sitting on this road to Superintelligence?

Well, ANI (level 1) is everywhere. Just look at the digital assistant on your smartphone, which can quickly respond to your voice and answer queries (with an added degree of sass, when required).

Learning algorithms can now be found monitoring credit card transactions for fraud, automatically trading retirement funds on international markets, and even acting as an oncology advisor (in the form of IBM’s Watson).

ANIs are good at mimicking human behaviour in certain tasks. But is it true intelligence? Reaching that next level is pretty difficult.

If Moore’s Law (computing power doubling every two years) holds true, we’ll have a supercomputer with the same raw processing power as a human brain by 2025. But programming that computer to think like a human is another challenge entirely. The ‘routine intelligence’ that we take for granted as humans (i.e. the things we do without thinking) is incredibly complex to replicate.
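The arithmetic behind that projection is simple compound doubling. As a back-of-envelope sketch (the 2015 baseline of one “unit” of compute and the 2025 target are illustrative, not figures from the article):

```python
# Back-of-envelope illustration of Moore's Law-style doubling:
# capacity doubles once every two years from a chosen baseline year.

def projected_capacity(start_year: int, target_year: int, doubling_period: int = 2) -> float:
    """Return the multiple of the starting capacity after repeated doublings."""
    doublings = (target_year - start_year) / doubling_period
    return 2 ** doublings

# Five doublings between 2015 and 2025 give a 32x increase in raw compute.
print(projected_capacity(2015, 2025))  # 32.0
```

Whether 32 times 2015-era compute actually matches a human brain depends entirely on how you estimate the brain’s processing power – the doubling maths is the easy part.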

Making complex calculations? Fine. Searching through lots of data? No problem. Picking up a ball, walking, and understanding the words on a page are much more difficult – though there is a lot of progress here as well.

If it is achievable, computers will have an uncanny advantage over us. Machines can work 24/7, for starters. A self-learning machine can increase its own processing power and memory, and it is bound only by its programming – which, by default, includes no morals.

There is a lot of debate in the scientific community about when we’ll be able to produce anything like AGI – human-level thinking (predictions range from 20 to 40 years away, to not at all).

And if we ever reach the lofty heights of ASI, what will it look like? There are two main views – the optimists and the extremists.

The optimists believe technology will help us far more than it will hurt us, solving all the world’s problems. Global warming? The ASI would halt CO2 emissions by coming up with a better way to generate energy without fossil fuels. Then it would create some innovative way to remove excess greenhouse gases from the atmosphere.

World hunger? We’ll use nanotechnology to build meat from scratch, molecularly identical to real meat, and then distribute it around the world with ultra-advanced transportation.

We could bring back endangered species, cure cancer, and even conquer our own mortality.

The extremists don’t look at the future quite so rosily, and it’s a bit more along the lines of a dystopian nightmare we’re all too familiar with through any number of Hollywood movie plots.

When ASI arrives – who or what will be in control of this vast new power? What will its motivation be?

One possibility is an ASI built by a malicious human or a rogue state. However, experts don’t consider this the most likely threat, as even a malicious human agent would face the same problems containing an ASI that ‘good’ humans would.

More likely is a ‘malicious’ ASI that destroys us all. Without some specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal.

So the AI will not be ‘unfriendly’ – it will neither like us nor dislike us; it will simply have better uses for the atoms we’re made of. If it’s programmed to make the most perfect paperclip with as much skill as it can learn, and humans are in the way of creating that paperclip… they must go.

Even bleaker, the ASI could create an army of tiny nanorobots that multiply exponentially – a ‘grey goo’ that consumes all matter in the universe and converts it into paperclips. Humans create ASI; the universe ends up as nothing but paperclips.

So on the one hand we have the potential to unravel the mysteries of the universe. On the other, there’s the potential to destroy it.

The truth is we simply do not know what will happen, however brilliant the advocate of a particular outcome may be. The real point to note is that it’s not obvious that all attempts to achieve ASI will fail.

One thing we do know is that we cannot stop the march towards superintelligence. The advantage of owning one is simply too great for any business, government, or army to resist.

This is important, because the impact of this event would be huge. Following the first demonstration of true general intelligence, a much larger and more powerful machine will almost certainly be constructed shortly after.

We need to be prepared for this ‘intelligence explosion’ by tying AI decision-making to human responsibility. This could require self-limiting laws and international treaties – a conversation about the limits needs to happen now, so that we are prepared.

But amongst all this we should not lose sight of the enormous benefits that this technology is providing mankind today and could provide in the future.

We are living in exciting times!

Some recommended reading (which really does this topic some justice) comes in the form of the excellent essay ‘The AI Revolution: The Road to Superintelligence’ from Wait But Why, which explores all the latest research in this field. You’ll need a good hour to read both parts!

 


1 Comment

  1. Hello Alison,
    a great summary, well done.
    we are covering this topic at Windsor Debates on 18 March 2016 for members. whilst the Bionic Man/Woman TV series, Blade Runner, Chappie and ET movies all fit into the Hollywood glamorisation of AI whether 30 or 5 years ago, who or what is propelling the development and adoption of AI? is it for protection, or destruction? will humans merge with cyborgs? if robots never sleep they are 24/7/365 replications of ourselves. so highly functional. but what about human emotions? will we erode these in order to be the better cyborgs of ourselves?
    or should we just play David Bowie backtracks and hope to find the answers in his lyrics, or some other artist?

    thanks for your input
    Tina