AI

Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.

Teach a person that 6 x 2 = 12, and he will not forget it; teach a person how to multiply, and he will be able to figure out that 6 x 3 = 18.

Tell a machine to find the next biggest prime, and it will compute it faster than any human could; tell a machine why you want the next biggest prime, and it will figure out a more clever method of encryption.

Artificial intelligence (AI) is terrifying, and some of our species’ foremost thinkers (Elon Musk, Stephen Hawking) have warned that it is the single greatest threat to humanity. AI can be described as general, narrow, strong, or weak, but roughly speaking, it is the ability of computers to be “smart.” They don’t necessarily have to think like humans, but they do have to produce results that most of us would classify as intelligent.

As one of the readings pointed out, we have yet to discover anything in our brains that is not replicable in computers. We haven’t figured everything out about our brains, but there is no empirical evidence, thus far, of any intangible quality they might possess.

So the current state of the battle pitting human brains against machine brains is this: humans have the better structure (billions of neurons arranged in layers), but computers have the faster processor (transistors fire more quickly than the chemical gradients in our neurons). For now, our more robust structure seems to give us the advantage, but there is no limit to the size of computers (i.e., the number of transistors), and we could conceivably mimic the structure of our brains in computers eventually.

That leads to the conclusion that computers will surpass humans in terms of intelligence, which is scary. The scarier part, to me, is that computers could potentially keep learning, becoming more intelligent than we can fathom, but that doesn’t seem definite.

Machine learning is the trick for computers to speed past us. AlphaGo, the Google creation that handily beat the world’s best (human) Go player, is a technical marvel. It analyzes game situations at an absurd rate, making tiny adjustments to its algorithm to play the game – in other words, it is always learning, but not like a human…

It does not take breaks. It does not have days when it just doesn’t feel like practicing, days when it can’t kick its electronic brain into focus. Day in and day out, AlphaGo has been rocketing towards superiority, and the results are staggering.

AlphaGo will learn as long as it’s connected to a power source. It is worth noting, however, that it is learning with human rules. It may evolve new tactics for learning, but those tactics will have been derived from the initial rules programmed by humans, which leads me to believe that humans could eventually reach them too – they aren’t unfathomable to our minds, albeit many years down the road. And we wouldn’t get much help from the computers along the way: as the Atlantic article put it, we don’t really understand how AlphaGo is learning; the adjustments it makes aren’t intuitive to us.

As Christopher Moyer puts it in The Atlantic, “AlphaGo isn’t a mysterious beast from some distant unknown planet. AlphaGo is us… AlphaGo is our incessant curiosity. AlphaGo is our drive to push ourselves beyond what we thought possible.”


So the question is, if we teach one of our supercomputers to learn, will it learn like the best human pupil we’ve ever seen, or will it take on a mind of its own that is the first domino in the fall of the human race?
