Shortly before his death, the renowned physicist Stephen Hawking issued a stark warning about artificial intelligence in 2017. He said: “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it. Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”
Hawking did not, however, specify when artificial intelligence might pose such a danger. The implication was that it would come after AI becomes ASI, artificial superintelligence: a state in which the cognitive abilities of computers surpass those of the human mind in every respect. With human intelligence outperformed by superintelligence, such power could slip beyond the human race’s control.
Fortunately, computers have not yet reached this level of intelligence, but it may transpire at some point in the near future, perhaps with beneficial consequences. On the contrary, creating superintelligence could also be a disaster for the human race, possibly even leading to extinction.
This might sound far-fetched, but even Hollywood blockbusters such as The Terminator and The Matrix have envisioned apocalyptic, doomsday scenarios brought about by machines surpassing human intelligence. And although these remain works of fiction, the underlying concern is edging ever closer to real life, and alarm bells are ringing around the world as thought leaders wake up to this reality.
Tesla and SpaceX boss Elon Musk has also predicted dire consequences, claiming that AI is potentially more dangerous than North Korea and its nuclear warheads, and recently calling for greater regulatory oversight of the development of superintelligence. Musk has repeatedly said that the biggest issue he sees is the sheer arrogance of AI experts who claim to know more than they actually do, a failing that tends to plague the smartest people: their belief that a computer cannot be smarter than they are is fundamentally flawed.
Elon Musk and Stephen Hawking were not alone in predicting disastrous consequences. Leading philosophers have also weighed in on this issue, discussing a multitude of scenarios in which humanity could be threatened by the superiority of machines.
Nick Bostrom, an Oxford University professor, addresses the same issue in his book Superintelligence: Paths, Dangers, Strategies, focusing on the point at which AI achieves an “intelligence explosion”. In it, he asks how one could engineer a controlled detonation, protecting human values from being overwritten by the arbitrary values of an artificial superintelligence.
At this stage, it is hard to know whether ASI will work to enhance humanity rather than destroy it. Professor Hawking believed that the destruction of humankind could be the ultimate result. Yet he called himself an optimist, arguing that we should build AI for human benefit and for the good of the world, so that it works in harmony with us, and that we simply need to be aware of the dangers AI can pose and do nothing to trigger them.