When it comes to Artificial Intelligence, I stand with Stephen Hawking, Bill Gates, and Elon Musk. AI has a lot to offer, but it also has a dark side we need to consider if we want to move forward cautiously.
According to Ray Kurzweil, director of engineering at Google, computers should be able to outsmart humans by 2029, understanding various languages and learning from their own experiences. Unfortunately, once that happens, we'll face two significant issues: how to teach our machines right from wrong, and how to keep control of creations that evolve far faster than we do.
The Long-Term Implications of AI
At an AI expert conference back in 1999, attendees were polled on when artificial intelligence would pass the Turing Test. The consensus was around 100 years away, and many believed computers would never surpass human intelligence. Now, we're already approaching the tipping point towards computers that are intellectually superior.
AI is bringing together a range of mainstream technologies that affect our day-to-day lives. But what are the long-term implications? Even Stephen Hawking has issued his warnings, saying that the development of full artificial intelligence could spell the end of the human race.
Caution Could Be Crucial in AI
During an MIT symposium in 2014, Elon Musk said that he believes we should be more cautious about AI, as it could represent our greatest existential threat. Similarly, Bill Gates has said that he is concerned about “superintelligence”. While AI might be useful at first, it has the potential to evolve into a real concern.
These experts are joined in their concerns by the Future of Humanity Institute at Oxford University, where Stuart Armstrong argues that machines can work at speeds unachievable for humans, and may eventually take control of our markets, economy, healthcare, and more.
In 2015, these professionals came together to sign an open letter acknowledging the significant potential that AI has to offer, while warning that research into its rewards should be matched by an equal effort to reduce its potential for significant damage.
Looking Forward to the Future of AI
It’s worth noting that some people have less pessimistic views about the potential of AI. For instance, Rollo Carpenter, CEO of Cleverbot, uses AI technology that learns from previous conversations. While Cleverbot has scored highly in Turing tests by convincing large numbers of people that they’re speaking to a human, Carpenter still believes we’re far from full AI, meaning there’s plenty of time to address potential challenges.
Interestingly, we’re also taking steps to teach AI right from wrong. Many of the experts who train machines believe that the more freedom we give to computers, the more they will need to be taught moral standards. For instance, the virtual school GoodAI follows the mission of teaching artificial intelligence creations ethics, providing guidance on how to reason, think, and act. The school’s “students” are AI systems being asked to apply what they’ve learned to new situations.
Additionally, other institutions have begun teaching robots how to behave on the battlefield, and some scientists argue that robots can be made ethically superior to their human counterparts. Despite all of these precautions, it’s fair to say that AI is evolving faster than our ability to prepare for it. If that naivety persists, the outcome could be severe.
We’re all doomed…
Rob is Founder & Publisher of UC Today, a leading news publication specialising in Unified Communications & Collaboration technologies.