AI development is advancing rapidly, and we may reach the point of ultra-powerful artificial intelligence before most people realize it. The phrase “ultra-powerful AI” certainly sounds intimidating. Would a Skynet scenario unfold? Or would something completely unexpected happen instead?
From today’s viewpoint, that’s difficult to answer. Even the people working on artificial intelligence projects and algorithms aren’t sure what will actually happen. Their stated goals even sound modest compared to what AI may ultimately be capable of. What few admit is that there is no way to know exactly how AI will behave as it continues to evolve. There are mostly hopes and expectations.
What is ultra-powerful AI?
“Imagine an airplane pilot who never makes mistakes, never shows attitude, never comes to work with a hangover… Meet the pilot.” Movie fans will recognize this quote from Terminator 2: Judgment Day. That’s the honest, pure-hearted goal of Miles Dyson, the engineer behind the creation of Skynet. Little does he know that his creation will ultimately decide to destroy humanity.
And while it’s unlikely that an AI created for a specific task, such as driving cars, will set out to conquer the world, it’s conceivable that a military AI could develop unwanted behaviors. The question is who would be able to contain such an event and “pull the plug”. For now, that’s not a problem, since AI can’t “fight back”. But what if it could?
Most AI creators say there will always be a way to control the platform, a “master key” for any occasion, even if the AI is able to modify itself and write new lines of code. But what happens if the AI is also self-aware? At the current level of knowledge and technology, such an achievement is so far away that it hardly seems possible. Before humanity gets to that point, it will first have to create an AI capable of even “talking” naturally with humans.
Who’s there to control what happens?
Officially, no one. There are no regulations or rules governing who can create AI or for what purposes. Non-profits try to foster public debate, and IT companies and scientists attempt to cooperate on agreeable ground rules. In essence, the IT industry relies on itself to regulate AI and keep it in check. History usually shows that leaving an industry to police itself doesn’t end well.
History also shows that the IT industry doesn’t really follow… history. It does its own thing and is usually pretty good at it. The same is true of AI development. Employees at big companies are well aware of these issues and demand that their employers be careful and transparent with AI. One of the latest examples was Google’s AI ethics board, which was dismantled less than two weeks after its creation. The reason: some board members had ethical controversies of their own, and employees demanded changes.
OpenAI is another organization that aims to ensure AI is developed only in ways that benefit humanity. It has had its fair share of issues, though. Elon Musk, one of the organization’s co-founders, left it to pursue his own projects. OpenAI has also felt the effects of the limited consensus on how to achieve its goals. And that’s the main issue here.
While the industry agrees that AI should be built for good and behave well, opinions still differ widely on how to achieve that. Hopefully it will find common ground soon, because AI development won’t wait around, and it’s inching ever closer to the point of no return.