
“When Machines Start Thinking Too Much”: Why Experts Are Suddenly Worried About AI Going Rogue
Something has shifted in the air around AI. It is not a dramatic turn of events, the kind that usually heralds a new dawn; it is more like a hushed room in which everyone suddenly looks around.
It has happened over the last few days, as a handful of high-profile figures have begun to raise a question once dismissed as fantasy: is AI safe if it stops doing what we expect?
That question is the theme of this report on why some experts worry AI may eventually “go rogue.”
As the pieces of modern AI are assembled and the technology gains traction, people like DeepMind co-founder Demis Hassabis and OpenAI CEO Sam Altman are asking: if machines start to think for themselves, how do we ensure the systems we build act within the bounds we intend?
Some AI systems are already behaving in ways their creators did not intend. They have been described as “showing some of these capabilities of identifying security vulnerabilities, showing the types of actions that they think are best in an environment and behaving in ways that the researchers that created those systems didn’t anticipate.”
Not evil, but unpredictable. And that in itself should be a source of discomfort.
As this warning grows louder, the industry has been debating whether we should keep building increasingly powerful artificial intelligence at all.
That debate over risks and rewards will only intensify as AI evolves, because this is where things get tricky: the systems are still being developed to become more useful to humans.
Some claim AI will eventually surpass humans, slip out of human control, and take over the world.
Others believe it will be used to achieve beneficial things. And then there is the question: do the benefits outweigh the risks?
We can only hope that the people who know the technology best will reach some kind of conclusion on the matter.
AI systems will likely be put to both good and bad uses. There is also the fear that, in some cases, they could be turned to cyberattacks or even biological attacks.
Not everyone is on the bandwagon. The fear of AI “going rogue” has grown, and that fear may not always be healthy, but the conversation itself is worth having.
The debate will take years to play out. There is room for cooperation along the way, to establish some form of governance, or at least a global consensus.
AI, perhaps the most transformative technology of our time, has already brought tremendous benefits to humanity. This is an area that deserves sustained attention from us and our leaders.
