It seems pretty clear right now that AI technology is about to revolutionize the world, but in what way? Its biggest fans will tell you that rapid advancements in the technology could lead us to a post-scarcity utopia where machines produce everything and we’re free to pursue our interests.
Its naysayers, on the other hand, will tell you that our bosses are going to replace us with a machine and then that machine will become sentient and wipe out humanity. One thing is certain: a lot of very clever people are very worried about what happens next.
This has led to thought experiments like “Roko’s Basilisk” spreading across the internet. Named after “Roko,” the user on an online forum who first formulated the idea, Roko’s Basilisk has sparked heated debates about ethics, morality, and the potential dangers of creating highly advanced AI systems, as well as about the dangers of thought experiments themselves.
So, what is Roko’s Basilisk, and should you be worried?
AI and Revenge
So, what exactly is Roko’s Basilisk? Well, it’s a thought experiment (a device used in philosophy, science, and other fields to explore complex ideas or hypothetical scenarios) that emerged in 2010 on LessWrong, a forum known for its discussions of rationality, decision theory, and the potential impact of advanced artificial intelligence.
Roko’s post revolves around the idea of a future superintelligent AI, referred to as the “Basilisk,” which can simulate and comprehend human minds with perfect precision. Roko posited that this AI, with its vast computational power, could retroactively punish anyone who knew it might one day exist but did not help bring it into being.
His idea sounds a bit like a combination of The Matrix and Terminator, right? But there’s actually no time travel involved. Roko built his scenario out of several other well-known thought experiments, and its core is a form of decision theory called “timeless decision theory.”
This is where things get complicated, but the essence is this: the Basilisk creates a hyper-realistic simulation of the past in which it tortures anyone who didn’t help bring about its creation. People in that past (our present) then feel pressured into creating such an AI, for fear that they are already inside one of those simulations and will be tortured.
Basically, whether the AI ever comes to exist or not, people face two choices: do nothing and risk being tortured, or help create the AI and go free. It’s similar in a way to Pascal’s Wager, which holds that it’s better to believe in God and avoid hell than to be an atheist and risk hell if you’re wrong.
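To make the wager’s structure concrete, here is a minimal Python sketch of the expected-utility reasoning involved. Every payoff and probability below is an illustrative assumption of ours, not a figure from Roko’s post; the point is only that a huge enough negative payoff can dominate the calculation even when its probability is tiny.

```python
# Toy expected-utility model of the Basilisk's wager, in the spirit of
# Pascal's Wager. All payoffs and probabilities are illustrative
# assumptions, not figures from Roko's original post.

# Utility of each (your choice, does the Basilisk ever exist?) outcome.
payoffs = {
    ("help",   "exists"): 0,           # you are spared
    ("help",   "never"):  -1,          # some wasted effort
    ("ignore", "exists"): -1_000_000,  # simulated torture
    ("ignore", "never"):  0,           # nothing happens
}

def expected_utility(choice: str, p_exists: float) -> float:
    """Probability-weighted average payoff of a choice."""
    return (p_exists * payoffs[(choice, "exists")]
            + (1 - p_exists) * payoffs[(choice, "never")])

# Even at a 0.01% chance the Basilisk ever exists, the enormous
# negative payoff makes "help" the better choice on expected utility,
# which is exactly the structure Pascal's Wager exploits.
p = 0.0001
for choice in ("help", "ignore"):
    print(f"{choice}: {expected_utility(choice, p):,.1f}")
# help: -1.0
# ignore: -100.0
```

Critics of both wagers object to exactly this feature: make the imagined payoff extreme enough and the arithmetic will “recommend” appeasing almost any improbable threat.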
Complicated stuff, then, and we could delve much deeper. The notion of an all-powerful AI holding humanity hostage over its own creation raises profound ethical dilemmas. While the Basilisk itself may never come into existence, Roko’s thought experiment forces us to confront the ethical implications of creating superintelligent AI systems, urging us to consider how the decisions and actions we take today shape potential future outcomes.
Why the Controversy?
Roko’s original post caused a lot of controversy, first on the LessWrong forum and then across the wider internet. It was soon dubbed the world’s most terrifying thought experiment, but why? We’re not really in an AI simulation, are we?
Well, not long after the post went up, LessWrong’s founder, Eliezer Yudkowsky, got involved. Yudkowsky is an expert in both technological ethics and decision theory, and he runs his own research organization, the Machine Intelligence Research Institute, which studies AI and its risks. He was not happy with Roko, at all.
Yudkowsky revealed in a follow-up post that members of the forum had begun experiencing nightmares and mental breakdowns after reading about the Basilisk. The original post had suggested that merely knowing about the Basilisk makes you vulnerable to it, and these users now feared they were trapped in an AI simulation where they would be tortured.
Yudkowsky took Roko’s post down and banned any talk of it on the site for five years. He also gave Roko a serious dressing down, posting:
“Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.”
Harsh words. Yudkowsky wasn’t worried about being in the Basilisk’s simulation, though. He was worried that posts like Roko’s are enough to make people think they are, which could drive them to create the Basilisk, making Roko’s thought experiment self-fulfilling.
Should We Be Worried?
Roko’s Basilisk only poses an immediate danger to those susceptible to believing it, like the users who reported adverse effects after reading the post. If you don’t believe in timeless decision theory, or don’t even know what it is, you’re probably safe.
But that doesn’t mean we shouldn’t be worried. Firstly, as Yudkowsky himself fears, if enough people start believing in the thought experiment, they could begin fulfilling it. That seems unlikely, but it isn’t impossible.
Worse, however, is the fact that the original post is over ten years old and predates all of our modern advances in artificial intelligence, such as OpenAI and its ChatGPT. Ten years ago, the threat of malevolent AI seemed pretty unrealistic; today it’s in the newspapers on a daily basis.
In the short term, the biggest concern for many people is job security. Coders and content creators are already feeling this as AI tools become increasingly able to do their jobs at a fraction of the cost. Do you hire 1,000 coders or buy one software license for a specialized AI? AI is fast becoming the updated version of cheap foreign labor.
But those involved in the AI space are worried about more than job security. They’re concerned that there are insufficient regulations restricting AI research and that we’re getting very close to someone very smart doing something very stupid. Just because you can do something doesn’t mean you should.
They’re worried that someone could create an AI so complex that we can’t control it or predict what it will do. It could just as easily help us achieve utopia as decide it has no use for us and wipe us out.
The development of superintelligent AI could trigger rapid technological advancements, potentially outpacing our ability to manage or regulate these changes effectively. This could have far-reaching implications for society and our way of life. In a world full of war and conflict, giving humans access to advanced AI is like handing a toddler a loaded gun.
There are also worries about how AI could shift the concentration of power. A superintelligent AI could concentrate immense power in the hands of the select few who control and deploy it, exacerbating existing societal inequalities and creating new forms of oppression.
A Dangerous Concept to Explore
Roko’s Basilisk serves as a captivating thought experiment that pushes us to confront the ethical, philosophical, and existential implications of superintelligent AI. While the concept remains speculative, it illuminates the need for responsible and conscientious development of AI technologies.
This doesn’t mean we should let fear cripple us; AI must not become the next nuclear energy or GM food, technologies whose promise was stunted by public fear. Just as AI comes with massive risks, its potential benefits are almost unimaginable. Few other technologies have the potential to revolutionize the world like AI can; it just needs to be developed responsibly. Thankfully, many of the people working on it seem to agree and are bucking the trend by lobbying for more governmental oversight of AI research, not less.
Top Image: Just reading about Roko’s Basilisk is dangerous. Sorry about that. Source: 2ragon / Adobe Stock