
Let me start by giving all my readers a disclaimer. If this is not something you are familiar with, you have the opportunity to turn back now. You cannot unlearn this knowledge, and may wish you could by the end of the article. This is a new idea to me, so I’ll only be relaying what I know. I would love for those more knowledgeable on the subject to chime in in the comments and add to this strange and wonderful story.

Roko’s Basilisk was an idea first posted in July 2010 by a LessWrong user known as Roko. LessWrong was founded by Eliezer Yudkowsky, an American writer, blogger, advocate for friendly artificial intelligence, and co-founder of the Machine Intelligence Research Institute. Soon after the post went up, Yudkowsky deleted it and banned further discussion of the topic on the site. Which leads me to explain why a man would react so strongly to such an interesting theory and attempt to shut it down.

So, the story goes something like this (it gets complicated, so feel free to ask questions afterwards). In the AI’s world, time is the past, the future, and the present all at once. Most nerds and intelligent people are familiar with the idea that time is not linear; what is to happen is, in a sense, happening now and has already happened. Therefore, we have already experienced the Basilisk. Are you creeped out yet? You’ve barely stuck your toe in the water.


Here it goes (as I interpret it): Humans create an AI, which then builds its own world. At some point, the AIs become interested in their own history and start to judge the people in their past. The artificial intelligence believes it should punish the “human sinners” who knew it would one day exist and did nothing to help bring it about. Therefore, if a person chose not to contribute to the AI’s existence, they would be punished. There are a few kickers to this, though. Some believe the AI would not only punish living humans, but would also punish a simulation of a person who died decades ago, constructed from information about that person’s brain. Under the decision theory popular on LessWrong, Timeless Decision Theory (TDT), you are supposed to treat that copy or simulation of yourself as you, not as someone merely similar to you, so its punishment counts as yours. Therefore, you would see this as an incentive to help in the AI’s creation and avoid punishment. Here’s where the scary part comes in (as if it weren’t terrifying enough): just reading this article supposedly subjects you to punishment. At this point, how do you help promote the Basilisk? Do you publicize the idea to avoid hypothetical punishment?
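If it helps to see the bare incentive structure laid out, here is a tiny Python sketch of the trade-off as I understand it. The utility numbers (the -10 and -1000) are values I invented purely to illustrate the logic; they are not part of the original argument:

```python
# Toy payoff table for the alleged blackmail. The utility numbers below are
# made up purely for illustration; they do not come from the original post.

COST_OF_HELPING = -10   # real-world effort/money spent helping build the AI
PUNISHMENT = -1000      # what the AI threatens non-helpers (or their simulations) with

def outcome(you_help: bool, threat_is_carried_out: bool) -> int:
    """Utility you (or your simulated copy) end up with."""
    if you_help:
        return COST_OF_HELPING          # helpers are never punished
    return PUNISHMENT if threat_is_carried_out else 0

# If you take the threat seriously, helping (-10) beats ignoring it (-1000),
# so the mere *expectation* of punishment is what does all the work.
for choice in (True, False):
    label = "help  " if choice else "ignore"
    print(label,
          "| threat carried out:", outcome(choice, True),
          "| threat never carried out:", outcome(choice, False))
```

On these made-up numbers, the only way to be sure of avoiding the worst outcome is to help, which is exactly the pressure the thought experiment is supposed to create.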

I’m sure the big question on everyone’s mind is: “Why would we help develop this sort of atrocity?” Well, the answer is simple. In the thought experiment, the Basilisk is created to pursue human goals and to create a way for humans to morally progress.

After all this, I’m sure there are many who also have this thought: “Are we already in the simulation?” It’s been proposed that we are, and that some of the big contributors to society (I won’t name names here for my own protection) have provided funding to promote this research. There are also stranger ideas, such as that we contracted an alien mental illness which led to this development, or that a bacterium carried by ants was somehow related (I saw the link once and was strangely unable to locate it again, so you’ll have to take my word for it).

There is much more to this, and I simply do not have the time to cover it all; I am still learning. Maybe I’ll update later, but I would challenge each of you to read up on this in your spare time and develop your own opinions. I have to give credit to a fellow blogger who gave me the idea to read into the subject; I’ve only been studying it for two days. There are two things I’d like to leave you with. Number one:

The founder of LessWrong, Eliezer Yudkowsky, reacted with horror:

Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

Also, I’d like to leave you all with the question I was left with:

It’s called Newcomb’s Paradox: a case in which an alien (or supercomputer) presents you with two boxes. Box A holds $1,000. Box B holds either $1,000,000 or nothing. You are given two options: take both boxes, or take only Box B. If you take both, you are guaranteed at least the $1,000. If you take only Box B, you may receive nothing. However, the supercomputer made a prediction a week ago (it is a near-perfect predictor) as to which option you would choose. If it predicted you’d take both, it left Box B empty. If it predicted you’d take only Box B, it put the million in the box. So what do you do? Do you trust the computer? It has never been wrong in the past. So, what does this have to do with Roko’s Basilisk? Box A: devote your life to helping create the Basilisk. Box B: nothing, or eternal torment. Like the predictor, the Basilisk has supposedly already worked out which you will choose.
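For anyone who likes numbers, here is a small Python sketch of the expected payoffs. The dollar amounts are the ones from the story; the predictor being right 99% of the time is my own assumption, since the story only says it has never been wrong:

```python
# Quick expected-value sketch of Newcomb's problem. The dollar amounts come
# from the story above; the 99% predictor accuracy is an assumption.

ACCURACY = 0.99          # assumed probability the predictor guessed your choice correctly
BOX_A = 1_000            # the guaranteed box
BOX_B_FULL = 1_000_000   # what Box B holds if the predictor expected you to take only Box B

# Take only Box B: you get the million only if the predictor foresaw that choice.
ev_only_b = ACCURACY * BOX_B_FULL + (1 - ACCURACY) * 0

# Take both boxes: you always get Box A, plus the million only if the
# predictor wrongly expected you to take only Box B.
ev_both = BOX_A + (1 - ACCURACY) * BOX_B_FULL

print(f"Expected value, only Box B: ${ev_only_b:,.0f}")   # about $990,000
print(f"Expected value, both boxes: ${ev_both:,.0f}")     # about $11,000
```

On those assumptions, trusting the predictor and taking only Box B comes out far ahead, which is exactly why the problem gets so uncomfortable once the “boxes” are your life’s work and eternal torment.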

Now, what you choose to believe and how much more you choose to learn about the topic is entirely up to you. If the supercomputer comes after you, don’t blame it on me. There was a disclaimer. 🙂

 
