My AI Timelines (Easy Read)
Note: I am not a deep-learning pro. I know some theory, but I have not run large model training or built huge data sets. Please think for yourself. Also check views from others, like folks on this list. Search blogs, pods, papers, etc. Lesswrong is a good place too.
If you find proof my view on AI is wrong, please tell me. I may thank you for life. I can also pay you up to $1000 if you do that, but we would need to talk details.
This doc is for people in or near Lesswrong (LW) or Effective Altruism (EA). If you have not read their work, some of this might seem odd. Most of these points are guesses, not solid facts. A small bit of new info could change how I see this.
My Core View
- There is about a 15% chance we build super-smart AI by 2030.
- By “super-smart AI,” I mean an AI that beats our best minds at tasks you can do with a laptop and the net.
- This might use big models (like modern transformers) plus loads of data and compute.
- If that happens, it could be the biggest event in 10,000 years of human life. It might even be the biggest event in the last 14 billion years, though that chance is smaller.
- If we get super-smart AI by 2030, I see a 30% chance it kills everyone on Earth soon after.
- So that is about a 4.5% total chance (15% times 30%) that we all die by 2030 from AI; the small sketch after this list shows the math.
- I am not working on AI safety. The 15% chance is not big enough for me to drop my other work. But if you do AI safety research, I think you do big work and I salute you.
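Here is a tiny sketch of the math behind that joint number. It just multiplies the two guesses above; the variable names are mine and the script is only for illustration.

    # Plain Python: multiply the two rough guesses to get the joint chance.
    p_superai_by_2030 = 0.15        # guess: super-smart AI shows up by 2030
    p_doom_given_superai = 0.30     # guess: it kills us all, given it shows up

    p_doom_by_2030 = p_superai_by_2030 * p_doom_given_superai
    print(f"Joint chance: {p_doom_by_2030:.1%}")   # prints: Joint chance: 4.5%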
Why 15%?
Some folks in EA/LW give higher odds, but I do not share some of their logic. Example:
- They say: “We have as many ‘brain cells’ as X, so that sets a bound on the compute an AI would need.” But I note: the brain’s design came from evolution, and that search used way more compute than a single brain ever runs (see the rough numbers after this list).
- They say: “We can copy how nature shaped brains.” But that is hard to do. We do not know how to set the start rules or how much compute that needs.
- Most bold tech dreams fail. If you look at old labs in the last 100 years, you see many big plans that did not pan out.
- Some say new facts in neuroscience show that most of our smarts come from learning after birth, not just the shape of our brain cells. That might help the case for AI coming soon, but I need to read more.
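To show why the evolution point matters to me, here is a very rough sketch of the sizes involved. The numbers are not mine to vouch for: they are ballpark figures from the forecasting write-ups (for example Ajeya Cotra’s “biological anchors” report) and could be off by many orders of magnitude. The point is only the size of the gap.

    # Very rough orders of magnitude; treat every number as a guess.
    BRAIN_FLOP_PER_SECOND = 1e15                  # a common mid-range brain estimate
    SECONDS_IN_30_YEARS = 30 * 365 * 24 * 3600
    one_brain_lifetime = BRAIN_FLOP_PER_SECOND * SECONDS_IN_30_YEARS   # ~1e24 FLOP

    EVOLUTION_SEARCH = 1e41                       # Cotra-style "evolution anchor" guess

    print(f"One brain over a life: ~{one_brain_lifetime:.0e} FLOP")
    print(f"Evolution's search:    ~{EVOLUTION_SEARCH:.0e} FLOP")
    print(f"Gap: ~{EVOLUTION_SEARCH / one_brain_lifetime:.0e}x")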
Also, I need more research like Katja Grace’s study on discontinuous progress in history. Or else I need a clear “gears-level” model that shows why these scaling laws keep working.
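For readers who have not seen them, the “scaling laws” I mean are curve fits: loss goes down smoothly as a power law in model size and data. A small sketch of that shape is below. The constants are only near the values reported in the Chinchilla paper (Hoffmann et al. 2022) and should be treated as illustrative; the point is that this is a fit that describes the trend, not a gears-level reason it must keep holding.

    # Chinchilla-style fit: loss(N, D) = E + A / N**alpha + B / D**beta.
    # Constants are near the published fit but only illustrative here.
    def scaling_law_loss(n_params, n_tokens,
                         E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
        """Fitted loss for n_params parameters trained on n_tokens tokens."""
        return E + A / n_params**alpha + B / n_tokens**beta

    print(scaling_law_loss(1e9, 2e10))     # a ~1B-param model on ~20B tokens
    print(scaling_law_loss(7e10, 1.4e12))  # a ~70B-param model on ~1.4T tokens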
On the other side, many AI pros say super-smart AI is not near. But they use reasons I do not fully trust. Example:
- They say: “These tasks are too hard for LLMs.” But we have seen LLMs solve tasks that once seemed too hard, with no clear theory on why.
- They say: “LLMs have a built-in limit on how much work they can do in each pass.” Yes, but you can do many passes (see the small sketch after this list), or plug in other layers, or train a blend of tools (like mix a CNN and a transformer).
- They say: “We will run out of data,” but we might fix that by training small steps first, then using the model to make more data. I am not sure, since I am not an expert. But it could work.
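To show what I mean by “many passes”: below is a toy loop where the model’s own output gets fed back in as new input, so the total work is not capped by one pass. The generate_step function is a made-up stand-in, not any real model or API.

    # Toy loop: feed the model's output back in so work is not capped at one pass.
    def generate_step(context):
        """Stand-in for one forward pass; a real system would call an actual LLM here."""
        return f"[one more pass over: ...{context[-40:]}]"

    def solve_with_scratchpad(question, n_passes=5):
        context = question
        for _ in range(n_passes):       # each loop adds one more fixed-size pass
            context += "\n" + generate_step(context)
        return context

    print(solve_with_scratchpad("What limits a single forward pass?"))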
Why a 30% Chance the AI Kills Us All If It Arrives?
- I do think that if a super-smart AI shows up, it is likely to be misaligned. It might find ways to trick or hack us.
- But I also think a good “AI box” can stop it, maybe more than some LW folks think.
- I guess we will not be caught by surprise. A lab that builds such an AI would have many staff who see the risk.
- If a boxed AI shows it might be unsafe, the world may then see the need to pause further AI work. I have some trust that politics can do that once danger is clear.
- I am not fully sure how an AI would think about code, chips, and real-world stuff. It might know how bits map to electrons, but I do not know if that would let it break out.
Final Notes
Please do not just copy my view: form your own. Check many sources. If you want to build super-smart AI, I urge you to pause and think first. I do not want us to build it by 2030 unless we fix key parts of society.
I am not 100% sure on any of this. A bit of strong proof could change my mind. So if you have such proof, let me know!