2025-11-10
Support the movement against extinction risk due to AI
Background assumptions
- Assuming I and people like me do nothing, the most likely scenario I forecast is that the heads of US AI labs, the US executive branch, and the US intelligence community will choose to race to build superintelligence despite being at least vaguely aware of the risks of doing so.
- I support creating a mass movement to force them not to do this. I am not optimistic about the strategy of using persuasion without force, although I also think persuasion is worth trying.
- I also think a weak ASI (one that is boxed but not aligned) could be used to build a permanent dictatorship, and I think persuasion alone especially fails to prevent this scenario due to misaligned incentives.
Support the movement
- If you have only a little time to devote
- Like, share, and subscribe to my content or that of people publishing similar content on AI extinction risk. Share it with your friends, people in media or politics, people working at AI labs or on x-risk, anyone really.
- If you have a lot of time to devote
- Organise a protest in your city around AI extinction risk.
- Start a social media channel to persuade people at scale about AI extinction risk. Even one video is better than zero, as it motivates other people to also come forward.
- Invent a new political ideology or system of governance that makes it safer to deploy superintelligent AI, human genetic engineering, and whole brain emulation in this world. Invent a new spiritual ideology or religion that can unite humanity around a common position on these technologies.
- IMO superintelligent AI and human genetic engineering are both possibly less than 5 years away, unless people take political action to prevent it. Whole brain emulation is seeing slow and steady progress, so it is maybe 30 years away.
- If you prefer technical projects
- Build a team that can cyberattack top AI labs and the governments supporting them, and publish the leaked info publicly.
- (To kickstart this, at least a few people need technical skills as cyberhackers, and at least one person needs to raise $10M in funding to hire top talent.)
- If building such a team from scratch is too hard, maybe join a Russian intelligence agency instead.
- If you are already powerful
- Run for election in the US or UK with an AI pause as your agenda.
- (Requires a large social media following or high-status credentials, and US or UK citizenship)
- Use your social media channel to run referendums on the topic and to iteratively test messaging.
- This is IMO the single largest bottleneck to growing the entire movement. Most people have very little time to devote to this issue, and "Vote for Mr XYZ" is a better call-to-action than "Like/Share/Subscribe to Mr XYZ's content". You will also get feedback from reality on how to translate vague public support into concrete actions in the real world.
- (Maybe) Consider supporting UBI as an agenda, as one of the largest groups of single-issue voters in the US is concerned only with losing their own job/income/equity. Example: Andrew Yang (signed the FLI pause letter).
- Sponsor bounties for potential whistleblowers at top AI labs and the governments supporting them.
- (Requires at least $100k, likely more)
- Useful if ASI timelines are longer than 5 years
- Otherwise
- If for some reason you are incapable of working on any of the above, my current recommendation is to not do anything that gets in the way of the people working on the above.
- You could work to make solar energy cheaper. You could fix politics in a country that doesn't have nukes. You could work on intra-city bullet trains to build a city with a billion people. You could work on alternative proteins or meal replacements. You could work on making games or art. You could work on some useless software project.
- Once an intelligence-enhancing tech is deployed on Earth, most of this will probably turn out useless anyway. If your project significantly changes the incentive structures and ideologies that influence the creation of an intelligence-enhancing tech, then it might matter. Your project could also matter to the humans alive until such a tech is deployed. Otherwise, it won't matter. (Weak opinion)
- I used to have an older list of many more projects, but I now think listing too many projects is a sign I lack clarity on what is most important.