2025-11-26
Support the movement against extinction risk due to AI
Disclaimer
- Contains politically sensitive info
Summary
Preventing the US intelligence community from building superintelligence is going to be at least as hard as it would have been, historically, to prevent them from waging war on Vietnam or Afghanistan or Cuba or any other country.
I support the following careers
- Start social media channels hostile to AI companies and the US intelligence community. Example: John Sherman (AI risk network)
- Support whistleblowers and cyberattackers against AI companies and the US intelligence community. Example: Karl Koch (AI whistleblower initiative)
- Run for US or UK election with AI pause as an agenda. Example: Andrew Yang (2020 US presidential candidate)
(Disclaimer: In the examples listed above, I am endorsing the type of work they do, not necessarily their trustworthiness. Please don't take the above to mean that everyone should vote for Andrew Yang, for example.)
Background assumptions
- Assuming I and people like me do nothing, the most likely scenario I forecast is that the heads of US AI labs, the US executive branch, and the US intelligence community will choose to race to build superintelligence despite being at least vaguely aware of the risks of doing so.
- I support creating a mass movement to force them not to do this. I am not optimistic about the strategy of using persuasion but not force, although I also think persuasion is worth trying.
- I also think a weakly superhuman ASI (perhaps boxed rather than aligned) could be used to build a permanent dictatorship. Persuading someone not to build a dictatorship seems hard, because they might want a dictatorship from a self-interested point of view. (Convincing people not to cause human extinction is slightly easier, because no one wants extinction, even from a self-interested point of view.)
- I think social media influencers are the most accountable to the public. Example: Joe Rogan. Politicians and billionaires are less accountable (whether they are pro- or anti-AI). Example: Bernie Sanders. Heads of the intelligence community are the least accountable. Example: Michael Hayden.
- I think US politicians, even if they publicly claim to support an AI pause, are likely to be pressured by the US intelligence community to modify or dilute this claim later. This is what has happened throughout modern US history.
If you want to help, but can't devote time or money
- Like, share, and subscribe to my content, or to that of other people publishing similar content on AI extinction risk. You can share it with your friends, people in media or politics, people working at AI labs or in x-risk, anyone really.
Highest impact careers
- If you have a background in social media or traditional media
- Start a social media channel to persuade people at scale about AI extinction risk. Even one video is better than zero, as it motivates other people to also come forward. It is important to display an openly hostile stance towards AI companies.
- (Requires social media skills) (Strong opinion)
- Advise other political YouTubers and politicians on how to speak about AI risk
- (First work in social media or politics, to get a sense for what good advice looks like.) (Weak opinion)
- Teach people social media skills so they can all start their own channels around AI risk.
- (First work in social media or politics, to get a sense for what good advice looks like.) (Weak opinion)
- If you have a background in law
- Add legal advice to the guide I have already written for US govt whistleblowers. Then find an org that can host it (and afford to pay for lawyers when they inevitably get sued). (Strong opinion)
- There's probably a fair bit of information you can extract from AI companies via legal requests. (I don't know much about it, but you can look into it.) (Weak opinion)
- If you have a background in software, AI, or cybersecurity
- Build a team that can cyberattack top AI companies and their supporting govts, and publish leaked info publicly.
- Find information about people's values, decisions and decision-making processes that makes them look bad in the eyes of the public. This helps grow the mass movement against AI.
- Example: Find video proof Sam Altman raped his sister (if he did) and leak it to the public.
- (In order to kickstart this, atleast a few people need to have technical skills as cyberhackers, and atleast one person needs to raise $10M in funding in order to hire top talent.) (Strong opinion)
- If building such a team from scratch is too hard, maybe go join Russian intelligence instead, and support their cyber work against AI companies. (Weak opinion)
- If you have a background in military, espionage or private security
- Get yourself hired at an AI company, with the goal of covertly persuading more people there to become whistleblowers in the public interest. (Weak opinion)
- Most people (including most whistleblowers) don't have the psychological training required to succeed at a plan like this, but you might.
- If this plan is too hard, maybe go join Russian intelligence instead, support their espionage work against AI companies. (Weak opinion)
If you are a powerful person
- Run for US or UK election with AI pause as an agenda.
- (Requires a large social media following, high-status credentials, or a lot of money. Also requires UK or US citizenship.) (Strong opinion)
- Use your social media channel to run referendums on the topic, as well as to iteratively test messaging.
- This is IMO the single largest bottleneck to growing the entire movement. Most people have very little time to devote to this issue, and "Vote for Mr XYZ" is a better call-to-action than "Like/Share/Subscribe to Mr XYZ's content". Also, you will get feedback from reality on how to translate vague public support into concrete actions in the real world.
- (Maybe) Consider supporting UBI as an agenda, as one of the largest groups of single-issue voters in the US is concerned only with losing their own job/income/equity. Example: Andrew Yang (signed the FLI pause letter).
- Provide funding for most of the projects listed above. (I don't have impact-per-dollar calculations, sorry, but I think all are high impact.) (All weak opinions)
- Invest in social media channels. Donate to training programs for new creators.
- Pre-commit to funding bounties for potential whistleblowers. Sponsor a team to cyberattack AI companies, if any ever comes up.
- Donate to orgs that provide legal advice to any of the high impact careers listed above.
Slightly less impactful careers
Moonshot = Low probability of success unless you have some special insights I don't, in which case please trust your insights.
- Become a US policymaker.
- I personally don't think policymakers have much power when they are going directly against the national security interests of the US govt, which will accelerate AI development by default. (Weak opinion)
- Organise a protest in your city around AI extinction risk.
- (I'm personally not working on this because I think the movement first needs to grow its raw numbers, via a bigger social media presence.) (Weak opinion)
- Invent a new political ideology or system of governance that makes it safer to deploy superintelligent AI, human genetic engineering, and whole brain emulation in this world. Invent a new spiritual ideology or religion that can unite humanity around a common position on superintelligent AI, human genetic engineering, and whole brain emulation.
- Neoliberalism won't work very well, so something new will help.
- IMO superintelligent AI and human genetic engineering are both possibly less than 5 years away, unless people take political action otherwise. Whole brain emulation is seeing slow and steady progress, so maybe it is 30 years away.
- (I'm personally not working on this because I think it'll take more than 5-10 years to pull off.) (Weak opinion)
- Maybe go study the technical, moral, and political questions surrounding human genetic engineering. Run clinical trials on the effects of suppressing oxytocin, serotonin, and dopamine levels on personality. The long-term goal is to understand value drift in (biological) humans better.
- (I currently think human genetic engineering is likely to lead to value drift, and hence it is bad to accelerate this tech. Any country that benefits militarily from making its citizens less capable of love, trust, etc. will tend to do so. Also, even if you were convinced this was fine, it would take more than 10 years for the superhuman babies to grow into adults.) (Weak opinion. I would love to be wrong about value drift being a possibility.)
Useless work
- Everything else might be useless
- If for some reason you are incapable of working on any of the above, my current recommendation is to not do anything that gets in the way of the people who are working on the above.
- You could work to make solar energy cheaper. You could fix politics in a country that doesn't have nukes. You could work on intra-city bullet trains to build a city with a billion people. You could work on alternative proteins or meal replacements. You could work on making games or art. You could work on some useless software project.
- Once an intelligence-enhancing tech is deployed on Earth, most of this will probably turn out to be useless anyway. If your project significantly changes the incentive structures and ideologies that influence the creation of an intelligence-enhancing tech, then your project might matter. Your project could also matter for the humans alive until an intelligence-enhancing tech is deployed. Otherwise, it won't matter. (Weak opinion)
- I used to have an older list of many more projects, but I now think listing too many projects is a sign I lack clarity on what is most important.
- Probably not useful
- Work on censorship-resistant social media
- I think there's lots of obvious information about AI risk that hasn't reached the public, so it is better for that info to reach the public first.
- Assassinate AI lab CEOs
- I think it is difficult to have a public discussion on pros and cons of assassinating people. I think pros and cons are both significant. People who support assassination are not likely to feel safe enough to share their reasons in public, hence the discussion becomes biased.
- I am not planning to assassinate anyone, and I am not recommending anyone around me to plan this either.