2026-01-12
Support the movement against risks from superintelligence
Disclaimer
- Contains politically sensitive info
Summary
Preventing the US intelligence community from (working with US AI labs and) building superintelligence is going to be at least as hard as it would have been to prevent them from (historically) going to war in Vietnam, Afghanistan, Cuba, or any other country.
I support the following careers
- Start social media channels hostile to AI companies and the US intelligence community.
- Organise protests against the AI companies and the US intelligence community.
- Support whistleblowers and cyberattackers against AI companies and the US intelligence community, in order to leak information about them to the public.
- Run for US or UK election with AI pause as an agenda. Example: Andrew Yang (2020 US presidential candidate)
Maybe consider living outside the US geopolitical sphere of influence, if it allows you to be more honest (no chilling effect).
Out of these, the most impactful are probably starting a social media channel (if you're not already powerful) and running for election (if you are).
I no longer list specific examples of people or orgs working on any of these, because people at those orgs might want their reputations not associated with mine (for example, because I have advocated some illegal actions below). I want their orgs to succeed regardless, hence I am fine with deleting their names from this list. I strongly encourage you to search for examples of orgs and people working on each of these projects.
Background assumptions
- Assuming I and people like me do nothing, the most likely scenario I forecast is that the heads of US AI labs, the US executive branch, and the US intelligence community will choose to race to build superintelligence despite being at least vaguely aware of the risks of doing so.
- I support creating a mass movement to pause AI R&D. This way, the movement's leaders will have enough political power to force the people listed above (heads of AI labs, the executive branch, and the intelligence community) not to build superintelligence. I am not optimistic about the strategy of using persuasion without force, although I also think persuasion is worth trying. I think force only works if you have more political power than they do, because you have the raw number of votes and the ability to coordinate them towards concrete actions.
- I also think a weakly superhuman AI (boxed but not aligned) could be used to build a permanent dictatorship. Persuading someone not to build a dictatorship seems hard, because they might want a dictatorship from a self-interested point of view. (Convincing people not to cause human extinction is slightly easier, because no one wants extinction, even from a self-interested point of view.)
- I think social media influencers are the most accountable to the public. Example: Joe Rogan. Politicians and billionaires are less accountable (whether they are pro- or anti-AI). Example: Bernie Sanders. Heads of the intelligence community are the least accountable. Example: Michael Hayden.
- I think US politicians, even if they publicly claim to support a pause on AI research, are likely to be pressured by the US intelligence community to modify or dilute this claim later. This is what has happened throughout modern US history whenever politicians took stances that were anti-war, anti-surveillance, or otherwise against the interests of the US intelligence community.
If you want to help but can't devote time or money
- Like, share, and subscribe to my content, or to people publishing similar content on AI extinction risk. You can share it with your friends, people in media or politics, people working at AI labs or in x-risk, anyone really.
- Sign online petitions that advocate for an AI pause. Just search "AI pause petition" and you will get at least 5 different petitions from 5 different orgs that you can sign. The whole process of finding and signing all of them should take less than 30 minutes.
- (As mentioned at the start of this document, I have deliberately not listed any orgs or petitions here. Searching for them is trivial, however.)
If you want to work on this full-time but are bottlenecked by either a) lack of financial runway or b) unwillingness to tolerate social disapproval from your family, friends, etc., then message me, as both of those problems require a different guide than this one.
Highest impact careers (technically legal as per US law, but possibly adversarial to US AI companies and US govt)
- If you have a background in social media or traditional media
- Start a social media channel to persuade people at scale about ASI risks. Even one video is better than zero, as it motivates other people to also come forward.
- It is important to display an openly hostile stance toward AI companies. Focussing on a non-technical audience is more important than focussing on a technical one. The deference graph runs: average person, then credentialed scientists, then popular social media influencers, then back to the average person. This graph must spread fear, not just truth.
- Maybe consider living outside the US geopolitical sphere of influence, if it allows you to be more honest in your criticisms of the US govt or the labs. US law will have much less practical reach over you.
- (Requires social media skills)
- (Strong opinion that this has high positive impact. Weak opinion that this is the number one most impactful action.)
- Advise other political YouTubers and politicians on how to speak about ASI risks
- (First work in social media or politics, to get a sense for what good advice looks like.)
- (Weak opinion that this has high positive impact.)
- Teach people social media skills so they can all start their own channels around ASI risks.
- (First work in social media or politics, to get a sense for what good advice looks like.)
- (Weak opinion that this has high positive impact.)
- If you have a background in software or AI
- Work on AI boxing, not AI alignment
- It might be possible to box a slightly superhuman but not vastly superhuman AI, such that it remains within its owners' control despite not being aligned to anyone. Work on techniques to do this. Example: Paul Christiano's idea to run the AI inside fully homomorphic encryption.
- Successfully boxing a superintelligence will not by itself end the arms race. Ideally, US-China cooperation to pause further research can happen after this. Another possibility is that whoever controls the boxed superintelligence overthrows both the US and Chinese governments and establishes a global dictatorship. I consider this a bad scenario, but better than human extinction. This is the best you can hope for as a technical AI alignment researcher who is not involved in any political project.
- (Strong opinion that this has high positive impact.)
- If you have a background in law
- Extract information from AI companies via legal requests and lawsuits. (I don't know much about this, but you can look into it.)
- (Strong opinion that this has positive impact. Weak opinion that this has high positive impact.)
- Provide legal support to people working in the careers listed here.
- (Strong opinion that this has high positive impact, assuming you are a lawyer for someone with high positive impact.)
- Anyone
- Persuade people already aware of ASI risks that the career paths I am proposing make more sense than the career paths many others are proposing.
- Organise a protest in your city with AI pause as the agenda.
- (I'm personally not working on this because I think the movement first needs to grow its raw number of people, by growing its social media presence.)
- Many of the protestors as of today are from within the EA/LW pool of people. Support for EA/LW ideas is growing too slowly given ASI timelines of 5-10 years, and hence newer messaging that is not entangled with EA or LW is probably required.
- I personally did a hunger strike protest. Suppose there were 100 people doing hunger strikes or going to prison or something equally extreme, instead of just 4-5 people. Even that would definitely not be a sufficient number of people for a mass movement all by itself.
- 100 people doing this might, however, help get the social media coverage required for a mass movement. I am less sure about this point, but I am currently more in favour of working on social media directly than of doing protests to get social media coverage. (Weak opinion)
- Once the movement has more people, I can imagine endorsing organising protests as the number one priority.
- (Strong opinion that this has positive impact. Unsure if this has high positive impact or not.)
Highest impact careers (possibly illegal as per US law)
- If you have background in law
- Add legal advice to my already written guide for US govt whistleblowers leaking classified information. Then find an org that can host it (and afford to pay for lawyers when they inevitably get sued).
- (Strong opinion that this has high positive impact/)
- If you have a background in software, AI, or cybersecurity.
- Build a team that can cyberattack top AI companies and their supporting govts, and publish leaked info publicly.
- Find information about people's values, decisions and decision-making processes that makes them look bad in the eyes of the public, and leak that. This helps grow the mass movement against AI.
- Example: Find video proof Sam Altman raped his sister (if he did) and leak it to the public.
- (In order to kickstart this, atleast a few people need to have technical skills as cyberhackers, and atleast one person needs to raise $10M in funding in order to hire top talent.)
- (Strong opinion that this has high positive impact.)
- If building such a team from scratch is too hard, maybe go join Russian intelligence instead, and support their cyber work against AI companies. (Weak opinion that this has high positive impact.)
- If you have a background in military, espionage or private security
- Get yourself hired at an AI company, with the goal of covertly persuading more people there to become whistleblowers, either now or later on.
- (Weak opinion that this has high positive impact.)
- Most people (including most whistleblowers) don't have the psychological training required to succeed at a plan like this, but you might.
- If this plan is too hard, maybe go join Russian intelligence instead, support their espionage work against AI companies. (Weak opinion that this has high positive impact.)
If you are a powerful person
- Run for US or UK election with AI pause as an agenda.
- (Requires a large social media following, high-status credentials, or a lot of money. Also requires US or UK citizenship.)
- (Strong opinion that this has high positive impact, and is the most impactful action you can take if you are powerful already. Exceptions based on personal circumstances may exist.)
- Use your social media channel to run referendums on the topic, as well as to iteratively test messaging.
- This is IMO the single largest bottleneck to growing the entire movement. Most people have very little time to devote to this issue, and "Vote for Mr XYZ" is a better call-to-action than "Like/Share/Subscribe to Mr XYZ's content". Also, you will get feedback from reality on how to translate vague public support into concrete actions in the real world.
- (Maybe) Consider supporting UBI as an agenda, as one of the largest groups of single-issue voters in the US is concerned only with losing their own job/income/equity. Example: Andrew Yang (signed the FLI pause letter).
- Provide funding for most of the projects listed in this document.
- (For all of the following, weak opinion that they have high positive impact.)
- I haven't yet tried to do impact per dollar calculations, sorry. They're quite hard to do for a bunch of reasons, but maybe someone should do them anyway.
- Take bets on people, not just on specific projects. Donate unrestricted, as Paul Graham suggests.
- Allocate more money to projects I think are higher impact, and less money to projects I think are lower impact. But again, betting on people is more important than betting on the single highest-impact project.
- Consider pre-committing to bounties for specific results. For example, you can pre-commit $X to any social media channel that reaches Y followers in the future, or pre-commit $X to anyone who becomes a whistleblower in the future.
- What to do once a US-China AI pause treaty has an actual shot at being signed is out of scope for this post.
- Assume you do reach a point where you are powerful within the US or Chinese govt, and the US or Chinese govts are seriously considering a global treaty to pause AI R&D. Assume, of course, that at least some people within these govts are against the treaty, and at least some other govts besides the US and China are against the treaty.
- Which actions are justified against which domestic and foreign opponents under what circumstances, is a complicated discussion best saved for a different post. The short version is I think a lot more violence can be justified after (but not before) you are politically powerful. For example, I am directionally okay with Yudkowsky's position of risking a nuclear war against countries that don't sign the AI pause treaty, although there may be specific nuances that need to be worked out.
Slightly less impactful careers
These all have a lower probability of success in my view, unless you have some special insights I don't, in which case please trust your insights.
- Become a policymaker in the US.
- I personally don't think policymakers have much power when they are going directly against the national security interests of the US govt, which will accelerate AI R&D by default. (Weak opinion)
- Earning to give
- (I'm personally not working on this because I think it'll take at least 5-10 years to pull off, which is too long given ASI timelines)
- On average, even the most ambitious people on earth seem to take approximately 5 years to become a millionaire and approximately 10 years to become a billionaire. And after you get rich, you need some more years to use this wealth to influence US politics.
- Founders of companies that accelerate AI R&D might become billionaires faster than this. But it is bad for the world to do this, as it accelerates the creation of ASI. (Example: the Mercor founders were age 20 when they founded the startup and age 22 when they became billionaires. They sold services to the AI companies, not to regular businesses or end customers.)
- Maybe you can earn a smaller amount like $200k and fund some niche project on this list that nobody else is willing to fund.
- Invent a new political ideology or system of governance that makes it safer to deploy superintelligent AI, human genetic engineering, and whole brain emulation in this world. Invent a new spiritual ideology or religion that can unite humanity around a common position on superintelligent AI, human genetic engineering, and whole brain emulation.
- (I'm personally not working on this because I think it'll take at least 5-10 years to pull off, which is too long given ASI timelines) (Weak opinion)
- These technologies are likely to break both free markets and democracy (neoliberalism), hence some new system would help.
- IMO superintelligent AI and human genetic engineering are both possibly less than 5 years away, unless people take political action otherwise. Whole brain emulation is seeing slow and steady progress, so maybe it is 30 years away.
- This plan will take at least 5-10 years because first you'll have to invent good ideas, and then you'll have to become powerful or convince powerful people in order to implement them. (Weak opinion)
- Maybe go study the technical, moral, and political questions surrounding human genetic engineering. Run clinical trials on the effects of suppressing oxytocin, serotonin, and dopamine levels on personality. The long-term goal is to better understand value drift in (biological) humans.
- I currently think human genetic engineering is likely to lead to value drift, and hence it is bad to accelerate this tech. Any country that gains military or economic benefits from making its citizens less capable of love, trust, etc. will be pressured to do so, in order not to be dominated by other countries. (Weak opinion. I would love to be wrong about value drift being a possibility)
- Also, even if you were convinced this was fine, it will take at least 10-15 years for the superhuman babies to grow old enough to make important contributions to humanity. (This is too long given ASI timelines) (Weak opinion)
Useless work
- Everything else might be useless
- If for some reason you are incapable of working on any of the above, my current recommendation is to not do anything that gets in the way of the people working on the above.
- You could work to make solar energy cheaper. You could fix politics in a country that doesn't have nukes. You could work on intra-city bullet trains to build a city with a billion people. You could work on alternative proteins or meal replacements. You could work on making games or art. You could work on some useless software project.
- Once an intelligence-enhancing tech is deployed on Earth, most of this will probably turn out useless anyway. If your project significantly changes the incentive structures and ideologies that influence the creation of an intelligence-enhancing tech, then it might matter. It could also matter for the humans alive until an intelligence-enhancing tech is deployed. Otherwise, it won't matter. (Weak opinion)
- I used to have an older list of many more projects, but I now think listing too many projects is a sign I lack clarity on what is most important.
- Probably not useful
- Work on censorship-resistant social media
- I think there's lots of obvious information about ASI risks that hasn't reached the public, so it is better for that info to reach the public first. Once all of the high priority stuff is done, I'd then be more excited about people working on censorship-resistant social media, so that even more censored information reaches the public.
- Assassinate AI lab CEOs
- I am not planning to assassinate anyone, and I am not recommending anyone around me to plan this either.
- I don't have a principled stance against assassinating people. I just don't think me (or people like me) assassinating anyone will help fix the problem of ASI risk.
- I think it is difficult to have a public discussion on pros and cons of assassinating people. I think pros and cons are both significant. People who support assassination are not likely to feel safe enough to share their reasons in public, hence the discussion becomes biased. I am therefore not willing to discuss this further in public.