2025-08-29
Support the movement against extinction risk due to AI
- Low effort
- Like, share, and subscribe to my content, or to others publishing similar content on AI extinction risk. Share it with your friends, people in media or politics, people working at AI labs or in x-risk, anyone really.
- High effort
- Organise a protest in your city around AI extinction risk.
- Start a social media channel to persuade people at scale about AI extinction risk.
- Most impactful
- If you have a large social media following or high-status credentials (UK and US citizens only): Run for election with an AI pause as an agenda.
- (Maybe) Consider supporting UBI as an agenda, since one of the largest groups of single-issue voters in the US is primarily concerned with losing their own job/income/equity. Example: Andrew Yang (who signed the FLI pause letter).
- If you have funds (ideally >$10M): Sponsor bounties for potential whistleblowers at top AI labs and their supporting govts. Sponsor cyberattacks against top AI labs and their supporting govts, and publish leaked info publicly.
- At minimum, publish info relevant to AI risk, such as the values, decisions, and capabilities of key decision-makers. At maximum, publish all data that Big Tech has collected on everyone, thereby destroying the privacy of every person on Earth with no exceptions. I am supportive of the latter, which I'm aware is a radical stance. Even if you don't agree with me, please at least do the former.
- I'm trying to figure out a better incentive mechanism than donations, but until then, donations will help.
- Invent a new ideology or religion that can unite humanity around a common position on superintelligent AI, human genetic engineering, and whole brain emulation.
- IMO superintelligent AI and human genetic engineering are both less than 5 years away, unless people take political action to prevent this. Whole brain emulation is seeing slow but steady progress, so it may be 30 years away.
Support me
- Donate to me
- I'm looking for people funding "outside game" strategies for fixing AI extinction risk (like mass protests, social media channels, and whistleblowers), not "inside game" strategies (like alignment research at top AI labs, or lobbying US policymakers on behalf of top AI labs). Examples: Pierre Omidyar funding The Intercept, Brian Acton funding Signal, etc.
- Work with me
- Provide feedback on, or fact-checking for, my whistleblower guide. I'm especially interested in people with expertise in US or international law.