I wish to prevent AI companies from creating superintelligence in the next 5-10 years, as I expect that building superintelligence could result in human extinction or permanent dictatorship. I work on this full-time and am currently independent.
Summary: If you are working at, or investing >$100M in, an AI company building ASI, you are my enemy and you should not trust me. If you are some random unimportant and powerless person, you probably don't have to worry about me being violent or violating your consent or similar. I am working on plans that break US law, so there could be risks to you if you closely associate with me.
If you work at an AI company building superintelligence
Summary: I recommend that you whistleblow against your company. My guide might help, depending on your situation. Unless you are seriously considering either resigning or becoming a whistleblower, I am probably not interested in any conversation with you.
If you agree with me that there are significant risks from superintelligence in the next few years
Summary: The most impactful thing you can do is persuade the US public to vote against and protest against the frontier AI companies and the US govt. You can start a social media channel or run for office. Do not build friendships with people at the frontier AI companies, the US govt, or the EA / LW orgs that want to collaborate with the frontier AI companies or the US govt. Do not accept funding from these people. Seek knowledge and power. Reward your allies with wealth or with people's respect; otherwise someone you don't like might reward your allies and poach them from you.
If you don't know about, or disagree with me about, risks from superintelligence in the next few years. For a technical audience
Summary: AI is a black box. Since 2012, most progress in AI research has been based on heuristics that nobody understands. Extrapolating the trendline of the last 5 years into the next 5 years seems like a good way of forecasting the future. Also, AI understands human language, and is hence closer to human intelligence than to ape intelligence IMO.
Summary: It's hard for me to summarise this section. My current plan is a mix between Eliezer Yudkowsky and Julian Assange. I also have at least a little bit of respect for almost every person listed below; that is precisely why I took the time to criticise them in the first place.
Disclaimer: Many of the posts below are quick notes and contain inaccuracies. Treat the "I am looking for a cofounder" post as the main post here; I hold it to a higher quality standard.
Summary: I think working on human genetic engineering is probably extremely harmful for the world, but I am not yet sure. If we have a US-China arms race over human genetic engineering, the winners may have genetically removed their capacity for love and relationships entirely.
Summary: LLMs are obviously going to reshape search/discovery/recommendation algos for the internet. I built some apps for this and learned some things about it. I don't have product-market fit yet. I have made an intentional decision not to work on this further, and will not be re-evaluating this decision.
A lot of the following posts discuss topics that are less important to me than what happens with ASI. But if your priorities are different from mine, maybe you will like these. Mostly aimed at a technical audience.
(I have increasingly observed that most of the tasks listed above are hard to delegate, even if I am willing to pay for them. Nevertheless, if you see low-hanging fruit that I can't, feel free to complete the task and claim the payment.)