I am a full-time activist figuring out how to prevent AI companies from creating superintelligence in the next 5-10 years, as I worry the result could be human extinction or a permanent dictatorship.
Summary: If you work at, or have invested >$100M in, an AI company building ASI, you are my enemy and you should not trust me. If you are some random unimportant and powerless person, you probably don't have to worry about me being violent, violating your consent, or similar. I am seriously looking into plans that break US law, so there could be risks to you if you closely associate with me.
Graduated from IIT Delhi with a BTech+MTech in Biochemical Engineering, 2023. (IIT Delhi is among the top 3 engineering colleges in India.)
DOB 2001-01-03, Male
Important work - Start reading from here
If you work at an AI company building superintelligence
Summary: I recommend that you whistleblow against your company. My guide might help, depending on your situation. Unless you are seriously considering either resigning or becoming a whistleblower, I am probably not interested in any conversation with you.
If you agree with me that there are significant risks from superintelligence in the next few years
Summary: The most impactful thing you can do is persuade the entire US public to vote against and protest against the frontier AI companies and the US govt. You can start a social media channel or run for office. Do not work for the frontier AI companies, the US govt, or EA/LW orgs that want to collaborate with them. Seek knowledge and power more than you seek relationships. If you don't reward your allies with wealth or status, someone you don't like might do so and poach them.
If you don't know about or disagree with me regarding risks from superintelligence in the next few years
Summary: AI is a black box. Since 2012, most progress in AI research has been based on heuristics that nobody understands. Extrapolating the trendline of the last 5 years into the next 5 years seems like a good way of forecasting the future. Also, AI understands human language, and is hence closer to human intelligence than to ape intelligence, IMO.
Maybe spend a few months living in a country under dictatorship or at war, to get first-hand experience of what that is like.
My views on cyberattacks and a world with zero privacy (Work in progress, contains politically sensitive info)
Summary: I think a world with zero privacy for everyone is probably better than our world, but I am not yet sure. Our leaders would likely be more accountable to us. All their weapons and dual-use capabilities would then be under public oversight. We would learn faster, because we would actually know what each other's experience is like. We would probably still retain a significant amount of freedom in this world. The most practical way I know to create a world with zero privacy for everyone, soon, is for cyberattacks to become so much easier than cyberdefence that literally everyone on Earth gives up on cyberdefence altogether. I am still figuring out the moral, practical and other implications of these ideas.
Summary: I think working on human genetic engineering is probably extremely harmful for the world, but I am not yet sure. If we have a US-China arms race over human genetic engineering, the winners may have completely engineered away their capacity for love and relationships.
Summary: LLMs are obviously going to reshape search, discovery and recommendation algorithms for the internet. I built some apps for this and learned some things about it, but I did not find product-market fit. I have made an intentional decision not to work on this further, and I will not be re-evaluating that decision.
A lot of the following posts discuss topics that are less important to me than what happens with ASI. But if your priorities differ from mine, maybe you will like these. They are mostly aimed at a technical audience.