2025-01-18
Disclaimer
EA is multiple things
I think ASI risks are the biggest problem facing humanity right now. Also I work on it full-time. Hence I will focus my criticism on that.
Dustin Moskovitz and Jaan Tallinn fund technical safety researchers working on aligning AI to human values, and they fund policy advocates trying to persuade people in the US government to implement governance measures on AI.
I think this plan will fail because, as of 2025, there is too much geopolitical power associated with advancing AI. Persuading the heads of US AI companies, the US intelligence community, the US executive branch, etc. about the risks of building ASI won't stop them from building ASI. Hence you have to fight against them, not work with them.
I also think the early-stage funding Moskovitz and Tallinn provided to Amodei actively accelerated progress towards ASI, and therefore actively accelerated humanity towards extinction or permanent dictatorship.
As of 2026-01, Moskovitz and Tallinn are not willing to publicly criticise Amodei, or vice versa.
Some EA people want to be kind to Moskovitz despite their disagreements with him, because "at least he funds some of our EA cause areas", like animal welfare or global health or whatever. I think Moskovitz is one of the most dangerous human beings to ever live on this planet, right up there with Sam Altman and Elon Musk, because of their willingness to throw more than $100M at accelerating AI capabilities. Most billionaires and billionaire philanthropists aren't dangerous in the same way. I am aware Moskovitz probably doesn't derive any joy out of increasing human suffering, or out of making me personally suffer, but the net result is that he is actively making the world a worse place IMO.
Paraphrasing a message I wrote to someone, sharing my opinion on EA "community building":
""" The naive way of building a community (on AI safety or LW or EA or whatever label) might make the world a worse place, because it makes it easier for OpenAI, Deepmind, Anthropic, etc to recruit more talent, and does not achieve much in the world besides this.
Maybe you should not build a community, but instead seek power with just you and your cofounder first, and hire people using this money later. If you absolutely insist on building a community, my advice is to have a clear plan for what to do next, or at least some plan to prevent AI companies from hiring people in the community.
We don't live in a utopia; most people don't have deep convictions about anything and can be bought by the highest bidder. They face pressure from family and friends to just do what is high status (which correlates strongly with which billionaire funds what), as opposed to following any principles of their own. """
See also: I will only persuade or criticise the powerful. If you are some random EA community builder and don't find the above obvious, I am probably not going to spend a lot of time trying to convince you.