2025-01-08
Disclaimer
I have criticised EA in many places at many times, I just haven't bothered to collate all of my thoughts on my website yet. I figured I would rather write a short criticism here than write nothing at all.
EA is multiple things
My primary criticism by far is of the two billionaires and the orgs they fund. Since I think ASI risk is one of the biggest problems facing humanity, and I work on it full-time, I will focus on that.
In short, they fund technical safety researchers working on how to align AI to human values, and they fund policy people persuading the US government to implement some governance measures on AI.
My criticism, in short, is that as of 2025 there is too much geopolitical power associated with advancing AI. Persuading the heads of US AI companies, the US intelligence community, the US executive branch, etc. about the risks of building ASI won't stop them from building ASI. Hence you have to fight against them, not work with them.
I also think the early-stage funding provided to Anthropic by both Moskovitz and Tallinn is actively accelerating ASI, and therefore actively accelerating humanity towards extinction or permanent dictatorship.
Update: Some EA people want to be kind to Moskovitz despite their disagreements with him, because "at least he funds some of our EA cause areas". My position is closer to Moskovitz being one of the most dangerous human beings to ever live on this planet, right up there with Sam Altman and Elon Musk, because of their willingness to throw more than $100M at accelerating AI capabilities. Most billionaires and billionaire philanthropists aren't dangerous in the same way. I am aware Moskovitz doesn't derive any joy from increasing human suffering or from making me personally suffer, but the net result is the same.