This post is more babbling unfiltered thoughts than a fully thought-through argument. That is why it sits in the personal section of my website. I am writing from a place of having to resolve significant amounts of suffering in my own life.
(2026-02-16) Fuck it, we are making this a professional document. If I consider it a priority, I might clean up this document and improve its quality some day.
Contains politically sensitive info
Main
I am sorry. I have tried many times to avoid the whole topic of assassinations, but it is becoming increasingly clear to me that I am not going to get internal moral clarity on this unless I am honest with myself, and honest with others, about what my actual views on assassinations are.
Do they lengthen ASI timelines?
My actual view on assassinating the US president, or the AI company CEOs, or similar figures, is that as of 2026-02, even a successful assassination would purchase you at best a few months of additional time before ASI. Assassinations make the most sense if you hold a "great man theory of history": that there are a few great founders with great judgement or motivation or whatever, and hence they are the ones disproportionately pulling human history forwards. They make less sense if you have a theory of history that says "free markets reign supreme, hence progress is inevitable", or "tech progress is inevitable", or "perfect information and the extended Church-Turing thesis mean humans will inevitably make good decisions", or similar.
Side note - This is one of the weirdest things I find about some Silicon Valley founders who defend "capitalism means X is inevitable" for some X, while also saying they are the ones pulling history forwards by creating X. If capitalism means X is inevitable, then you are contributing zero value to society by creating X; someone else would have created it instead of you, at the exact same time you did. How else do you think causality works? This strikes me as peak motivated reasoning that has not actually been thought through, although I also get why a founder like that would consider any clear reasoning whatsoever a waste of time.
I am sympathetic to the great man theory of history, but I now also think there are too many such founders already. If Altman, Amodei, Hassabis and a few others were to drop dead simultaneously for some reason, I'm sure the next crop of highly agentic founders would take their place, and they would be almost as good, if not equally good.
People who have historically planned assassinations are often not naive about this. (They're more likely to be naive if they operated in isolation rather than as a small group with shared aims, because isolation is painful for human psychology, I think.) When Operation Valkyrie was planned to assassinate Hitler, it was understood that Himmler and multiple others also needed to be assassinated simultaneously, in the very same attack. You can read the Wikipedia page or watch the movie on this for more info.
The theory of history I am sympathetic to is that specific developments can in fact delay or accelerate various technologies by short time periods, on the order of 10 years. We have some empirical evidence for both accelerations and decelerations like this. However, on a timescale of over 100 years, we don't yet have great empirical evidence that anyone on Earth was able to either accelerate or decelerate the creation of any major technology.
I have some guesses but not a lot of clarity on why exactly this is the case.
I have sometimes wondered whether, if a handful of people like Yudkowsky, Kurzweil and Schmidhuber had just stayed quiet about ASI, or mysteriously dropped dead or similar, back in the early 2000s, ASI would have been delayed by more years. For instance, it is clear that at least some cofounders and early employees at Deepmind were at least somewhat inspired by Schmidhuber at IDSIA, where they first worked. It is clear that the extropians mailing list helped bring these people together. It is clear that the Singularity Summit organised by Peter Thiel, Kurzweil and Yudkowsky helped bring these people together.
I do think it is true that going from "one guy with a crazy idea" to "small group of people with a shared crazy idea" is one of the bigger bottlenecks to actually starting anything in the real world. This is true whether that crazy idea is starting the world's first ASI company, or starting the world's first group of radicals to assassinate the CEOs of ASI companies, or a hundred other crazy ideas.
That being said, I am not convinced you could actually purchase 100 years, or even 10 years, of time just by trying to block small groups from forming.
China seems to rely heavily on the "small group theory of history" in order to suppress mass protests. You have a lot of individual freedom in China, but much less freedom the moment you start organising a small group around anything (even if that group is not organised around anything explicitly political). This seems to be working out okayish for them. I think dictatorship versus democracy is best seen as a spectrum, with each position on it being a state in a Markov chain that has some half-life of decaying into more dictatorship, staying the same, or improving towards more democracy. I am not yet convinced we have great empirical evidence that China's pursuit of dissidents, based on the "small group theory of history", has actually increased their regime's half-life by a lot.
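To make the "half-life" framing concrete, here is a toy calculation. The 0.97 yearly stay-probability is entirely made up for illustration; it is not a claim about any real regime.

```python
import math

# Toy model: each year a regime either stays in its current state on the
# dictatorship-democracy spectrum, or transitions to a neighbouring state.
p_stay = 0.97  # assumed yearly probability the regime stays unchanged

# If staying is geometric with probability p_stay per year, the "half-life"
# is the number of years until the chance of still being in the same state
# drops below 50%.
half_life = math.log(0.5) / math.log(p_stay)
print(round(half_life, 1))  # ~22.8 years
```

Under this toy model, pushing the yearly stay-probability from 0.97 to 0.98 would lift the half-life from roughly 23 to roughly 34 years, which is the kind of effect size the empirical question above is about.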
What about PR?
Many people seem to think that even if the first-order effect of assassinating someone is delaying timelines, which is good, this is dominated by the second-order effect of the PR being negative: "If people knew of the entire sequence of events that led to you assassinating them, they would immediately dissociate from you, and not even want to think about your ideas seriously."
I think this has some truth to it, but it is more complicated than that. I think people and groups like Ted Kaczynski, the Operation Valkyrie plotters, Aum Shinrikyo, George Fernandes or Bhagat Singh are in fact more popular for having attempted assassinations than they would have been for doing nothing and giving up quietly.
I think it also depends on your own values. I am studying these people more carefully because I find that I actually relate to parts of what they were saying. I think some people who find out I am an AI doomer immediately ask me about assassinating AI company CEOs because they, too, are able to at least somewhat relate to the radicals of the past who attempted such things.
My bigger, possibly-crackpot theory for this is that you can actually classify the entire population on the MtG colour wheel. If you attempt assassinations, you will become more popular among people who are more Black, and less popular among people who are less Black.
I think there is something deeply appealing about radical politics like this for activists who first try the persuasion route and fail. Unilateralist plans are in general easier to succeed at, and they become more appealing the more you find that coordination with other people is not actually working out. Most of capitalism's successes do in fact come from enabling more unilateralism.
Conclusion
Simultaneously assassinating every single "great man" who could run an ASI company as of 2026 seems too difficult a plan for me to even seriously consider among my list of options for how to solve this problem. It is obvious to me that there are at least 10, and likely at least 100, people on this list. You're never going to get them all physically in the same room so that a simultaneous assassination becomes possible. You would have to do it all simultaneously, or in rapid succession, because your enemies are also going to retaliate hard, and they do in fact have a lot more power than you, because they indirectly have some control over the US military and you don't.
(You also need an alternate anti-ASI political faction that can seize power after the assassinations. This part seems like the easier part of the problem to me.)
If we were only a year away from ASI, and I was highly confident this was the case, I could imagine pursuing a more radical plan, like trying to assassinate the AI company CEOs myself, or committing suicide in front of their building as a desperate final act of protest. If the gameboard is that grim, as Yudkowsky puts it, I expect both plans would probably fail, but what else is the point of living at that point? I think I would rather gamble my own life away trying to stop ASI than gamble my own life hoping the ASI outcome turns out benevolent.
I am perfectly aware that simply writing a post like this means I am probably going to eventually end up on the hitlist of people to watch out for that Altman and Amodei and others are maintaining. This is not hyperbole; I literally mean that Altman probably has a list of at least 100 people whom he is tracking in his notes right now (or has a subordinate tracking) as his top enemies, and I think writing this post significantly increases the probability that I personally end up on this list in the next few years.
I am also aware that by writing a post like this I might be providing that one last nudge of motivation to someone who was already considering assassination anyway, and that I would then be in some important way responsible for their assassination attempt. My actual answer to that is: on my head be it. The cost of not having clear thinking on this topic seems too goddamn high to me.
2026-02-14
How do you know the anti-ASI faction that seizes power after the assassinations is benevolent, and not another pro-ASI faction in disguise? You don't, and there is no way to be sure. It is just your best guess. (For instance, a faction that openly calls themselves anti-ASI has a slightly higher probability of actually meaning it than a faction that openly says they will build ASI.)