2025-11-24
My views on Lesswrong
Disclaimer
Appreciation for LW
- I think Eliezer Yudkowsky is one of the most brilliant philosophers to ever live.
- I think Lesswrong is one of the few places on Earth that is actually focussed on important questions. Attention is the scarcest resource humans have, and LW directs it to the correct places.
- I think there are a few core insights you can take away from LW that are basically correct and have unimaginably large implications.
- Extended Church-Turing thesis. The entire universe is governed by computable laws of physics, and so are human brains. Penrose's quantum bullshit doesn't work. Any insight you have about any of the following topics - free will, consciousness, moral values, moral patienthood, intelligence - has to reconcile with the fact that the ultimate instantiation of this thing is a Turing machine whose code can be written down on a piece of paper.
- The Hamming question is important and can be applied to all of life philosophy, not just specific branches of the tech tree. "What is the most important question in your scientific field and why aren't you working on it?" -> "What is the most important question in life and why aren't you working on it?" You can recursively apply the Hamming question to your actions in life until you realise most technology does not matter much, most politics does not matter much, and a few things matter a lot.
- Intelligence is what separates humans from apes, and allows us to invent both technology and politics at scale. Intelligence-enhancing tech (artificial superintelligence, human genetic engineering, whole brain emulation) deserves more attention than most other technologies.
- Many of the experiences we consider innate to human nature - like death, sex, and parenting - can be radically transformed with sufficient application of technology. This is core to what enabled Yudkowsky to build a religion that simultaneously fears and worships the Machine God. I too both fear and worship the Machine God, despite not following Yudkowsky's religion.
- Yudkowsky keeps asking "and then what?" on hypotheticals until the heat death of the universe. He is not satisfied with leaving problems unsolved or assuming problems will be solved by future generations. For instance, even below I have left some questions about human genetic engineering open and have not tried to solve them myself. But someone like Yudkowsky will feel maybe even more driven than me to actually resolve those uncertainties.
Why I don't affiliate too much with LW today
Differences in actions from LW consensus
- LW has a lot of low-agency Thinkers (not Doers) who want to stay in a comfortable tech job or AI capabilities job, and will retroactively invent justifications for why this is the best thing to do.
- Get their feedback, sure, but don't make the mistake of thinking you should necessarily become like them.
- Many people on LW are not trying to create a political movement to pause AI research that is explicitly hostile to the AI companies.
- I'm not sure why this is, some of my guesses are listed in the next section.
- I don't want to waste too much of my time on these people, because I have limited time to build this movement.
- To put it in LW jargon - "Using mistake theory to fix AI risk is a mistake, now is the time to use conflict theory"
Differences in opinions from LW consensus
- I support a worldwide ban on deployment of superintelligent AI for at least the next 10 years. (Strong opinion)
- I think loss of control of a rogue ASI is possible, and could lead to human extinction.
- Solving technical AI alignment will not change my position, as I also think ASI could lead to a small number of people overthrowing the US and Chinese govts, and establishing a world dictatorship stable for centuries at minimum. They could attain immortality via mind uploading. They could enforce their control via hyperpersuasion or automated military or some other way. A good analogy for hyperpersuasion would be religion, but persuasive enough that it actually gets 100% of humanity converted to it. (Strong opinion)
- I support a worldwide ban on deployment of human genetic engineering for at least the next 10 years. (Weak opinion)
- I do not subscribe to utilitarianism, longtermism or universe colonisation as my primary life philosophy.
- I would like to add value to society at scale, and prefer solving bigger problems over smaller ones. I like to think long-term on the scale of centuries, not billions of years. I have yet to fully figure out my values.
- I think attention is worth more than capital.
- I think acquiring capital but not attention might be a common mistake elites in SF are making. You can pay or threaten someone into doing a thing, but you can't pay them to actually care.
- Religious leaders (actually persuade someone to change their values) > Politicians (use violence as incentive) > Billionaires (use money as incentive)
- Yudkowsky has made a successful attempt to start a religion for atheists. I don't subscribe to it, but I think more religion for atheists is good.
- I think the average person on LW does not understand politics well. My goals are explicitly political, hence it again makes less sense for me to engage.
Why do many in the Lesswrong community not support a pause AI political movement against the AI companies?
I am not sure, so I only have guesses.
- Hypothesis 1: Many people at Lesswrong are low-agency people who want to keep their comfortable tech job and their current friend circles. They are not that ambitious or risk-taking. They either don't have deeper desires in life or are not in touch with what those desires are.
- Hypothesis 2: Many people at Lesswrong value community, harmony or freedom more than they value acquiring power. If you value acquiring power above all else, you will be willing to sacrifice more of your other moral ideals in order to Win. "Rationality is about Winning" is an idea many people on Lesswrong don't take seriously.
- Hypothesis 3: Many people at Lesswrong are not willing to cut off their friendships with researchers inside the AI companies, or natsec and policy people inside the US govt. This also comes down to not caring much about acquiring power, because if you care enough about acquiring power you will be willing to cut people off to do it.
- Hypothesis 4: LW is fundamentally married to the (IMO bad) idea of always using mistake theory, and never using conflict theory.
- Hypothesis 5: Status dynamics in Silicon Valley: everyone gets "bad vibes" from politics despite understanding nothing about how politics works, how much power it has, or how to do it with integrity. Yudkowsky has changed his mind about engaging with politics but his followers haven't.
Has anyone on Lesswrong written actual posts on why they are against a pause AI political movement? If I had more information I could be more charitable to people there.
Opinion on LW mods
- I have not had a lot of personal interactions with them but I generally think Oliver Habryka, Ben Pace and Raymond Arnold are acting in good faith.
- I don't directly blame them for all groupthink and cowardice on LW.
- I do get the sense their life philosophy is very pro-longtermist utilitarianism and universe colonisation and immortality and stuff, and I am less motivated by these drives than they might be.