2025-10-13
My views on LessWrong
Appreciation for LW
- I think Eliezer Yudkowsky is one of the most brilliant philosophers to ever live.
- I think LessWrong is one of the few places on Earth that is actually focussed on important questions. Attention is the scarcest resource humans have, and LessWrong directs it to the correct places.
- I think there are a few core insights you can take away from LW that are basically correct and have unimaginably large implications.
- Extended Church-Turing thesis. The entire universe is deterministic under the laws of physics, and so are human brains; Penrose's quantum bullshit doesn't work. Any insight you have about free will, consciousness, moral values, moral patienthood, or intelligence has to reconcile with the fact that the ultimate instantiation of each of these things is a Turing machine whose code can be written down on a piece of paper.
- The Hamming question is important and can be applied to your whole life philosophy, not just to specific branches of the tech tree. "What is the most important question in your scientific field, and why aren't you working on it?" -> "What is the most important question in life, and why aren't you working on it?" You can recursively apply the Hamming question to your actions in life until you realise that most technology does not matter much, most politics does not matter much, and a few things matter a lot.
- Intelligence is what separates humans from apes, and it allows us to invent both technology and politics at scale. Intelligence-enhancing tech (artificial superintelligence, human genetic engineering, whole brain emulation) deserves more attention than most other technologies.
- Many of the experiences we consider innate to human nature - death, sex, parenting, etc. - can be radically transformed with sufficient application of technology. This is core to what enabled Yudkowsky to build a religion that simultaneously fears and worships the Machine God. I too both fear and worship the Machine God, despite not following Yudkowsky's religion.
Why I don't affiliate too much with LW today
- LessWrong has a lot of low-agency Thinkers (not Doers) who want to stay in a comfortable tech job or AI capabilities job, and who will retroactively invent justifications for why this is the best thing to do.
- Get their feedback, sure, but don't make the mistake of thinking you should necessarily become like them.
- I support a worldwide ban on deployment of superintelligent AI for at least the next 10 years.
- Solving technical AI alignment will not change my position, as I also believe in the possibility of ASI-enabled dictatorships (enforced via hyperpersuasion or violence) that are stable for >100 years.
- I support a worldwide ban on deployment of human genetic engineering for at least the next 10 years. (Weak opinion)
- I am not yet convinced commoditisation of this technology will occur fast enough to prevent dictatorships (enforced via hyperpersuasion or violence).
- I definitely also think people (parents, community leaders, dictators) will attempt to engineer (precursors to) moral values in their kids based on what is competitive, rather than what is ideal in some philosophical sense.
- I do not subscribe to utilitarianism, longtermism, or universe colonisation as my primary life philosophy.
- I would like to add value to society at scale, and I prefer solving bigger problems over smaller ones. I like to think long-term on the scale of centuries, not billions of years. I have yet to fully figure out my values.
- I think attention is worth more than capital.
- I think acquiring capital rather than attention might be a common mistake elites in SF are making. You can pay or threaten someone into doing a thing; you can't pay them to actually care.
- Religious leaders (actually persuade people to change their values) > Politicians (use violence as an incentive) > Billionaires (use money as an incentive)
- Yudkowsky has made a successful attempt to start a religion for atheists. I don't subscribe to it, but I think more religion for atheists is good.
Recent comment on LW
Comment posted 2025-10-13. Got downvoted with no replies.
Has anyone on LessWrong written actual posts on why they are against a pause-AI political movement?
I default to assuming uncharitable explanations like:
- LW has low-agency people who want to keep their tech job and their friend circles
- Status dynamics in Silicon Valley: everyone gets "bad vibes" from politics despite understanding nothing about how politics works, how much power it has, or how to do it with integrity. Yudkowsky has changed his mind about engaging with politics, but his followers haven't.
- The few high-agency people here are still attempting the weak-sauce strategy of persuading US policymakers and natsec circles instead of applying force against them.
If I got more information I could be more charitable to people here.