

2025-01-06

Why have hope about fixing risks from ASI?

Disclaimer

Among the people I talk to who are convinced there are significant risks from ASI, some have given up hope on any attempt at fixing the problem. This post is for them.

I am generally not a fan of salesmanship, self-fulfilling prophecies, and the like as ways to motivate people, so this post won't contain that. Instead I will list reasons, based on concrete evidence from the world, that give me hope.

  1. Most heads of AI companies and governments don't want to die. If they were convinced that the risk to their own lives is large, there is a non-trivial probability they would all just stop.
  2. A majority of people, in the US and worldwide, don't want to die, and don't want a future with superintelligence either. Even a "benevolent superintelligence" future, whatever that means to different people, currently appeals to only a small minority. If they became aware that the risk to their lives is large, they might take action to stop it.
  3. The fact that people are not convinced of the problem now is not strong evidence that they will remain unconvinced when we are, say, a year away from building ASI. When we are that close, there will be a lot more evidence that could make people take it seriously: more advanced AI will be integrated into the economy, and many more people will be terrified of ASI, making it feel less crazy for you to believe it as well.

If your meaning-making works anything like mine, another helpful thing to remember is that the number of people working on plans that make sense to me is very small. (Probably fewer than 100 full-time people? I'm unsure of the exact number.) If you manage to do as well as these people or better, and if our species survives as a result, you will hopefully be remembered as one of the great men and women of history who made it happen. I definitely think Yudkowsky's name is going in the history books [1] of 2100, and probably a few more names will go there alongside his.

[1] Well sure, maybe we won't be using books to acquire knowledge by then; maybe we'll upload it straight via BCIs or something, who knows. But you get my point.
