I think there is a spectrum of how cooperative versus uncooperative you are with others.
On one extreme is being fully unilateralist
At least some tech founders believe this.
Find a cofounder, raise money, go Do The Thing, and fuck what anyone else thinks about what you are trying to do.
Your goals are your own. If you achieve them, good for you. Too bad if your goals happen to conflict with a bunch of other people's goals, and you have to screw over their plans in the process.
For a full unilateralist, even trying to explain to other people what you are doing or why is a waste of time. Always use force, never persuasion. (I am sure there are people who look at my blog, think I am wasting my time trying to explain myself to other people, and see it as too much Thinking and not enough Doing.)
On the other extreme is being fully cooperative
At least some influential people on LessWrong believe this.
Spend all your time debating with your ingroup about what the right thing to do is, and only go ahead once you have at least a 51% consensus in that group that the action you are proposing is right.
An even more extreme version is to do this with all of humanity, not just your ingroup. Don't do anything that goes against the consensus view in society.
Basically never make any enemies. Always use persuasion, never force.
Some spiritual and religious leaders also deploy this tactic: claiming to speak in favour of everyone's enlightened interest, and never actively against anyone's interest.
I seem to be somewhere in between
Assume that at least a few people can be persuaded towards your cause. Assume that lots of people will never be persuaded. Assume that you will make some enemies.
Make the kind of plans where, if a hundred people are in favour and a thousand people are against, the hundred can still succeed.
Persuade others about your plans so that you can at least find these hundred people.
But also, actually go Do The Thing. Don't waste your life debating others, especially if you sense diminishing returns to it.
Use persuasion on some people, and force on others.
Why?
I am not married to the idea of being half-unilateralist. I am a lot more married to the idea of being a consequentialist.
If I knew of a fully unilateralist plan that could fix ASI risks, I would just go do that. I would not be wasting my time writing blogposts. Unfortunately I haven't found such a plan yet.
Examples: Yudkowsky wanted to pursue a fully unilateralist plan at some point, by deciding that he would personally go build the aligned superintelligence. Ilya Sutskever is pursuing a fully unilateralist plan right now, by not giving public speeches or trying to persuade anyone that building aligned superintelligence is the right thing to do. Other AI companies are also pursuing almost fully unilateralist plans, but spend at least a little token effort on evangelising. They evangelise because it may attract a few more employees, a bit more funding, or a bit more governmental and societal approval.
My primary reason to not be fully cooperative is that it will take way too long, and is a guaranteed recipe for achieving nothing with your life, assuming your goals require actually doing stuff in the world, not just persuading people. I would probably believe this even if I were thinking about some other problem instead of ASI risk.
My second reason is that I think trying to be fully cooperative with the heads of AI companies won't work, and you have to make them your enemy in order to have a chance at success. This reason is specific to this problem.