I do not yet strongly endorse most of the conclusions written in this post. I am very much still thinking aloud here.
Contains politically sensitive info
Why this post
I seem to be internally suffering a fair bit, as a result of confusion around morality.
I want to simultaneously achieve two objectives:
I want more distance from the moral values of most people around me, both online and irl, both inside EA/LW spaces and outside, because almost everyone's morality is clearly broken in important ways. I think deferring to them too much is bad. Merely being reminded of their existence (by running "shoulder advisors" aka mind simulations of them) causes me to defer to them more, and this is bad. The more access I give all these people to my life, the more I am going to inevitably end up running mind simulations of them.
See also: Eliezer Yudkowsky on lonely dissent
I want moral values, or at least rationalisations, that I can actually believe in, long enough that I can last 5 years pursuing a plan with those values in mind. I seem to be increasingly noticing a gap between the moral values I endorse in theory, and the moral values I am actually willing to implement in practice without internally suffering.
For instance, if I could press a button and every single person building ASI drops dead simultaneously, I might press it. But if I had to spend 5 years constructing such a button, I would immediately realise that every step of the process would be miserable for me, and the answer becomes a much clearer no.
The moral values can't be something very abstract; they have to be things I can actually implement in some domain like cyberattacks or persuasion or whatever. Both of these are domains involving social dark matter, and there are lots of ways to instrumentalise your access to social dark matter that hurt people in ways they might not endorse on reflection.
For instance, I think at least some pickup artists use genuine insights they have into persuasion to exploit security holes in women's brains, in ways those women wouldn't endorse on reflection. And maybe the long-term correct thing to do is to patch these holes instead, at a civilisation level.
See also: all the pickup artistry discussions from LessWrong in 2008, because of course LessWrong was more insightful back in 2008 on this topic too. Examples: 1 and its followups. 2
As another example, I have definitely had people weaponise therapy-speak against me for what are essentially instrumental political reasons. I have done this a non-zero amount too. It is very common to open Twitter and see stuff like this.
As yet another example, I want to make cyberattacks more effective than cyberdefence, so that all the world's secrets open up at once. One downside, of course, is that it could open up public access to everyone's social dark matter at a rate way faster than people can give any informed consent for, or end up endorsing on reflection. I suspect a lot of people won't care about the lack of consent if they end up in a world they endorse on reflection, but I also seem to want a higher degree of confidence here before I am morally okay with investing many years into this plan.
I want to spend at least some time trying to figure out some utopian ideal ways of resolving all this, before I resort to the (maybe) less ethical ways.
This post is babble, not prune, in the babble-and-prune distinction. It's likely I won't implement most of these solutions.
Main
One of the key things I have learned from the internet is that if you put in the work, you can open a high "emotional bandwidth" connection with basically anyone on Earth, and if you do, this will probably make you care more about them as a person. Like, it is easy to read that XYZ person was killed by ABC dictator in DEF African country, and not care that much. But if I put in the work, it is entirely possible that they use the internet too, and I can personally reach out to them and schedule lots of video calls. And once I have, it is possible that I care more about them.
Maybe that means I need to compile a database of videos of humans suffering? You can already find a lot of videos of people crying on the internet for various personal and political reasons. Maybe I should just compile it all in one place?
One option, of course, is to do this strictly for ASI risk. Do lots of interviews with people suffering as a result of their fear of ASI risks, and post the videos online. Even if the public thinks we are all crazy and wrong about our beliefs, at the very least they will understand we care a lot about it.
Like, this is my actual reaction to at least some of the leftist activists who cry about their problems online. It is quite possible your actions will increase human suffering (for example, by literally electing a communist dictator), but you are suffering too, and that is worth noticing.
I don't think this will be enough to actually resolve any differences between anyone though. Most people (myself included) already have pessimistic priors on the utility of trying to communicate across large political divides. I am only willing to read about a lot of different political views because the threat of ASI has made me desperate for any solution whatsoever. Most people are not yet at this level of desperation, and don't really care that much about big picture problems like "solve human morality in 5 years" or "solve global coordination in 5 years".
You can use instrumental reasons to forcibly get people to the same table. For instance, some people I meet just want power, so if I pay them $1000/hour or something ridiculous, my guess is they'll probably come to whatever table for whatever reason. Some people I meet just want sex, so if I show them they will get access to hot women or whatever, they'll probably come to whatever table. Some people I have read about just seem to have a generalised distrust of people, so if I give them a space that they think is sufficiently safe, they probably will come.
Even if you get everyone to the same table, my guess is you still can't force everyone to talk. People will just do the minimum amount of talking required, and then ask me for the reward. I am not very confident in this guess. And if I say "no, I don't think your conversation was authentic, you didn't make an actual effort to resolve the difference", I am not sure what will happen next.
Another way of increasing collision rate is to have a guild of therapists who, again, use the internet to target different niches. Just have each of them wait till their intended audience is suffering a lot, and have members of that audience come to them.
The first problem with this is that some groups literally never go to therapists, and therapists are also aware of this.
The second problem is that lots of people with low-grade suffering caused by political reasons are also not going to show up at a therapist's office. My guess is many therapists' priors about human interaction are already skewed a little, because they disproportionately run into people who are in significant suffering. People in significant suffering may need different solutions than people with low-grade suffering. I am guessing this based on both personal experience and some models I won't explain now.
Maybe there is some entirely different way to target audiences with low-grade suffering?
Honestly, maybe the meta answer is that lots of people have already cracked the problem of how to target some niche audience and get hold of some of their secrets. Maybe what I actually need to do is find existing secret-keepers (therapists across the political spectrum, priests, etc.) and increase their collision rate with each other.
What is "on reflection" anyway?
What the hell does this thing even mean?
Suppose you use some insight into human psychology to make political videos. Your videos make their audience less happy but simultaneously low-grade addicted to your content. (Scott Alexander seems to agree with me that this is very common.)
Suppose you now reveal to your audience what insights you used. It is entirely possible that the public will now scapegoat you instead, and still not actually engage in any honest reflection on what just happened.
I seem to prefer a reflective response of either "oh, you successfully hacked my brain, maybe I should be thanking you for revealing these holes to me, so I can patch them" or "oh, I actually do in fact continue to consent to this content, fully knowing that you did XYZ to make it miserable for me".
Where are the grey-hat brain hackers, lmao. Like, people who deliberately make political propaganda just to later remind people that their brains are vulnerable and need fixing, and that none of the political views expressed were genuine. Or people who deliberately attempt pickup artistry and then tell the women to go fix their defective brains, instead of actually having sex with them.
Most of the brain hackers I seem to run into are either too white-hat to actually be successful, or too black-hat for me to actually feel good endorsing them or becoming them for the next 5 years.
Counterpoint: there are some YouTubers on the relatively more ethical side (IMO) who are also popular. It remains to be seen what happens once these YouTubers actually enter politics themselves. It is true that they have a lot of power even without entering politics, because they can decide which politicians to endorse and which not to.
See also: Dominic Cummings's insights on why British politicians are too obsessed with the media, as compared to actual governance (or even actual persuasion or actual trust or actual anything that lasts more than the next election cycle). At least some other books I have read also bias me towards thinking that many politicians are obsessed with what the main journalists (traditional or YouTube) are saying about them, to an extent that may or may not be rational; I don't know yet.
Why did I pick the name particle collider?
I liked Richard Ngo's article on particle colliders of sociology. Despite differences of opinion, I definitely agree with a lot of Ngo's views.
I seem to want people's political and personal views to collide with each other at a much faster rate than is happening organically, because of 5-year ASI timelines.
My intrinsic fascination with "big things go boom"
I liked the movie Oppenheimer because of this.
I liked hearing the story about why mobile phones are not allowed inside petrochemical refineries. To be extra conservative, at least some places operate as if even one spark could blow up the entire refinery.
I definitely have some fascination with false vacuum decay and artificially inducing star collapse and other such stuff. Just: how do you blow things up at bigger and bigger scales?
Side note: it also seems useful to study offence versus defence at bigger and bigger scales. It increasingly seems to me that there is some inherent physical law or some shit that says blowing stuff up is easier than defending against blowing stuff up. The best defence against a group with guns is not a six-inch concrete wall, it is another group carrying guns to threaten retaliation. The best defence against a nuclear bomb is not an anti-aircraft missile, it is threatening to nuke them in return. (Maybe the best defence against a successful cyberattack is also not successful cyberdefence, but the threat of retaliatory cyberattack.)