Some people on LessWrong have used words like "trauma" and "paranoia" to describe why people find engaging in politics painful.
When I say "engage in politics": Scott's article is merely talking about people having political discussions, while Habryka's article is probably mainly talking about people acquiring significant amounts of political or economic power.
In general discourse, trauma and paranoia both describe mental states where a person reacts more strongly to some threat than a rationally appropriate response would warrant.
I will now restrict the rest of this post to the scenario where Alice is engaging in a political discussion with Bob, but it suddenly looks like Alice finds the discussion painful, traumatic, or paranoia-inducing. In this case, how do you know Alice is overestimating the actual threat to her, and overestimating the response required?
I think it is true that people often form their political opinions based on perceived threats from others, and sometimes these opinions are irrational. Suppose Alice perceives she is under threat (either from Bob, or from some other person, Carol). She will likely run mental simulations of her perceived enemy. She might make extremely pessimistic assumptions about the likelihood of this person being an enemy, and then do a number of irrational things as a result of wrongly concluding that this person is an enemy.
I think it is worth asking in more detail how you actually know whether her response was rational or irrational.
Often, if you notice that Alice is suddenly acting in extreme ways, you actually have no way to tell who is threatening Alice, or how large the threat is.
It is possible that the threat is actually as big as Alice thinks it is, and Alice is actually rationally responding.
It is possible that Alice is overreacting to the threat. But even then, you may still not accurately know how big the threat is, who is wielding it, why, and so on.
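This ambiguity can be made concrete with a toy Bayesian calculation. The numbers below are made-up assumptions purely for illustration: if strong reactions are common even when no real threat exists, then observing Alice's strong reaction is only weak evidence that the threat is real.

```python
# Toy Bayesian sketch: how much should observing Alice's strong
# reaction update us toward "the threat is real"?
# All probabilities here are illustrative assumptions, not data.

def posterior_real_threat(prior_real, p_react_if_real, p_react_if_not):
    """P(threat is real | Alice reacts strongly), by Bayes' rule."""
    p_react = prior_real * p_react_if_real + (1 - prior_real) * p_react_if_not
    return prior_real * p_react_if_real / p_react

# Assumed numbers: real threats are rare (5% prior), Alice almost
# always reacts strongly to a real threat (90%), but she also reacts
# strongly 30% of the time when there is no real threat.
p = posterior_real_threat(0.05, 0.90, 0.30)
print(round(p, 3))  # 0.136 -- the reaction alone is weak evidence
```

The point of the sketch is that the conclusion hinges on the base rates, which is why the next section argues for measuring them.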
Measurements for base rate of threats
The main potential threats to Alice are violence, social exclusion and losing money. It is worth tracking base rates for all three of these in society.
Political violence
There are orgs that publish democracy indices and freedom of speech indices for each country.
There are orgs that try to count the number of people affected by each war, genocide, mass migration, and so on.
There are legal databases that track political lawsuits on specific issues, media houses that may write about them, and so on.
Obviously all this information is itself political and biased, but you can use it to get closer to an accurate estimate of how many people in some group are facing actual physical violence, and how many are being threatened with it.
Losing job due to political opinions
Most billionaires and politicians have some rational (and many irrational) reasons to restrict the freedom of the people working directly under them, assuming they stick to their current strategy of acquiring power.
Not giving someone a job in the first place is far more common than giving someone a job and firing them later.
There are some legally protected classes: you can sue your employer and win if you prove you were denied a job for belonging to a protected class. In such cases, there are legal orgs incentivised to help you.
More generally though, I haven't found good statistics on people being denied jobs due to political opinions. That is out of scope for this post.
Being socially cut off by family, friends, colleagues, etc
In recent times, this is often coordinated online (as opposed to via TV, radio, in person, etc.) and is called cancel culture.
Tracking accurate stats for this is hard, but some people have recently started trying.
Why measure base rates of threats?
In general, I think providing accurate information about all three is valuable. This info will necessarily encode your personal political worldview, but you can try your best to be objective. People reading your content will then be better calibrated when dealing with political discussions (as opposed to being traumatised or paranoid or similar).
Out of these three, though, I think being socially cut off is the most important to track. For many opinions, the probability of losing a friend or family member is higher than the probability of being imprisoned or losing your job.
All three threats work largely through deterrence: they succeed even when most people are never actually caught by the system. More people get caught when the system is becoming weaker or stronger in some way.
Side Note
I initially wrote this while thinking about culture war stuff not related to AI. But it also seems relevant in the AI context.
Maybe providing people worried about AI more information about the probability of violence, social exclusion, etc. for various actions is worth it? But I also feel the bottlenecks there are bigger.
Many people lack the courage to do anything about ASI risk (even among those convinced the problem is real).
Suppose someone is convinced about AI risk, but not willing to share this offline or online because they don't want their family, friends, or workplace to know. Maybe having info about base rates of being cancelled is useful. They can also just ask the people in question whether they would cut them off or not.
I definitely think these base rates are shaped by political views in society that can become popular and unpopular very quickly.
I also think this target audience is small (only a few thousand people are in EA/LW). But because some of these people are highly competent, I think convincing even a few of them is worth it.
There are definitely also people here who aren't putting active effort into hiding their opinions, but also aren't putting active effort into sharing them. Sharing 24x7 is bad, but sharing at least once is better than sharing zero times.
Some people are executing plans to fix ASI risk that I don't believe in.
In this case, they probably have an existing social circle already invested in that plan, and they would need to painfully disconnect from it in order to pursue my plan. Honestly, in this case, talking to me personally seems like a good idea. Seeing the first few courageous people seems useful. Apart from that, however, it is just straightforwardly true that a significant number of people will cancel you, and my providing more accurate statistics about how many may not help much.