This is a weak opinion, not a strong one. I am, however, willing to take action in the real world based on a weak opinion.
I seem to generally be in favour of doing cyberattacks against some people and leaking their secrets. This page is me making lots of messy notes to analyse the moral, legal, practical and other consequences of this type of thing.
Suppose all govts were transparent under threat of cyberattack.
Suppose one govt (which might be an irrational actor) decides to go dark like North Korea: it takes down its internet, seals its borders, and withdraws from all supply chains. There is now a legitimate fear that this govt will unilaterally race to build ASI, or bioweapons in a small lab, or some other future tech. Are other govts more likely to pressure this govt to become transparent, or to go dark themselves?
Note that if we actually had a world with zero privacy for everyone, I'm highly confident North Korea-level authoritarian govts would no longer exist; that would be an even stranger world. I guess this thought experiment is analysing a world where govts don't have privacy but people still have some privacy.
2026-01-05
Update
I should probably write more about the transition shock from suddenly leaking the whole world's secrets at once.
Morally, I am basically willing to unilaterally cause a lot of suffering to innocent people in the short term, if the long-run consequences are worth it. That might be why I haven't studied the transition period much, and am primarily studying the long-term consequences. But I probably should study the short-term consequences too, just in case I am missing anything.
More specifically, I think I am okay with killing on the order of 10 million innocent people, if the resulting world has a significantly better chance of avoiding extinction, dictatorship or other civilisation-level bad outcomes. If the consequences are on the order of killing more than 100 million people, that makes me pause and think first.
I do think people suffering the transition are going to have a very different life experience as compared to people who are born after the transition. For example, if you have spent your whole life keeping secrets from your parents, and these get suddenly revealed, this could cause a bunch of suffering. Or if you have spent your whole life assuming your sex videos are private, and these suddenly get revealed, this could cause a bunch of suffering. Whereas someone who has grown up from birth assuming these things are public information will not suffer in the exact same way.
Right now it's not obvious to me whether the suffering caused by the transition is as large as 100 million deaths. I think that's a good reason to study it further.
2026-01-03
Update
I just realised I am conflating two different plans that are both worth discussing separately.
First plan is a one-off cyberattack against a specific target revealing specific information about them.
This has historical precedent. Example: the Guccifer 2.0 persona leaked DNC emails and documents, including ones showing staff favouring Hillary Clinton over Bernie Sanders. (There are more examples; I need to publish the list some time but I haven't done so yet.)
Because this has been done before, there's strong reason to believe that a team with sufficient skills and money can do it again.
If this is the plan, I generally support only leaking the info most relevant to ASI risk, and redacting everything else.
If this is the plan, I generally support only picking powerful targets, like top executives at AI companies or heads of the US intelligence community or executive branch.
Journalist norms of "punching up not down" basically make sense to me in this case.
I think people as of today should try to execute on this plan. (Strong opinion)
Second plan is to make cyberattacks so much more effective than cyberdefence that basically no one on Earth is capable of keeping secrets anymore.
This shouldn't just be about one skilled team succeeding once. It should become so easy (in terms of skills, money and time) that lots of people can cyberattack lots of other people repeatedly, to the point where humanity as a whole gives up on cyberdefence entirely.
This is historically unprecedented.
As a theoretical ideal, I think a world with a genie that enforces zero secrets on everyone would probably be better than our world. Imagine everyone being on a 24x7 livestream that they cannot switch off. (Weak opinion)
In practice, I have no idea if such a world is actually possible to create.
The best way to find out whether it is possible is for me or someone else to actually get good at cyberattacks, especially AI-enabled cyberattacks. That way you get a deeper understanding of the current state of the art and how much it can be nudged in either direction by small groups of people. AI-enabled cyberattacks are obviously worth watching out for, as they might quickly shift the offence-defence balance.
If such a world is possible to create, I probably support creating it. (Weak opinion) But first I want to know if this is even possible to do.
I currently like the second plan more than the first (although I probably ultimately support both). Here are some intuitions for why I like the second plan more.
Imagine privacy is like a gun. (It is not, in important ways, but imagine it is.)
Many people want a gun to protect themselves against everyone else. Most people want guns for defensive purposes. A few people want guns for offensive purposes. The people who want it for offensive purposes also often claim they want it for defensive purposes.
Similarly, many people want privacy to protect themselves against everyone else. Most people want privacy for defensive purposes, to protect themselves against being attacked by others. A few people want privacy for offensive purposes, to secretly conspire in order to attack others.
Civilisation is currently in a state of a war of all-against-all, where everyone wants guns and has their guns pointed at everyone else. This includes govts, this includes large companies and religious groups, this includes smaller groups like families and small companies, this includes individuals. Everyone is at war with everyone else, though it is usually a cold war and only sometimes a hot war.
An important question is who has the moral authority to have guns (or who has the moral authority to have privacy).
Some people want to have guns, but don't want their enemies to have guns. They are nakedly pursuing their self-interest. Some people take a principled stand and say everyone should have guns. Some people take a principled stand and say nobody should have guns. The last group does not have any actual plan for how to enforce this, however.
Similarly, some people want privacy for themselves but transparency for their enemies. "Privacy for me, not for thee" as David Brin puts it. They are nakedly pursuing their self-interest. Some people take a principled stand and say everyone should have privacy. Some people take a principled stand and say nobody should have privacy.
I could categorise the bloggers below into these camps. I think Vitalik Buterin is in the camp that says everyone should have privacy. David Brin and Ben Ross Hoffman seem to be in the camp that says nobody should have privacy. Julian Assange seems to have, at one point, at least toyed with the idea that nobody should have privacy, but he later recognised the point about "distributed conspiracy" mentioned below. Michael Slepian and Duncan Sabien recognise some of the psychological harms of everyone keeping secrets and using privacy to pursue a war of all against all. However, they haven't analysed very deeply why these conflicts happen in the first place, and they don't have prescriptive plans for how to fix domestic politics or geopolitics.
I value fairness.
I'm currently experimenting with belonging to the last group, which says nobody should have privacy. But it is important that I have both an actual plan for how to enforce that nobody has privacy, and a moral justification for why such a world is a better place than our world (for humanity at large, not just for me).
When I come and destroy someone's privacy via a cyberattack, that is like me stealing and destroying their gun.
If Alice and Bob have their guns pointed at each other, and I go and destroy Alice's gun, Alice is probably correct to be mad at me and to think I am an ally of Bob. I had better have a good explanation for why I support Bob here but not Alice. Otherwise I too am just using violence to pursue my self-interest rather than doing anything altruistic.
This is how I feel if I support cyberattacks against some people but not others. I better have a good explanation for why they deserve to be cyberattacked but everyone else doesn't.
Usually this is a specific story of how they have been abusive. If Alice is being abusive to Bob but Bob has not been abusive to Alice, that could be a moral justification for destroying Alice's gun but not Bob's. Similarly, if Alice is being abusive to Bob, that could be a moral justification for destroying Alice's privacy but not Bob's.
I don't believe all billionaires or all politicians are inherently immoral (as, say, some Marxists might).
I am not a fan of pursuing a class war where I say billionaires and politicians don't deserve privacy but the rest of society does. It sounds too much like I am again pursuing self-interest at the expense of altruistic interest. "Privacy for my class, transparency for your class." I have a lot of empathy for the problems faced by billionaires and politicians, and for why they do the things they do, including violent actions against others.
I think some journalists are openly pursuing a class war, and I don't want to become like these journalists. It is ideal if whatever moral justifications I use for whatever actions I undertake, are things a politician or billionaire can also agree with, not just people of my class.
I am more in support of leaking only those secrets of specific billionaires or politicians who have done things that are uniquely immoral even by the standards of the people around them.
Suppose Alice is the president of Alicistan and Bob is the president of Bobistan, and Alicistan and Bobistan are engaged in military conflict with each other. If I take away Alice's weapons but not Bob's, Alice may be correct in saying that I am just an ally of Bob or a patriot of Bobistan, and pursuing Bob's self-interest at the expense of everyone else. The same logic applies if I am taking away Alice's privacy but not Bob's.
However, suppose Alice is pursuing this war in uniquely immoral ways that Bob is not. Maybe Alicistan is using chemical weapons, maybe it is committing genocide against civilians, etc. Then I see a stronger case for why specifically taking away Alice's weapons (or privacy) is an altruistic thing to do.
Now imagine the same scenario but in the middle class.
Suppose Alice and Bob belong to the middle class, are divorced, and are now engaged in a lengthy lawsuit. If I take away Alice's privacy but not Bob's, it seems like I am being violent in Bob's interest. However, suppose Alice is already fighting this lawsuit against Bob in uniquely immoral ways. Maybe Alice has hired an armed gunman to intimidate Bob. Maybe Alice has phoned Bob's parents and is manipulating them into intimidating Bob. Then there is a stronger case for leaking this information to the court.
"Distributed conspiracy" aka "horizontal versus vertical privacy" is a problem that comes in here though.
In reality, the court is not completely fair, it is just a tool of the government. The government too has its own guns and privacy, and is also often pursuing self-interest at the expense of everyone else. Suppose that during Alice and Bob's court case, you privately find out that Alice has had homosexual sex before. Suppose you are living under a government that is against homosexuality, and is willing to kill homosexuals. Then, it seems especially bad to leak this information. This is true even if Alice has also done some other immoral things, and you feel Bob deserves to win the case. This is because the govt is likely to be more abusive than either Alice or Bob. Here it is not just Alice and Bob being self-interested, but Bob colluding with the government to advance his own self-interest, while the govt advances its own self-interest. The government obviously has a lot more guns (and a lot more privacy) than either Alice or Bob.
This is generally a reason why I am not a fan of involving myself in the conflicts of middle class people at all, until I have first made a major dent in the conflicts of the upper class. I am not a fan of cyberattacking random middle class people and leaking their secrets, even if they're abusive, because I know politicians and billionaires are far more abusive on average and will often just side with whoever advances their self-interest. Imagine however, that I lived in a world where most billionaires and politicians were already operating in complete transparency and were quite accountable to the public. Then I see a stronger case for why intervening in the conflicts of middle class people via cyberattacks, is okay.
Vitalik Buterin belongs to the camp that says everyone should have privacy. However, he makes a very similar point to mine about "horizontal versus vertical privacy" in his post "Why I support privacy".
2025-10-24
Update
As of today, here is my concrete advice for people reading this.
If you are a billionaire, sponsoring successful cyberattacks and social engineering against Big Tech, frontier AI labs and their supporting govts is one of the greatest goods you can do for humanity. The same goes if you are an expert cyberhacker who is building such a team, or a member of such a team.
If I were in your shoes, I would start by leaking only the information of various elites, and wait for the response.
After that, if (and only if) I felt there were many ills in society that still required correcting, I would leak information of members of other socioeconomic classes.
I haven't been very specific about which conditions should lead to leaking which info, because I haven't yet worked those out for myself. (I also think there are some things that are hard to learn from theory alone and require empirical testing.)
I will probably have empathy for your moral position even if you choose to leak more info than I am currently suggesting.
If you are a cyberhacker, I also recommend sharing more of your technical cyberhacking knowledge in the public domain if possible, so that other cyberhackers can learn from it. I am fine with the fact that this knowledge becoming more widespread reduces the privacy of everyone, not just of elites.
2025-10-12
Update
Before I provide a strong opinion on this topic, I need to think more on why I want to leak secrets of everyone on Earth, and not just elites.
Journalist norms are, by default, to only leak secrets of elites: "punching up, not down".
From both a utilitarian point of view and a liberal, consent-respecting point of view, affecting the lives of billions of people requires a stronger justification than affecting the lives of a few thousand people.
Of course, I will be affecting the future of the entire world even if I only enforce zero privacy on a few elites. Many governments will fall, civil war may occur in some countries, etc.
Some reasons (not exhaustive) why I want to leak secrets of everyone, not just elites.
The public, and communities acting in public, are also often involved in enforcing the threats that prevent key info from coming out, including info about various types of abuse committed by the community. I want to attack these sorts of distributed conspiracies as well.
Counterpoint - if all the elites protecting that community have zero privacy, the community will probably also not be able to continue this abuse. Unsure.
Weapons of the future, such as genetically engineered bioweapons or AI model weights, may no longer require you to be an elite (a billionaire or major politician) to unleash them on the world. Hence more openness in society is required by default. All biotech labs and AI labs should livestream by default.
Counterpoint - you can livestream labs in a few key domains like AI or bio without affecting the rest of the world. You just need to ensure equipment does not get smuggled out of camera view while the cameras are under maintenance or similar.
Zero privacy might be enforceable via hard physical power - for instance, if cyberattacks become far easier than cyberdefence. I think this is my strongest motivation. I don't want to rely on the benevolence of any govt or group of people to enforce zero privacy. I want to ensure the balance of hard power is such that zero privacy automatically gets enforced.
Analogy - I don't want govt to give you permission to keep a gun. I want it to be so easy to illegally manufacture and smuggle guns that people keep them regardless of what the govt says.
If you rely on some benevolent dictator or benevolent system of govt to ensure elites have zero privacy, elites will respond by messing with the legal definitions and enforcement mechanisms to ensure they still keep their privacy.
Counterpoint - cyberattacks are still costly, and hence my plan to create a $1B lab to cyberattack Big Tech is actually quite difficult to execute in practice. Only if cyberattacks become cheap does it actually become possible to enforce zero privacy for everyone.
There is a reason even I myself abandoned trying to execute this plan, namely that executing it is really hard.
(I am working on the weak version of this, which is supporting whistleblowers. That is more targeted at reducing the privacy of elites than the privacy of everyone, because only when the stakes are high will whistleblowers be motivated to make the kind of personal sacrifices required to whistleblow.)
Iteration
If I were hypothetically running a lab that could successfully cyberattack some companies, we could iteratively test the practical effects of leaking more info versus leaking less. Getting actual real-world data on the consequences would be useful, beyond any amount of theoretical speculation like I am doing right now.
2025-09-09
How?
Profitability
Non-profit is ideal.
If profitability is important, sell or lease the latest AI capabilities (code, model weights) to other top AI labs worldwide.
You can also attempt the same dual strategy (leak capabilities to a competitor, leak values and decisions to the public) on lower-value targets to get initial funding.
What to publish
At minimum, publish info relevant to AI risk, such as the values, decisions and capabilities of key decision-makers.
At maximum, publish all the data that Big Tech has collected on everyone, thereby destroying the privacy of every person on Earth with no exceptions. I am supportive of this but I'm aware it is a radical stance. Even if you don't agree with me, please at least publish the AI-risk-related info.
Team
(Maybe) At least $1M long-term vested compensation per employee (performance-based incentives), and fewer than 1000 employees for secrecy reasons. Many security researchers today are underpaid relative to the market or military value of their work, so you can exploit that by actually paying well.
LLMs are rapidly changing many parts of security research; prefer hiring people who can adapt to that.
Location
Attacking from foreign soil seems like a better bet than staying anonymous. If you are attacking a US company, ensure your lab is in a country outside US geopolitical influence.
Other methods
Social engineering may make up a significant part of the attack, depending on how many people are willing to work on the ground and accept significant personal risk.
Drone surveillance and CCTV surveillance of employees may also be possible, if done discreetly.
Stylometric doxxing of employees' online profiles is another method.
Why?
I prefer information being in public, over a handful of companies and govts having sole access.
I suspect small groups of colluding actors (such as families, religious institutions, governments and companies) would no longer be able to suppress individual freedom if these actors lacked privacy themselves.
Majority vote could still suppress individual freedom in such a world, and I am fine with this. I think direct democracy enforced this way would be an improvement over our current democracies.
I suspect most people would not care as much about protecting their individual privacy if they knew no actor would suppress their individual freedom in response to the revealed info.
I think keeping secrets is often psychologically harmful to the keeper, and people accept this harm because they expect to be harmed even more if the secret comes out in today's society. People might make very different choices if the latter harm didn't exist.
I think institutions today (and by extension society at large) are low-trust, and it is difficult to navigate a good future involving superintelligent AI, human genetic engineering, whole brain emulation, or similar radical tech with such low-trust institutions.
I think a radically transparent society will make it much easier for elites to earn trust of the public, and for elites that haven't earned this trust to lose their power.
A dumb but useful way of modelling today's democracies is that the elites get a million votes each, due to high switching costs, whereas most people get one vote each. In theory a popular consensus can outvote an elite consensus, but this requires a lot more than a bare 51% popular majority, as the rough sketch below illustrates.
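As a rough illustration of the arithmetic this toy model implies, here is a minimal sketch. The elite vote weight, the number of elites and the population size are all made-up numbers chosen purely for illustration, not estimates about any real country.

```python
# Minimal sketch of the "elites get a million votes each" toy model.
# All numbers below are illustrative assumptions.

ELITE_WEIGHT = 1_000_000      # assumed: one elite counts as a million ordinary votes
NUM_ELITES = 100              # assumed: elites voting as a single bloc
POPULATION = 300_000_000      # assumed: ordinary voters, one vote each

elite_bloc = NUM_ELITES * ELITE_WEIGHT   # 100 million effective votes

# Suppose the elite bloc votes one way and a fraction x of ordinary voters
# votes the other way (the remaining 1 - x side with the elites).
# The popular side wins only if:
#   x * POPULATION > elite_bloc + (1 - x) * POPULATION
# Solving for x gives x > (1 + elite_bloc / POPULATION) / 2.
required_popular_share = (1 + elite_bloc / POPULATION) / 2
print(f"Popular share needed to outvote the elite bloc: {required_popular_share:.0%}")
# -> roughly 67% under these assumed numbers
```

Under these assumed numbers, the popular side needs roughly a two-thirds majority just to overcome the elite bloc, which is the sense in which 51% is nowhere near enough.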
I think a direct democracy would not vote for a lot of the geopolitical decisions we see today. I think most people in powerful countries don't actually want to bully smaller countries into giving up their resources under threat of nuclear war.
I see most interpersonal harm in the world as caused more by bad incentives than by bad values. I think creating one "good" ideology and trying to convert everyone to it has historically been a losing strategy. I like my proposal because it changes incentives.
I think most suffering in today's society is due to interpersonal harm rather than due to scarcity of resources. Scarcity of basic resources (food, water, minimum shelter) indirectly leads to interpersonal harm yes, but I don't think fixing basic resource scarcity alone guarantees a high-trust society that wouldn't architect its own extinction.
I am aware that millions of innocent people are going to be hurt by my proposal. I currently think the benefits outweigh the harms, as per my values.
I am aware most people disagree with me on "greater good justifies some harm", and this especially includes the types of people who study psychology or are emotionally sensitive or otherwise recognise a lot of the invisible harms I'm talking about right now.
I have internalised a lot of Yudkowskyian / Nietzschean heroic responsibility. If I foresee a war or genocide and knew I could have maybe solved it, but did nothing, I see it as my fault in some small way.
I don't see a non-violent way of fixing literally all problems in the world, hence I decided I care more about actually fixing the problems than I care about being non-violent. If you are one of the people I'm calling out here, I want you to actually sit and engage with this.
I generally think the more moral claims to secrecy an institution makes, the more abuse it is likely hiding. This includes most religious and political institutions in the world today.
I think a lot of people I see around me on a day-to-day basis have had their agency killed from the inside, and the above list of colluding actors has successfully bullied or co-opted them into submission.
I see it as a good sign if you are a little scared of the implications of your work; it at least means you're doing something that can matter. In today's world, when it comes to politics, talk is cheap but actions are penalised heavily; see my previous point on submission.
I think one point where my argumentation is lacking is why I support leaking the secrets of everyone on Earth, rather than just a few thousand elites (which is the default in, say, some journalistic circles). I will write more on this later. The short version is that these conspiracies are distributed, and many people in the larger population are also used to suppress the individual freedom of other people.
Fixing AI extinction risk is likely a bigger life priority for me in the next few years than fixing any other problem. If I work on this, it'll probably be because it helps fix AI extinction risk.
I'm not rich enough to fund this team myself.
I'm not technically skilled enough to be a cyberhacker on this team as of 2025. It would take me at least a few years to become an expert at this. I don't enjoy security work that much. Also, ASI might have killed everyone by then.
There is a lot of low-skill, high-value cyberattack work, like scanning for open endpoints, that I might still be open to doing. I might or might not also be open to working on social media doxxing or drone surveillance or similar in future.