2025-09-04
Why I support cyberattacks on Big Tech companies, followed by leaking secrets of everyone on Earth to the public
Disclaimer
- Quick Note
- Incomplete
- This is a weak opinion, not a strong one. I am willing to take action based on a weak opinion, however.
- This is an opinion I have formed recently. I have yet to collate all my thoughts on this in an organised format. I am aware this post is a mess right now.
How?
- Profitability
- Sell or lease the latest AI capabilities (code, model weights) to other top AI labs worldwide if the profitability of your operation is a significant concern.
- Otherwise, I think a non-profit is the ideal option.
- What to publish
- At minimum, publish info relevant to AI risk, such as values, decisions and capabilities of key decision-makers.
- At maximum, publish all data that Big Tech has collected on everyone to the public, thereby destroying the privacy of every person on Earth with no exceptions. I am supportive of this, but I'm aware it is a radical stance. Even if you don't agree with me, please at least publish AI-risk-related info.
- Team
- (Maybe) At least $1M long-term vested per employee (performance-based incentives), and fewer than 1000 employees for secrecy reasons. Many security researchers today are underpaid relative to the market or military value of their work, so you can exploit that by actually paying well.
- LLMs are rapidly changing many parts of security research, prefer hiring people who can adapt to that.
- Location
- Attacking from foreign soil seems like a better bet than staying anonymous. If you are attacking a US company, ensure your lab is in a country outside US geopolitical influence.
- Other methods
- Social engineering may constitute a significant part of the attack, depending on how many people are willing to work on the ground and accept significant personal risk.
- Drone surveillance and CCTV surveillance of employees may also be possible, if done discreetly.
- Stylometric doxxing of employees' online profiles is another method.
Why?
- I prefer information being in public, over a handful of companies and govts having sole access.
- I suspect small groups of colluding actors (such as families, religious institutions, governments and companies) would no longer be able to suppress individual freedom if these actors themselves lacked privacy.
- Majority vote could still suppress individual freedom in such a world, and I am fine with this. I think direct democracy enforced this way would be an improvement over our current democracies.
- I suspect most people would not care as much about protecting their individual privacy if they knew no actor would suppress their individual freedom in response to the revealed info.
- I think keeping secrets is often psychologically harmful to the keeper, and people accept this harm because they expect to be harmed even more if the secret comes out in today's society. People might make very different choices if the latter harm didn't exist.
- I think institutions today (and by extension society at large) are low-trust, and it is difficult to navigate a good future involving superintelligent AI, human genetic engineering, whole brain emulation, or similar radical tech with such low-trust institutions.
- I think a radically transparent society would make it much easier for elites to earn the trust of the public, and for elites who haven't earned this trust to lose their power.
- A dumb but useful way of modelling today's democracies is that the elites get a million votes each, due to high switching costs, whereas most people get one vote each. In theory a popular consensus can outvote an elite consensus, but this requires a lot more than just 51% of the popular vote.
- I think a direct democracy would not vote for a lot of the geopolitical decisions we see today. I think most people in powerful countries don't actually want to bully smaller countries into giving up their resources under threat of nuclear war.
- I see most interpersonal harm in the world as caused more by bad incentives than by bad values. I think creating one "good" ideology and trying to convert everyone to it has historically been a losing strategy. I like my proposal because it changes incentives.
- I think most suffering in today's society is due to interpersonal harm rather than scarcity of resources. Scarcity of basic resources (food, water, minimum shelter) does indirectly lead to interpersonal harm, yes, but I don't think fixing basic resource scarcity alone guarantees a high-trust society that wouldn't architect its own extinction.
- I am aware that millions of innocent people are going to be hurt by my proposal. I currently think the benefits outweigh the harms, as per my values.
- I am aware most people disagree with me on "greater good justifies some harm", and this especially includes the types of people who study psychology or are emotionally sensitive or otherwise recognise a lot of the invisible harms I'm talking about right now.
- I have internalised a lot of Yudkowskyian / Nietzschean heroic responsibility. If I foresee a war or genocide and knew I could have maybe solved it, but did nothing, I see it as my fault in some small way.
- I don't see a non-violent way of fixing literally all problems in the world, hence I decided I care more about actually fixing the problems than I care about being non-violent. If you are one of the people I'm calling out here, I want you to actually sit and engage with this.
- I generally think the more an institution holds moral claims to secrecy, the more abuse it must be hiding as a result. This includes most religious and political institutions in the world today.
- I think a lot of people I see around me on a day-to-day basis have had their agency killed from the inside, and the colluding actors listed above have successfully bullied or co-opted them into submission.
- I see it as a good sign if you are a little scared of the implications of your work; it at least means you're doing something that can matter. In today's world, when it comes to politics, talk is cheap but actions are penalised heavily; see my previous point on submission.
- I think one point where my argumentation is lacking is why I support leaking secrets of everyone on Earth, rather than just a few thousand elites (which is the default in, say, some journalistic circles). I will write more on this later. The short version is that these conspiracies are distributed, and many people in the larger population are also used to suppress the individual freedom of other people.
- See also: Julian Assange on Conspiracy as governance, Ben Ross Hoffman on Guilt shame and depravity, David Brin on The transparent society, Duncan Sabien on Social dark matter, Michael Slepian on Secret life of secrets, Vitalik Buterin on Why I support privacy
- See also: Open-source weaponry, Longterm view on info, tech and society
Why am I not working on this already?
- Fixing AI extinction risk is likely a bigger life priority for me in the next few years than fixing any other problem. If I work on this, it'll probably be because it helps fix AI extinction risk.
- I'm not rich enough to fund this team myself.
- I'm not technically skilled enough to be a cyberhacker on this team as of 2025. It'll take me at least a few years to become an expert at this. I don't enjoy security work that much. Also, ASI might have killed everyone by then.
- There is a lot of low-skill, high-value cyberattack work, like scanning for open endpoints, that I might still be open to doing. I might or might not also be open to working on social media doxxing or drone surveillance or similar in the future.