2025-10-24
Update
As of today, here is my concrete advice for people reading this.
- If you are a billionaire, sponsoring successful cyberattacks and social engineering against Big Tech, frontier AI labs and their supporting govts is one of the greatest goods you can do for humanity. The same goes if you are an expert cyberhacker who is building such a team or a member of such a team.
- If I were in your shoes, I would start by leaking only the information of various elites, and wait for the response.
- After that, if (and only if) I felt there were many ills in society that still required correcting, I would leak information of members of other socioeconomic classes.
- I haven't been very specific about what conditions lead to leaking what info, because I haven't yet worked those out for myself. (I also think there are some things that are hard to learn from theory alone, and require empirical testing.)
- I will probably have empathy for your moral position even if you choose to leak more info than I am currently suggesting.
- If you are a cyberhacker, I also recommend releasing more of your knowledge into the public domain if possible, so that other cyberhackers can learn from it. I am fine with the fact that this knowledge becoming more widespread reduces the privacy of everyone, not just of elites.
2025-10-12
Update
- Before I provide a strong opinion on this topic, I need to think more about why I want to leak the secrets of everyone on Earth, and not just elites.
- Journalistic norms are, by default, to leak only the secrets of elites: "punching up, not down".
- From both a utilitarian point of view and a liberal, consent-respecting point of view, affecting the lives of billions of people requires a stronger justification than affecting the lives of a few thousand people.
- Of course, I will be affecting the future of the entire world even if I only enforce zero privacy on a few elites: many governments will fall, civil war may occur in some countries, etc.
- Some reasons (not exhaustive) why I want to leak the secrets of everyone, not just elites:
- The public, and communities within it, are also often involved in enforcing the threats that prevent key info from coming out, including info about various types of abuse committed by the community. I want to attack these sorts of distributed conspiracies as well.
- Counterpoint - if all the elites protecting that community have zero privacy, the community will probably also be unable to continue this abuse. Unsure.
- Weapons of the future, such as genetically engineered bioweapons or AI model weights, may no longer require you to be an elite (a billionaire or major politician) to unleash them on the world. Hence more openness in society is required by default. All biotech labs and AI labs should livestream by default.
- Counterpoint - you can livestream labs in a few key domains like AI or bio without affecting the rest of the world. You just need to ensure equipment does not get smuggled out of camera view while the cameras are under maintenance or similar.
- Zero privacy might be enforceable via hard physical power - for instance, if cyberattacks become far easier than cyberdefence. I think this is my strongest motivation. I don't want to rely on the benevolence of any govt or group of people to enforce zero privacy. I want to ensure the balance of hard power is such that zero privacy automatically gets enforced.
- Analogy - I don't want the govt to give you permission to keep a gun. I want it to be so easy to illegally manufacture and smuggle guns that people keep them regardless of what the govt says.
- If you rely on some benevolent dictator or benevolent system of govt to ensure elites have zero privacy, elites will respond by messing with the legal definitions and enforcement mechanisms to ensure they still keep their privacy.
- Counterpoint - cyberattacks are still costly, and hence my plan to create a $1B lab to cyberattack Big Tech is actually quite difficult to execute in practice. Only if cyberattacks become cheap does it become possible to enforce zero privacy for everyone.
- There is a reason even I myself abandoned trying to execute this plan: executing it is really hard.
- (I am working on the weak version of this, which is supporting whistleblowers. That is targeted more at reducing the privacy of elites than of everyone, because only when the stakes are high will whistleblowers be motivated to make the kind of personal sacrifices required to whistleblow.)
- Iteration
- If I were hypothetically running a lab that could successfully cyberattack some companies, we could iteratively test the practical effects of leaking more info versus leaking less. Getting actual real-world data on the consequences would be more useful than any amount of theoretical speculation like what I am doing right now.
2025-09-09
Why I support cyberattacks on Big Tech companies, followed by leaking secrets of everyone on Earth to the public
Disclaimer
- Quick Note
- Incomplete
- Weak opinion, not strong opinion. I am willing to take action based on a weak opinion, however.
- This is an opinion I have formed recently. I have yet to collate all my thoughts on this in an organised format. I am aware this post is a mess right now.
How?
- Profitability
- Non-profit is ideal.
- If profitability is important, sell or lease the latest AI capabilities (code, model weights) to other top AI labs worldwide.
- You can also attempt the same dual strategy (leak capabilities to a competitor, leak values and decisions to the public) on lower-value targets to get initial funding.
- What to publish
- At minimum, publish info relevant to AI risk, such as values, decisions and capabilities of key decision-makers.
- At maximum, publish all the data that Big Tech has collected on everyone to the public, thereby destroying the privacy of every person on Earth with no exceptions. I am supportive of this, but I'm aware it is a radical stance. Even if you don't agree with me, please at least publish AI-risk-related info.
- Team
- (Maybe) At least $1M long-term vested per employee (performance-based incentives), and fewer than 1000 employees for secrecy reasons. Many security researchers today are underpaid relative to the market or military value of their work, so you can exploit that by actually paying well.
- LLMs are rapidly changing many parts of security research, so prefer hiring people who can adapt to that.
- Location
- Attacking from foreign soil seems like a better bet than staying anonymous. If you are attacking a US company, ensure your lab is in a country outside US geopolitical influence.
- Other methods
- Social engineering may constitute a significant part of the attack, depending on how many people are willing to work on the ground and accept significant personal risk.
- Drone surveillance and CCTV surveillance of employees may also be possible, if done discreetly.
- Stylometric doxxing of employees' online profiles is another method.
Why?
- I prefer information being in public, over a handful of companies and govts having sole access.
- I suspect small groups of colluding actors (such as families, religious institutions, governments and companies) would no longer be able to suppress individual freedom if these actors lacked privacy themselves.
- Majority vote could still suppress individual freedom in such a world, and I am fine with this. I think direct democracy enforced this way would be an improvement over our current democracies.
- I suspect most people would not care as much about protecting their individual privacy if they knew no actor would suppress their individual freedom in response to the revealed info.
- I think keeping secrets is often psychologically harmful to the keeper, and people accept this harm because they expect to be harmed even more if the secret comes out in today's society. People might make very different choices if the latter harm didn't exist.
- I think institutions today (and by extension society at large) are low-trust, and it is difficult to navigate a good future involving superintelligent AI, human genetic engineering, whole brain emulation, or similar radical tech with such low-trust institutions.
- I think a radically transparent society will make it much easier for elites to earn the trust of the public, and for elites who haven't earned this trust to lose their power.
- A dumb but useful way of modelling today's democracies is that elites get a million votes each, due to high switching costs, whereas most people get one vote each. In theory a popular consensus can outvote an elite consensus, but this requires a lot more than just 51% of the popular vote.
- I think a direct democracy would not vote for a lot of the geopolitical decisions we see today. I think most people in powerful countries don't actually want to bully smaller countries into giving up their resources under threat of nuclear war.
- I see most interpersonal harm in the world as caused more by bad incentives than by bad values. I think creating one "good" ideology and trying to convert everyone to it has historically been a losing strategy. I like my proposal because it changes incentives.
- I think most suffering in today's society is due to interpersonal harm rather than scarcity of resources. Yes, scarcity of basic resources (food, water, minimum shelter) indirectly leads to interpersonal harm, but I don't think fixing basic resource scarcity alone guarantees a high-trust society that wouldn't architect its own extinction.
- I am aware that millions of innocent people are going to be hurt by my proposal. I currently think the benefits outweigh the harms, as per my values.
- I am aware most people disagree with me on "the greater good justifies some harm", and this especially includes the types of people who study psychology, or are emotionally sensitive, or otherwise recognise a lot of the invisible harms I'm talking about right now.
- I have internalised a lot of Yudkowskyian / Nietzschean heroic responsibility. If I foresee a war or genocide and knew I could have maybe solved it, but did nothing, I see it as my fault in some small way.
- I don't see a non-violent way of fixing literally all problems in the world, hence I decided I care more about actually fixing the problems than I care about being non-violent. If you are one of the people I'm calling out here, I want you to actually sit and engage with this.
- I generally think that the more moral claims to secrecy an institution makes, the more abuse it must be hiding. This includes most religious and political institutions in the world today.
- I think a lot of people I see around me on a day-to-day basis have had their agency killed from the inside; the colluding actors listed above have successfully bullied or co-opted them into submission.
- I see it as a good sign if you are a little scared of the implications of your work; it at least means you're doing something that can matter. In today's world, when it comes to politics, talk is cheap but action is penalised heavily; see my previous point on submission.
- I think one point where my argumentation is lacking is why I support leaking the secrets of everyone on Earth, rather than just a few thousand elites (which is the default in, say, some journalistic circles). I will write more on this later. The short version is that these conspiracies are distributed, and many people in the larger population are also used to suppress the individual freedom of other people.
- See also: Julian Assange on Conspiracy as governance, Ben Ross Hoffman on Guilt shame and depravity, David Brin on The transparent society, Duncan Sabien on Social dark matter, Michael Slepian on Secret life of secrets, Vitalik Buterin on Why I support privacy
- See also: Open-source weaponry, Longterm view on info, tech and society
Why am I not working on this already?
- Fixing AI extinction risk is likely a bigger life priority for me in the next few years than fixing any other problem. If I work on this, it'll probably be because it helps fix AI extinction risk.
- I'm not rich enough to fund this team myself.
- I'm not technically skilled enough to be a cyberhacker on this team as of 2025. It'll take me at least a few years to become an expert on this. I don't enjoy security work that much. Also, ASI might have killed everyone by then.
- There is a lot of low-skill, high-value cyberattack work, such as scanning for open endpoints, that I might still be open to doing. I might or might not also be open to working on social media doxxing or drone surveillance or similar in the future.