Busy cleaning up some old notes of mine, and publishing whatever seems worth publishing.
Most cheap cyberattacks are spray-and-pray, and most targeted cyberattacks are expensive. I want to understand why. See also: Low value cyberattacks are spray and pray
Potential future projects
Learn hacking.
For now, the goal would just be to understand whether I enjoy this and can get good at it quickly. I don't have a lot of money, so I need to find some niche in cyberhacking where I can achieve practical results on a small budget.
The ambitious end goal would be to cyberattack AI companies and the US govt, to leak secrets that make them look bad in the eyes of the public.
Read and write more about politics on AI risk, for a technical audience.
For now, the goal would be to get more clarity myself on all aspects of this, be it morality, previous protest movements, and so on. After that, the goal would be to rewrite all of this in a way that is persuasive to other people working on AI risk, so they follow the plans I endorse.
Multiple people have asked me to sit down and read the history of many previous protest movements. I think this is useful, and maybe I will do it. But also, the biggest missing ingredient is sufficiently persuasive leaders. I don't think I can become such a leader.
If I can't become such a leader myself, I can't get a lot of power this way. I have to pick a path where I end up at least somewhat powerful if I succeed.
I have also spoken to at least one person (can't say who) who is attempting to persuade people at scale on ASI risks.
It is not obvious to me how I can help such a person.
The primary advice they need is on how to become persuasive, both regarding ASI risk in particular and on social media in general.
I don't have good advice for them, beyond a few posts here on the lessons I learned trying to be persuasive myself.
I also don't have the ability to send them content or contacts with good advice. There's a lot of half-decent advice online that mostly lacks empirical backing. A lot of people with really good advice are running their own independent channels. Also, beyond a point, the advice gets somewhat specialised to one's target audience.
Apart from this, they just need to outsource all other aspects of their work, be it hiring a cameraman, an actor, a video editor, or similar.
This way they can spend full-time just getting good at persuasion. This is not that hard for them to do on their own AFAIK, so I don't see why they would need my help. Actually being persuasive at scale on YouTube is way harder than figuring out how to hire a good video editor.
They might ask for funding, although it's not very clear to me how funding helps them.
Funding just helps outsource the easy parts of the job (cameraman, video editor, etc.) so you can focus on the hard part, which is persuasion. I am not that convinced that studios with bigger initial budgets will do that much better than studios with smaller initial budgets.
I'm unsure how many people in EA/LW spaces can actually be persuaded in good faith to organise protests, run social media, or similar. This is my key uncertainty in deciding how much time to devote to this plan of persuading others in EA/LW spaces to listen to me.
Many of the junior people in EA/LW/AI safety spaces seem to have no principles of their own, and will go for whichever job gets them high pay and status. I should just make money and hire these people, not waste my time convincing them.
Many of the senior people in EA/LW/AI safety spaces seem deliberately malevolent. They are well aware that they are colluding with the US govt, and have some vague hope that they will be part of the leadership when ASI does radically centralise power.
There are definitely some people who can be persuaded. But it is not clear if I should really invest my time on this.
Become popular in some way unrelated to ASI risk, and then run ads on ASI risk later. For instance, I could work on some software app (like my discovery / search algo work), a blog, or a YouTube channel.
I'm still quite unsure about the impact of this path. Conventional wisdom is that ads primarily add fuel to the fire once organic reach is already successful. If your idea is naturally memetic (r >> 1), then ads could help it grow even faster. If your idea is naturally not memetic (r ~ 1), then ads might not help much. (See the toy simulation after this list.)
As of 2026-01, basically nobody on Earth seems to have a sufficiently memetic framing of the problem of ASI risks. There is no piece of content where I can look at the content and say, "okay, let's spend $100M on social media ads on this content, and boom, problem solved".
I can imagine this changing in the near future. Maybe it will be useful to have a platform from which I can signal-boost anything: once ASI risk has already become somewhat popular in society, I can then signal-boost that content.
Ideally I should get more actual data on this. Which political movements, or even ideas and products more broadly, actually became successful faster because of large ad spending or sponsorships? Until I get data, though, I am inclined to assume that ads were not the primary reason.
I am especially suspicious of ads that target users on social media (using keywords, search history, etc) as opposed to ads that are run with one specific social media influencer who is already very famous.
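As a toy illustration of the r >> 1 vs r ~ 1 claim above (my own sketch, not based on any data; the seed and period numbers are made up): assume each batch of viewers organically recruits r new viewers per period, and ads inject a fixed seed of viewers every period.

```python
# Toy model: cumulative reach of content with organic sharing ratio r,
# plus a fixed number of ad-driven viewers injected each period.

def total_reach(r: float, ad_seed: int, periods: int) -> int:
    """Cumulative viewers after `periods` rounds of sharing plus ad seeding."""
    new_viewers = ad_seed  # the first batch comes entirely from ads
    total = 0
    for _ in range(periods):
        total += new_viewers
        # organic shares from this batch, plus the next ad-driven batch
        new_viewers = int(new_viewers * r) + ad_seed
    return total

print(total_reach(r=2.0, ad_seed=1000, periods=10))  # ~2.0M: ad seed is a rounding error
print(total_reach(r=1.0, ad_seed=1000, periods=10))  # 55,000: reach ~linear in ad spend
```

With r = 2 the same ad budget ends up a tiny fraction of the final reach, while with r = 1 reach scales roughly linearly with ad spend. This matches the conventional wisdom that ads mostly add fuel to a fire that is already burning.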
Probably not future projects
Anything to do with curating resources, or search engines, or discovery
This is a hard choice to make, but I have made up my mind. I will not work on this further. See the retrospective on this for more details.
Make a scary demo of drone use on people in my city, or a scary demo of AI sex video chat, both for a non-technical audience.
See below for why I'm avoiding projects involving persuading non-technical people.
Make some game or computer art on AI risk, for a technical audience.
The main issue with this is that persuasion at scale only makes sense after you have first been able to persuade a few individuals, which I have not been successful at.
Experiment more with persuasion via Krashen's input hypothesis.
For example: immerse potential whistleblowers in cybersecurity/opsec culture, immerse journalists in AI risk culture, or immerse average people in the political news of countries other than their own.
Obtain expert endorsement of some stuff on my website. For example, my whistleblower guide, or my claim that bioweapons cannot be manufactured by small groups of people as of today. (Small groups can only succeed if large actors like govts are obviously stupid in monitoring the academic labs they fund.)
I have gotten some real value out of networking, but not enough to justify investing more time in this. I would rather build more impressive stuff first.
Anything that makes money, but does not directly advance the cause of preventing ASI risk
Timeline
Independent work until 2026-04-01. After that I will probably take a job.
Heuristics
My primary heuristic
Figure out some long-term project where I both enjoy (or at least tolerate) the process and value the outcome.
I only seem to enjoy political philosophy. But I think studying philosophy probably won't lead to any useful outcome if ASI gets built in the next 5 years.
Valuing the outcome means it has to be among the highest impact options within my existing worldview.
Other heuristics
All things equal, I will prefer projects I am likely to see more retaliation for, be it social ostracism, finances being cut off, being sued, etc.
Obviously, don't be a rebel just for the heck of being a rebel. But also, I am playing a zero-sum game here, and my opponents running AI companies and govts are highly intelligent and power-seeking. If they retaliate against something I do, it is at least a medium-strength signal that what I am doing is actually working. If they ignore what I do, it is a weak signal that what I am doing is not working.
All things equal, I will prefer projects where I can get more feedback from reality.
So making TikTok reels scores much better on this than, say, writing a whistleblower guide.
All things equal, I will prefer projects that make me personally powerful if I succeed.
Acquiring power for its own sake is not my number one priority.
However, in practice, if I am too far away from the chain of actions that leads to impact or power, then it will be trivial for someone else to remove me from that chain later on, thereby eliminating the project's impact.
All things equal, I will prefer projects that don't cause me to interact with people I don't want to interact with.
For the next few months: avoid projects that involve a lot of interaction with people. Make projects whose audience values either technical knowledge or power.
In my life I have generally failed at projects that involve interacting with people, and succeeded at projects that don't.
I am almost never able to meet people whose specific and broader priorities in life are the same as mine. I take a little damage from interacting with people whose specific life priorities are unrelated to AI risk but who broadly value knowledge or power. I take more damage from interacting with people who value neither.
This includes people in both offline and online circles. I can deliberately silo myself from both, by avoiding work meetings offline and online, and by logging out of all social media platforms where I have work accounts.
Over the last few months I have interacted with lots of AI safety people online who are misaligned with me, and I learned a lot from it. But now I want my next project to be something I can work on solo.
This also maybe makes me a bad fit for projects that involve persuading an audience. You can only get good at something if you do it many times, and you can only do it many times if you don't absolutely hate doing it.
This means I am an especially bad fit for projects that involve persuading a non-technical, non-power-seeking audience, although I think persuading non-technical people should be the number one goal of the AI pause movement.