As of 2026, AI is disrupting many industries, simultaneously
Disclaimer
Quick Note
Target audience: technical people who are not yet ASI-pilled, ideally tech founders who have read Paul Graham.
Incomplete
Main
Even if you don't believe in ASI, it may still be worth tracking that AI is already disrupting many industries, simultaneously, as of 2026-02.
I will primarily report from personal experience in this post. I am aware there are lots of salesmen in Silicon Valley making all sorts of "AI will impact X" claims with zero actual evidence. I don't want this post to do that.
AI for software engineering
I have definitely looked into AI software generation.
AI definitely accelerates my work for fast prototyping as a mediocre developer. I often don't need good, reliable, or maintainable code; I just need the stupidest possible thing that actually works.
As of 2026-02, AI can definitely zero-shot build a number of useful prototypes. In practice I usually don't literally zero-shot.
AI for video generation
I briefly looked into AI video generation.
I don't actually deeply understand what makes videos popular versus not on youtube. Understanding persuasion is hard.
Podcasts are definitely one format that consistently does well on youtube. I think this ties in well with the groupthink hypothesis here. If you manage to network your way to lots of high-status people (don't ask me how) and interview them, that works.
I don't yet know how exactly my other hypotheses for persuasion translate to youtube.
Scriptwriting is by far the hardest part of the entire production process of a youtube video.
Yes, high-quality production is important (fixing things like lighting, posture, etc.), good acting skills are important, and post-production is important (having a video editor edit your footage). But scriptwriting is harder than all of these, by a very wide margin, for most types of videos.
AI is not yet good enough to replace human actors, because specifying facial expressions for videos via prompts is too much work, and it's easier to hire an actual actor.
AI's skill level today is definitely not close to the skill level required to replace all scriptwriters.
(If it could do that, I do think you've probably solved hyperpersuasion. As in, you've built an AI skilled enough to create a religion that can persuade 100% of humanity to do anything on its behalf, including committing suicide or murder, worshipping the AI, or similar.)
My guess is AI can probably do a bunch of low-to-medium quality video editing, if the pipeline was made easier.
A highly paid human video editor probably has creativity and taste and so on. I don't yet know how much of this AI can replicate.
An entry-level human video editor can mainly do tasks that are simple and repeatable and easy to specify. After trying multiple solutions, I ended up hiring a human video editor for this. But I do have a fairly confident guess that an AI could also eventually do this.
MrBeast (the most popular youtuber, with over 400 million subscribers) says he uses AI to help him select between thumbnails for his videos.
AI for doxxing
AI for geoguessr is a thing now. There is clear evidence it is beating even a number of expert-level geoguessr players.
In a real-world setting (not the game), expert 4chan users who doxx others seem to use a variety of methods, including PII databases, social graphs, inferring location from video cues, and so on. (I haven't tracked which methods are more popular when.)
I tried building an AI tool to doxx users using stylometrics.
If your sample set is small, you don't even need that smart an AI to doxx users.
You just need to ask AI (via inference) what are the anomalous words, concepts, or sentence structures that each person is using.
Then once you've compiled this database you can manually check which pairs of users are similar.
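The pairing step above can be sketched in a few lines. This is a minimal illustration, not my actual tool: it assumes an LLM has already extracted a set of anomalous stylistic markers per account (that extraction step is not shown), and the marker sets below are made up.

```python
# Sketch of the pairwise-comparison step for stylometric matching.
# Assumes an LLM has already extracted anomalous words/phrases per account;
# the accounts and marker sets below are invented for illustration.

def jaccard(a: set, b: set) -> float:
    """Overlap between two marker sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

markers = {
    "anon_blogger": {"orthogonal to", "load-bearing", "atleast", "galaxy-brained"},
    "known_user_1": {"load-bearing", "atleast", "galaxy-brained", "grokked"},
    "known_user_2": {"per my last email", "synergy", "circle back"},
}

def best_match(target: str) -> tuple[str, float]:
    """Return the account whose markers overlap most with `target`'s."""
    scores = {
        user: jaccard(markers[target], feats)
        for user, feats in markers.items() if user != target
    }
    return max(scores.items(), key=lambda kv: kv[1])

user, score = best_match("anon_blogger")
```

In practice you would compare every candidate pair rather than a single target, and a raw overlap score like this is exactly what falls apart once someone deliberately rearranges their words.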
I couldn't get this to work against anonymous users who actually put effort into rearranging their words and stuff.
But also, a lot of anon writers can't actually afford to do any of that. Becoming popular online as a writer is hard, and being anonymous doesn't make this easier.
If your sample set is large (such as the entire internet), my guess is you need to start with a PII dataset as the first-stage filter.
I couldn't get this to work for the entire internet. Doing inference over the entire internet is too expensive. (Even doing embedding search is too expensive)
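A rough back-of-envelope shows why full-internet inference is out of reach. Every number here is an assumption I'm plugging in for illustration, not a measurement:

```python
# Back-of-envelope: cost of one LLM inference pass over a web-scale corpus.
# Both numbers below are assumptions for illustration, not measurements.

corpus_tokens = 100e12           # assume ~100 trillion tokens of indexable text
price_per_million_tokens = 0.10  # assume $0.10 per million input tokens (a cheap model)

inference_cost = corpus_tokens / 1e6 * price_per_million_tokens
print(f"${inference_cost:,.0f}")  # ~$10M for a single pass, before any retries
```

Even if you quibble with either assumption by an order of magnitude, the conclusion survives: you need a cheap first-stage filter before any per-document inference.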
AI for cybersecurity
I recommend just reading that writeup in full. To summarise, though: a lot of low-value cyberattacks are just things like implementing years-old CVEs that most people already know about, exploiting SQL/redis/elasticache/etc misconfigs, scraping for API keys, and so on. Financially, these are both low investment and low reward.
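To make "scraping for API keys" concrete, here is a toy sketch of that kind of scan. The `AKIA` prefix is the well-known AWS access-key-ID format; real scanners cover many more providers and add entropy checks to cut false positives.

```python
# Toy sketch of a low-effort "scrape for API keys" scan.
# AWS access key IDs follow a documented format: "AKIA" + 16 uppercase
# alphanumerics. Real scanners handle many providers, not just this one.
import re

AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)

# AWS's own documentation example key, as it might appear in a leaked config:
leaked = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # oops, committed to git'
print(find_candidate_keys(leaked))  # ['AKIAIOSFODNN7EXAMPLE']
```

Attackers run scans like this over public GitHub repos, paste sites, and crawled pages; the "low investment" framing above fits, since this is a regex plus a crawler.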
On the other end of the spectrum is expert hackers being paid full-time by govts and govt contractors, to discover new CVEs, on both the attack and the defence side.
Big Tech also invests heavily in defence. (I'm unsure how much Big Tech invests in offence themselves, as opposed to just sending those researchers to go work for US govt instead.) Some large companies besides Big Tech also invest in defence, although this is often (but not always) security theater for regulatory compliance reasons.
Many CVEs take multiple years of effort to discover, even for someone who has already put years into upskilling in that niche.
I would be less surprised if AI manages to reduce the investment cost of low-value cyberattacks, such that a lot more of them happen. I would be more surprised if AI could be on par with the experts. There are some people claiming the latter, though.
AI for therapy
(I have to share personal examples here.)
In my first-hand experience, AI definitely has a tendency to project back to you the things you are saying to it, or project what essentially feels to me like some random therapist's opinion. (I don't think randomly selected therapists are that good at therapy for me, unless I am literally depressed/suicidal/etc. So this is not a compliment.)
In spite of all this, I think that given a lot of context, AI is starting to reach the level where it can provide useful insights and advice.
I made a long document explaining the reasons why I am unhappy, and the AI gave me at least one useful insight that made me feel better.
The insight was that I should track my learnings from new projects, separate from the success metrics (likes/votes/dollars) for those projects. My mental state seems to correlate with actual successes, but the AI said maybe it should correlate with the learnings instead, and I should track my learnings.
I don't think it gave the most useful possible insight, or in the best way, but it did give a useful insight.
Given a lot of context about your worldview, and the worldview of another person, AI can quickly point out differences between both.
Some of my "My views on X" posts were actually inspired by this.
As a concrete example, I have read many posts by The Last Psychiatrist. He writes a lot about the interaction between US media and politics, and he has a very psychology-oriented lens on it. I tried asking AI for differences of opinion between us, and the AI said TLP probably thinks that all my intellectualisation is a narcissistic defence mechanism that prevents me from paying more attention to other people. At first I thought the AI had just projected some random therapist's opinion onto me again, but then I read more of his posts and realised that yeah, this is literally TLP's opinion. The AI had correctly identified an important difference between me and TLP. I don't yet know if TLP's opinion is correct at the object-level.
AI for search/recommendation/etc
pls just go read my other posts on this
Incomplete