Writing this because I like a lot of Naval's writings, and have noticed multiple people around me defer to Naval.
I'm unclear if the target audience of this post is Naval himself, or people who defer to Naval.
This post is not about the things I like about Naval's writings, which are many. For that I might write a separate post another time. This post is specifically about where we disagree.
Naval's disagreements with me probably fall into the following broad themes.
Naval thinks ASI is likely not happening in the next 5-10 years. I disagree, I think there's a significant chance it could happen.
Naval thinks business is positive sum and politics is zero sum. And that under most circumstances, you should do business not politics, because it is easier to succeed at business than at politics. I agree with this to some extent, but I think fixing ASI risk is an exception and there's probably no business you can start that fixes ASI risk, all by itself. You still have to fight and win the zero-sum political battle against the people running the AI companies.
Naval thinks business is positive sum and politics is zero sum. And that even if you succeed at politics, you will be less happy, for instance because your monkey brain will get trapped winning status games, or you might make moral compromises. I strongly agree with Naval that you'll probably be less happy becoming a politician than an entrepreneur, on average. I am saying it is worth doing even if it makes you less happy.
Naval thinks your desire to succeed at either business or politics ultimately comes from your own personal unhappiness. He thinks you should consider becoming a Buddhist and meditating instead, as a solution to your unhappiness. Or maybe get rich first and then become a Buddhist and meditate. I tried this semi-seriously for a few months but I don't think this will work for me. I have to work on fixing ASI risk in order to stay sane and find meaning in my life. My guess is Naval is fine with some people trying to fix problems in the world. I don't think he is claiming every person on Earth should give up their ambitions immediately in order to find happiness or meaning.
1. Naval versus me on timelines to ASI
Our disagreement on point 1 is obviously the most important. Many of the other disagreements might actually be downstream of this, I am unsure.
Naval Ravikant quotes David Deutsch on timelines to ASI, and seems to avoid explaining his own views. See the section below for why Naval might be avoiding talking about his specific views. I wish Naval would offer more detail about his own views, instead of hiding behind Deutsch.
I like David Deutsch's writings even when I disagree with them. I wrote a long-ass criticism of Deutsch's views on epistemology, but I moved it to the footer below, because I think it is only loosely relevant to his timelines to ASI.
I agree with Deutsch that an ASI should be able to create new knowledge and learn new skills on its own, not just copy the knowledge and skills humans have already acquired.
However, I think GPT-5 is already showing sparks of the former. I think GPT-2 to GPT-5 represents meaningful progress on the former, not just the latter.
Copy-pasting from my previous post on this:
"Generalisation" and "metalearning" are related. Newer models aren't just memorising more skills, but are more capable of learning new skills on their own without human data guiding them.
There is a difference between training AI such that it learns skills that humans already have, versus training AI to know how to learn new skills on its own.
The difference between these two is not binary, there are levels to generalisation.
Example of a low level of generalisation: seeing lots of English-to-French translations, and learning to become good at English-to-French translation.
Example of a medium level of generalisation: seeing many solved competitive programming puzzles, and solving a new puzzle with an algo similar to one it has seen before for a different puzzle. Or seeing lots of English text and a little Hindi text, inferring the grammar similarities and differences across human languages, and then learning to speak fluent Hindi as a result. (Yeah lol this is a real result, and an old one.)
Example of a high level of generalisation: seeing a theorem from economics and realising an analogous version of it also applies to biology, thereby solving an open problem in biology.
GPT-2 to GPT-5 has led to immense progress in both learning new skills and learning how to learn new skills. The latter is more important though.
The best way to test the latter is to give AI problems that no human has seen and no human knows how to solve. These could be simple tasks, like translating undeciphered ancient languages, or complex tasks, like proposing novel molecular biology experiments to run.
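To make this testing idea concrete, here is a minimal Python sketch. Everything in it is hypothetical (the `Task` type, the `model.solve` call); the point is just that you score the model on whole task families it has never seen, rather than on held-out examples of families it was trained on.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    family: str                    # e.g. "en->fr translation", "undeciphered-language translation"
    prompt: str
    check: Callable[[str], bool]   # returns True if the model's answer is acceptable

def generalisation_score(model, tasks: list[Task], seen_families: set[str]):
    """Fraction of solved tasks, counting only task families absent from training."""
    unseen = [t for t in tasks if t.family not in seen_families]
    if not unseen:
        return None                # nothing here actually tests generalisation
    solved = sum(t.check(model.solve(t.prompt)) for t in unseen)
    return solved / len(unseen)
```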
I think most of the progress on creating new knowledge has come from very black-box, brute-force, heuristic-based techniques, and we don't have a deep understanding of where any of this progress came from. My default expectation is that we will continue to get more progress, and continue to lack a deep understanding of where it comes from.
I think I have a deep awe (and fear) of black-box techniques that nevertheless work, which is difficult to transmit to another person unless they have also studied the history of deep learning a little bit.
I would want to know if Deutsch has studied the basics of deep learning, such as backpropagation, self-attention, CNNs/RNNs/LSTMs, and so on. He has a strong theoretical background, so I tend to assume he has already done so. (I could be wrong.) I think it is possible to have good takes on AI timelines without reaching the frontier of knowledge on AI, but it is obviously easier to have good takes if you also have this knowledge.
Likewise, I would want to know if Naval has studied basics of deep learning or not. Naval clearly deeply values knowledge for its own sake, so if he has not studied the basics I would urge him to do so.
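For concreteness, here is roughly what I mean by "the basics": a minimal sketch of scaled dot-product self-attention in plain numpy. This is my own toy illustration, not anyone's production code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project each token into query, key, value vectors
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # how much each token attends to every other token
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # output: a weighted mix of value vectors per token

# toy example: 4 tokens, model width 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = [rng.normal(size=(8, 8)) for _ in range(3)]
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)
```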
Naval has lots of outside view heuristics on why the pessimists are always wrong.
He says many doom predictions of the past (climate change, overpopulation, nuclear war) have turned out wrong. I agree with him on this.
He says media and politics mostly have an incentive to attack the status of people creating new wealth via business. Most news is negative and optimised to create negative emotions in your monkey brain. I agree with him on this.
He says pessimism can be self-fulfilling. If you think you are going to fail, you won't try as hard or be as creative, as someone who thinks they are going to succeed. I agree with him on this too. I don't think the doom scenarios are certain. I am actively working to prevent them, and I think my actions can make a difference.
ASI is different, actually
I am, however, claiming that ASI in particular (and intelligence-enhancing technologies like human genetic engineering or whole brain emulation more generally) is actually different. The outside view heuristics fail here. You have to form an inside view. Intelligence is what enabled Homo sapiens to dominate the Earth. You can't put this technology in the same mental bucket as changing atmospheric CO2 or SO2 levels.
P.S.
The inside view and the outside view are two alternative approaches to forecasting. Whereas the inside view attempts to make predictions based on an understanding of the details of a problem, the outside view (also called reference class forecasting) instead looks at similar past situations and predicts based on those outcomes. For example, in trying to predict the time it will take a team to design an academic curriculum, a forecaster can either look at the characteristics of the curriculum to be designed and of the curriculum designers (inside view), or consider the time it has taken past teams to design similar curricula (outside view).
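A toy numerical contrast in Python, with made-up numbers, just to pin the two approaches down:

```python
# Outside view (reference class forecasting): look at how long similar
# past curricula actually took, and take a typical value.
past_curricula_months = [18, 24, 30, 22, 36]
past_sorted = sorted(past_curricula_months)        # [18, 22, 24, 30, 36]
outside_view = past_sorted[len(past_sorted) // 2]  # median: 24 months

# Inside view: break our own project into phases and sum our own
# estimates (which tend to be optimistic).
our_phase_estimates = [2, 3, 4, 3]                 # months per phase
inside_view = sum(our_phase_estimates)             # 12 months

print(f"outside view: {outside_view} months, inside view: {inside_view} months")
```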
I think Deutsch has a lot of respect for the human ability to create new knowledge, and how this might enable humans to one day colonise the universe. I think Deutsch might therefore have decent intuitions for why "intelligence, but more of it" could actually be scary.
Copy-pasting from my previous post.
Human language has features not present in animal language, and this is likely an important part of why humans can build spacecraft and colonise the Earth, but apes can't. AI has already picked up all these features of human language.
Go read Hockett's views on what features separate human language from animal language.
Humans use language for communication. Humans might also use language as part of their reasoning process (along with other modules such as motor skills, spatial visualisation, etc). Both of these are hypotheses for what separates humans from apes, and I think there's a good chance they're true.
Other hypotheses include bigger birth canal and brain size, and evolutionary pressures to win social games. These hypotheses seem compatible with the hypothesis that language is most important.
Depending on how you measure it, AI may now be the second most complex object in the observable universe: more complex than an ape brain but less complex than a human brain.
2. Naval versus me on whether politics is harder than business when it comes to fixing ASI risk
Assume Naval agreed with me that ASI was coming, and that it posed a risk. Would he agree with me on the solutions I'm proposing?
Naval generally defaults to starting a business, not entering politics, as a way to fix big problems.
I am making a very strong claim here. I am saying there is probably no business you can start that fixes the problem of ASI risk while avoiding the zero-sum political fight. I am open to being proved wrong.
Alignment is hard
The most obvious first idea you might have here is to create your own company that builds ASI, simultaneously winning the capabilities race and investing more in alignment.
Lots of people have written on why this is an obviously bad idea, why alignment is hard to fix in 5-10 years, why even mech interp is hard to solve in 5-10 years, and so on.
Naval has not written his specific opinions on any of this literature, and I don't like guessing his opinions here, if he has not explicitly written them down.
Becoming world dictator is bad actually
In my mind, it seems likely that democracy and capitalism are not going to survive the creation of ASI. Suppose you succeed at building ASI, and by some mix of luck and effort, manage to keep it controlled. This will give you such an overwhelming technological advantage (which then translates to political, economic and military advantage) that there's a significant chance you can simply crown yourself world dictator.
Since the ASI will be faster at creating new knowledge than any human (not just theoretical knowledge, but also implementation-level detail, learning from reality), it will also likely be way faster at acquiring power than any human. If the ASI doesn't use this knowledge to acquire power, the people controlling the ASI would use this to acquire power. The most extreme technologies we humans can foresee are nanotechnology and hyperpersuasion; an ASI might be able to foresee something even more extreme than this.
If you build an ASI and crown yourself world dictator, you are part of the problem.
Separately, even if I started an ASI company, I don't think the probability that I personally would become world dictator is that high, compared to the probability of my competitor becoming world dictator, or the probability of one of us losing control of an ASI. I think the AI company heads might be making irrational decisions even from their own narrow self-interested perspective. See also: My letter to heads of AI labs
Delegating versus doing it yourself
Naval might still have an inclination that some entrepreneur will find some creative solution to this whole problem.
I am saying: no, actually, unless you personally can come up with some creative solution, you don't get to just assume that someone else will. If Naval thinks my solution is bad (I don't know if he thinks this), maybe he should take some responsibility of his own and offer an improved solution. If he did offer an improved solution, that would actually make me happy.
3. Naval versus me on doing politics while staying happy and being high integrity
I actually agree with a lot of Naval's writings on this topic. I definitely agree with Naval's view of morality and self-esteem a lot more than I agree with utilitarianism or longtermism, for example.
However, I also think Naval's claims are sometimes too strong and not that helpful to people actually engaged in the messy reality of politics. Some examples below:
Naval basically thinks you should never go to jail, or risk going to jail.
What is his opinion of MLK Jr and Bevel's tactic of flooding prisons with school children, as part of the Civil Rights Movement? What is his opinion of Mahatma Gandhi going to prison? What advice would Naval have given Bevel or Gandhi?
What is his opinion of all the cypherpunk activists who ended up going to jail, be it Aaron Swartz or the Tornado Cash founders or the people who worked on PGP or Tor? Is his advice to all of them "git gud and be smart about avoiding jail"? I might agree with this actually, but I think the risk is always non-zero. Snowden knows there's a non-zero chance he could be assassinated even today.
Naval thinks if you tell enough lies, you might lose your self-respect.
What is his opinion of Churchill giving propaganda speeches so that teenage boys sign up for the army and walk into near-certain death, so that Britain could ultimately beat Nazi Germany in WW2? What advice would Naval have given Churchill?
This is a specific example of course, but military propaganda is a very common thing across history, used by both the "good guys" and the "bad guys".
In my head, the politicians I like (MLK Jr, Churchill, Nehru) and the politicians I dislike (Genghis Khan, Hitler) both have lots of blood of innocent people on their hands, and the differentiating factor between them is more nuanced. (My views on this are best saved for another post.)
I don't currently think Naval has sufficiently good advice for politicians navigating these sorts of moral tradeoffs.
4. Naval versus me on whether you should follow Nietzsche or Buddha
I don't want to write a long-ass para here because I don't think Naval himself is making very strong claims on whether you should become more like Nietzsche (don't accept reality, bend it to your will) or more like Buddha (accept reality as it is). MtG Black versus MtG Green. Many philosophers have written about the clash between these two perspectives.
I can say, though, that I personally don't exactly feel like I have a choice in the matter. I think I am probably ambitious because I am unhappy with reality as it is, rather than being unhappy because I am ambitious. I am not very optimistic that a few months (or even a few years) of practising meditation and acceptance will make me happy. The knowledge will continue to weigh on me, that I could have actually done something to fix ASI risk but chose not to.
If lots of people become like Buddha, the few people who don't might build AI companies and literally create world dictatorship or cause human extinction. If you are telling everyone to be Buddhist, are you really okay with this being the direct consequence of your life philosophy recommendations?
Meta note: Why Naval avoids getting into specifics
Naval deliberately writes very general principles instead of getting into specifics.
There might be multiple reasons for this. For instance, he might want simpler world models because they help him think better, or because they help him reach a wider audience.
He might want to avoid making enemies of specific people in his network or outside of it, and hence does not name them in his writing. If this is the reason, I would appreciate a bit more meta-honesty from Naval on this. Naval can easily say the following sentence. "Yes, I avoid discussing specifics because there are people who I don't yet want to make enemies of."
Unfortunately, my disagreements with Naval mostly revolve around specific details. Since he has not mentioned many of the specific details, I can only guess what these are, and will sometimes get these guesses wrong.
I think Naval might be making a mistake by avoiding discussing the specific details in public. (Weak opinion) At the very least, he should be willing to think through the specific details in private.
This is extremely important and a major part of why Naval and I are failing to converge on some consensus viewpoint.
Meta note: I am in the arena, Naval is watching from the sidelines
I think even Naval will probably agree with me on this. Since I am actually trying to fix ASI risk, and have failed multiple times, it is possible I have insights on how to fix it that he doesn't have.
Side Note: Deutsch on epistemology
I want to spend more time reading Deutsch's views on epistemology before I confidently offer a critique. Here are my current views, based on my incomplete understanding of Deutsch's views. Please don't take the stuff below too seriously; I will get back to it later.
On Deutsch's epistemology
I like Deutsch's views on epistemology, but I am more sympathetic to Jaynes or Bayes or Marcus Hutter. I like AIXI and the simplicity prior and bayesianism and assigning probabilities to world models. I am aware it is often difficult for human brains to implement this in practice (AIXI assumes turing machines with infinite time and space, not human brains).
Deutsch seems to dislike the simplicity prior because he thinks it depends on your laws of physics. I am not happy with this critique. I think an analogue of Kolmogorov complexity can be defined even if you assumed the world was a quantum turing machine rather than a classical turing machine, and that the two complexity measures would have a direct linear relationship with each other. I think no matter what laws of physics the world actually followed, if they were deterministic in some sense, intelligent people living in that world could define a complexity measure and a simplicity prior and so on. There would likely be a clear relationship between what people in that world thought was simple and what people in our world think is simple.
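For what it's worth, the "clear relationship" I have in mind is analogous to the standard invariance theorem for Kolmogorov complexity: changing the underlying universal machine shifts the measure by at most an additive constant.

```latex
% Invariance theorem: for any two universal machines U and V there is a
% constant c_{U,V}, independent of the string x, such that
\[
  \forall x \in \{0,1\}^{*} : \quad \lvert K_U(x) - K_V(x) \rvert \le c_{U,V}
\]
% Whether a fully analogous bound (additive, or linear as I guessed
% above) holds between classical and quantum complexity measures is my
% conjecture here, not a theorem I am citing.
```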
I also have some views of my own on epistemology. I can summarise them here, but I think the details are best left elsewhere. I think scientific progress happens in stages: first someone invents a data collection instrument (telescope, microscope, etc), then there's a loop where experimentalists and theoreticians work together over multiple iterations. Lots of experimentalists go and collect data with this instrument, and lots of theoreticians try to form world models of the system the data is coming from. Theoretical work and experimental work inform each other, and good work by either often accelerates the other.
Deutsch gives a lot of weight to the theoreticians here, and thinks they are solving the hard part of the problem. For some fields this is true. For many fields, I think the hardest and most neglected part is inventing the data collection instrument itself. Of course, inventing the data collection instrument requires creativity from theoreticians. It requires previous progress in other fields, such as glass welding or material science or precision manufacturing or similar. Maybe Deutsch actually agrees with me here. I am not sure.