2025-05-02
One pager
I decided to make a summary of some non-obvious insights of mine around similar topics. (A lot of this work is not original but borrowed from others.)
Scroll to the section of the page titled "software and society" to see what I'm considering working on nowadays.
Summary
- Most information about everyone on earth is likely to end up in the public domain soon. Reasons for this include a) people being incentivised to share their information in public, because of the benefits they get in return, and b) cyberhacks and espionage leaking people's information against their will.
- Some possible consequences of this include more direct democracy, improved law and order, less geopolitical monopoly power and improved truth-seeking on currently taboo topics.
- Other possible consequences include highly stable dictatorships and generally damaging the lives of people deviating from societal norms (including deviating in ways that are not harmful).
- I am currently making the uncertain bet that this level of information sharing might net good for humanity. I want to build software for it.
- Cost of CPUs, disks, fiber optic cables etc going down means it'll be possible to build decentralised social media and governance platforms that don't require a lot of money to pay for servers and software developers. A decade ago, building social media required a lot of money, hence Big Tech companies built them and made many self-serving decisions in how they are governed.
- Intelligence-enhancing technologies are becoming possible this century and can radically alter our society. Extinction-causing technologies are also becoming possible this century. Examples: superintelligent AI, human genetic engineering, brain-computer interfaces, human brain connectome research.
- Our current mental models for regulating tech may be inadequate when dealing with this. A more intelligent mind can persuade a less intelligent mind to hand over money or votes for nothing in return. This breaks the foundations of both capitalism and democracy.
- Cost of sensors, transducers etc going down could lead to invention of more scientific instruments to collect data. Invention of such instruments is usually accelerating for scientific fields. Example: microscope, telescope, cyclotron, DNA sequencer, etc
- Offense-defence balances in technology shape incentives. Incentives shape culture. If you want to understand forces that shape how society evolves on multi-century timescales, study offence-defence balances of various technologies. Don't just study which ideology or morality is popular in society today.
- (Incentives include providing physical safety, social approval or money as a reward for some behaviour. Culture includes literally all popular ideas and behaviours in society, including ideas that have moral weight in people's minds.)
Information and society
Disclaimer
- This disclaimer might make more sense to you after you read the rest of my post.
- Note on terminology:
- A lot of terminology around people's information being acquired and shared is morally loaded.
- Negatively loaded: surveillance, doxxing, spying, hacking, stealing information, cancelling
- Positively loaded: transparency, journalism, free speech
- Suppose Alice acquires Bob's data and shares it with the public without Bob's consent. People who consider Alice a "good guy" and Bob a "bad guy" will tend to use positively loaded terminology and people who consider Alice a "bad guy" and Bob a "good guy" will tend to use negatively loaded terminology. For example: Alice and Bob belong to two different socioeconomic classes or two different political ideologies.
- There is a strong incentive gradient for many actors to invent persuasive ideology around which "greater good" justifies violating consent (i.e. violating liberal morality).
- Example: A social-left-leaning journalist who believes becoming a billionaire is immoral, and believes they are justified in obtaining and leaking private lives of billionaires without consent.
- Example: A social-right-leaning individual who believes LGBTQ is immoral, and believes they are justified in doxxing and publicly shaming individuals who come out as LGBTQ.
- Example: AFAIK Stalin built two intelligence agencies (the NKVD, predecessor of the KGB, and the GRU) run by two different branches of govt to spy on and purge dissidents from the other branch. He crafted ideology so that members of each branch felt justified in spying on the other branch.
- I'm going to use the term "data acquisition" as a neutral term so I can separate the discussion of what is physically happening from the discussion of what moral judgement I assign to the intent behind this data acquisition or its consequences.
- For the most part, I haven't yet made up my mind on what moral judgements (if any) I should have on this topic.
- This is because I haven't made up my mind yet on which equilibria are stable, and which equilibria are stable depends on technical capabilities and financial incentives, not just on social incentives (such as people's moral judgments of each other).
How?
- Historically, every advancement in transmitting information, be it horseback, ship, semaphore, printing press, telegraph, radio, or now smartphone and fibre-optic internet, has been followed by significant social and political change.
- Cost of computing hardware going down has reduced the cost-to-benefit ratio for cyberhacking and espionage. Individuals and organisations will find it harder to keep secrets in the coming world.
- If you wish to protect information on a computer from being stolen by large corporations and nation states, both software and hardware methods fail. The only defence is physical methods - crush hard disks into pieces to wipe them, switch off electricity to wipe RAM, meet in person, use a copper envelope (Faraday cage) to block signals, etc.
- Example: Various fiber-optic cable wiretaps and router backdoors revealed by Snowden leaks
- Espionage is now within the reach of individuals operating independent of any corporation or nation state.
- Examples: Ulrich Larsen, Edward Snowden, Chelsea Manning
- Information once acquired is rarely lost or destroyed. Every cyberhack or intelligence operation increases the number of actors (N) in the world who now have access to that information. Since N goes up but never down, on a long enough timescale N approaches say 50, at which point it is basically guaranteed to reach the public eye.
- Example: the 800+ leaked torrents for password databases behind haveibeenpwned.com
- Physical and digital worlds are both likely to have increasing amounts of data acquisition.
- As of today there is more public info in the digital world than the physical world. Conversely, it is easier to meet someone in the physical world than in the digital world and obtain a high level of guarantee that the meeting was 100% private.
- This might change soon. Gigapixel photography from helicopter/aircraft at 10 km altitude can probably acquire data of a city at a resolution precise enough for accurate facial recognition. ~10,000 quadcopter drones can also acquire data of a city precise enough for facial recognition, assuming one can pay enough drone pilots or automate the swarm.
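As a rough sanity check on the aerial-imaging claim above, one can estimate how many pixels a single frame needs to cover a city at face-recognition resolution. The 5 cm ground sample distance and 10 km city width below are illustrative assumptions, not measured figures.

```python
# Back-of-envelope: pixels needed for one aerial image of a city at
# face-recognition resolution. All parameters are assumptions for
# illustration, not measured values.

def pixels_needed(city_width_m: float, ground_sample_m: float) -> float:
    """Total pixels to cover a square city at a given ground sample distance."""
    pixels_per_side = city_width_m / ground_sample_m
    return pixels_per_side ** 2

# Assume face recognition needs roughly 5 cm per pixel, and a 10 km-wide city.
total = pixels_needed(10_000, 0.05)
gigapixels = total / 1e9  # ~40 gigapixels under these assumptions
```

Under these assumed numbers a single city frame is around 40 gigapixels, which is why the claim is framed in terms of gigapixel photography rather than ordinary cameras.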
- Individuals and organisations that operate in public get various benefits, such as proving trustworthiness and receiving better feedback.
- Example: popular podcasters like Joe Rogan
- The differences in consequences between 99% data acquisition by a few elites, 99% data acquisition shared to public, 100% data acquisition by a few elites and 100% data acquisition shared to public are very large. I haven't figured out the stable equilibria for this yet, and have fundamental confusions here.
- How?
- Biologically implanted cameras and mics may be one pathway to achieve 100% data acquisition instead of 99% data acquisition. Drone, airplane and satellite footage, smartphones connected to internet, etc all achieve 99% data acquisition.
- If 100% of people's info is public or only privately accessible to a few elites but a few elites' info is not public, it may enable dictatorships with much longer half-lives.
- If 100% of people's info is public and all elites' info is also public, society could be significantly different than today. This could instead enable a more direct democracy. More on this below.
- However 99% of people's info coming out in public seems like the more likely scenario than 100% of info coming out in public.
- Especially motivated actors such as political dissidents will likely still incur the costs (social, psychological, financial) required to avoid having their data acquired by anyone. People who are attempting to leak other people's information will be willing to incur significant costs to ensure their own information does not get leaked.
- It is possible that a society where 99% data acquisition occurs eventually slides into a society where 100% data acquisition occurs. I haven't made up my mind on this.
- "Everyone's information being leaked to everyone else" (I'll call this 100% sousveillance) might be the least worst stable equilibrium.
- In general there are two tradeoffs - a privacy tradeoff and a freedom of speech tradeoff: how much privacy does society provide its members, and how much freedom of speech does it provide them? IMO the extreme ends of both tradeoffs are more stable than middle-ground positions.
- It is difficult to ensure that only some large group of people knows a secret and doesn't leak it further; it is easier to ensure that either no one or a small group knows the secret, or that everyone knows it. Small here means fewer than 50 people, and usually fewer than 5.
- It is difficult to ensure that only some topics of content are disallowed on a given internet platform. Eventually the platform may be politically co-opted so that more and more types of content are restricted. The alternative is to structurally build platforms such that all types of content are allowed.
- A moral stance that is pro-free speech is also anti-privacy. If Alice has obtained Bob's secret information, a maximally pro-free-speech stance will allow Alice to publish this info online, because there is no trustworthy Carol who gets to decide which content one is allowed versus not allowed to publish online.
- A pro-free speech stance is therefore also in conflict with the EU's right to be forgotten, as no Carol is trusted to decide which information should or should not be erased from the public eye.
- Another stable equilibrium might be to build a community of a few hundred people and completely disconnect from the rest of society. People and information go in, no person or information ever comes out. Multiple generations of people are raised in the same isolated community. A secret-keeping community of a few hundred people will allow more ideological diversity than a group of two (like a married couple) or ten (like the C-suite of a company).
Consequences
- Increased public info about elites may reduce freedom of elites and establish a more direct democracy. Both corporations and governments will be more controllable by the general population.
- This has similarities to town life versus city life. The internet is homogenising global culture and morality by imposing town-like incentives across the globe.
- Truth as a moral virtue is likely to thrive in a highly transparent society as long as multiple actors can defend themselves long enough to persuade others. Nuclear weapons allow this at national level between nuclear-armed countries. Guns allow this at individual level in countries that tolerate guns.
- Geopolitical power is built largely by maintaining lead time over competitors in various technologies. The leading manufacturer of any product gets export revenue from across the world. The second leading manufacturer with a 6-months-inferior product gets zero export revenue. (They either survive in the domestic market or go bankrupt.)
- Example: Airbus in France and Germany manufactures and exports most of the world's civilian aircraft. China manufactures and exports most of the world's solar PV modules.
- Increased public info about such orgs will significantly reduce lead times, but not eliminate them. Competent actors will likely still be able to remain on the leading edge and get the same geopolitical power they did before.
- There will be stronger incentives towards "use it or lose it" when it comes to inventing any weapon in the future, as competitors will be able to copy the same weapon more quickly. It may sometimes be advantageous for the inventors' political elites to use the weapon immediately after it is invented and before it gets copied.
- Example: US govt was the only actor in the world to possess nukes between August 1945 and August 1949. Many in US govt proposed pre-emptively nuking Soviet Union during this time period to establish nuclear monopoly. If there were not Soviet spies in the Manhattan project, this time period would likely have been longer than 4 years.
- There may be more incentive to be competent as an elite running such an org, and hoarding of technical talent and knowledge will no longer be a sufficient moat to keep the org ahead.
- Replicating entire supply chains from scratch in competing nations might become possible with smaller lag time.
- Example: If an arms race is started for human genetic engineering, there will be a smaller difference in relative power between competing nations as the entire biotech supply chain will get replicated in a small number of years in multiple nations.
- Increased public info about citizens (including those committing crimes) may help fix law and order in many countries.
- Stable economic growth requires law and order.
- Example: Acemoglu's Nobel Prize-winning work on institutions, which shows a strong correlation between countries with high GDP and stable law and order.
- Lack of law and order has multigenerational psychological effects. Physical safety is low on Maslow's hierarchy and is more important to most people than an abundance of consumer goods.
- Economic metrics such as GDP are a weak proxy for measuring human happiness; including political metrics such as stability of law and order would make them a better proxy.
- I'm still unsure who will get to define what "crime" is in the new equilibrium. Which morality becomes the majority morality? It is possible each nuclear-armed nation and its dependents ends up homogenising moral ideas among its members and gets one universal morality. Liberalism and various religious moralities are top contenders.
- Increased public info about citizens' private lives might damage their lives. But it might also improve society's ability to seek truth on topics currently considered taboo.
- A lot of information about individuals does not reach the public eye, because people have incentives to hide it. I'll call this "Social dark matter" (SDM). See also: Duncan Sabien's article.
- Common topics considered social dark matter by individuals: death, morality, sex, money, mental and physical health, relationship conflicts, religion and politics
- Some professions have higher access to social dark matter, such as psychologists, religious leaders, tech CEOs etc.
- Increased public info about citizens might mean a lot of such social dark matter forcibly comes out in public.
- Social dark matter is very useful for empathising with individuals and giving them useful advice. A society fails to make progress on issues when consensus cannot be established in public on them, and social dark matter by definition is not in public view.
- Establishing consensus allows dominant paradigms of thought to fall and new ones to replace them. Common knowledge is needed not just widespread knowledge.
- Example: Blue Eyes puzzle (theoretical example). Nicotine and methamphetamine usage have reduced in the US today compared to 1980s. Increased willingness to discuss substance abuse may be a causal factor.
Software and society
(This section talks about my current plan for next few months or years. This could change in future.)
- I (Samuel) am most keen on building new forms of governance via software.
- Naval Ravikant says there are 3 types of leverage in society: capital, attention and internet-copyable products such as books, videos and software.
- Internet-copyable products are the newest and least competed for.
- The internet can transmit incentives and culture. People can get paid over the internet. People can get social approval over the internet. People can be influenced by ideology over the internet.
- Therefore new forms of governance can be built via software. Early examples: cryptocurrency, twitter-influenced public policy
- I'm currently making the bet that 99% data acquisition is difficult to avoid. Also it may have significant benefits if managed well. Hence it may be worth actively accelerating towards such a world, and ensure the transition is managed well. I am not confident this is a good bet but it is the bet I'm currently making.
- A lot of the consequences of 99% data acquisition listed above (direct democracy, improved law and order, improved truth-seeking on SDM topics, no singleton superpower) may occur if 99% data acquisition is built the right way.
- Big Tech companies currently write the software that governs society, whether their execs are fully aware of it or not. Historically, computer hardware was expensive and software developers were expensive. This necessarily meant only a large company (in terms of capital) could manage society's software and hardware stacks.
- Hardware costs
- Within the next 10-20 years, it will be possible to store every word spoken by every person on Earth, on a home server affordable to a group of friends. If information and software gets open sourced, it will be possible to build governance using open source software rather than letting Big Tech alone govern society.
- This is not true for video data however. Video data of every person ever on Earth is still too expensive to store on a home server. Big Tech may still get some influence on how society is governed, by defining rules of access for video storage.
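The hardware-cost claim above can be sanity-checked with a back-of-envelope calculation. Every parameter below (words per day, bytes per word, compression ratio, population) is an assumption chosen for illustration, not a measured figure; the point is the shape of the arithmetic, and the totals swing widely with the assumptions.

```python
# Back-of-envelope for text storage of human speech. Every number here is
# an illustrative assumption; real figures vary widely.

WORDS_PER_DAY = 16_000   # assumed average words spoken per person per day
BYTES_PER_WORD = 6       # ~5 letters + 1 space, plain ASCII text
COMPRESSION_RATIO = 4    # assumed ratio for general-purpose text compression

def bytes_per_person_per_year() -> float:
    """Compressed bytes to store one person's speech for one year."""
    raw = WORDS_PER_DAY * BYTES_PER_WORD * 365
    return raw / COMPRESSION_RATIO

def global_bytes_per_year(population: float = 8e9) -> float:
    """Compressed bytes to store everyone's speech for one year."""
    return bytes_per_person_per_year() * population

per_person_mb = bytes_per_person_per_year() / 1e6  # ~9 MB per person per year
global_pb = global_bytes_per_year() / 1e15         # tens of petabytes per year
```

Under these assumptions each person's speech is only megabytes per year of text, so the binding constraint is aggregation scale and how storage prices fall over the 10-20 year horizon, not per-person volume.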
- Software costs
- Software is expensive to write because of complexity. AI + video data + cheap hardware might reduce complexity of various popular applications.
- Often software is complex because hardware is expensive and hence optimised algorithms are needed. Cheap hardware for text-based data may mean it will be possible to use less efficient but also less complex ways of writing software.
- Search is the most popular application of the internet. Be it searching for partners or employers or food or household products.
- Embedding search is a low complexity way of solving search.
- Identity is a necessary application for governance software.
- Cheap video capture and storage may allow for decentralised identity, each person can just upload their own video in public.
- Most internet applications (including search, identity, payments, communication, etc) exist in an adversarial environment where people must either prove trust or operate despite low trust.
- Video data will help scale trust.
- Example: Online conversations on political topics may be higher trust if done via video.
- Conclusion: I think building governance via software needs the following 3 primitives to exist first, and governance software must be built on top of it:
- Low-complexity high-accuracy open source search engine that can be hosted for cheap.
- LLM embedding search is one possible way.
- Uncensorable network of data transmission, along with low complexity ways of dealing with file formats.
- Hard drive dead drops and independently motivated spies are one possible way.
- I don't have a solution to the file format problem though, it seems important to figure out.
- Cheap ways of dealing with video data, be it storage, transmission, embedding generation, converting formats, etc
- I haven't figured this out yet. It also seems important to figure out.
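As a sketch of the embedding-search primitive listed above, here is the core ranking logic in pure Python. The bag-of-words counter is a toy stand-in for a real LLM embedding model (an assumption for illustration); a production system would swap in dense vectors from an embedding model but keep the same cosine-similarity ranking.

```python
# Minimal embedding-search sketch: embed documents and query as vectors,
# rank documents by cosine similarity to the query. A word-count vector
# stands in for an LLM embedding here.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector. Stand-in for an LLM embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query: str, documents: list[str]) -> list[str]:
    """Rank documents by similarity to the query, best match first."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "how to repair a bicycle tyre",
    "local council election results",
    "bicycle repair shop opening hours",
]
top = search("bicycle repair", docs)[0]
```

The low-complexity appeal is visible even in the toy version: the entire engine is one similarity function and one sort, with all the quality carried by the embedding step.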
Technology
- Most fields of science and technology get accelerated when someone invents a tool that allows data collection on a system that was previously not possible at the same cost and resolution. Example: electron microscope, optical telescope, cyclotron, fluorescent DNA tagging, etc.
- Costs of electrical components such as transistors, actuators, inductors, etc have gone down, which will generally accelerate all scientific fields as it could lead to the invention of new data collection instruments.
- Materials science is underrated as it plays a significant role in invention of data collection instruments.
- Intelligence-enhancing technologies are worth paying special attention to, as a small differential in intelligence leads to a large differential in power of every kind - offensive and defensive, scientific, engineering, military and political.
- If the intelligence gap is sufficiently large, this breaks the foundations of both capitalism and democracy. If Alice is much more intelligent than Bob, Alice can run simulations of Bob's mind accurately enough to persuade Bob to hand over money or votes in return for nothing valuable.
- This is true whether Alice is an AI or a mind upload or a gene edited human or a group of humans communicating via BCIs.
- Key intelligence enhancing technologies: superintelligent AI, human genetic engineering, human brain-connectome mapping, cognitive-enhancing drugs, nanotechnology, ?
- Research into superintelligent AI is already ongoing at full pace. AlexNet in 2012 was a key milestone. If built, this will in a sense be the last invention of human history, as the AI will then be faster than us at making new inventions.
- I have estimated 15% probability of superintelligent AI being built by 2030. Scaling laws seem to work but nobody knows why they work or how long they'll keep working.
- Research into human genetic engineering has stalled due to lack of consensus in academia on its political consequences. CRISPR, invented in 2012, was a key milestone. This pause is fragile, and powerful actors will be able to accelerate this field soon. Gene editing of humans has already succeeded in China (in an illegal experiment).
- CRISPR may also enable human-animal hybrids and enhancing human traits besides just cognitive ones (IQ, memory etc)
- Gene drives can cause the extinction or genetic modification of entire populations, not just individual members. This works better in species with short generation times (i.e. not humans).
- Both human genetic engineering and gene drives have massive implications for warfare, economic growth, political structure of society etc.
- Human brain simulation might be possible within 30 years but no one really knows. Fruitfly connectome has been mapped in 2017-2023, and neuroscientists are currently trying to understand implications. Connectome data includes connections between neurons but not signals going through them.
- Research into brain-computer interfaces is ongoing. Example: Neuralink. I have not studied it deeply.
- Research into nanotechnology seems to have slowed to a crawl. I looked at it surface-level but haven't understood deeper reasons why the field has slowed. Fundamental breakthroughs are likely needed.
- Research into cognitive-enhancing drugs is not something I've looked a lot into. Many such programs were run illegally in the 20th century, which might influence the availability of motivated researchers or academic grants for this today. In general we lack knowledge of biochemical pathways that directly affect the higher-level rational brain, instead of affecting the lower-level emotional brain and thereby the rational brain indirectly. Examples: injecting oxytocin, adrenaline, LSD, barbiturates, etc.
- Extinction-related technologies are worth paying special attention to.
- CRISPR invented in 2012 may make it possible to produce bioweapons in the next 10 years, which could cause human extinction.
- Gene drives may also cause significant population-level changes which could affect food supply, incidence of natural disease, etc. This could affect the human population significantly, but is unlikely to cause human extinction.
- Superintelligent AI, if invented, could cause human extinction. My blind guess is this has 30% probability of occurring, assuming superintelligent AI is invented.
- Reduced cost of data acquisition may have some influence on nuclear balance of power, as all nations will get much better visibility into each other's nuclear deployed arsenals, manufacturing facilities and supply chains. This is unlikely to change the fundamental rules IMO, so odds of human extinction are not significantly affected by this.
Technology and society
- Offence-defence balances inherent in technology shape incentives. Incentives shape culture. Culture shapes laws.
- If you want to predict or influence societal structure far into the future, you should probably study offence-defence balances inherent in technology.
- I basically believe in a "might makes right" theory of history but I think offense-defence balances decide what type of person or organisation wins a conflict, and what ideology they are likely to have.
- Usually when people say "might makes right" they imply the winner of a conflict decides of their own free agency which ideology will be popular in the future. This is not what I mean.
- This will make a lot more sense with actual examples, when I prioritise this I'll list a lot of examples.
- Culture shapes laws
- Law enforcement cannot enforce a law if most of the lawyers and policemen and general population of a region don't believe that law is moral. Eventually the law will get changed in favour of the new culture.
- Incentives shape culture
- Incentives don't easily change a person's values, but they may change a person's behaviour. Incentives however place selection effects for people who already agree with the type of behaviour being rewarded, and those people become high-status in society. People who have not yet decided their values are more likely to copy and internalise the values of whoever is high-status.
- Sometimes there is clash between financial incentives and social incentives. Sometimes people alter their behaviour to make money at the expense of what behaviour their social circle expects of them. This tends to make them lonely in the short-term, but in the long-term their social circle too is likely to emulate the same behaviour.
- To do: Examples. This section is incomplete. (I've avoided sharing contemporary examples as they're often politically sensitive, but I could probably find some historical examples and post those here.)
- In the absence of incentives grinding culture into a specific form, culture progresses via mutation and remixing. Most new ideas are near neighbours of old ideas. Human brains are machines whose output depends on input. Fundamentally new ideas that don't depend on old ideas don't usually exist.
- Affecting the distribution of popular ideas in collective attention affects the likely new ideas your society comes up with, even though you can't predict the new ideas themselves.
- Many technologies in today's society seem a direct consequence of the cultural environment their inventors grew up in. For example: superintelligent AI research is ongoing today partly because Yudkowsky and other singularitarians increased the collective attention focussed on these ideas.
- Morality is a key aspect of how culture transmits.
- When two cultures clash, how much members of both cultures tolerate each other depends on the moral judgement they assign to the opposing culture. Your culture has won a person to it when it has shaped that person's morality.
- More tolerant cultures spread under certain circumstances and less tolerant cultures spread under different circumstances. Often a highly self-replicating culture has both more and less tolerant versions of itself, so it can spread in both environments. (There's probably some link between these ideas and ideas around common knowledge and preference cascades, which I haven't figured out yet.)
- Technological offense-defence balances shape incentives
- To do: Examples. This section is incomplete.
- Competition for capital and attention exists at every level of societal organisation, not just between corporations and governments - for example, between individuals, families, ethnic groups, etc. Competitions at different levels of structure affect each other. A nation that is in wartime competition to produce more steel than its opponent is also likely to force more competition between its citizen steel workers, for example.
- If you want to influence society in any way, you have to at least be competitive enough to survive. What "competitive enough to survive" looks like depends on the situation.
- Some technologies can be produced and used by small groups, whereas others can only be used by large groups.
- For example: uranium centrifuges and solar PV modules require a large group of people to manufacture them, whereas guns and radio can be manufactured by a small group of people.
- Tech that can only be produced by large groups fuels a lot of geopolitics. Nations and corporations try to become the first to build some tech, prevent other nations and corporations from catching up, and then use this as a bargaining chip to export whatever morality holds that group together in the first place.
- Deliberately choosing to build tech that can be produced and used by small groups has consequences for societal structure. The open source software movement is one example of this.