Your response to the AI 2027 folks' questions about what the world will look like in 2035, 2045, etc., is spot-on. It's also a handy go-to for some of the more vexing provocations from hypeists. As you say, "This kind of scenario forecasting is only a meaningful activity within their worldview. We are concrete about the things we think we can be concrete about." It's not disrespectful to point out a difference in premises, but when you do so, you're signaling that you won't play games on their terms.
I had a "hallelujah" moment when I read this part of the article.
This essay is a breath of fresh air in the AI discourse—grounded, insightful, and deeply pragmatic. I appreciate how you clarify that “normal technology” doesn’t mean “trivial,” but instead provides a constructive, resilience-focused framework for thinking about AI’s evolving impact. Thank you for bringing nuance and historical perspective back to the conversation. Looking forward to your continued work and the expanded book!
As a humanities professor, I am witnessing the STEM colonization of every facet of the world, and it looks like Ragnarök, or maybe the Greek version, when the Olympians killed the Titans, their parents. The hubris and accompanying ignorance of the human big picture is appalling, yet it continues. I have been following this clusterfuck since the "deep neural net" came online around the mid-2010s, and occasionally I present on technology to humanities people. It's depressing how many humanities-type academics are so browbeaten by their STEM overlords, who have treated us so badly for at least 30 years, as if we are completely irrelevant, and YET here we are: facing the big questions because humanity is being literally steered by data and its accompanying projected profit margins. I heard what you said: that what depresses you is the lack of pickup. Sad. STEM people have been "developing" this project for going on 80 years now, spent hundreds of trillions on it, and yet here we are: not sure what it is, not sure what it can and can't do, and not sure what kind of impact it will have (except to literally destroy what's left of our planet's ecosystem, drain our precious fresh water, etc.), and yet we "need" it to make our world great. The kind of mythological and magical thinking that I am seeing borders on some kind of pathological delusional state; however, I am not a psychology professor. I am a humanities survey person, and I look at big pictures of humanity going back to before antiquity. The evolution of human thinking goes back hundreds of thousands of years, not ten thousand. And yes, the technology is helping us know that fact. STEM people need to realize that they have not developed the brain cells for big parts of the equations they are trying to solve. Those neural pathways belong to other disciplines now, thanks to the Enlightenment.
I think one of the biggest differences between the superintelligence worldview and the normalist worldview is what each takes as its center.
The superintelligence worldview is AI-centric — everything is centred around the AI (or AGI, or ASI, whatever you want to call it).
The process goes: AI becomes more intelligent / capable --> AI develops misaligned goals --> AI kills everyone.
So the approach is mostly upstream, e.g. measuring capabilities and propensities, solving alignment, pausing (or stopping) AI development, etc. AI development is what matters — everything that happens downstream of deployment doesn't matter anymore, i.e. if the AI is out of the box it's already too late.
On the other hand, the normalist worldview is decentered: AI is not at the center of the story.
The process goes: society operates in a certain way --> new things get introduced to society (AI is one of them) --> things change accordingly.
So the approach is mostly downstream, e.g. managing technological diffusion, improving societal resilience, etc.
(I also wrote some very rough thoughts recently at https://airisks.substack.com/p/on-different-worldviews-7c8)
Excellent piece, as always, and I especially like the section title 'If disappointment about GPT-5 has nudged you towards AI as normal technology, it’s possible you don’t quite understand the thesis'. It is a really powerful (yet appropriate) rhetorical device. Your thesis about societal impact arising through diffusion is independent of the performance of GPT-5 or whatever the next model is. Whatever the capabilities of these models, the institutional bottlenecks to diffusion will persist.
I am also glad to hear of the upcoming joint statement between you and the AI 2027 authors. I am pleasantly surprised that the authors of these two different frameworks are in conversation, and am confident that this will quell some of the polarization on this topic in the wider discourse. Such constructive public debate between people explicating two competing worldviews is essential, and I hope the difficulty of having to talk across these worldviews does not stop you from engaging in this exercise.
Really appreciate this post (and the related ones). The hype is frustrating, and we need theory-informed frameworks for thinking productively about AI.
One comment, concerning diffusion and how people use AI. You write:
"For example, almost a year after the vaunted release of “thinking” models in ChatGPT, less than 1% of users used them on any given day...this kind of number is so low that it is hard for us to intuitively grasp, and frankly pretty depressing."
Many (most?) individuals don’t need to rely on “thinking” models to get useful answers. Everyday use cases are often lightweight:
- practical “how-to” advice (how to cook x, how to fix a bike chain, how to clean sneakers)
- quick recommendations (what to see in New York, which books are similar to X)
- simple productivity help (draft a polite email, generate a packing list)
- entertainment and curiosity (make a bedtime story, draft a quiz, invent a recipe)
- bachelor-level student questions (what is selection bias, what is a quasi-experiment - the latter all models get wrong, by the way!)
In other words, if one takes diffusion to mean "users adapting their workflows to productively incorporate AI," then yes, diffusion is arguably slow-ish. However, if one counts all kinds of usage, the diffusion pattern looks different, because most of the value doesn’t hinge on the advanced reasoning tier.
This recent report supports the idea that most ChatGPT usage is for everyday use cases - and, relatedly, the assumption that most use cases don't really need thinking models: https://openai.com/index/how-people-are-using-chatgpt/
"ChatGPT’s economic impact extends to both work and personal life. Approximately 30% of consumer usage is work-related and approximately 70% is non-work"
I'm looking forward to your deeper analysis of diffusion. I agree with the following points:
(1) The chart compares AI adoption with the adoption of a potentially biased set of technologies.
(2) Two months is not enough time to measure the important kinds of diffusion; it might just reflect initial buzz and curiosity.
(3) We don't care that much about how many people have access to the technology. We care more about how and how much the technology is being used. For example, maybe a lot of professions "use AI" in the sense of "have ever used it, for a very minor part of their job", and this inflates adoption statistics.
I find the evidence so far pretty mixed, but I think I still overall disagree with you a fair amount (as in, I think you are probably underestimating the speed and extent of diffusion). For example, trends in ARR have been some of the fastest ever, excluding a couple of exceptions like Moderna. (ARR also has flaws, of course: for the API, part of the increase is due to models generating more tokens even for a fixed query, although tokens have also gotten cheaper over time; for Chat, people might be paying for a product that they’re not using much.)
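To make that ARR caveat concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are purely hypothetical, chosen only to illustrate how measured API revenue can grow for a fixed query workload when tokens generated per query rise faster than per-token prices fall.

```python
# Hypothetical illustration: API revenue for a *fixed* number of queries can
# still grow if tokens generated per query rise faster than prices fall.

def annual_revenue(queries_per_year: int, tokens_per_query: int,
                   price_per_million_tokens: float) -> float:
    """Revenue attributable to a fixed query workload."""
    total_tokens = queries_per_year * tokens_per_query
    return total_tokens / 1_000_000 * price_per_million_tokens

QUERIES = 10_000_000  # same workload in both years (hypothetical)

# Year 1: short completions at a higher per-token price.
rev_y1 = annual_revenue(QUERIES, tokens_per_query=500, price_per_million_tokens=10.0)

# Year 2: "thinking"-style models emit far more tokens, even at a lower price.
rev_y2 = annual_revenue(QUERIES, tokens_per_query=5_000, price_per_million_tokens=4.0)

print(f"Year 1 revenue: ${rev_y1:,.0f}")  # $50,000
print(f"Year 2 revenue: ${rev_y2:,.0f}")  # $200,000 -- 4x growth with zero new usage
```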
Cf this post we recently put out on the topic: https://epoch.ai/gradient-updates/after-the-chatgpt-moment-measuring-ais-adoption
I appreciate the move away from hype.
One area where the “normalization” frame might go further is in how you account for human intent and desire as a driver of technology use. Technologies don’t just land in institutions—they are also shaped by the wishes, motivations, and articulated needs of the people who adopt and contest them.
Another point worth developing is how governance might work across federated contexts rather than just within a single institution or regulator. Many harms arise precisely when central authorities impose one-size-fits-all controls.
How do you see the role of play and experimentation in governance?
I recommend you make some infographic(s) around your thesis that someone can understand in five seconds. The new newsletter name sounds too much like Technological Optimism; I preferred the previous name for its rigor of critical analysis.
Agreed. Very jarring to see this change, especially in such a dystopian political climate.
Thanks for sharing this in such depth. I was mentally in the normal-technology world even before your first post, so both of these have really helped me clarify what that means and why. I would slightly take issue with your last point, "why AI hits different." Part of the reason I was instinctively with the normal team is that it doesn't feel very different to me. Maybe that is an age thing - I am 63. Living through more waves as an adult has just conditioned me more deeply!
https://www.sciencedirect.com/science/article/pii/S2666675825000864?via%3Dihub
Thanks for this! I like the open and collaborative approach to disagreement.
I have been trying to understand your worldview, and as a first step I wrote a summary of the discussion between you and Ajeya: https://www.lesswrong.com/posts/3znRZMetoAzrFd9Dw/a-distillation-of-ajeya-cotra-and-arvind-narayanan-on-the
If you have a spare 5 minutes, I'd appreciate you reading it to check for any glaring misunderstandings.
Thanks for doing this! Based on a skim, I have only one comment. I understand why you think I follow contextualizing norms, but I strongly disagree. I suspect what you're picking up on is what we refer to as the difficulty of communication across worldviews, a problem that we discuss both in the original essay and in this one.
It is true that interpreting my claims correctly requires a bunch of clarifying context, assumptions, etc. (By "context" I mean the logical antecedents of the claim, not the implications.) But this is completely symmetric! If it appears that I speak in a needlessly convoluted way, it is only because the reader/listener is situated in a different worldview. Someone situated in the normalist worldview would have the same reaction to someone talking about superintelligence.
For example, you say regarding the issue of AI-run companies, "Either they are saying something obviously false, or, there is some difference in our background assumptions." To illustrate the point about symmetry, this topic was similarly confusing for me — I felt I was saying something tautological, and while it was clear that Ajeya's (and many others') view was due to different background assumptions, I struggled to identify what those assumptions were.
Thanks for the distinction between 'contextualizing norms' and 'tons of differing context'. I'll edit the post.
Look forward to better understanding your positions and views!
To be fair, while GPT-5 was over-hyped, Altman contributed to the hype by comparing it to the Death Star (for some reason) ahead of its release.
What do you think about the economics of LLMs? There is some evidence that the cost to run inference and train models is higher than what AI companies are charging. If this is true, does that change your medium-term predictions about the diffusion of the technology?
I would doubt that it does, Andrew.
The history of costs in computation has shown rapid, prolonged falls. It is said that the Apple Watch has considerably more computational power than the third-generation supercomputer, the $44 million Cray 2 (1985), for instance.
Within LLMs, there have been multiple reports of dramatic efficiency gains in both training and deployment computational requirements, and I have seen it said that there are multiple avenues still to explore. Just today, for instance, I saw a tweet that a leading researcher had only just learned about Cayley's theorem (as it relates to matrix algebra).
There is plenty of room for ordinary engineering effort to reduce costs.
Edit: why isn't that "ordinary engineering" being done now, you reasonably ask. Because growing market share and capability is almost the sole focus of the frontier companies at present. Efficiency (as economists do not always understand) is a second-order priority, to be pursued once the dust has settled.
This is the same reason why the frontier AI companies are making losses. If they are not making losses, they are not investing enough, and will be overtaken. They all have to spend right up to the very limits of their investors' tolerance for risk.
Wondering if you've read this paper, 'Naturalizing relevance realization: why agency and cognition are fundamentally not computational'? https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1362658/full
It feels deeply relevant to any cogsci- or philosophy-of-cogsci-informed perspective on the nature and function of these technologies.
I may end up posting a few comments (hoping that's okay...).
You state: "There is a long causal chain between AI capability increases and societal impact. Benefits and risks are realized when AI is deployed, not when it is developed. This gives us (individuals, organizations, institutions, policymakers) many points of leverage for shaping those impacts. So we don’t have to fret as much about the speed of capability development; our efforts should focus more on the deployment stage both from the perspective of realizing AI’s benefits and responding to risks. All this is not just true of today’s AI, but even in the face of hypothetical developments such as self-improvement in AI capabilities. Many of the limits to the power of AI systems are (and should be) external to those systems, so that they cannot be overcome simply by having AI go off and improve its own technical design."
This is a rather interesting take philosophically. It seems you're advocating for something a little like Brusseau's Acceleration AI Ethics? It's a pretty controversial overall take.
Regardless of that, the benefits/risks are of course 'realised' post-deployment, but that doesn't mean wide-boundary thinking upfront (and on an ongoing basis, embedded into sociotechnical design, development, and use lifecycles) isn't the 'best' approach, especially given the need to internalise what we typically externalise. Here I feel called to reference Digitalization and the Anthropocene, Creutzig et al. (https://www.annualreviews.org/content/journals/10.1146/annurev-environ-120920-100056). Although this is challenging, and isn't some simple linear process (it's more like a living-systems approach, at least from my perspective), this upfront and ongoing responsibility seems absolutely critical if we are to steer towards what Creutzig et al. refer to as 'Deliberate for the good'.
I'd love to explore this further.