Your response to the AI 2027 folks' questions about what the world will look like in 2035, 2045, etc., is spot-on. It's also a handy go-to for some of the more vexing provocations from hypeists. As you say, "This kind of scenario forecasting is only a meaningful activity within their worldview. We are concrete about the things we think we can be concrete about." It's not disrespectful to point out a difference in premises, but when you do so, you're signaling that you won't play games on their terms.
I had a "hallelujah" moment when I read this part of the article.
Really appreciate this post (and the related ones). The hype is frustrating, and we need theory-informed frameworks for thinking productively about AI.
One comment, concerning diffusion and how people use AI. You write:
"For example, almost a year after the vaunted release of “thinking” models in ChatGPT, less than 1% of users used them on any given day...this kind of number is so low that it is hard for us to intuitively grasp, and frankly pretty depressing."
Many (most?) individuals don’t need to rely on “thinking” models to get useful answers. Everyday use cases are often lightweight:
- practical “how-to” advice (how to cook x, how to fix a bike chain, how to clean sneakers)
- quick recommendations (what to see in New York, which books are similar to X)
- simple productivity help (draft a polite email, generate a packing list)
- entertainment and curiosity (make a bedtime story, draft a quiz, invent a recipe)
- bachelor's-level student questions (what is selection bias, what is a quasi-experiment - the latter all models get wrong, btw!)
In other words, if one argues that diffusion means "users adapting their workflows to productively incorporate AI," then yes, diffusion is arguably slowish. However, if one counts all kinds of usage, the diffusion pattern looks different, because most of the value doesn't hinge on the advanced reasoning tier.
I'm looking forward to your deeper analysis of diffusion. I agree with the following points:
(1) The chart compares AI adoption with the adoption of a potentially biased set of technologies.
(2) Two months is not enough time to measure the important kinds of diffusion; it might just reflect initial buzz and curiosity.
(3) We don't care that much about how many people have access to the technology. We care more about how and how much the technology is being used. For example, maybe a lot of professions "use AI" in the sense of "have ever used it, for a very minor part of their job", and this inflates adoption statistics.
I find the evidence so far pretty mixed, but I think I still overall disagree with you a fair amount (as in, I think you are probably underestimating the speed and extent of diffusion). For example, trends in ARR have been some of the fastest ever, excluding a couple of exceptions like Moderna. (ARR also has flaws, of course: e.g., for the API, part of the increase is due to models generating more tokens even for a fixed query, although tokens have also gotten cheaper over time. For Chat, people might be paying for a product that they're not using much.)
Cf this post we recently put out on the topic: https://epoch.ai/gradient-updates/after-the-chatgpt-moment-measuring-ais-adoption
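To make the token effect concrete, here's a toy sketch of the decomposition I have in mind (all numbers are invented, purely to illustrate how API ARR can grow even with flat query volume, assuming tokens per query rise faster than the price per token falls):

```python
# Toy illustration (hypothetical numbers) of how API ARR can grow even if
# query volume is flat: reasoning models emit more tokens per query, and
# that can outpace the fall in price per token.
def annualized_revenue(queries_per_day, tokens_per_query, price_per_million_tokens):
    return queries_per_day * 365 * tokens_per_query * price_per_million_tokens / 1e6

# Earlier period: short completions, pricier tokens (invented figures)
before = annualized_revenue(1_000_000, 500, 10.0)   # ~$1.8M ARR
# Later period: same query volume, long "thinking" traces, cheaper tokens
after = annualized_revenue(1_000_000, 5_000, 3.0)    # ~$5.5M ARR

print(f"ARR before: ${before:,.0f}, after: ${after:,.0f}, growth: {after / before:.1f}x")
```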
I appreciate the move away from hype.
One area where the “normalization” frame might go further is in how you account for human intent and desire as a driver of technology use. Technologies don’t just land in institutions—they are also shaped by the wishes, motivations, and articulated needs of the people who adopt and contest them.
Another point worth developing is how governance might work across federated contexts rather than just within a single institution or regulator. Many harms arise precisely when central authorities impose one-size-fits-all controls.
How do you see the role of play and experimentation in governance?
As a humanities professor, I am witnessing the STEM colonization of every facet of the world, and it looks like Ragnarök, or maybe the Greek version, when the Olympians killed the Titans, their parents. The hubris and accompanying ignorance of the human big picture is appalling, and yet it continues. I have been following this clusterfuck since the "deep neural net" came online around the mid-teens, and occasionally I present on technology to humanities people. It's depressing how many humanities-type academics are so browbeaten by their STEM overlords, who have treated us so badly for at least 30 years, as if we are completely irrelevant, and YET here we are: facing the big questions because humanity is being literally steered by data and its accompanying projected profit margins. I heard what you said: that what depresses you is the lack of pickup. Sad. STEM people have been "developing" this project for going on 80 years now, spent hundreds of trillions on it, and yet here we are: not sure what it is, not sure what it can and can't do, and what kind of impact it will have (except to literally destroy what's left of our planet's ecosystem, drain our precious fresh water, etc.), and yet we "need" it to make our world great. The kind of mythological and magical thinking that I am seeing borders on some kind of pathological delusional state; however, I am not a psychology professor. I am a humanities survey person, and I look at big pictures of humanity going back to before antiquity. The evolution of human thinking goes back hundreds of thousands of years, not ten thousand. And yes, the technology is helping us with knowing that fact. STEM people need to realize that they have not developed the brain cells for big parts of the equations they are trying to solve. Those neural pathways belong to other disciplines now, thanks to the Enlightenment.
I agree that the chart on ChatGPT adoption is a little bit misleading, but I think so is your discussion of it. You write that
> What is reflected in this chart are early adopters who will check out an app if there’s buzz around it, and there was deafening buzz around ChatGPT. Once you exhaust these curious early users, the growth curve looks very different. In fact, a year later, ChatGPT had apparently only grown from 100M to 200M users, which meant that the curve evidently bent sharply to the right.
But according to CNBC last month, ChatGPT is now at 700mn weekly active users (https://www.cnbc.com/2025/08/04/openai-chatgpt-700-million-users.html). That is (they report) a 4x increase YOY, and obviously the number of monthly active users would be even higher. For comparison (noting that some of this data is probably fairly rough):
- Netflix has 300mn subscribers (https://www.npr.org/2025/01/22/g-s1-44212/netflix-price-hike-subscribers)
- Reddit has 600mn monthly active users (https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/)
- X has 600mn monthly active users (https://techcrunch.com/2025/08/12/threads-now-has-more-than-400-million-monthly-active-users/)
- Spotify has 675mn monthly active users (https://techcrunch.com/2025/02/04/spotify-reports-its-full-year-of-profitability-adds-35m-monthly-active-users/)
- Instagram has 2bn monthly active users (https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/)
- Facebook has 3bn monthly active users (https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/)
So ChatGPT already seems to be ahead of Netflix, Reddit, X, and Spotify, despite those others having had plenty of time for network effects to carry through, and ChatGPT also seems to have lots of momentum.
Great article, keep up the good work. We need some pushback against the 'rationalist', 'LessWrong', 'AI 2027' community and their worldview.
A lot of the rationalist community at LessWrong would do well to read this article.
I don't know about normal (or normative?) - I must dive deeper into your post to understand your thesis.
I've been playing with the tool and post occasionally about what I've been finding. My interest is in the moral and ethical implications. My latest post, "From Tulipmania to AI Fever," explores this: https://remembertheworld.substack.com/p/from-tulipmania-to-ai-fever.
I'd be interested in your thoughts.
“Benefits and risks are realized when AI is deployed, not when it is developed.”
Does this not just kind of gloss over all the costs and harms caused by spinning up and training these massive models? Underpaid workers in the Global South who categorize training data suffer psychological trauma and exploitation. The energy costs of training massive models are causing backslides in carbon reduction. The massive resource and capital expenses that governments are taking on to try to get an edge on AI, building data centres, all come at the expense of other infrastructure like housing and public transit projects. How are these not risks?
Obviously this also leaves out the "benefits" of the jobs created to build all this infrastructure, too, but seeing as this work could be going towards other infrastructure projects, I don't know that it's new work that wouldn't be happening otherwise.
I appreciate the intention of wanting a more “level headed” approach to examining AI, but the hype is SO BIG that I don’t think neutrality really counts as an actual middle ground. It’s failing to expose the reality of how much risk is involved in this speculative bubble. It puts so much weight on figuring out what all this is for after we’ve invested so much into building it (and, as a result, we’ll have not invested in other things).
Curious to see where you guys take this, in any case.
You're quoting from a one-paragraph summary :)
I agree that it is important to examine the harms inherent in the way that large models are trained, and we have written about it at length in the AI Snake Oil book and elsewhere.
The main thing the quoted sentence is trying to say w.r.t. risks/harms is that, unlike the superintelligence folks, we don't think models "escaping" and deciding to end humanity is a real concern that should drive policy.
Thank you, Arvind and Sayash, for sharing this wonderful and comprehensive guide. The thoughtful way you explained the value and shared your predictions truly shows how much care you put into it. I sincerely appreciate your effort and kindness. I'm excited to reflect on my own ideas after reading this 💭👏🏻