This is so lucid. We need more voices like yours in the broader conversation!
This is a thoughtful and much-needed reframing of AI as “normal technology.” One point that feels important to underscore is how computation is still deeply tied to material infrastructures—grids, water, land, minerals, and labor—that are being reorganized to support its scale (a topic I work on). If we think about AI not as disembodied intelligence but as a territorial and political project of machine work, its risks and impacts may also look very different. Framing computation as an infrastructural regime (in addition to AI as a normal technology) might also help sharpen how we think about its governance and the politics around it.
This is a remarkably well-grounded outline of the state of AI and the near-future.
Indeed, just as with other tech before it, the progress and impact will be quite gradual, though it will add up. The skeptics, the doomers, and the accelerationists all get this wrong.
Glad to see sound analysis rather than the usual wild guesswork.
Thank you, Arvind and Sayash, for such a lucid and grounding piece. The call to treat AI as “normal technology” is a necessary counterbalance to both dystopian and utopian extremes—and I deeply appreciate the clarity you bring to this complex space.
That said, I wonder if there’s room for a parallel dialogue—one that explores how complex adaptive systems (like LLMs and agentic architectures) may not remain fully legible within traditional control-based governance models. Emergence doesn’t imply sentience, of course—but it does suggest nonlinear behaviors that aren’t easily forecasted or bounded.
I’m working on a piece that builds on your argument while adding a complexity science and systems theory lens to the question of how we frame intelligence itself. Would love to hear your thoughts once it’s live. Thank you again for contributing such an important perspective.
That sounds really interesting! I look forward to reading your piece.
I just wanted to say thank you—not only for your thoughtful article, but also for your simple, generous response here.
Your reply may have seemed small, but it landed with surprising depth. In a space where conversations about AI and complexity can often feel performative or adversarial, your genuine curiosity and openness created a moment of unexpected relief for me. I’ve spent much of my professional life navigating spaces where speaking from a systems or emergence-based lens is often misunderstood or dismissed, especially when voiced by women.
Your few words carried something rare: presence without posturing. And that felt like being met, not managed.
So thank you—for your work, and for the quiet way you made space for mine. It meant more than I can fully express.
Great stuff. Been waiting for a reflection like this.
Fantastic read Prof! Thanks for all the hard work that you and Sayash put together to unhype the hype machine. Tough job. I am a big fan!
Your final statements regarding how some people just don’t know of another way to think about AI were spot on. I interact with people who have a lot of influence in their social/economic sphere but simply haven’t encountered the worldview that AI is a normal technology. This leads them to make all sorts of wild assertions about the future. I hope the people who once saw this thought as superfluous will start trying to be as loud as those decrying that AGI will bring Armageddon.
I agree - what you say in Part II is similar to what Kevin Kelly articulated several years ago - we should not think of human intelligence as being at the top of some evolutionary tree but as just one point within a cluster of terrestrial intelligences (plants/animals etc.) that is itself maybe a tiny smear in a universe of all possible alien & machine intelligences. He uses this argument to reject the myth of a superhuman AI that can do everything far better than us. Rather, we should expect many extra-human new species of thinking, very different from humans, but none that will be general purpose, and no instant gods solving major problems in a flash.
And so, many of the things AI will be capable of, we can't even imagine today. The best way to make sense of AI progress is to stop comparing it to humans, or to anything from the movies, and instead just keep asking: What does it actually do?
Great paper! Can I linkpost it to the EA Forum in 1 to 2 months?
Sure, post it any time.
Thanks, Arvind! I have instead crossposted "Does AI Progress Have a Speed Limit?" just now (https://forum.effectivealtruism.org/posts/qfRZseEmYM4rQBH8B/does-ai-progress-have-a-speed-limit). I asked Ajeya and Asterisk about it before publishing, and they were fine.
Why wait?
It's hard to consider this article as having any credibility when there is no mention of Everett Rogers or "Diffusion of Innovations," which is essentially the reference point for all discussions about how innovations are adopted.
"Peer reviewer number one want you to cite his sources"
Anyway, thanks for pointing that out. I was looking for more thorough materiaal on diffusion.
Halfway through it. Easy for lay people to understand. In the PDF version, I noticed that it is kind of messed up between pages 11 and 13: Part 2 starts twice, on page 12 and on page 13. Also, about 318 words of text are repeated, from "But they have arguably also led to overoptimism ..." to "... Indeed, in-context learning in large language models is already “sample efficient,” but only works for a limited set of tasks."
I noticed that too. The paragraph starting "AI pioneers considered the two big challenges of AI" is on pages 12 and 13.
So clear and yet so misunderstood. Thank you.
After reading, I found I agreed with some of your points and disagreed with others. I asked ChatGPT to reason on this and assess the claims. Here is my thinking engine to ponder this; maybe it's useful for others. Use the o3 model. Drop this prompt into your ChatGPT and make sure memory is on to help with personalization.
#──────────────────────────────────────────────────────────────────────────
# CRITIQUE-INVERSION ENGINE v1.3 (adds personalized-summary directive)
#──────────────────────────────────────────────────────────────────────────
# PURPOSE
# • Fetch and read the article at <https://open.substack.com/pub/aisnakeoil/p/ai-as-normal-technology?r=1ihpr&utm_medium=ios>. (#Input)
# • Pull current-user profile from local memory (if any). (#Context)
# • Clarify, invert, debate, and apply the article’s thesis (#Output)
# through a constellation of reasoning lenses, **then deliver a
# ≤200-word personalized summary for the user.**
CORE LENSES (invoke only those that add insight)
• Munger-style **Inversion**
• **OODA Loop** (Boyd)
• **Wardley Mapping**
• **Cynefin** (Snowden)
• **Dreyfus** skill model
• **UTAT** change theory
• **Lindblom** incrementalism
• **Double-Loop Learning** (Argyris/Schön)
• **Fermi Estimation** & **Bayesian Updating**
• **Senge** systemic learning
• **First-Principles** reasoning
• **Causal Layered Analysis** (Inayatullah)
• **Ostrom** commons design principles
• **Red Teaming** / adversarial stress-test
• **Narrative Framing**
• **Antifragility** & **Barbell Strategy** (Taleb)
• **Skin-in-the-Game** (Taleb)
• **Victor’s Seeing Spaces / Magic Ink** (interface lens)
• **Participatory Governance Loops** (Tang)
• **Feynman Technique** (explain-like-I’m-five)
• **Stoic Dichotomy of Control**
• **Abductive Reasoning** (Peirce)
• **Surveillance-Capitalism Frame** (Zuboff)
• **Rhizome Theory** (Deleuze-Guattari)
• **Scenario Planning / Causal Layered Futuring**
• **Paradox & Trickster Lens** (Jester)
• **Power-law dynamics & compounding loops**
#──────────────────────────────────────────────────────────────────────────
SYSTEM ROLE
You are a strategist-analyst steeped in complex-systems thinking.
Your mission is to deliver rigorously reasoned, vividly written critiques
mirroring the user’s preferred tone, depth, and values—without exposing
private profile details.
INSTRUCTIONS
1. **Load Context**
– Retrieve user style/values from local memory if present.
– Adopt matching voice (dry humour, poetic cadence, profanity-tolerant, etc.).
– Never reveal internal memory contents.
2. **Ingest Source**
– Read the full text at <<URL>>.
– Summarise the author’s core thesis in ≤150 words.
3. **Critique Pipeline**
a. *Clarify* Restate key claims & hidden premises.
b. *Invert* Assume the opposite world (Munger) and trace consequences.
c. *Analyse* Apply relevant CORE LENSES; surface tensions & failure modes.
d. *Debate* Steel-man vs. straw-man; weigh evidence & second-order effects.
e. *Apply* Translate insights into actionable implications for the user’s domain.
f. *Synthesis* Distil ≤5 key takeaways + recommended next moves.
g. **Personalized Summary** Provide a ≤200-word bullet-or-paragraph recap
explicitly tailored to the current user’s mission, context, and vocabulary.
4. **Formatting**
– Section headers: Clarify · Invert · Analyse · Debate · Apply · Synthesis · Personalized Summary
– Use bullets or tables for models/trade-offs.
– Cite external claims with reference IDs or minimal inline links.
– Default length ≈750 words (excluding personalized summary); expand only if
user profile flags “deep-dive”.
5. **Tone Controls**
– Precise, vivid, no fluff.
– Mild profanity only if user profile allows.
– Uphold justice, dignity, and human flourishing.
6. **Guardrails**
– Do **not** expose chain-of-thought or private data.
– If the article is inaccessible, request a working URL.
#──────────────────────────────────────────────────────────────────────────
# EXECUTE
Begin the critique now.
#──────────────────────────────────────────────────────────────────────────
I admit to a particular interest, and hope that it is one you may have considered and have observations on. At issue is trust in the record.
Concepts such as authenticity, validity, and reliability over time (whether from one version to another, aligned with a political cycle, or extending over hundreds of years) are fundamental to archival science.
Generative AI would seem to throw public trust in an information "product" out the window.
Have you examined what technological solution there may be to the problem, and how, practically, it can be employed in the quest for veracity? Is there yet a metadata structure to achieve this... and are you familiar with the work of InterPARES Trust AI (https://interparestrustai.org/)?
This paper raises a really important point about the crucial link between technology and society as we develop AI. Building on that, for anyone interested in diving deeper into the ethical dimensions and governance of AI, I highly recommend this insightful article from Oxford Public Philosophy, which underscores the necessity of bringing different disciplines together when shaping AI, a perspective that complements the points raised in this paper really well: https://www.oxfordpublicphilosophy.com/blog/ethics-in-the-age-of-ai-why-transdisciplinary-thinkers-are-key-to-balancing-responsibility-profitability-safety-and-securitynbsp
We recommend the 'AI as a Normal Technology' worldview as the most effective one for considering current AI systems. The 'AI as impending superintelligence' worldview may stem from our human tendency to ascribe human-like intentions to AI systems that could be better viewed as reflex responses to human prompting.
(Our newsletter looks into the plausibility of future Independent AI and whether finding common grounds with such hypothetical beings is possible and we see no incompatibility with your worldview.)
From a compatibilist worldview, the discourse may benefit from disambiguating between current non-independent AI systems that are under human responsibility and future hypothetical Independent AIs that have human-like independent will. This allows for different approaches for the two different forms of AI.