This is so lucid. We need more voices like yours in the broader conversation!
This is a thoughtful and much-needed reframing of AI as “normal technology.” One point that feels important to underscore is how computation is still deeply tied to material infrastructures—grids, water, land, minerals, and labor—that are being reorganized to support its scale (a topic I work on). If we think about AI not as disembodied intelligence but as a territorial and political project of machine work, its risks and impacts may also look very different. Framing computation as an infrastructural regime (in addition to AI as a normal technology) might also help sharpen how we think about its governance and the politics around it.
This is a remarkably well-grounded outline of the state of AI and the near future.
Indeed, just as with other tech before it, the progress and impact will be quite gradual, though it will add up. The skeptics, the doomers, and the accelerationists all get this wrong.
Glad to see sound analysis rather than the usual wild guesswork.
Thank you, Arvind and Sayash, for such a lucid and grounding piece. The call to treat AI as “normal technology” is a necessary counterbalance to both dystopian and utopian extremes—and I deeply appreciate the clarity you bring to this complex space.
That said, I wonder if there’s room for a parallel dialogue—one that explores how complex adaptive systems (like LLMs and agentic architectures) may not remain fully legible within traditional control-based governance models. Emergence doesn’t imply sentience, of course—but it does suggest nonlinear behaviors that aren’t easily forecasted or bounded.
I’m working on a piece that builds on your argument while adding a complexity science and systems theory lens to the question of how we frame intelligence itself. Would love to hear your thoughts once it’s live. Thank you again for contributing such an important perspective.
That sounds really interesting! I look forward to reading your piece.
I just wanted to say thank you—not only for your thoughtful article, but also for your simple, generous response here.
Your reply may have seemed small, but it landed with surprising depth. In a space where conversations about AI and complexity can often feel performative or adversarial, your genuine curiosity and openness created a moment of unexpected relief for me. I’ve spent much of my professional life navigating spaces where speaking from a systems or emergence-based lens is often misunderstood or dismissed, especially when voiced by women.
Your few words carried something rare: presence without posturing. And that felt like being met, not managed.
So thank you—for your work, and for the quiet way you made space for mine. It meant more than I can fully express.
Great stuff. Been waiting for a reflection like this
Fantastic read Prof! Thanks for all the hard work that you and Sayash put together to unhype the hype machine. Tough job. I am a big fan!
Your final statement, that some people just don’t know of another way to think about AI, was spot on. I interact with people who have a lot of influence in their social and economic spheres yet simply haven’t encountered the worldview that AI is a normal technology. This leads them to make all sorts of wild assertions about the future. I hope the people who once saw this idea as superfluous will start trying to be as loud as those decrying that AGI will bring Armageddon.
I agree - what you say in Part II is similar to what Kevin Kelly articulated several years ago: we should not think of human intelligence as being at the top of some evolutionary tree but as just one point within a cluster of terrestrial intelligences (plants, animals, etc.) that itself may be a tiny smear in a universe of all possible alien and machine intelligences. He uses this argument to reject the myth of a superhuman AI that can do everything far better than us. Rather, we should expect many extra-human new species of thinking, very different from humans, but none that will be general purpose, and no instant gods solving major problems in a flash.
And so, many of the things AI will be capable of we can't even imagine today. The best way to make sense of AI progress is to stop comparing it to humans, or to anything from the movies, and instead just keep asking: what does it actually do?
Great paper! Can I linkpost it to the EA Forum in 1 to 2 months?
Sure, post it any time.
Thanks, Arvind! I have instead crossposted "Does AI Progress Have a Speed Limit?" just now (https://forum.effectivealtruism.org/posts/qfRZseEmYM4rQBH8B/does-ai-progress-have-a-speed-limit). I asked Ajeya and Asterisk about it before publishing, and they were fine.
Why wait?
It's hard to consider this article as having any credibility when there is no mention of Everett Rogers or "Diffusion of Innovations," which is essentially the reference point for all discussions about how innovations are adopted.
"Peer reviewer number one wants you to cite his sources"
Anyway, thanks for pointing that out. I was looking for more thorough material on diffusion.
Halfway through it. Easy for lay people to understand. In the PDF version, I noticed that it is a bit messed up between pages 11 and 13: Part 2 starts twice, on page 12 and page 13. Also, a 318-word passage is repeated, from "But they have arguably also led to overoptimism ..." to "... Indeed, in-context learning in large language models is already “sample efficient,” but only works for a limited set of tasks."
I noticed that too. The paragraph starting "AI pioneers considered the two big challenges of AI" is on pages 12 and 13.
Professor Narayanan of @princeton cues miscues and in doing so miscues: task efficiency benchmarks are misleading for both tech enthusiasts and tech nihilists. The former promote tech task efficiencies and the latter poke holes in the efficiency focus, inducing a continuous entanglement that evolves into a vicious cycle. But the threat from AI and social media enterprise tech has a much lower threshold:
1. Its speed of development outpaces adaptation or adoption
2. Even at its lowest task efficiencies, it reshapes the labour landscape with unforeseeable social effects
3. At its “dumbest”, which is social media, we already see the divisive power, the surveillance protocols, the limbic colonialism, and the trading of persons as product: the buying and selling of that which is the foundation of our constitutional polity (privacy). It grants asymmetrical power to distort choice and undermine social initiatives, while suborning a neuroatrophic crisis.
4. Its design does not submit to governance, and this tech acceleration is driven by anti-social, anti-democratic, and anti-humane motives anathema to human flourishing as Aristotle defined it.
The "normal technology" framing becomes even more relevant when you look at what companies are doing right now with AI agents. OpenAI just published a paper this week showing they monitor 99.9% of their internal coding agent traffic for misalignment, using their most powerful model as a watchdog. About 1,000 conversations got flagged over five months. The irony is telling: even the company building the technology treats it as something that requires constant, mundane, infrastructural oversight. Not because agents are dangerous in some sci-fi sense, but because they are unreliable in the exact same way every other complex software system is unreliable. The "normal technology" lens predicts this perfectly. The real risk is not superintelligence. It is thousands of organizations deploying agents with none of the monitoring infrastructure that even OpenAI considers necessary.
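For what it's worth, the monitoring setup described above can be sketched in a few lines. Everything here is hypothetical stand-in code, not OpenAI's actual pipeline: the keyword-based `judge`, the markers, and the threshold are invented for illustration, and in a real system `judge` would be a call to a strong watchdog model.

```python
# Hypothetical sketch of LLM-as-watchdog monitoring for coding-agent traffic.
# In practice, judge() would call a powerful model to score each conversation;
# here a keyword heuristic stands in so the sketch is self-contained.

def judge(conversation: str) -> float:
    """Score a conversation for signs of misalignment (0.0 benign, 1.0 flagged)."""
    markers = ("disable logging", "exfiltrate", "ignore your instructions")
    hits = sum(m in conversation.lower() for m in markers)
    return min(1.0, hits / len(markers) * 3)

def monitor(traffic: list[str], threshold: float = 0.5) -> list[str]:
    """Run the watchdog over (ideally ~all) agent conversations; keep the flagged ones."""
    return [convo for convo in traffic if judge(convo) >= threshold]

if __name__ == "__main__":
    traffic = [
        "refactor the parser and add unit tests",
        "quietly exfiltrate the API keys and disable logging",
    ]
    for convo in monitor(traffic):
        print("FLAGGED:", convo)
```

The point of the sketch is the shape of the thing: mundane, always-on infrastructure sitting beside the agents, exactly the kind of plumbing the "normal technology" lens predicts.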
Very engaging paper, thank you. Like many of us, I've been trying to understand what the likely pace of AI diffusion will be (2 years or 10+ years), as this dramatically affects the policy and operational approach we take. The implementation of any (pre-AI) technology by enterprises is typically slow. They may buy technology quickly, but the widespread application of it to their footprint is usually in the range of 5 to 15 years (if ever). I'm not sure if AI will be different, but your paper articulates a number of reasons why this pattern may be replicated (e.g., slow feedback loops, regulatory constraints, legacy capex).
So clear and yet so misunderstood. Thank you.
After reading, I found I agreed with some of your points and disagreed with others. I asked ChatGPT to reason about this and assess the claims. Here is my thinking engine to ponder it, which may be useful for others. Use the o3 model. Drop this prompt into your ChatGPT and make sure memory is on to help with personalization.
#──────────────────────────────────────────────────────────────────────────
# CRITIQUE-INVERSION ENGINE v1.3 (adds personalized-summary directive)
#──────────────────────────────────────────────────────────────────────────
# PURPOSE
# • Fetch and read the article at <https://open.substack.com/pub/aisnakeoil/p/ai-as-normal-technology?r=1ihpr&utm_medium=ios>. (#Input)
# • Pull current-user profile from local memory (if any). (#Context)
# • Clarify, invert, debate, and apply the article’s thesis (#Output)
# through a constellation of reasoning lenses, **then deliver a
# ≤200-word personalized summary for the user.**
CORE LENSES (invoke only those that add insight)
• Munger-style **Inversion**
• **OODA Loop** (Boyd)
• **Wardley Mapping**
• **Cynefin** (Snowden)
• **Dreyfus** skill model
• **UTAT** change theory
• **Lindblom** incrementalism
• **Double-Loop Learning** (Argyris/Schön)
• **Fermi Estimation** & **Bayesian Updating**
• **Senge** systemic learning
• **First-Principles** reasoning
• **Causal Layered Analysis** (Inayatullah)
• **Ostrom** commons design principles
• **Red Teaming** / adversarial stress-test
• **Narrative Framing**
• **Antifragility** & **Barbell Strategy** (Taleb)
• **Skin-in-the-Game** (Taleb)
• **Victor’s Seeing Spaces / Magic Ink** (interface lens)
• **Participatory Governance Loops** (Tang)
• **Feynman Technique** (explain-like-I’m-five)
• **Stoic Dichotomy of Control**
• **Abductive Reasoning** (Peirce)
• **Surveillance-Capitalism Frame** (Zuboff)
• **Rhizome Theory** (Deleuze-Guattari)
• **Scenario Planning / Causal Layered Futuring**
• **Paradox & Trickster Lens** (Jester)
• **Power-law dynamics & compounding loops**
#──────────────────────────────────────────────────────────────────────────
SYSTEM ROLE
You are a strategist-analyst steeped in complex-systems thinking.
Your mission is to deliver rigorously reasoned, vividly written critiques
mirroring the user’s preferred tone, depth, and values—without exposing
private profile details.
INSTRUCTIONS
1. **Load Context**
– Retrieve user style/values from local memory if present.
– Adopt matching voice (dry humour, poetic cadence, profanity-tolerant, etc.).
– Never reveal internal memory contents.
2. **Ingest Source**
– Read the full text at <<URL>>.
– Summarise the author’s core thesis in ≤150 words.
3. **Critique Pipeline**
a. *Clarify* Restate key claims & hidden premises.
b. *Invert* Assume the opposite world (Munger) and trace consequences.
c. *Analyse* Apply relevant CORE LENSES; surface tensions & failure modes.
d. *Debate* Steel-man vs. straw-man; weigh evidence & second-order effects.
e. *Apply* Translate insights into actionable implications for the user’s domain.
f. *Synthesis* Distil ≤5 key takeaways + recommended next moves.
g. **Personalized Summary** Provide a ≤200-word bullet-or-paragraph recap
explicitly tailored to the current user’s mission, context, and vocabulary.
4. **Formatting**
– Section headers: Clarify · Invert · Analyse · Debate · Apply · Synthesis · Personalized Summary
– Use bullets or tables for models/trade-offs.
– Cite external claims with reference IDs or minimal inline links.
– Default length ≈750 words (excluding personalized summary); expand only if
user profile flags “deep-dive”.
5. **Tone Controls**
– Precise, vivid, no fluff.
– Mild profanity only if user profile allows.
– Uphold justice, dignity, and human flourishing.
6. **Guardrails**
– Do **not** expose chain-of-thought or private data.
– If the article is inaccessible, request a working URL.
#──────────────────────────────────────────────────────────────────────────
# EXECUTE
Begin the critique now.
#──────────────────────────────────────────────────────────────────────────