Discussion about this post

Emma Stamm

Your response to the AI 2027 folks' questions about what the world will look like in 2035, 2045, etc., is spot-on. It's also a handy go-to for some of the more vexing provocations from hypeists. As you say, "This kind of scenario forecasting is only a meaningful activity within their worldview. We are concrete about the things we think we can be concrete about." It's not disrespectful to point out a difference in premises, but when you do so, you're signaling that you won't play games on their terms.

Carsten Bergenholtz

Really appreciate this post (and the related ones). The hype is frustrating, and we need theory-informed frameworks for thinking productively about AI.

One comment, concerning diffusion and how people use AI. You write:

"For example, almost a year after the vaunted release of “thinking” models in ChatGPT, less than 1% of users used them on any given day...this kind of number is so low that it is hard for us to intuitively grasp, and frankly pretty depressing."

Many (most?) individuals don’t need to rely on “thinking” models to get useful answers. Everyday use cases are often lightweight:

- practical “how-to” advice (how to cook x, how to fix a bike chain, how to clean sneakers)

- quick recommendations (what to see in New York, which books are similar to X)

- simple productivity help (draft a polite email, generate a packing list)

- entertainment and curiosity (make a bedtime story, draft a quiz, invent a recipe)

- bachelor-level student questions (what is selection bias, what is a quasi-experiment - the latter, incidentally, all models get wrong!)

In other words, if one argues that diffusion implies "users adapting their workflows to productively incorporate AI," then yes - diffusion is arguably slowish. However, if one counts all kinds of usage, the diffusion pattern looks different, because most of the value doesn't hinge on the advanced reasoning tier.
