Discussion about this post

Albrecht Zimmermann

I've always wondered whether the relative ease with which symbolic systems handle formalized settings comes from both being artificial: we invented those systems to abstract from reality, removing the noise and formulating general rules, and then we built computers (and formulated the algorithms they run) on top of those formalizations.

In the same way, applying machine learning to sports data is easier for e-sports than for physical sports: there's more data, actions are limited by the game's physics and constraints, and one has access to all the variables.

Yusuke Tanaka

This really resonated with me — especially the idea that Moravec’s paradox persists largely because of *selection effects* and storytelling convenience, not evidence.

What I found most useful is your point that prediction-focused thinking (what AI will do next) distracts us from the much harder and more important work: **how organizations actually absorb, resist, and reshape AI over time**. The electricity vs. steam analogy is spot-on.

I’ve been writing from a very applied angle, looking at AI adoption inside factories and legacy organizations, and I keep seeing the same pattern: capability headlines move fast, but institutional change moves painfully slowly, often in non-obvious ways.

Your framing helped me clarify why “AI reasoning vs. robotics” debates often miss the real bottleneck. I tried to explore this gap from the ground level in a recent piece, connecting diffusion delays with organizational incentives and failure modes. Happy to share if it’s of interest.

