7 Comments
Albrecht Zimmermann

I've always wondered if the relative ease with which symbolic systems deal with formalized settings is because both are artificial: we invented those systems to abstract from reality, removing the noise, formulating general rules, etc., and then we built computers (and formulated the algorithms that they run) based on those formalizations.

In the same way, using machine learning on sports data is easier for e-sports than for offline sports: there's more data, actions are limited by the game's physics and constraints, and one has access to all the variables.

Yusuke Tanaka

This really resonated with me — especially the idea that Moravec’s paradox persists largely because of *selection effects* and storytelling convenience, not evidence.

What I found most useful is your point that prediction-focused thinking (what AI will do next) distracts us from the much harder and more important work: **how organizations actually absorb, resist, and reshape AI over time**. The electricity vs. steam analogy is spot-on.

I’ve been writing from a very applied angle — looking at AI adoption inside factories and legacy organizations — and I keep seeing the same pattern: capability headlines move fast, but institutional change moves painfully slowly, often in non-obvious ways.

Your framing helped me clarify why “AI reasoning vs. robotics” debates often miss the real bottleneck. I tried to explore this gap from the ground level in a recent piece, connecting diffusion delays with organizational incentives and failure modes. Happy to share if it’s of interest.

Peter

You miss the point of Moravec's Paradox. When I first saw your video, skimming too fast, I thought you had misunderstood the definition; that is how confused the material that followed seemed to me.

Moravec's Paradox compares the difficulty of dealing with physical reality to that of dealing with the world of abstractions. Seen through the lens of information theory, there is no paradox. The state of an abstracted chessboard can be reduced to 32 bytes or less; exabytes do not suffice to fully describe the physical state of a flower. The difficulties in managing the world in which the latter exists should be obvious to anyone with the appropriate education.
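
A back-of-the-envelope sketch of that 32-byte figure (Python, purely illustrative; the encoding below is my own assumption, and side-to-move, castling rights and the like would need a few extra bits):

```python
# Illustrative only: packing a chess position into 32 bytes.
# Each square holds one of 13 states (empty, or 6 piece types x 2 colours),
# which fits in 4 bits; 64 squares x 4 bits = 256 bits = 32 bytes.
PIECES = [".", "P", "N", "B", "R", "Q", "K", "p", "n", "b", "r", "q", "k"]
CODE = {p: i for i, p in enumerate(PIECES)}

def pack(board):
    """board: list of 64 one-character piece symbols -> 32-byte blob."""
    out = bytearray(32)
    for i, sym in enumerate(board):
        out[i // 2] |= CODE[sym] << (4 if i % 2 == 0 else 0)
    return bytes(out)

def unpack(blob):
    """32-byte blob -> list of 64 piece symbols."""
    board = []
    for byte in blob:
        board.append(PIECES[byte >> 4])
        board.append(PIECES[byte & 0x0F])
    return board

# Starting position, rank by rank from white's side.
start = list("RNBQKBNR" + "P" * 8 + "." * 32 + "p" * 8 + "rnbqkbnr")
blob = pack(start)
assert len(blob) == 32 and unpack(blob) == start
```

The flower, by contrast, admits no comparably small sufficient description; that asymmetry is the whole point.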

But the lens we are used to seeing through is not that of information theory. We come with the understanding that the village idiot can be expected to identify safely edible items in his surroundings, but will be lucky to play a chess game without making illegal moves. This is because we come from a long line of forebears equipped by evolution specifically to deal with this fabulously information-rich environment, and these capabilities seem natural to us.

Hence the "paradox".

I find your piece lacking in comprehension of the complexity of reality, and of the consequences that complexity has for any machine or program that is taken away from neat abstractions and forced to deal with it. Robotics is hard because reality is messy. AI is much worse at perception than a one-year-old even now, whether or not we are proud of our progress in computer vision. We are not going to see this upended by some breakthrough overnight: progress will come, sometimes in grinds, sometimes in leaps, but overtaking biological entities in their capacity to deal with the mess will be a long, drawn-out process.

gregvp

You are conflating at least three different things into 'hard': difficult to learn/requiring much experience to do well, laborious/time-consuming, and boring/tiresome/unpleasant/physiologically damaging. In that order: reading ancient Sumerian, identifying and cataloguing archaeological finds, firefighting.

Your examples are also poor. Soccer is difficult to learn, and therefore hard for most humans. Web search is easy for humans to do, conditional on already knowing how to read and write and not being time-limited. It is merely boring and time-consuming.

"Robotics is hard" to me means laborious and tiresome. It is mechanical engineering, and feedback control systems, that have to be developed and improved over time, requiring understanding of material properties, energies, and which parts of the system need close supervision and which don't. Or instead you can have a robot that works for fifteen minutes and then breaks, suitable for sensationalist videos on YouTube. Robot reliability is hard in this sense but there is no magic there.

Absurd Person Singular

> 'Instead of relying on prediction, we should get better at shaping and adapting to the tech that we actually know for sure is coming.'

But isn't that also claiming to know the future for sure? How would we know which tech is for sure coming? Even six months ago, people were giving up on driverless cars and consolidating investments. And living where I live (nowhere close to California), driverless cars still seem impossible given my road conditions and how people navigate them. How could we as a society know that the tech will adapt to our situation faster than we can adapt to understanding it?

Nice piece, and the comments section is fun too!

Sheila Hayman

I'm afraid that you're looking in completely the wrong place for an understanding of the paradox. It's not surprising that AI people don't support it; they don't understand it either. You should be looking at evolutionary biology and neuroscience, including computational neuroscience. For vindication and explanation by top researchers, start with, for example, the work of Christof Koch, former President and Chief Scientist of the Allen Institute for Brain Science, with 'Being You' by Anil Seth, or even Yann LeCun's famous pronouncement that the most sophisticated AI is less intelligent than a housecat.

Humans (and house cats, and spiders) have evolved over millions of years in intelligent bodies with exquisitely complicated sensoria, to interact with the physical world, understand it, predict how it will behave, build models of it, and know what to do in it - with no calculation involved. When I carry a cup of coffee downstairs in the dark without spilling a drop, I'm not calculating the physics involved - if I tried, I'd probably spill it.

When a computer tries to understand 'cat', it has to be shown thousands of examples of images of cats in different situations - and even then, if it sees an image of a cat up a tree, or in a box, or obscured by a dog, it may not recognise it. The famous human toddler described by Moravec can play with two or three cats and then recognise any cat as a cat, for life. We can infer general rules - 'catness', gravity - from individual examples, through experience. No machine can do that.

Even the neuroscience of touch is extraordinary: the same tiny square millimetre of skin holds different sensors for a slap, a caress, a burn or a tickle, whose signals are sent by different pathways to be processed and responded to by different areas of the brain - instantaneously.

Okay, one more illustration. Should Musk ever succeed in building humanoid robots that don't overheat or require constant recharging, and that achieve even a pallid imitation of a human hand and arm's versatility and mobility, the hand will be no use to the robot unless it also understands what it's holding. A small round yellow thing might be a tennis ball, a muffin or a day-old chick - all requiring radically different responses.

We know so much that we don't even realise we know - and I haven't even got to the extended mind, which can swallow a motorbike, a fork or a grand piano into its model of the body, as though it were part of us. For more, my Substack about those robots, 'What a Piece of Work', should help you understand. But please, have a bit of humility; I know it's hard for techbros....

Willem

Aren't you overthinking the paradox? The claim was never meant to be exhaustive, nor to refer to the frontier of human knowledge. It's just the irony that things we consider hard, very much **including** chess and calculation, are trivial for a simple computer, while trivial tasks like loading the dishwasher or walking on an uneven path are hard for computers.