I've always wondered if the relative ease symbolic systems have with formalized settings is because both are artificial: we invented those systems to abstract from reality, removing the noise, formulating general rules, and so on, and then we built computers (and formulated the algorithms that they run) based on those formalizations.
In the same way, using machine learning on sports data is easier for e-sports than for offline sports: there's more data, actions are limited by the game's physics and constraints, and one has access to all variables.
This really resonated with me — especially the idea that Moravec’s paradox persists largely because of *selection effects* and storytelling convenience, not evidence.
What I found most useful is your point that prediction-focused thinking (what AI will do next) distracts us from the much harder and more important work: **how organizations actually absorb, resist, and reshape AI over time**. The electricity vs. steam analogy is spot-on.
I’ve been writing from a very applied angle — looking at AI adoption inside factories and legacy organizations — and I keep seeing the same pattern: capability headlines move fast, but institutional change moves painfully slowly, often in non-obvious ways.
Your framing helped me clarify why “AI reasoning vs. robotics” debates often miss the real bottleneck. I tried to explore this gap from the ground level in a recent piece, connecting diffusion delays with organizational incentives and failure modes. Happy to share if it’s of interest.
You miss the point of Moravec's Paradox. When I saw your video, skimming too fast, I thought you had misunderstood the definition; that's how confused the material that followed seemed to me.
Moravec's Paradox compares the difficulty of dealing with physical reality to the world of abstractions. Seen through the lens of information theory there is no paradox. The state of an abstracted chessboard can be reduced to 32 bytes or less; exabytes do not suffice to fully describe the physical state of a flower. The difficulties in managing the world in which the latter exists should be obvious to anyone with the appropriate education.
But the lens we are used to seeing through is not that of information theory. We come with the understanding that it is to be expected that the village idiot can identify safely edible items in his surroundings but will be lucky to play a chess game without making illegal moves. This is because we come from a long line of forebears equipped by evolution specially to deal with this fabulously information rich environment, and these capabilities seem natural to us.
Hence the "paradox".
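The 32-byte figure above can actually be sanity-checked in a few lines. Here is a toy sketch, assuming a nibble-per-square layout I've made up for illustration (side to move, castling and en passant rights would need a few extra bits):

```python
# Toy sanity check of the "32 bytes or less" claim.
# Encode piece placement with one 4-bit code per square:
# 0 = empty, 1-6 = white P N B R Q K, 7-12 = black p n b r q k.
# 13 states fit in a nibble, so 64 squares pack into exactly 32 bytes.

SYMBOLS = ".PNBRQKpnbrqk"
CODES = {sym: i for i, sym in enumerate(SYMBOLS)}

def encode(board: str) -> bytes:
    """Pack a 64-character board string, two squares per byte."""
    assert len(board) == 64
    nibbles = [CODES[sq] for sq in board]
    return bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, 64, 2))

def decode(packed: bytes) -> str:
    """Invert encode(): unpack two squares from each byte."""
    return "".join(SYMBOLS[b >> 4] + SYMBOLS[b & 0x0F] for b in packed)

# Starting position, listed rank 8 down to rank 1.
start = "rnbqkbnr" + "pppppppp" + "." * 32 + "PPPPPPPP" + "RNBQKBNR"
packed = encode(start)
print(len(packed))              # 32
print(decode(packed) == start)  # True
```

The flower, by contrast, has no such finite state vector, which is exactly the asymmetry being described.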
I find your piece lacking in a comprehension of the complexity of reality, and of the consequences that complexity brings for any machine or program that is taken away from neat abstractions and forced to deal with it. Robotics is hard because reality is messy. AI is much worse at perception than a one-year-old even now, whether or not we are proud of our progress in computer vision. We are not going to see this upended by some breakthrough overnight: progress will come, sometimes in grinds, sometimes in leaps, but overtaking biological entities in their capacity to deal with the mess will be a long drawn-out process.
You are conflating at least three different things into 'hard': difficult to learn / requiring much experience to do well; laborious / time-consuming; and boring / tiresome / unpleasant / causing physiological damage. Respectively: reading ancient Sumerian, identifying and cataloguing archaeological finds, firefighting.
Your examples are also poor. Soccer is difficult to learn, and therefore hard, for most humans. Web search is easy for humans to do, conditional on already knowing how to read and write and not being time-limited; it is merely boring and time-consuming.
"Robotics is hard" to me means laborious and tiresome. It is the mechanical engineering and the feedback control systems that have to be developed and improved over time, requiring an understanding of material properties, energies, and which parts of the system need close supervision and which don't. Or instead you can have a robot that works for fifteen minutes and then breaks, suitable for sensationalist videos on YouTube. Robot reliability is hard in this sense, but there is no magic there.
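The feedback-control point is easy to make concrete. Below is a minimal discrete PID loop driving a toy first-order plant toward a setpoint; the gains, time constant, and plant model are all invented for illustration, not taken from any real robot:

```python
# Minimal discrete PID loop on a toy first-order plant.
# All constants here are illustrative, not tuned for real hardware.

def run_pid(setpoint, steps=3000, dt=0.01, kp=2.0, ki=1.0, kd=0.0):
    state = 0.0              # plant output, e.g. a measured joint velocity
    integral = 0.0
    prev_error = setpoint - state
    for _ in range(steps):
        error = setpoint - state
        integral += error * dt
        derivative = (error - prev_error) / dt
        control = kp * error + ki * integral + kd * derivative
        prev_error = error
        # First-order plant: output lags the control input
        # with a 0.1 s time constant.
        state += (control - state) * dt / 0.1
        # A real robot adds the messy parts right here: actuator
        # saturation, sensor noise, backlash, thermal drift, wear.
    return state

print(abs(run_pid(1.0) - 1.0) < 0.05)  # True: settles near the setpoint
```

The loop itself is trivial; the laborious part the comment describes is everything hidden behind that last inline comment, which has to be characterised and compensated for, machine by machine.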
I think you've misunderstood what Moravec's paradox was all about.
No one is saying that *everything* that is hard for computers is easy for people or vice versa. (Well, okay, people say that, but that's not what they mean.) They're saying something like "when you look at the set of things where human and computer capabilities diverge, there are many cases that are *surprising*".
In modern language, Moravec's paradox considers the difference between what cognitive science calls "System 1" problem solving and "System 2" problem solving. It's saying: "Human-level performance on System 1 problems requires millions of times more compute than human-level performance on System 2 problems." And (here's the "paradox" part) that was surprising in 1980.
(1)
You have to consider the context in which the paradox was first discovered.
Cognitivism (that is, the set of ideas coming out of the "cognitive revolution" of 1956) suggested that human problem solving consisted of the manipulation of discrete mental objects like "plans", "goals", and "known facts". One part of this is "symbolic AI" or "GOFAI": hand-coded knowledge, reasoning-as-search, etc. However, by the 80s people were discovering that symbolic AI was unlikely to "scale up" to general intelligence, and Moravec's paradox was one way to talk about exactly where it was failing.
Cognitivism, at least in its naive form, is dead. Modern consensus in cognitive science divides human problem solving into two types: System 1 and System 2. (I'm guessing you're familiar with these.) These are *exactly* the two classes of problems addressed by Moravec's paradox. It's asking: isn't it odd that achieving human-level performance on these two types of problems is so different?
Cognitivism and GOFAI believed that System 1 problems could, eventually, be solved using scaled-up System 2 methods. Moravec's paradox was part of the realization that GOFAI was wrong: that System 1 problems existed and GOFAI couldn't solve them but people could. Infants and dogs could.
No one in AI took (what would one day be called) "System 1" seriously in 1970. I'm thinking here of Hubert Dreyfus and Frank Rosenblatt, who, from opposite directions, tried to get AI to address System 1 problems. Both were humiliated and ridiculed, which shows you how strong the consensus was after the "cognitive revolution". It was intellectual warfare.
These debates are long settled, so the paradox is not nearly as relevant today.
(2)
You say that "it's never been empirically tested". I disagree. The evidence is the historic record. Computers exhibited human level performance on many "System 2" problems decades before human level performance on "System 1" problems. The computers that achieved human level performance on System 1 problems were millions of times faster than the computers that solved System 2 problems.
1956: formal logical proof; 1966: symbolic integration. Versus: 1989: reliable character recognition; around 2010: reliable facial recognition. (Forgive me for not fact-checking two dozen examples to prove my point, but you get the idea, and I'm sure you agree.)
(3)
To understand the "paradox" part of the paradox, you have to assume that "Reason" (that is, logic and mathematics, sound and valid arguments) is what makes man different from the animals, along with the assumption that mankind is "higher" than animals. People holding that view would laugh you out of the room if you suggested that "recognizing a face" requires "intelligence", noting that a dog can do it. If a dog can do it, it's not intelligence, because intelligence is what separates man from the animals.
That's the premise of the paradox, the thing that is contradicted by the evidence. It's an assumption that goes back to Plato, was essential to Renaissance Humanism, the Scientific Revolution, and the Age of Enlightenment, and was assumed by billions of people in the mid-twentieth century. It still is.
Moravec's paradox suggests that "reason" is not the transcendent spiritual quality we always assumed it was. It can't tell you which beings are "higher" or "lower" on a "chain of being". Maybe we are just ordinary animals. Maybe "intelligent" aliens are just animals from space. Maybe so-called "superintelligent" machines won't be "better" than us in any important way, certainly not infallible and omniscient. At best, they will be guessing machines, and they will always make mistakes, just like we do.
When your assumptions are contradicted, that's a paradox. Of course, if you don't make those assumptions in the first place, then it's not a paradox or, for that matter, even particularly interesting.
Charles Gillingham
Predicting the future is new and exciting, and doing the work of integrating in the present and near term sounds like a slog. I agree though that it is more impactful for our society to be doing this legwork. What do you think makes people pay attention to that part instead of just the next capability prediction?
Isn’t there some David Ricardo logic here, in that what is “easy” and what is “hard” is always relative? So isn't the paradox inevitably circular logic?
>>'Instead of relying on prediction, we should get better at shaping and adapting to the tech that we actually know for sure is coming.'
But isn't that also claiming to know the future for sure? How would we know which tech is coming for sure? Until six months ago, people were giving up on driverless cars and consolidating investments. And living where I live (nowhere close to California), it still seems impossible given my road conditions and how people navigate them. How could we as a society know whether the tech will adapt to our situation faster than we adapt to understanding it?
Nice piece, the comments' section is fun too!
I'm afraid that you're looking in completely the wrong place for an understanding of the paradox. It's not surprising that AI people don't support it; they don't understand it either. You should be looking at evolutionary biology and neuroscience, including computational neuroscience. For vindication and explanation by top researchers, please start with, for example, the work of Christof Koch, former President and Chief Scientist of the Allen Institute for Brain Science, with 'Being You' by Anil Seth, or even Yann LeCun's famous pronouncement that the most sophisticated AI is less intelligent than a housecat.

Humans (and house cats, and spiders) have evolved over millions of years in intelligent bodies with exquisitely complicated sensoria, to interact with the physical world, understand it, predict how it will behave, build models of it, and know what to do in it - with no calculation involved. When I carry a cup of coffee downstairs in the dark without spilling a drop, I'm not calculating the physics involved; if I tried, I'd probably spill it. When a computer tries to understand 'cat', it has to be shown thousands of examples of images of cats in different situations - and even then, if it sees an image of a cat up a tree, or in a box, or obscured by a dog, it may not recognise it. The famous human toddler described by Moravec can play with two or three cats and then recognise any cat as a cat, for life. We can infer general rules - 'catness', gravity - from individual examples, through experience. No machine can do that.

Even the neuroscience of touch is extraordinary: the same tiny square millimetre of skin holds different sensors for a slap, a caress, a burn or a tickle, whose signals are then sent by different pathways to be processed and responded to by different areas of the brain - instantaneously. Okay, one more illustration.
Should Musk ever succeed in building humanoid robots that don't overheat or require constant recharging, and that achieve even a pallid imitation of a human hand and arm's versatility and mobility, the hand will be no use to the robot unless it also understands what it's holding. A small round yellow thing might be a tennis ball, a muffin or a day-old chick - all requiring radically different responses. We know so much that we don't even realise we know - and I haven't even got to the extended mind, which can swallow a motorbike, a fork or a grand piano into its model of the body, as though it were part of us. For more, my Substack about those robots, 'What a Piece of Work', should help you understand. But please, have a bit of humility; I know it's hard for techbros...
Aren't you overthinking the paradox? The claim was never meant to be exhaustive, nor to refer to the frontier of human knowledge. It's just the irony that things we consider hard, very much **including** chess and calculation, are trivial for a simple computer, while trivial tasks like filling the dishwasher or walking on an uneven path are hard for computers.