Last updated on October 1, 2025
Consequentialism, the moral theory that judges actions by their outcomes, has long been attractive for its simplicity. Do what produces the best results, and avoid what produces the worst. It seems like common sense. But to follow consequentialism, we must be able to predict the future.
However, philosophers since Descartes have worried about whether we can trust our perceptions at all. In one of his most famous thought experiments, Descartes imagined an “evil demon” deceiving our senses, making the world seem utterly different from how it really is. If that were true, humans could not make accurate predictions, because the world we observe – the basis for those predictions – would be fundamentally a lie.
Modern variations only deepen this doubt about consequentialism. Physicists at the Higgs Centre for Theoretical Physics, for instance, point to the “Boltzmann Brain” hypothesis. In an infinite universe, it is statistically more likely for a disembodied brain, complete with false memories and imagined perceptions, to appear for a moment than for the entire universe to arise from a Big Bang. If that is the case, then memory and experience are not reliable guides for prediction.
Another such theory is Bertrand Russell’s “five-minute hypothesis,” which suggests that the universe could have come into existence only minutes ago, with false histories built in. If nothing empirical can disprove these scenarios, how can we know anything about the world, let alone predict its future?
Even if we set aside such extreme possibilities, the problem of induction remains. David Hume famously argued that all scientific reasoning rests on the assumption that the future will resemble the past. We observe that light has always bent when passing from air into glass, and we expect it always will. But this expectation is circular: we rely on past experience to justify the principle that past experience can predict the future. There is no logical guarantee. As philosopher Max Black put it, inferences from particular instances to general rules are always vulnerable to exceptions. If induction itself is unstable, then any moral theory built on predicting consequences rests on shaky ground.
That weakness makes consequentialism far less useful than it first appears. At best, it lets us look backward, judging past actions by the results they produced. But as the philosopher Shelly Kagan noted, it cannot reliably guide decisions in real time. We never know what the future will hold. Black swan events, improbable but transformative, can upend even the most careful calculations. A choice that seems good in the moment may lead to disastrous outcomes, and vice versa. Without certainty, consequentialism cannot give us firm rules for action, only post-hoc assessments.
And yet, consequentialism remains compelling to many. Policymakers and ordinary people alike want a moral framework that is grounded in results, not abstract principles. In a world where actions ripple outward with global effects — on climate, technology, and security — focusing on consequences feels practical. The problem is not the desire to weigh outcomes, but the confidence that we can know them in advance.
That is the paradox at the heart of consequentialism. It promises clarity by reducing morality to results, yet those results are precisely what we cannot know with certainty. As a way of evaluating the past, it may offer insight. But as a guide for living in an uncertain world, it leaves us with little more than guesswork.