Irrational decision or helpful evolutionary adaptation? A philosopher on the rationality wars behind how we judge human choices

People use cognitive shortcuts to make choices whose outcomes sometimes don’t serve their own interests or preferences. But calling these choices irrational might miss the big picture.

Author: Alejandro Hortal-Sánchez on Mar 24, 2026
Source: The Conversation
A classic example of a nudge is making healthy options easier to grab in a cafeteria. Maskot via Getty Images

Twelve-year-old Jaysen Carr died in July 2025. While he swam in Lake Murray, a reservoir a few miles from Columbia, South Carolina, Naegleria fowleri – a rare amoeba found in warm fresh water – entered through his nose, causing a rapidly fatal brain infection.

Each year in the United States, drowning causes roughly 4,500 deaths, while infections from brain-eating amoebas typically number only two or three. Yet the vividness of these rare deaths powerfully shapes how people perceive and respond to risk. After a 2025 amoeba-related death made headlines in Iowa, for example, open-water swimmers began questioning whether lakes were safe, even as health officials emphasized how rare such infections remain.

Is it irrational to avoid swimming in lakes on hot summer days? How rational is it to fear flying? How many people worry about contaminants in their drinking water yet never think twice about skipping sunscreen, despite skin cancer being the most common, and largely preventable, cancer in the United States?

These reactions raise a deeper question: What does it mean to call a response “rational” or “irrational”? These are the kinds of questions I explore in my research on behavioral public policy: How do the assumptions scientists make about human rationality shape the tools governments use to improve social welfare?

When mistakes aren’t really mistakes

Behavioral economists, following Daniel Kahneman, emphasize how heuristics – the mental shortcuts or rules of thumb people use to make quick decisions – produce systematic biases or predictable errors in judgment. From this perspective, these biases born from shortcuts lead people to make choices that do not serve their own interests or stated preferences.

Evolutionary psychologists such as Gerd Gigerenzer instead see those same shortcuts as adaptive responses to uncertainty. Rather than errors, they’re efficient strategies shaped by the environments in which human reasoning actually evolved.

These two perspectives disagree about what counts as rational – and about why that matters for policy.

How a care team frames the risks of a procedure affects a patient’s choice. Halfpoint Images/Moment via Getty Images

Consider a few familiar examples. Frame the same medical procedure as having a 90% survival rate rather than a 10% mortality rate and patients respond very differently. Set one option as the default – whether in organ donation, retirement savings or privacy settings – and most people stick with it simply because opting out takes effort.

From a behavioral economics perspective, these are clear cases of bias: judgments shaped by framing, vividness or inertia rather than careful deliberation.

From an evolutionary perspective, however, the picture changes. In complex environments with limited time, information and attention, relying on defaults or whatever feels most vivid or familiar can be an efficient way to decide without becoming overwhelmed. What looks like a mistake when judged against idealized models of rational choice may instead be a sensible response to real-world uncertainty.

This perspective helps explain why small changes in choice environments – nudges such as placing salad bars directly in cafeteria serving lines or listing vegetarian options first on menus – can significantly shift behavior without forcing anyone to choose differently. In other words, nudges work precisely because they align with, not fight against, the shortcuts people already use, making the desired behavior the path of least resistance.

Behavioral economists defend nudges as tools for correcting cognitive biases. Gigerenzer criticizes them as ethically problematic and argues that public policy should emphasize education over subtle choice manipulation.

Should policy correct or educate? This divide, called the “rationality wars,” reflects a deeper disagreement about human rationality itself.

If human rationality is seen as deeply flawed, nudges appear attractive because they make better decisions easier without demanding reflection.

If, instead, rationality is viewed as adaptive and teachable, policy should focus on strengthening people’s capacity to learn, adapt and decide for themselves.

Rationality isn’t just one thing

From bestselling books such as behavioral economist Dan Ariely’s “Predictably Irrational” to the worldwide expansion of behavioral “nudge” units in government, many contemporary developments suggest that people are poor decision-makers. Struggles with retirement savings, health, weight loss and environmental protection seem to confirm that view.

And yet, as a species, humans have been extraordinarily successful – adapting to diverse environments, building complex societies and accumulating knowledge across generations.

My claim is that this apparent contradiction dissolves once you recognize that rationality is not a single thing. Human beings can be both rational and irrational, depending on the scientific lens in use. From a behavioral economics perspective, many decisions appear biased and suboptimal. From an ecological or evolutionary perspective, those same decisions can look adaptive, efficient and sensible given the environments in which they are made.

At this point, the disagreement is not merely empirical but conceptual. People often assume that “rationality” names a single property of human behavior, when in fact its meaning depends on the scientific framework being applied.

Consider love. In neuroscience, love appears as patterns of brain activity and hormones. In psychology, it is studied through attachment and emotion. In sociology, it takes the form of social bonds and norms.

None of these accounts is wrong – but none captures love in full. I suggest rationality works in much the same way.

As with love, the lens you use to look at rationality may give you only part of the big picture. Alina Rudya/Bell Collective/DigitalVision via Getty Images

Multiple ways to consider a complex whole

The danger arises when one perspective is treated as the whole story. Reducing love entirely to brain chemistry, or rationality entirely to cognitive biases, treats a partial explanation as a complete one. Scientific disciplines illuminate different aspects of complex phenomena, but none has a monopoly on their meaning.

Forgetting this carries a cost: We risk drawing overly narrow conclusions – about human behavior, intelligence or public policy – by mistaking the limits of a single framework for the limits of human rationality itself.

Seen this way, fear of rare brain-eating amoebas, of flying, or of tap water is not simply a failure of reason. Such reactions may appear irrational under one standard yet reflect a form of rationality adapted to uncertainty, vivid impressions and limited information.

What ultimately matters is not labeling people as rational or irrational, but being explicit about which conception of rationality is at work – and why. That choice, in turn, shapes whether public policy aims to nudge behavior, educate citizens or redesign environments so that human reasoning can operate at its best.

Alejandro Hortal-Sánchez does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
