Learning to Fight Irrational Thinking
- Business schools can either amplify or counteract human bias, depending on whether they prioritize intellectual humility, dissent, and evidence-based thinking through their leadership and curricula.
- As AI systems shape more business decisions, students must learn to recognize assumptions and blind spots that could lead to harmful algorithmic outputs.
- Institutional pressures often reinforce conformity and irrationality in academia. However, if schools can overcome these pressures effectively, they will be positioned to prepare leaders to challenge norms and redesign systems for long-term adaptability.
As artificial intelligence accelerates and as social systems grow more complex, our ability to think clearly may be eroding faster than we realize. Even as educators use AI to develop critical thinking skills, many people fear that today’s students might be susceptible to overconfidence in flawed data. Such overconfidence could turn the latest technological tools into harmful intellectual traps.
[Photo: Cezary Pietrasik]
This is a danger that entrepreneur and AI expert Cezary Pietrasik hopes humans will work to avoid. As co-owner of Synerise, a company that uses AI models to predict human behavior, Pietrasik is acutely aware of how biological, psychological, and societal forces can distort our judgment, even as we try to make decisions based on our best logic and research.
Pietrasik explores these forces in more detail in his new book, Homo Idioticus. In it, he explains that how we design our institutions and educate the next generation of leaders will determine whether we use AI in ways that compound our mistakes—or that make our decisions and systems better.
To explore these ideas further, we recently asked Pietrasik how he thought business schools could prepare students—and themselves—to counteract irrationality. The main objective, he stressed, is to find ways to enhance students’ critical thinking, make them open to new possibilities, and augment their creativity. After all, while human bias might be inevitable, how we act on that bias doesn’t have to be.
In Homo Idioticus, you explore the contradiction between human intelligence and persistent irrationality. How do you see this paradox playing out in today’s business leadership and management practices?
One paradox I see is that we place too much value on authoritative, aggressive, and powerful leaders, even though we might say we don’t. For example, we valued Steve Jobs, who neglected his family and didn’t want to share his equity with employees, but not Steve Wozniak, who actually built Apple’s products. It was Wozniak who quietly gave grants from his own equity to other employees and who was friendly and empathetic.
Why do you think we value more authoritative leadership traits?
I think it’s largely because of our history. In our hunter-gatherer past, we expected our leaders to be the ones to grab the weapons and face the animals or enemy first. Physical strength, power, and aggression were assets. They aren’t anymore, but we are still wired to respect aggression and power.
What are some of the most common cognitive biases you’ve seen affect decision-making in corporate and institutional settings?
Two of the most pervasive are availability bias and confirmation bias. Availability bias manifests when leaders mimic actions they’ve recently seen in other organizations—often mistaking visibility for viability. Confirmation bias drives decision-makers to selectively interpret information that aligns with their prior beliefs.
We see this in the real world when executive teams invest in failing projects due to overconfidence in a past “winning formula,” ignoring contrary data.
For example, at an investment firm where I used to work, one of the managing directors refused to support an investment in a European data center business, because he thought that “nobody will win with Amazon long term.” The business turned out to be a fantastic investment—for somebody else. The firm didn’t invest because of confirmation bias.
In your book, you argue that social systems often reinforce irrationality. What does this look like in the real world?
Institutions are structured around reward systems that prioritize compliance over questioning, status over substance. In education, for example, students learn to memorize rather than challenge ideas. Bureaucracies value procedure over outcomes. Media, educational frameworks, and even democratic systems often reinforce groupthink, discouraging the intellectual risk-taking needed for rational progress.
Have you seen leaders who do a good job mitigating bias and irrationality?
Yes—typically, these leaders are unusually self-aware and foster cultures of dissent. They invite disconfirmation, reward candor, and run structured decision-making processes.
At the investment firm I mentioned, there was a culture of allowing junior people to speak first. They felt free to express their true opinions without knowing what their bosses would say.
At Synerise, we created a culture that encourages conflict. People are encouraged to oppose the views of others if that’s what they truly believe, as long as they do so in a civilized way without attacking the other person or using aggressive words.
What structural flaw do you think most hinders adaptability and creativity in business schools?
One flaw is that business schools follow traditional patterns and try to adapt to fit pre-defined guidelines. Even the application process fosters the idea that applicants must fit into one of a few stereotypes and tell stories that they think admissions offices want to hear. But real-world ambiguity and irrationality rarely fit into neat analytical boxes—and business education often still assumes that they do.
How do external forces like rankings and funding models reinforce irrational decision-making in schools?
Systems such as rankings tend to reward reputation more than innovation. They often push institutions toward box-checking rather than transformational education. These pressures lead to institutional isomorphism—schools imitating one another to maintain status, even if it means sacrificing relevance. To fight this sort of irrational decision-making, business school deans must protect experimental programs and value intellectual bravery over superficial prestige metrics.
Let’s move to the topic of AI. What worries you most about how human cognitive biases are being embedded into AI systems?
My deepest concern is the automation of human error at scale. If biased data or flawed logic is encoded into algorithms, AI will not just replicate human irrationality—it will amplify it invisibly. The illusion of objectivity makes AI even more dangerous. Without critical oversight, systems built to optimize our processes might unknowingly codify discrimination or strategic blind spots.
How can we use AI systems in ways that help us avoid that negative outcome?
We can ask AI systems to show us analyses and conclusions based on pure data. In fact, the technology already has the ability to do this, helping us overcome our biases. Furthermore, the most sophisticated systems have an explainability feature as well. We can use this feature to ask AI why it recommends a particular decision, based on the most granular data.
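As a minimal illustration of what such an explanation can look like, consider the sketch below. It assumes a simple linear scoring model; the feature names, weights, and customer values are hypothetical, and each feature’s contribution is just weight times value, ranked so a decision-maker can see what is driving the recommendation.

```python
# Minimal sketch of asking a model "why": for a linear scoring model,
# each feature's contribution to the recommendation is weight * value.
# All feature names and numbers below are hypothetical.

weights = {"churn_risk": -2.0, "purchase_frequency": 1.5, "support_tickets": -0.8}
customer = {"churn_risk": 0.7, "purchase_frequency": 2.0, "support_tickets": 3.0}

contributions = {f: weights[f] * customer[f] for f in weights}
score = sum(contributions.values())

print(f"Recommendation score: {score:+.2f}")
print("Why? Feature contributions, largest first:")
for feature, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature:>20}: {value:+.2f}")
```

Production systems rely on richer attribution methods, such as SHAP values or natural-language rationales, but the principle is the same: the explanation decomposes an output into pieces a human can inspect and challenge.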
How should business schools prepare students to critically evaluate AI-driven insights?
Students need training in algorithmic literacy and critical thinking. They must understand how the models are trained, what the models’ blind spots are, and when not to trust them. Business schools should include in their courses case studies that highlight AI failures, ethical dilemmas, and bias audits. In fact, teaching students to ask, “What’s missing from this model?” is more important than simply teaching them how to use AI.
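To ground the bias-audit idea, here is a hedged sketch of a first-pass check a class exercise might run: compute a model’s approval rates by group and the disparate-impact ratio (the “four-fifths rule” is a common screening threshold). The decision data below is invented for illustration.

```python
# Toy bias audit: compare a model's approval rates across groups and
# apply the "four-fifths rule" as a first-pass disparate-impact screen.
# The decision data below is invented for illustration.

from collections import defaultdict

decisions = [  # (group, approved?)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
for g, r in sorted(rates.items()):
    print(f"Group {g}: approval rate {r:.0%}")

ratio = min(rates.values()) / max(rates.values())
flag = "  (below 0.8: investigate)" if ratio < 0.8 else ""
print(f"Disparate-impact ratio: {ratio:.2f}{flag}")
```

A failing ratio does not prove discrimination on its own, but it shows students exactly where to start asking what is missing from the model.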
What training methods would you incorporate to help students recognize bias and strengthen rational thinking?
A redesigned curriculum should include four primary elements. First, it should include cognitive debiasing drills. Second, it should require students to debate both sides of the issues, via “role reversal” exercises.
Third, it should immerse students in simulations in which they must make the best decisions they can with incomplete information. Finally, students should complete reflection assignments in which they examine the outcomes of their personal decisions, including their failures.
Throughout the curriculum, any group work should be assessed not just on outcomes, but also on decision quality metrics. One of the most important metrics should be whether students actively sought out disconfirming evidence.
What advice would you offer to faculty and deans trying to model better decision-making for their students?
Lead with intellectual humility. Acknowledge the limitations of your expertise. Show students how you revise opinions based on new evidence. Create forums for dissent. Reward curiosity, not compliance. Most importantly, make the invisible visible—bring to the surface the biases that shape institutional norms and decisions.