AI Decision-Making in Governance
As AI becomes more embedded in executive workflows, I’ve been thinking a lot about AI decision-making in governance. As part of that, I’ve been experimenting with a new tool called Critical Thinking Bot. What caught my attention wasn’t speed or output quality. It was the premise: it’s explicitly designed to s l o w you down.
Most LLMs are optimized for helpfulness. They’re fluent and efficient, and that’s valuable. But in leadership and governance, fluency isn’t the goal. Judgement is.
Decisions rarely fail because someone drafted a poor explanation. They fail because stakeholders didn’t test assumptions, no one surfaced incentives, tradeoffs went unexamined, or dissent didn’t have enough space in the room.
So, this tool behaves less like an overly agreeable assistant and more like an intellectual sparring partner. It questions your framing. It asks what evidence would change your mind. It presses on second-order effects. It introduces friction.
The developer, Shae O., is a Harvard PhD candidate who hosts the podcast Critical Thinking in the Age of AI. You can see the throughline. This isn’t about making AI more persuasive. It’s about making humans more rigorous.
AI isn’t going anywhere, and it will only become more integrated into how executives work. I’m increasingly interested in tools that strengthen thinking rather than replace it. For AI decision-making in governance, productive friction may be one of the most valuable features of all.
Photo by Milad Fakurian on Unsplash


