Ethics isn't about being a good person
On leadership & moral exhaustion
You’ve kept your legacy API running far beyond its natural life. A thousand of your smallest customers depend on it—nonprofits, community projects, one-person operations who built their entire website around your “easy to use” solution. These were the customers who trusted you first. This was the product that got this whole thing off the ground.
But it’s creaking under the weight of tech debt, and it’s becoming harder and harder to keep it secure and compliant. Shutting it down would save $1M annually and let your team focus on the product that serves 20,000 paying customers. Your engineers are burned out maintaining two systems. Your CTO wants the resources reallocated. The big customers don’t know the small ones exist.
You told the CTO the numbers don’t work, and you can’t justify keeping the service up. You told your team they wouldn’t have to worry about maintaining it much longer. You’ve told the small customers nothing.
Three months later, you’re still paralyzed. You haven’t been able to sleep, food tastes off, and there’s an Excel file open on your desktop with legacy API funding zeroed out. In another window, you’ve written your resignation email.
That’s not burnout. Burnout is when you’re out of gas. This is something else.
Call it moral exhaustion. Not because you’re “immoral”, but because your ethical operating system has hit an infinite loop, and you’re rapidly running out of fucks to give.
A distinction with a difference
Operational burnout comes from resource depletion. You ran out of time, energy, or recognition. Rest helps. Vacation helps. A sabbatical might fix it.
Moral exhaustion comes from actions that conflict with your values, your identity, or what you thought integrity meant. It shows up as disconnection, numbness toward decisions that should matter, resentment that feels irrational, guilt you can’t quite place.
Here’s the diagnostic: if your workload stayed constant but your alignment with your stated values improved dramatically, would you regain energy?
If yes, you’re dealing with moral injury—a term borrowed from trauma research that describes sustained violation or suppression of core moral beliefs under constraint, power, or decision pressure.
The organizational costs are measurable. Withdrawal. Disengagement masked as “professional distance.” Broken trust cascading through teams. High-integrity talent leaving first, quietly. This is what researchers call “ethical fading”: the slow drift where individuals and organizations stop recognizing moral dimensions of decisions entirely.
What moral injury looks like at work
You’re asked to defend decisions you privately believe are harmful.
You’re obliged to enforce policies you already told your team you disagreed with (RTO mandates, anyone?).
You feel complicit in values theater—the gap between what your company says it believes and what it actually rewards.
You’ve traded a piece of your integrity for job security, then justified it with “everyone does this” or “I’m protecting my team.”
These moments cluster around shutdown decisions, layoffs, policy pivots under pressure, and high-stakes announcements where you become the face of something you didn’t author and don’t endorse.
The pain isn’t that you made a bad decision. It’s that you justified that decision using ethical logic you don’t actually subscribe to.
Misalignment, not amorality
Leaders experiencing moral injury aren’t necessarily unethical. They’re navigating conflicts between multiple, incompatible ethical systems operating simultaneously in the same organization.
Some leaders evaluate ethics by outcomes—did this produce the best result for the most people? Others by proper form—did we follow the right process, honor our commitments, maintain consistency? Still others by character—did this decision reflect who we want to be?
These aren’t personality quirks. They’re fundamentally different ways of thinking about what makes something right or wrong.
Then there’s the question of moral standing. Who or what are you treating as mattering intrinsically? Individual people? The relationships between them? The community or organization as a collective whole? Your CFO might treat the company-as-entity as the patient that needs protecting. Your Chief People Officer might center the preservation and health of human relationships. Both are reasoning ethically—but with different geometries of care.
Then there’s scope of obligation. Does this bind just you personally? Anyone in your role? Anyone in this situation? Everyone universally?
Finally, why are we even bound in the first place? Sometimes people believe ethics binds because it follows logically from first principles. Some believe it’s just built into the nature of reality. Sometimes an authority—board, law, professional duty—simply commands it. And sometimes the only justification is, “it works, doesn’t it?”
Most leaders don’t realize they’re switching between these frames constantly—or that their colleagues may be operating in entirely different ethical universes.
The pattern underneath
Here’s the source of the mess: organizations don’t “pick” one ethical system. They layer multiple incompatible ones, then leave leaders to resolve the conflicts quietly, personally, invisibly.
You optimize for outcomes this quarter (maximize shareholder return) while claiming to honor duty-bound principles (we treat people with dignity). You enforce universal policies (everyone follows the same rules) while making situational exceptions (but this customer is strategic). You appeal to authority (compliance says so) while justifying with pragmatism (it’s just how things work).
The misalignments cluster in predictable zones:
Incentives vs stated beliefs (quota structures vs customer care claims)
Team norms vs company values (how we actually work vs the handbook)
Short-term performance vs long-term responsibility (this quarter vs this decade)
Most leaders resolve these conflicts by choosing one system and muting the others.
The exhaustion comes from the muting, not the choice.
When tools take over
Delegation to machines makes it worse. You’ve handed moral authority to systems that don’t experience guilt.
Spreadsheets decide who gets laid off. Algorithms decide who gets promoted. Dashboards decide what’s “working.” Performance review templates decide what counts as “good.”
These tools reframe decisions into their own logic. The layoff becomes a cost-optimization problem, stripped of questions about dignity or community. The promotion becomes a metric-comparison exercise, emptied of character or potential. You’d think the tool could insulate you from the emotional impact of what you’re doing—a moral shield to hide behind. But it doesn’t work like that.
Tenbrunsel & Messick identified this kind of self-deception as “ethical fading”: when decisions get reframed as “just business” or “just numbers,” leaders stop perceiving moral dimensions even when they’re central. It’s automatic under time pressure. In the moment, you stop seeing the decision as having ethical weight.
Once you see the pattern, you see it everywhere.
Values capture: Stated values become PR surfaces rather than operating principles. The language stays. The behavior changes. No one announces the shift, so misalignment becomes endemic. The company claims “people-first” while systematically rewarding managers who drive attrition.
Accountability diffusion: Responsibility spreads so thin that no one feels ethically agentic. “I didn’t make that call” becomes a reflex, even for people with decision-making power. Everyone’s involved, no one’s accountable. You’ve built a system where moral agency is structurally impossible.
Emotional mediation: The tools you use to make decisions also shape how you feel about them. When everything runs through dashboards, scorecards, and templates, the affective texture of choice changes. You stop feeling the weight because the interface doesn’t transmit it.
But these aren’t failures of character. They’re patterns that emerge when you delegate authority to systems that weren’t designed to carry moral complexity.
Debugging the infinite loop
Here’s where it gets interesting. I’ve been mapping these conflicts for a while, and there’s a structure underneath the chaos. This structure provides a language for the mismatches that drive ethical confusion.
So let’s try an experiment. Pick one decision that still bothers you—not because it was wrong, but because it lingers. The kind that surfaces when you’re trying to fall asleep.
Run it through these diagnostic axes:
What made it “right” or “wrong”? Did you evaluate based on what would happen (outcomes and consequences), what kind of person or organization it makes you (character and integrity), or whether it conformed to proper form (the right process, duty, or rule)? If you justified with one lens but feel compromised by another, that’s your first conflict.
Who or what had moral standing? Did you treat individual people as what mattered intrinsically? The relationships between them? The community or organization as a collective whole? Or some larger unified system? If your messaging claimed one (“we care about each employee”) but your structure served another (“the company must survive”), you operated in two ethical geometries simultaneously.
Who was obligated? Was this binding on you personally, on anyone in your role, on anyone in this situation, or on everyone universally? If you made a decision that felt acceptable as “what someone in my position does” but violated what you believe “anyone should do,” you’ve hit a scope conflict.
Why did it bind you? Because it follows logically from first principles? Because it’s built into the nature of reality or your role? Because an authority (board, law, duty) commanded it? Or because you’ve observed it works in practice? If you justified with pragmatism (“this is what works”) but feel violated at the level of basic axiomatic reality (“but this just isn’t right”), that’s the gap between your stated and felt justifications.
Don’t try to resolve these yet. Just make them visible.
Going back to our API-sunsetting example: you justified keeping it running by appeals to shared character (“we don’t abandon the people who believed in us first”), but the outcome logic says shut it down (“$1M a year and team health vs. a thousand users who pay almost nothing”).
You centered individuals and relationships (“these specific people trusted us”) but your role obligation centers the collective (“the company itself”).
You feel personally bound by the promise (“we said ‘easy to use’ and they built on that”) but your role sometimes demands breaking promises (“it’s on me to make the hard decisions”).
You want to believe loyalty is axiomatic (“you just don’t betray early believers”) but you justified it to the CTO with pure pragmatism (“the numbers don’t work”).
Four criteria, four conflicts, one decision. That’s not an ethical dilemma, it’s like... a quadrilemma. You might feel like a moral monster for failing to navigate this gracefully, but the whole thing was a setup—a structural impossibility dressed up as leadership.
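If you want to make the infinite-loop metaphor literal, the four-axis audit can be sketched as a toy script. All names here are illustrative, not a real framework: it simply flags a conflict on any axis where the justification you claimed differs from the value you actually feel.

```python
from dataclasses import dataclass

@dataclass
class Axis:
    """One diagnostic axis of a decision."""
    name: str     # which axis: rightness, standing, scope, grounding
    claimed: str  # the logic you used to justify the decision out loud
    felt: str     # the value you actually hold about it

    def conflicted(self) -> bool:
        # A conflict is any gap between stated and felt justification.
        return self.claimed != self.felt

# The API-sunset decision from above, mapped onto the four axes.
decision = [
    Axis("rightness", claimed="outcomes ($1M and team health)",
         felt="character (we don't abandon early believers)"),
    Axis("standing", claimed="the company as a collective",
         felt="individual people and their relationships"),
    Axis("scope", claimed="what anyone in my role does",
         felt="what I personally promised"),
    Axis("grounding", claimed="pragmatism (the numbers don't work)",
         felt="axiom (you just don't betray early believers)"),
]

conflicts = [a.name for a in decision if a.conflicted()]
print(f"{len(conflicts)} conflicts: {', '.join(conflicts)}")
# → 4 conflicts: rightness, standing, scope, grounding
```

The point isn’t the code; it’s that every axis flags. A decision justified one way on all four axes while felt another way on all four is a structural setup, not a character flaw.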
The claim here is modest: naming these conflicts improves situational awareness. It won’t solve ethical problems for you, but you can at least begin to anticipate and mitigate the risk of moral exhaustion.
But but my boss is a sociopath
When discussing the psychology of ethics and morality, I often get the pushback: “My leaders aren’t exhausted—they just don’t care.”
Maybe. But consider Jackall’s finding in Moral Mazes, his ethnography of how managers in large corporations actually reason about right and wrong: organizations often require the abandonment of personal morality for survival. Not because leaders are sociopaths, but because systems reward expediency and punish visible integrity.
Moral disengagement isn’t the same as a lack of ethics. It’s often a protective response to impossible ethical tensions. People who look like they don’t care may have simply stopped letting themselves feel the mismatch, because feeling it was unsustainable.
The system shaped them. That doesn’t excuse harm, but it reframes what intervention looks like.
What happens next
If you address it: Transparency. Restored sense of agency. Leaders can explain choices without defensiveness. Space opens for acknowledging trade-offs. Trust rebuilds, not because everyone agrees, but because the conflicts are named and the logic is visible.
If you ignore it: Drift into cynicism. Disengagement masked as professionalism. Retention loss in the people with the strongest ethical clarity. Organizational numbness—the condition where everyone is too tired to care, and that becomes the culture.
Healthcare has been studying moral injury for decades (Epstein & Hamric, 2009; Rushton, 2017). Clinicians experience it when policies prevent them from delivering the care they believe patients need. The public sector sees it in policymakers paralyzed by conflicting mandates. AI governance is facing it now—teams building systems with responsibility diffused so thoroughly that no one feels accountable for outcomes.
The pattern is consistent: when ethical conflicts are structural but treated as personal failures, the injury compounds.
Say it again: ethics isn’t about being a good person
It’s not about good PR. It’s not compliance theater. It’s not a culture war talking point.
It’s how you reconcile your values, your actions, and your beliefs—especially under pressure.
This isn’t a guide to “how to be ethical.” It’s more like “how to survive making ethically loaded decisions without losing yourself.”
Pick one decision that still bothers you. Map where there was conflict: what kind of rightness you claimed versus what kind you violated; whose moral standing you centered versus whose you deprecated; what scope of obligation you operated under versus what you actually believe; why you said it bound you versus why it actually did. Notice where your organizational tools and processes mediated—or completely erased—the moral weight.
Ask: was the conflict between you and the decision, or between multiple ethical systems you were asked to hold simultaneously while pretending they were one?
Name the tension instead of burying it.
That’s not resolution. But it’s a start.