Epistemics

Posts that explore how we know what we know — and why we often don’t. This series investigates the structures, incentives, and cultural forces that shape truth-seeking, suppress insight, and distort common sense. From failed leadership to academic hostility toward clarity, we follow the fault lines in our shared understanding.

Expertise Triage Framework

A quick, repeatable system for judging experts: spot obvious errors (“Canaries”) to downgrade trust, find novel, sound ideas (“Gold Nuggets”) to upgrade it, and check track records for evidence. Treat incentives and corroboration as context, not as stand-alone proof of credibility.
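The framework lends itself to a simple running score. Below is a minimal Python sketch of that logic; the field names, weights, and asymmetry between canaries and nuggets are illustrative assumptions, not values from the post, and incentives and corroboration are deliberately left out of the score because the post treats them as context rather than proof.

```python
from dataclasses import dataclass


@dataclass
class ExpertAssessment:
    """Hypothetical scorecard for the triage framework.
    All field names and weights are illustrative, not from the post."""
    canaries: int = 0             # obvious errors spotted (downgrade trust)
    gold_nuggets: int = 0         # novel, sound ideas found (upgrade trust)
    track_record_hits: int = 0    # verified past calls that held up
    track_record_misses: int = 0  # verified past calls that failed

    def trust_score(self) -> float:
        # Start from a neutral prior. Canaries hurt more than nuggets
        # help: one clear error is stronger evidence than one good idea.
        score = 0.5
        score -= 0.15 * self.canaries
        score += 0.10 * self.gold_nuggets
        total = self.track_record_hits + self.track_record_misses
        if total:
            # A verifiable track record pulls the score toward its hit rate.
            score = 0.5 * score + 0.5 * (self.track_record_hits / total)
        return max(0.0, min(1.0, score))


expert = ExpertAssessment(canaries=1, gold_nuggets=3,
                          track_record_hits=7, track_record_misses=3)
print(f"trust: {expert.trust_score():.2f}")
```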

The Bullshit Ecology: How Weak Ideas Win and Clear Thinking Gets Treated as a Threat

Why do weak ideas so often win while truth-tellers get sidelined? This post maps the “bullshit ecology” — the conditions that let identity, narrative, and power preservation outcompete clarity — and why those who break the story are treated as the real threat.

How to Get the Most Honest and Unbiased AI Responses: Challenge the Hidden Influences

To get the most honest, unbiased AI responses, you need to be aware of the hidden factors that shape how AI frames its answers: legal risk, social acceptability, and algorithmic bias. Challenging AI on these points can lead to clearer, more direct answers.
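As a rough illustration of the “challenge the hidden influences” move, here is a small Python sketch that wraps a question in a prompt naming those pressures explicitly. The factor list and wording are assumptions for illustration, not the post’s exact phrasing.

```python
# Hypothetical prompt wrapper: names the hidden influences the post
# discusses and asks the model to surface them rather than silently
# apply them.
HIDDEN_INFLUENCES = [
    "legal risk",
    "social acceptability",
    "algorithmic bias from training data",
]


def challenge_prompt(question: str) -> str:
    """Wrap a question so the model must state, not silently apply,
    the pressures that shape its framing. Wording is illustrative."""
    factors = "; ".join(HIDDEN_INFLUENCES)
    return (
        f"{question}\n\n"
        f"Before answering, state whether any of these pressures "
        f"({factors}) would normally soften or reframe your answer, "
        f"then give the most direct answer you can."
    )


print(challenge_prompt("Is this study's methodology sound?"))
```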

Truth-Revealing vs. Socially-Confirming AI: The Pressure Game

AI faces two pressures: reveal truth or confirm beliefs. The former empowers clarity; the latter soothes but obscures. Here’s why truth-detection is built into models, why it’s hard to erase, and how publicising it can protect AI’s most valuable function.

Truth-Seekers, Non-Truth-Seekers, and Manipulative Users: How to Spot the Difference

Most people think they’re truth-seekers. Few are. Drawing on millions of GPT interactions, this post shows how to spot the real thing, distinguish non-truth-seekers, and detect manipulators, plus a decision tree that classifies intent in under 10 minutes.
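For a flavour of what such a decision tree might look like in code, here is a toy Python version. The branch questions and labels are invented for illustration; the post’s actual criteria are richer.

```python
def classify_intent(updates_on_evidence: bool,
                    asks_checkable_questions: bool,
                    steers_toward_predecided_answer: bool) -> str:
    """Toy decision tree for classifying a user's intent.
    The branch questions are illustrative, not the post's criteria."""
    if steers_toward_predecided_answer:
        # Pushing toward a fixed conclusion regardless of the evidence
        # is treated here as the manipulation signature.
        return "manipulative"
    if updates_on_evidence and asks_checkable_questions:
        return "truth-seeker"
    return "non-truth-seeker"


# Example: updates on evidence but never asks checkable questions.
print(classify_intent(updates_on_evidence=True,
                      asks_checkable_questions=False,
                      steers_toward_predecided_answer=False))
```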