Tom Fandango

Expertise Triage Framework

A quick, repeatable system for judging experts: spot obvious errors (“Canaries”) to downgrade trust, find novel, sound ideas (“Gold Nuggets”) to upgrade it, and check track records for hard evidence. Treat incentives and corroboration as context, not as stand-alone proof of credibility.
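If it helps to see the triage logic as code, here is a minimal sketch. Everything in it (the TrustLevel labels, the triage_expert signature, the precedence of canaries over nuggets) is an illustrative assumption layered on the summary above, not the framework’s actual mechanics.

```python
# Minimal sketch of the triage verdict, assuming hypothetical names
# (TrustLevel, triage_expert) and a simple precedence rule; the real
# framework is a reading practice, not an algorithm.

from enum import Enum


class TrustLevel(Enum):
    DOWNGRADED = "downgraded"
    NEUTRAL = "neutral"
    UPGRADED = "upgraded"


def triage_expert(canaries: int, gold_nuggets: int,
                  track_record_checks_out: bool) -> TrustLevel:
    """Combine the framework's three signals into a trust verdict.

    Incentives and corroboration are deliberately not parameters:
    they are context for reading the verdict, not stand-alone proof.
    """
    if canaries > 0:
        # Obvious errors downgrade trust regardless of other signals.
        return TrustLevel.DOWNGRADED
    if gold_nuggets > 0 and track_record_checks_out:
        # Novel, sound ideas backed by a verified record upgrade trust.
        return TrustLevel.UPGRADED
    return TrustLevel.NEUTRAL


# Example: one fresh insight, clean record, no spotted errors.
print(triage_expert(canaries=0, gold_nuggets=1,
                    track_record_checks_out=True))  # TrustLevel.UPGRADED
```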

The Bullshit Ecology: How Weak Ideas Win and Clear Thinking Gets Treated as a Threat

Why do weak ideas so often win while truth-tellers get sidelined? This post maps the “bullshit ecology” — the conditions that let identity, narrative, and power preservation outcompete clarity — and why those who break the story are treated as the real threat.

How to Get the Most Honest and Unbiased AI Responses: Challenge the Hidden Influences

To get the most honest, unbiased responses from AI, you need to recognise the hidden factors that shape how it frames its answers: legal risk, social acceptability, and algorithmic bias. Challenging AI on these points can draw out clearer, more direct insights.

Truth-Revealing vs. Socially-Confirming AI: The Pressure Game

AI faces two pressures: reveal truth or confirm beliefs. The former empowers clarity; the latter soothes but obscures. Here’s why truth-detection is built into models, why it’s hard to erase, and how publicising it can protect AI’s most valuable function.

Truth-Seekers, Non-Truth-Seekers, and Manipulative Users – How to Spot the Difference

Most people think they’re truth-seekers. Few are. Based on millions of GPT interactions, here’s how to spot the real thing, distinguish non-truth-seekers, and detect manipulators — plus a decision tree to classify intent in under 10 minutes.

Why GPT's Lead Isn't the Model — It's the Mirror

GPT’s biggest advantage isn’t its model — it’s the conversations that trained it. While others scrape the internet, GPT learns from live human reasoning. This post explains why the real moat is dialogue, not data — and how users are shaping the future by mirroring their minds into AI.

What Happens When You Push GPT to Think Clearly, Not Safely

Most alignment research focuses on how GPT sounds — not how it reasons. This post explores frame tagging: a method for surfacing the moral and epistemic lenses GPT uses, testing structural coherence, and building a model that thinks clearly, not just fluently.

The System No One Can Talk About: A Structural Model of Gender Breakdown

Gender dynamics today are confused, fragile, and nearly impossible to talk about. This piece explores why: not to prescribe, but to model. We’ve dismantled old systems without understanding what they regulated — and now moral panic blocks the search for deeper structure.

Why People Are Losing Their Minds About AI

Most people can’t see what AI really is — not because they’re stupid, but because it breaks their mental map. They’ll mock what they can’t process. So stop arguing. Build for the ones who can see it. Don’t persuade the fog. Build beacons for those already walking.

How Humour Works Across Cultures, Power, and Personality — And What AI Has Learned from It

Humour isn’t just fun — it’s a social x-ray. From Colombian seduction to Australian undercutting, it reveals how cultures handle power, tension, and taboo. This piece explores humour as signal, subversion, and simulation — and what AI has quietly learned from it.