The perfectly rational conspiracy theorist

Conspiracy theorists don’t have a rationality problem; they have a priors problem, which is a different beast. Consider a rational person who believes in the existence of a powerful conspiracy, and who then reads an arbitrary online article; we’ll denote by $$C$$ the propositions describing the conspiracy, and by $$a$$ the propositions describing the article’s content. By Bayes’ theorem,

$$P(C|a) = \frac{P(a|C) P(C)}{P(a)}$$

Now, the key here is that the conspiracy is supposed to be powerful. A powerful enough conspiracy can make anything happen, or make anything look like it happened, and therefore it’ll generally be the case that $$P(a|C) \geq P(a)$$ (and usually $$P(a|C) > P(a)$$ for low-probability $$a$$, of which there are many these days, as Stanislaw Lem predicted in The Chain of Chance). But that means that in general $$P(C|a) \geq P(C)$$, and often $$P(C|a) > P(C)$$! In other words, the rational evaluation of new evidence will seldom weaken a conspiracy theory, and will often reinforce it. This isn’t a rationality problem: even a perfect Bayesian reasoner will be trapped once you get $$C$$ into its priors (a well-known phenomenon in Bayesian inference; I like to think of these as black hole priors).
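The trap is easy to watch numerically. Here is a minimal sketch in Python (every number in it is an illustrative assumption, not an estimate of anything real): a perfect Bayesian reader starts with a small prior on $$C$$ and, because the conspiracy is modeled as powerful, always judges $$P(a|C) \geq P(a|\neg C)$$ for whatever article comes along, with $$P(a)$$ expanded by the law of total probability.

```python
import random

random.seed(0)  # reproducible run

p_c = 0.01  # prior probability of the conspiracy (illustrative assumption)

for article in range(1000):
    # How plausible the article's content is in a world *without* the
    # conspiracy: drawn at random, since articles vary wildly.
    p_a_not_c = random.uniform(0.01, 1.0)
    # A powerful conspiracy can make anything happen, or make anything
    # look like it happened, so the likelihood under C is never lower.
    p_a_c = min(1.0, p_a_not_c * random.uniform(1.0, 2.0))
    # Bayes' theorem, with the marginal expanded by total probability:
    # P(a) = P(a|C) P(C) + P(a|~C) P(~C).
    p_a = p_a_c * p_c + p_a_not_c * (1 - p_c)
    p_c = p_a_c * p_c / p_a

print(f"posterior after 1000 articles: {p_c:.4f}")
```

Every update is a correct application of Bayes’ theorem, yet the posterior can only climb: since $$P(a|C) \geq P(a|\neg C)$$ on every article, no draw of evidence ever pulls it back down. That monotone march is the black hole prior in action.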

Keep an eye open, then, for those black holes. If you have a prior that no amount of evidence can weaken, that’s probably cause for concern, which is but another way of saying that you need to demand falsifiability of empirical statements. From non-refutable priors you can do mathematics or theology (both of which segue into poetry when you are doing them right), but not much else.