
Chapter VII. Exposing Hidden Knowledge: The Analyst Endpoint

  • Writer: john raymond
  • Aug 25
  • 3 min read

There is a bright line between analysis and commentary. Analysts integrate data over time with mechanistic reasoning to generate predictions that can be scored against reality. Commentators, on the other hand, narrate what has already happened and retrofit a story. Only the former strengthens the web of truth.


The knowledge exposed is simple: if your model neither predicts nor survives contact with events, it is not analysis.


The Criterion of Valid Analysis

  1. Fit to reality, not elegance. Models must conform to observed data—across time, not at a single photogenic moment.

  2. Predictive, not post-hoc. Explanations must emit near-term forecasts with clear time bounds.

  3. Mechanism, not vibes. A causal chain must explain why the data take the form they do—who has which levers (law, finance, force, narrative), through which channels, on what timelines.

  4. Nulls and disconfirmation. State the null hypothesis and the conditions under which you would abandon your thesis. If nothing can falsify it, it is not a model.


The Anti-Pattern: Post-Hoc Commentary

Post-hoc work speaks in tidy generalities (“states pursue power,” “signaling matters”) and then back-fills an exegesis after events settle. This produces the appearance of order without bearing risk.


The tell is the absence of pre-stated hypotheses, disconfirmation criteria, and dated forecasts. Explanations become tautologies: whatever happened was “consistent with the model.”


That is not science; it is narrative.


William Spaniel as Negative Example

Spaniel’s after-the-fact modeling is structurally non-predictive. It tends to:


  • treat live gambits as noise until they are over,


  • ignore regime-security incentives and asymmetric deception mechanisms,


  • publish ex post rationalizations without time-boxed, testable priors.


If there is a counterexample—one case where a Spaniel model exposed a gambit in advance or predicted a concrete move with falsifiable timing—let it be produced and scored.


That is the gauntlet: name the forecast, show the timestamp, show the mechanism, show the outcome. Anything else is post-hoc, unfalsifiable commentary.


The Positive Pattern: Structural Empiricism (Raymond Method)

By contrast, structural empiricism binds correlation to mechanism and forces both through time.


The Raymond Method operationalizes this with three pillars:


  • Pillar One — Regime Security. Assume each move serves personal/cluster survival; test policies by their effects on that survival.


  • Pillar Two — Asymmetric Warfare. Expect low-cost deception and narrative flooding; analyze channels, cadence, and intended cognitive states.


  • Pillar Three — Byzantine Traitor-General. Track insider vectors (personnel, appointees, envoys) that translate intent into institutional outputs.


Predictive record (illustrative):


  • The May 9 parade gambit—the promised Trump sanctions never materialized, as predicted.


  • The Graham–Blumenthal bill—the bill still has not been picked up by the Senate, as predicted.


  • Secondary sanctions—the tariffs on India did not constrain Russian behavior or actions, as predicted.


  • The moral inversion of the Alaska–Washington summits—the entire summit architecture was designed to make Ukraine look like the obstacle to peace, as predicted.


  • Zelenskyy’s bilateral logic—the trilateral summit idea was rejected because it is designed to damage Ukraine, as predicted.


None of these required insider access. They required data across time, explicit nulls, and the power equation—harm over time—to rank moves by what they enable, not by what individual actors claim.
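The "harm over time" ranking described above can be sketched in a few lines. This is a minimal illustration only: the moves, harm rates, and durations below are hypothetical placeholders, not published estimates, and the linear harm-times-duration scoring is an assumed simplification of the power equation.

```python
# A minimal sketch of ranking moves by the harm they enable over time,
# not by what actors claim. All moves and numbers are hypothetical.

def harm_over_time(harm_rate: float, duration_days: int) -> float:
    """Total harm a move enables: rate multiplied by the window it stays in effect."""
    return harm_rate * duration_days

# Hypothetical candidate moves with assumed harm rates and windows.
moves = {
    "narrative flooding":  harm_over_time(harm_rate=0.2, duration_days=180),
    "sanctions delay":     harm_over_time(harm_rate=0.5, duration_days=90),
    "insider appointment": harm_over_time(harm_rate=0.3, duration_days=365),
}

# Rank moves by what they enable, highest cumulative harm first.
ranking = sorted(moves, key=moves.get, reverse=True)
print(ranking)
```

Under these assumed numbers the long-running insider vector outranks the louder but shorter-lived moves, which is the point of integrating over time rather than scoring a single photogenic moment.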


A Concrete Challenge (for Spaniel or Anyone Really)

Pick a live question with strategic salience. For example: What will be the U.S. executive’s next two decisive moves that decrease Kremlin leverage within 90 days?


Publish, in advance (for this example):


  • H₀: No costly actions occur that measurably reduce Kremlin leverage on timelines that matter.


  • H₁: At least one specified action occurs (define it operationally), with observable results (define metrics), within a dated window.


  • Mechanism: Incentive chain explaining why or why not (domestic coalition costs, legal exposure, alliance bargaining, IO payoffs).


  • Disconfirmation: What observation would force you to revise the model.


  • Scorecard date: When you will grade yourself—publicly.
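The pre-registration protocol above can be written down as a minimal data structure, so that the forecast, its disconfirmation condition, and its grading date all exist before the outcome does. Everything here is a hypothetical sketch: the field names, the example hypotheses, and the dates are illustrations, not an actual published forecast.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# A minimal sketch of a pre-registered, gradable forecast record.
# All field values below are hypothetical illustrations.

@dataclass
class Forecast:
    h0: str                 # null hypothesis
    h1: str                 # operational prediction with defined metrics
    mechanism: str          # incentive chain explaining why or why not
    disconfirmation: str    # observation that would force model revision
    published: date         # timestamp proving the forecast was ex ante
    scorecard: date         # dated end of the window; when grading happens
    outcome: Optional[bool] = None  # filled in only at grading time

    def grade(self, today: date, h1_occurred: bool) -> str:
        """Grade publicly against reality, but only once the window closes."""
        if today < self.scorecard:
            return "pending"
        self.outcome = h1_occurred
        return "H1 confirmed" if h1_occurred else "H0 stands"

record = Forecast(
    h0="No costly actions measurably reduce Kremlin leverage in the window",
    h1="At least one specified action occurs, with observable metrics",
    mechanism="Domestic coalition costs outweigh alliance-bargaining payoffs",
    disconfirmation="A dated, costly action that shifts the defined metrics",
    published=date(2025, 8, 25),
    scorecard=date(2025, 11, 23),  # hypothetical 90-day window
)
print(record.grade(date(2025, 11, 24), h1_occurred=False))  # → "H0 stands"
```

The design choice is the point: the record is immutable in substance once published, and the grade is a pure function of the dated window and the observed outcome, leaving no room for post-hoc "consistent with the model" tautologies.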


This is not theatrics; it is the bare minimum of scientific dignity for actual strategic analysis. If your framework cannot do this, it is not a framework.


The Analytic Implications

War as waged by President Trump and Vladimir Putin is not some mythical chaos. It is legible through science once you bind data to mechanism and insist on predictions that face the calendar.


Analysts who refuse this discipline are not “wrong”; they are simply not analysts. They are narrators, nothing more.


And the web of truth requires more. It requires testable scientific theory.



