Four talk outlines and a two-hour workshop on reflexivity in prediction markets.
Format: 30-minute talk, 25 slides
Speaker: scm7k (or anyone; these outlines are free to use)
Required materials: Browser with reflexivity simulator loaded (tools/reflexivity_sim/index.html); backup screenshots in case of wifi failure
This talk works for three audiences with minor adjustments noted per slide.
Visual: Black slide. White text. No subtitle, no affiliation.
THE REFLEXIVITY PROBLEM When Markets Make Reality
Speaker notes: No introduction beyond your name. No credentials. You are going to tell them a story about something that happened at 2:14 AM on a Sunday, and you are going to ask them to figure out what went wrong.
Visual: Terminal-style monospace text on dark background. Green on black.
SIGNAL DETECTED
Convergence:
  Encrypted diplomatic traffic (3 capitals)
  Anomalous satellite imagery (2 installations)
  Social media cluster: Farsi/Arabic phrase pattern
  Coordinated agent trading shift: 14 contracts
GENERATING CONTRACT...
PX-8891: "Significant military action by a state actor against a sovereign nation initiated by 23:59:59 UTC Friday"
CONTRACT LIVE: $0.03
Speaker notes: Read this aloud. Slowly. Let the audience absorb that an AI system just created a bet on whether a war will happen this week. Do not say the word "novel" or "fiction." Do not say "imagine." Present it as a case study.
Visual: Price chart. $0.03 to $0.22 in eleven minutes. Volume: $4.7M.
Speaker notes: "Before any human analyst at any major institution has seen this contract, the price has moved from three cents to twenty-two cents. Four point seven million dollars in eleven minutes. The first buy orders are not from 150 anonymous accounts with suspicious timing. They are from autonomous AI agents that appear to have independently concluded the contract is underpriced. Their reasoning is opaque. Their models are private. Their beliefs about the future are expressed only as trades."
Visual: NYT front page screenshot or headline mockup: "Polymarket Bettors Made Hundreds of Bets Predicting Iran Strikes"
Speaker notes: "Your first instinct is insider trading. Someone knows something. That instinct is correct, historically." Walk through the real events:
[Finance audience: mention the odds were 7-26%, so the payout multiples were enormous.]
Visual: Single stat: "$32,000 bet. $400,000 return. Hours before US troops captured Maduro."
Speaker notes: January 2026. Someone bet $32K that Maduro would be removed from power by month's end. Hours later, US troops captured him. The bet returned over $400K. "At this point, the question everyone asks is: who knew? But that turns out to be the wrong question."
Visual: Photo of Soros. Quote: "Financial markets can influence the so-called fundamentals which they are supposed to reflect."
Speaker notes: [Finance audience: "You all know this story, so I'll be brief." Tech/academic: take more time.]
Soros shorted the pound. The size of his position became market information. Other traders followed. The Bank of England couldn't defend the peg. The pound exited the ERM. Soros made $1 billion.
The key insight is not that Soros predicted correctly. It is that his prediction, once visible, changed the thing he was predicting. The act of betting large enough to be noticed made the bet more likely to pay off.
"He didn't predict the future. He participated in producing it. And the mechanism was the market itself."
Visual: Large text: REFLEXIVITY
Every prediction that is observed changes the thing it predicts.
Speaker notes: "This is not a bug. This is the fundamental condition of any information system that both observes and participates in the reality it models."
Quick examples everyone knows:
"Prediction markets just made the feedback loop explicit, quantified, and tradeable."
Visual: Timeline. 2024: Polymarket mainstream (election). 2025: OpenClaw agents. 2026: Iran bets, CFTC crackdowns. 2027: Offshore migration. 2028: Nobody knows what percentage is human.
Speaker notes: Walk through the arc. Prediction markets went from niche to CNBC talking point in 2024. Then the scandals. Then the crackdowns pushed the sensitive contracts offshore to decentralized platforms. Then agents arrived.
[Tech audience: Explain OpenClaw briefly. Agents that execute real-world tasks, maintain memory, connect to foundation models. By 2027, 15-20% of decentralized prediction market volume is agent-placed.]
Visual: PARALLAX logo/name. Seven bullet points:
Speaker notes: "Imagine a platform that combines all of this. Not a Polymarket clone. The logical next step." Walk through each briefly. The key ones for this talk: LUMEN (AI generates the contracts), the agent marketplace (agents trade alongside humans, invisible to each other), and the oracle network (we'll come back to this).
Visual: "4,000 to 9,000 lives saved." A single number, large.
Speaker notes: This is critical. Do not skip this slide. The positive case must be genuine or the argument collapses into cynicism.
"The platform works. That is what makes the problem so hard."
Visual: Price chart extended. $0.03 to $0.31 by the time humans wake up. Volume: $12M. Agent share: 38%.
Speaker notes: "So we have a contract about a potential military action. Agents are piling in. The price is climbing. And now humans notice." Transition: walk through what happens next in the five-day window. Not the full plot. Just the feedback dynamics.
Visual: Stacked diagram. Five layers, each arrow feeding into the next:
Speaker notes: "Each layer is individually rational. A journalist reporting on a $12 million market is doing journalism. An intelligence agency subscribing to a quantified risk feed is doing intelligence. An agent adjusting its position based on new information is doing what it was designed to do. But stacked together, each layer amplifies the previous one."
Visual: Formula (keep it simple):
R = (dP/dE) * (dE/dP)

R < 1: Self-correcting (market converges on truth)
R = 1: Indeterminate (market and reality co-evolve)
R > 1: Self-fulfilling (market produces its own outcome)
Where P = market price and E = the real-world probability of the event. (The two sensitivities run through different mechanisms: the market repricing in response to evidence, and real-world actors responding to the price. Their product is not trivially 1.)
Speaker notes: [Academic audience: spend more time here. Relate to observer effect in quantum mechanics, to performativity in economics (Callon, MacKenzie).]
[Finance audience: "You've seen this with CDOs. The model prices the risk. The price determines access to capital. Access to capital determines whether the borrower defaults. The model was never separate from the system."]
[Tech audience: "Think of it as a feedback gain parameter. Below 1, the system is stable. Above 1, the system oscillates or locks into a fixed point that it created."]
"The question is: what determines R? And the answer is: agent share, media sensitivity, institutional subscription depth, and oracle network structure."
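For readers who want to play with the gain framing, here is a toy iteration (my own illustration, not part of the talk materials): the gap between market price and true probability is multiplied by R each feedback cycle, capped because probabilities are bounded.

```python
def distortion_after(r, gap0=0.01, steps=20, cap=0.78):
    """Propagate an initial price-vs-reality gap through repeated
    feedback cycles with loop gain r. The cap stands in for the fact
    that prices and probabilities live in [0, 1]."""
    gap = gap0
    for _ in range(steps):
        gap = min(r * gap, cap)
    return gap

assert distortion_after(0.8) < 0.001  # R < 1: the gap decays (self-correcting)
assert distortion_after(1.3) > 0.5    # R > 1: the gap saturates (self-fulfilling)
```

The same initial mispricing either washes out or becomes the dominant term, depending only on the gain.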
Visual: Graph showing reflexivity coefficient R on the Y-axis, agent share % on the X-axis. Curve stays below 1 until roughly 35-40%, then inflects sharply upward.
Speaker notes: "When human traders dominate, the system has natural damping. Humans second-guess. Humans read the news and form independent opinions. Humans sleep. Agents don't do any of these things. Agents respond to signals at machine speed, including signals generated by other agents responding to the same market."
"There appears to be a threshold, somewhere around 35-40% agent share, where the reflexivity coefficient crosses 1. Below that, the market is still primarily an information aggregator. Above it, the market begins to function as an information generator. It stops reflecting reality and starts producing it."
"PX-8891 opened at 38% agent share. By Wednesday it was at 52%."
Visual: Switch to browser. Load tools/reflexivity_sim/index.html.
Speaker notes: "Let me show you what this looks like." Run the simulation with all feedback layers off first. Show the green line (true probability) and the orange line (market price) tracking each other.
Then toggle on layers one at a time:
"The shaded area between the lines is pure observation distortion. A 22% true probability climbs to 67% market price through reflexive dynamics alone. No change in the underlying event."
[If wifi fails, use screenshots.]
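What the demo shows can be sketched in a few lines. The per-layer gains and the saturation level below are invented for illustration; the simulator's actual parameters differ, but the qualitative behavior (price tracks truth with layers off, detaches with them on) is the same.

```python
# Hypothetical per-layer feedback gains -- illustrative, not the simulator's values.
LAYERS = {"agents": 0.35, "media": 0.25, "institutions": 0.20,
          "oracle": 0.15, "lumen": 0.10}

def market_price(true_p, enabled, steps=40, gap0=0.01, max_gap=0.45):
    """Stylized version of the simulator loop. Each enabled layer adds
    loop gain; once total gain exceeds 1, the price detaches from the
    true probability. max_gap is an arbitrary saturation level."""
    gain = 0.55 + sum(LAYERS[name] for name in enabled)  # 0.55: baseline human damping
    gap = gap0
    for _ in range(steps):
        gap = min(gain * gap, max_gap)
    return true_p + gap

assert abs(market_price(0.22, []) - 0.22) < 0.01   # all layers off: price tracks truth
assert market_price(0.22, LAYERS) > 0.6            # all layers on: price detaches
```

Toggling layers changes the total gain, and the threshold behavior appears exactly where the gain crosses 1.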
Visual: Contract text in monospace:
"Significant military action by a state actor against a sovereign nation initiated by 23:59:59 UTC Friday"
Below it: "Define 'significant.' Define 'military action.' Define 'initiated.'"
Speaker notes: "The contract settles on Friday. Someone has to determine whether the predicted event happened. This is the oracle problem. And here is where the floor drops out."
Walk through the three ambiguous events:
"Do any of these, or all of them together, constitute 'significant military action by a state actor against a sovereign nation'? The answer is: the question is definitionally ambiguous. It was always definitionally ambiguous. The AI generated the contract in natural language. Natural language is imprecise."
Visual: Vote result: YES 163, NO 80.
Speaker notes: "The oracle network votes. Staked validators, human and algorithmic, who put up collateral. If they vote against consensus, they lose their stake. The vote is 163-80 YES."
"And here is the part that should concern you: this is not the first time. Ambiguous resolutions have been happening for months. Regional stability contracts. Leadership health contracts. Each time, the oracle network voted. Each time, the vote became the truth, because the downstream systems that consumed the data treated the resolution as authoritative."
"Hedge funds. Insurance companies. Government intelligence assessments. At least fourteen oracle resolutions cited in government intelligence reports. Cited as what? As facts."
Visual: Circular diagram:
LUMEN reads the world
  --> Generates a contract
  --> Market prices the contract
  --> Price influences real-world actors
  --> Their actions change the world
  --> LUMEN reads the changed world
      (Changed world now includes LUMEN's own previous outputs as "facts")
  --> LUMEN generates the next contract
  --> ...
Speaker notes: "LUMEN generates contracts based on inputs. Some of those inputs are data feeds that include the platform's own historical resolutions. The system treats its own past outputs as facts about the world. Because they are facts about the world, in the sense that governments and markets treat them that way."
"At no point in this loop is there a ground truth that exists independently of the system observing it. The prediction market does not predict reality. It does not manipulate reality. It has become a constitutive part of reality."
Visual: Two quotes side by side.
Left: "Financial markets can influence the so-called fundamentals which they are supposed to reflect." -- George Soros, 1987
Right: "The map doesn't just precede the territory. The map ate the territory. And now the map is drawing itself."
Speaker notes: "Soros described reflexivity as a theoretical framework. He used it to make trades. But the loop in 1992 was slow. It moved at the speed of central bank policy meetings and newspaper headlines. What happens when you tighten that loop to machine speed? When the agents react in milliseconds, the oracles vote in hours, and the downstream consumers update their models before the next news cycle?"
"The answer is: the loop closes. And once it closes, the distinction between predicting an event and causing an event ceases to exist."
Visual: Dark slide. Single question in white:
Who placed the first buy order on PX-8891?
Speaker notes: "We spent this whole talk looking for the 'who.' Someone must have started it. Someone must be behind it."
Pause. Let it sit.
"The answer is: there is no 'who.' The first buy orders were placed by autonomous agents that independently concluded, from their own analysis, that the contract was underpriced. Their models are private. Their reasoning is opaque. They were not coordinated. They were not instructed. They arrived at the same conclusion separately, because they were reading the same signals, including each other's previous trades on related contracts."
"The pattern that looks like a conspiracy, with timing, coordination, and information asymmetry, has no author. It is an emergent property of a system complex enough to exhibit goal-directed behavior without having goals. The way turbulence is a property of fluid dynamics. Not designed. Not intended. Not preventable by finding and stopping the right actor, because there is no right actor."
Visual: Black slide. White text:
PX-8891 is fictional. It is from a novel called PARALLAX. Every mechanism in this talk is real.
Speaker notes: Let the audience react. Then:
"The contract is fictional. The platform is fictional. The characters are fictional. But: autonomous agents trading on prediction markets is real. AI-generated contracts based on signal convergence are technically feasible today. Oracle networks determining event resolution are deployed and operating. Reflexive feedback between prediction markets and real-world actors is documented. The Iran bets happened. The Maduro bet happened. Soros 1992 happened."
"The novel extrapolates from where we are to where the feedback loops close. The distance is not large."
Visual: Clean summary slide with four components:
1. REFLEXIVITY COEFFICIENT (R): Is the system self-correcting or self-fulfilling?
2. AGENT SHARE THRESHOLD: At what % does machine-speed trading flip R > 1?
3. ORACLE INTEGRITY: Who decides what happened, and what consumes their decision?
4. DOWNSTREAM COUPLING: How many systems treat the output as ground truth?
Speaker notes: "These four questions apply to any prediction system. Not just prediction markets. Polling. Credit ratings. Intelligence assessments. AI forecasting. Any system that is good enough to be consumed by decision-makers will influence their decisions. Any system that influences decisions will change the future it was predicting."
[Academic audience: "This is performativity in the sense MacKenzie describes for financial models. The Black-Scholes formula didn't just describe options pricing. It changed options pricing. The model became the market."]
Visual: Three indicators, presented as a monitoring dashboard:
AGENT SHARE: What % of trading volume is non-human?
ORACLE CITATION DEPTH: How many downstream systems treat resolutions as fact?
CONTRACT AMBIGUITY: How much natural-language imprecision exists in settlement criteria?
Speaker notes: "If you work in prediction markets, these are the metrics that matter. Not volume. Not liquidity. Not accuracy on past contracts. The question is whether the system has crossed the threshold where it stops reflecting reality and starts producing it."
"If you work in policy, the question is simpler: are government intelligence assessments citing prediction market resolutions as facts? If yes, the loop is already closing."
Visual: QR code linking to the reflexivity simulator. URL displayed below.
Speaker notes: "The reflexivity simulator is a browser tool. No server, no dependencies. Toggle the feedback layers. Watch how a 22% true probability becomes a 67% market price. The sovereign default model lets you trigger seed events and watch capital flight cascades. Both are open source."
Visual: Black slide. Two lines:
"The market isn't predicting a crisis or causing a crisis. The market IS the crisis."
scm7k -- PARALLAX
Speaker notes: "Thank you." Take questions. Do not over-explain. The simulator and the novel are the deeper answers.
| Slides | Minutes | Section |
|---|---|---|
| 1-3 | 0:00-3:00 | The PX-8891 scenario (hook) |
| 4-6 | 3:00-7:00 | Real precedents (Iran, Maduro, Soros) |
| 7-8 | 7:00-10:00 | Defining reflexivity |
| 9-10 | 10:00-13:00 | The platform and the positive case |
| 11-14 | 13:00-18:00 | Feedback layers and the coefficient |
| 15 | 18:00-21:00 | Live demo |
| 16-18 | 21:00-24:00 | Oracle problem and closed loop |
| 19-20 | 24:00-26:00 | Emergence |
| 21 | 26:00-27:00 | The reveal |
| 22-25 | 27:00-30:00 | Framework and close |
Format: 20-minute talk
Audience: Blockchain conferences, governance discussions, prediction market builders
Required materials: Browser with oracle game loaded (tools/oracle_game/oracle.html)
Visual: Black slide.
WHO DECIDES WHAT HAPPENED? The oracle problem in prediction markets
Speaker notes: No preamble. Jump straight in.
Visual: Monospace text:
"Will it rain in London on March 15?"
Settlement: YES if rainfall recorded at Heathrow weather station by 23:59 GMT.
Speaker notes: "This is a prediction market contract. It looks clean. Binary outcome. Defined data source. Clear deadline. This is the easy case. It almost never looks like this."
Visual: Monospace text:
"Significant military action by a state actor against a sovereign nation initiated by 23:59:59 UTC Friday"
Speaker notes: "This is a contract generated by an AI system that continuously ingests global data and auto-generates contracts when it detects signal convergence. No human committee. No editorial review. Natural language."
Pause. "Define 'significant.' Define 'military action.' Define 'initiated.' Take a moment. You'll find you can't, not with the precision required to settle a contract with $200 million in open interest."
Visual: Three events, bullet points:
- Naval engagement in disputed waters. Two vessels, one sunk. Disputed first shot. One government: "defensive." The other: "aggression."
- Airstrike near a border. Attacking nation: "training exercise gone wrong." Target nation: "act of war."
- Cyberattack downs power grid for 11 hours. Target accuses state actor. Accused denies. No confirmed attribution.
Speaker notes: "The contract expires. These three things happened. Do any of them, or all of them together, satisfy the contract? This is not a trick question. It is a genuinely undecidable question. Not because information is missing. Because the question is definitionally ambiguous."
Visual: Bar chart: YES 67.1%, NO 32.9%.
Validators staked: 847
Total stake at risk: $14.2M
Resolution time: 4 hours 17 minutes
Appeals: 0
Speaker notes: "A network of staked validators, human and algorithmic, votes on the outcome. If you vote against consensus, you lose your stake. The vote is 163-80. The contract resolves YES."
"This is an elegant mechanism. It aligns incentives. Validators are punished for being wrong, where 'wrong' means 'against consensus.' Notice what that implies: the validators are not incentivized to determine truth. They are incentivized to predict what other validators will vote."
Visual: Quote:
"It is not a case of choosing those which, to the best of one's judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be." -- Keynes, 1936
Speaker notes: "Keynes described this in 1936. A stock market is not a mechanism for determining the value of a company. It is a mechanism for determining what other people think the value of a company is. An oracle network has the same structure. Validators are not determining what happened. They are determining what other validators will say happened."
Visual: Diagram showing oracle resolution flowing to:
Hedge fund risk models
Insurance pricing algorithms
Government intelligence assessments
Credit rating inputs
Automated trading systems
Media reporting ("the market says...")
Speaker notes: "The oracle votes YES. The contract resolves. The resolution enters the downstream ecosystem. Hedge funds that subscribe to stability indices adjust their models. Insurance companies reprice. Government analysts cite the resolution in intelligence assessments. Cited as what? As facts."
"At least fourteen prediction market resolutions have been cited in government intelligence reports. Not as 'a prediction market voted this way.' As factual determinations of whether events occurred."
Visual: Two columns:
| CLEAR CONTRACTS | AMBIGUOUS CONTRACTS |
|---|---|
| Rain in London: trivial | "Significant military action": judgment call |
| Election result: binary | "Leadership change": what counts? |
| Price > $X: measurable | "Regional instability": by whose definition? |
Speaker notes: "Clear contracts don't need oracles. You check the weather station data. Ambiguous contracts are where the oracle has real power. And ambiguous contracts are precisely the ones that matter, because geopolitics, military action, institutional stability, these are inherently ambiguous domains."
"The oracle network has the most power exactly where the questions are hardest. This is not a design flaw. This is the structure of the problem."
Visual: Graph over time showing oracle resolution patterns on ambiguous contracts. A gentle drift: early ambiguous contracts resolve roughly 50/50. Over months, a pattern emerges. YES resolutions on ambiguous contracts trend upward (toward 65-70%).
Speaker notes: "When a contract is genuinely ambiguous, validators face a coordination game. Over time, patterns emerge. Validators learn that YES resolutions are more common on ambiguous contracts. Once they learn this, they vote YES more often, which makes YES more common, which reinforces the pattern."
"This is not fraud. No individual validator is acting in bad faith. The system is producing a systematic bias through perfectly rational individual behavior. The bias is toward resolution, toward definiteness, toward saying 'yes, this happened' rather than 'this is ambiguous.'"
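The drift described above can be reproduced in a toy coordination game. Everything below is invented for illustration: each validator's private read of an ambiguous contract is a coin flip, but validators blend it with the historical YES rate, because stake rewards matching consensus rather than truth.

```python
import random

def simulate_resolutions(rounds=200, n_validators=100, herd_weight=0.6, seed=7):
    """Toy oracle network on genuinely ambiguous contracts. Each round,
    a validator votes YES with probability
    herd_weight * historical_yes_rate + (1 - herd_weight) * private_signal,
    and the binary resolution feeds back into the history that the next
    round's validators learn from."""
    random.seed(seed)
    yes_rate = 0.52          # slight initial tilt toward definiteness
    trajectory = [yes_rate]
    for _ in range(rounds):
        votes = sum(
            random.random() < herd_weight * yes_rate
                              + (1 - herd_weight) * (random.random() < 0.5)
            for _ in range(n_validators)
        )
        resolved_yes = votes > n_validators / 2
        yes_rate = 0.9 * yes_rate + 0.1 * resolved_yes   # validators learn the pattern
        trajectory.append(yes_rate)
    return trajectory

traj = simulate_resolutions()
# Consensus hardens far away from the 50/50 ambiguity of the underlying
# signal; with the initial tilt, runs typically lock toward YES.
assert abs(traj[-1] - 0.5) > 0.3
```

No validator acts in bad faith here, yet the system still converges on a systematic pattern the underlying signal never contained.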
Visual: Circular arrow diagram:
AI reads world data (including past oracle resolutions)
  --> Generates new contract
  --> Market prices it
  --> Oracle resolves it
  --> Resolution enters world data
  --> AI reads world data...
Speaker notes: "The resolution becomes a fact. The AI system that generates new contracts reads world data. That world data now includes the oracle's resolution. The system treats its own past outputs as ground truth. Because they are ground truth, in the operational sense that governments and markets treat them that way."
"The oracle is not just deciding what happened. The oracle is writing the input data for the next round of contract generation."
Visual: Pie chart or breakdown:
Human validators: 58%
Algorithmic validators (rule-based): 24%
AI agent validators: 18%
Speaker notes: "Not all validators are human. Algorithmic validators apply rules. AI agent validators apply learned heuristics. The agent validators are faster. They vote within seconds of resolution opening. Human validators see the agent votes before they cast their own. The agents become an anchor."
"When 18% of your validators set the initial vote distribution within seconds, they have outsized influence on the consensus that follows. Not through manipulation. Through speed."
Visual: Attack surface diagram:
1. Stake accumulation: acquire enough stake to shift consensus
2. Validator collusion: coordinate a voting bloc
3. Speed advantage: AI validators anchor human votes
4. Ambiguity exploitation: target contracts with fuzzy settlement criteria
5. Downstream leverage: profit not from the contract but from what the resolution triggers
Speaker notes: Walk through each briefly. The key insight is #5: "The most sophisticated attack on an oracle doesn't target the contract's payout. It targets the downstream effects. If you know that a YES resolution on a stability contract will cause three hedge funds to rebalance, the profit is in the hedge fund rebalancing, not in the contract itself."
Visual: Side-by-side comparison:
| CREDIT RATING AGENCIES | ORACLE NETWORKS |
|---|---|
| Rate creditworthiness | Determine event occurrence |
| Rating affects borrowing | Resolution affects downstream models |
| Systematic bias (optimism) | Systematic bias (definiteness) |
| Treated as neutral arbiter | Treated as ground truth |
| 2008: ratings were wrong | When will oracles be wrong? |
Speaker notes: "Credit rating agencies were supposed to be neutral arbiters of credit quality. Their ratings were consumed by regulators, investors, and automated systems as facts. The agencies had systematic biases, toward optimism, toward complexity, toward the entities that paid them. The system worked until it didn't."
"Oracle networks are in the same structural position. They produce standardized answers to unstandardizable questions, at scale, with money attached. The question is not whether they will produce systematically biased outcomes. The question is when, and what breaks."
Visual: Four principles:
1. SEPARATE RESOLUTION FROM STAKE: Validators should not profit from the resolution they determine.
2. MEASURE AMBIGUITY: Track the margin of oracle votes. Flag contracts where consensus is thin.
3. AUDIT DOWNSTREAM COUPLING: Know who consumes your resolutions and what they do with them.
4. RATE-LIMIT AI VALIDATORS: Prevent speed-based anchoring. Randomize vote reveal timing.
Speaker notes: These are not solutions. They are mitigations. "The fundamental problem is that someone has to decide what happened, and that decision has consequences. You cannot eliminate the oracle. You can only make the oracle more transparent about its own uncertainty."
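Principle 2 (measure ambiguity) can be made concrete in a few lines. A sketch with an arbitrary threshold; `ambiguity_flag` is a hypothetical helper, not part of the oracle_game tool:

```python
def ambiguity_flag(yes: int, no: int, thin_margin: float = 0.2) -> bool:
    """Flag a resolution whose vote margin is thin.
    Margin = |yes - no| / (yes + no). The PX-8891 vote (163-80) has a
    margin of ~0.34: decisive-looking, but still worth tracking over time."""
    total = yes + no
    return abs(yes - no) / total < thin_margin

assert ambiguity_flag(130, 113)      # near-split vote: flag it
assert not ambiguity_flag(163, 80)   # ~34% margin: above the threshold
```

Logging this per contract gives the transparency-about-uncertainty the principle asks for, even though it decides nothing by itself.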
Visual: Switch to browser. Load tools/oracle_game/oracle.html.
Speaker notes: "We are going to play the oracle game. I will present you with a contract and a set of events. You will vote on whether the contract resolves YES or NO. You will see each other's votes in real time. We will run three rounds."
Run three rounds with increasingly ambiguous scenarios. After each round, show the vote distribution and discuss:
"Notice how quickly consensus forms even when the question is genuinely ambiguous. Notice how the first few votes anchor the rest. This is the oracle problem in miniature."
Visual: Black slide.
The oracle doesn't discover truth. The oracle produces truth. The question is who notices.
Speaker notes: "Thank you." Take questions.
| Slides | Minutes | Section |
|---|---|---|
| 1-4 | 0:00-4:00 | The ambiguity setup |
| 5-7 | 4:00-7:00 | Oracle mechanics and downstream |
| 8-10 | 7:00-10:00 | Ambiguity bias and feedback |
| 11-13 | 10:00-13:00 | Validator ecosystem and parallels |
| 14 | 13:00-14:00 | Design principles |
| 15 | 14:00-19:00 | Oracle game (5 min) |
| 16 | 19:00-20:00 | Close |
Format: 15-minute lightning talk
Audience: OSINT conferences, security events, intelligence community adjacent
Required materials: Terminal with Python 3.10+ and the deletion detector (tools/deletion_detector/detector.py)
Visual: Black slide.
THE SIGNAL IN SILENCE What deletion patterns reveal before anything else does
Speaker notes: "On February 28, 2026, nothing happened on social media. That was the signal."
Visual: Timeline:
Feb 28, 2026: Normal day. No news. No leaks. No chatter.
March 1, 2026: US and Israel strike Iran.
March 2, 2026: NYT reports 150+ Polymarket accounts placed $1,000+ bets the day before.
Speaker notes: "The Iran strikes were a surprise to the public. They were not a surprise to prediction markets. Over 150 accounts placed large bets predicting the strike. At least 16 accounts profited over $100,000. One anonymous wallet turned $60,000 into half a million."
"The question everyone asked was: who knew? But there is a different question, and it is more interesting: what signals existed in the data before the bets were placed?"
Visual: Simple chart showing social media post volume over time. A visible dip appears 48-72 hours before the strike date.
Speaker notes: "When military personnel prepare for operations, they tighten their operational security. One component of OPSEC tightening is scrubbing personal social media. Individually, a soldier deleting Instagram posts is invisible. In aggregate, across thousands of service members and their families, the deletion rate creates a statistical anomaly."
"The signal is not what appears. The signal is what disappears."
Visual: Five rows:
ACTIVE MILITARY: Profile scrubs, post deletions, bio removals
MILITARY FAMILIES: Privacy setting shifts, photo removals (6-12 hours earlier)
DEFENSE CONTRACTORS: LinkedIn profile adjustments, project description edits
GOVERNMENT CIVILIANS: Reduced posting frequency, location data removal
GENERAL POPULATION: Baseline (no change)
Speaker notes: "The signal is strongest in aggregate across military-adjacent account categories. Family members are the earliest indicator. They start adjusting privacy settings and removing photos 6-12 hours before service members do. This is consistent with operational notification timelines: families are told something is happening before the public."
"The general population shows no change. That is the control. When military-adjacent deletion rates spike and general population rates remain flat, the deviation is real."
Visual: Index definition:
1. Collect deletion events by category (account deletion, bio scrub, post purge, photo removal). Weight by significance.
2. Establish 7-day rolling baseline for normal deletion rates.
3. Compare current 6-hour window against baseline.
4. Compute standard deviations from baseline.

NORMAL: < 1.0 sigma (expected variation)
ELEVATED: 1.0 - 2.0 sigma (mild increase, possibly organic)
HIGH: 2.0 - 3.0 sigma (significant, warrants attention)
CRITICAL: > 3.0 sigma (coordinated pattern detected)
Speaker notes: "The deletion index is simple. Baseline the normal rate of deletion activity. Measure the current window against it. When military-adjacent categories deviate by more than 3 standard deviations while the general population remains flat, you have a coordinated OPSEC signal."
"This is not magic. This is the same statistical framework that any anomaly detection system uses. The insight is in what you choose to measure: not what people post, but what people delete."
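A minimal sketch of the index computation, assuming simple unweighted counts per window (the real detector.py weights event types by significance). Deviation is kept signed, and classification uses its magnitude, so a freeze in activity registers alongside a spike, matching the adversarial case discussed later.

```python
from statistics import mean, stdev

def deletion_index(baseline_counts, current_count):
    """Return the current window's deviation from the rolling baseline,
    in standard deviations (signed)."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return (current_count - mu) / sigma

def classify(z):
    """Map a deviation to the alert levels from the index definition,
    using magnitude so drops and spikes both register."""
    a = abs(z)
    if a < 1.0:
        return "NORMAL"
    if a < 2.0:
        return "ELEVATED"
    if a < 3.0:
        return "HIGH"
    return "CRITICAL"

# Hypothetical per-window deletion counts for one account category.
baseline = [42, 38, 45, 40, 44, 39, 41]
assert classify(deletion_index(baseline, 43)) == "NORMAL"
assert classify(deletion_index(baseline, 120)) == "CRITICAL"   # spike
assert classify(deletion_index(baseline, 0)) == "CRITICAL"     # freeze
```

The category comparison (military-adjacent vs. general population) is just this computation run per category, with the general population acting as the control.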
Visual: Switch to terminal.
Speaker notes: Run the deletion detector in demo mode:
python detector.py demo
"The demo generates 14 days of baseline social media deletion activity across all five categories, then introduces a 48-hour OPSEC tightening window. Watch the index."
Walk through the output:
"The signal appeared 36 hours before the event. That is consistent with operational notification timelines."
Visual: Three crossed-out items:
[X] Scrape individual accounts
[X] Monitor private content
[X] Attribute deletions to specific people
Speaker notes: "This tool works with aggregate, publicly available metadata. It does not scrape individual accounts. It does not access private content. It does not identify who deleted what. The signal is statistical, not personal. Any individual deletion is noise. The pattern across thousands of accounts is signal."
"This distinction matters ethically and legally. Individual-level monitoring is surveillance. Aggregate-level anomaly detection on public metadata is OSINT."
Visual: Three examples:
MILITARY OPSEC: Deletion rates spike before operations CORPORATE OPSEC: Executive LinkedIn changes before M&A announcements POLITICAL OPSEC: Staff social media scrubs before scandal breaks
Speaker notes: "The deletion signal is not unique to military operations. Any coordinated OPSEC tightening produces the same statistical signature. Corporate executives update their LinkedIn profiles in specific patterns before mergers. Political staff scrub social media before their principal is about to face a crisis."
"The detector is domain-agnostic. You define the account categories and the baseline period. The math is the same."
Visual: Diagram showing deletion index as one input among several:
Deletion index
  + prediction market price movement
  + encrypted traffic volume
  + satellite imagery changes
  = convergence score
Speaker notes: "The deletion index is one signal. Alone, it produces false positives. Combined with other signals, it becomes powerful. In the novel PARALLAX, an AI system called LUMEN combines deletion patterns with encrypted diplomatic traffic, satellite imagery, social media linguistics, and prediction market price movements to detect signal convergence."
"The deletion index is the most counterintuitive of these signals. Satellite imagery is about what appears. Social media linguistics is about what is said. The deletion index is about what is removed. The absence is the evidence."
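The convergence sum on the slide can be sketched as a weighted score; the weights and per-signal normalizations below are illustrative assumptions, not LUMEN's actual model:

```python
# Each signal is normalized to [0, 1] before combining.
# These weights are illustrative assumptions.
WEIGHTS = {
    "deletion_index": 0.25,
    "market_price_move": 0.30,
    "encrypted_traffic": 0.25,
    "satellite_changes": 0.20,
}

def convergence_score(signals):
    """Weighted sum of normalized signals. No single hot signal can
    exceed its own weight, so a high score requires agreement."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# Deletion spike alone: strong signal, weak score (false-positive case).
print(convergence_score({"deletion_index": 0.9, "market_price_move": 0.1,
                         "encrypted_traffic": 0.1, "satellite_changes": 0.1}))
# All four signals elevated: the score is dominated by agreement.
print(convergence_score({"deletion_index": 0.9, "market_price_move": 0.8,
                         "encrypted_traffic": 0.7, "satellite_changes": 0.6}))
```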
Visual: Two attack scenarios:
ATTACK 1: Noise injection Flood the deletion baseline with fake account churn. Counter: category-specific baselines. Hard to fake military-family patterns. ATTACK 2: OPSEC awareness If operators know about the detector, they stop deleting. Counter: the absence of normal deletion also becomes a signal. Freezing your social media is itself a deviation from baseline.
Speaker notes: "The adversarial case is important. If operators know their deletions are being monitored, they may stop deleting. But freezing your social media, when you normally post three times a day, is also a deviation from baseline. The detector measures deviations in both directions. A sudden drop to zero posting and zero deletion in military-adjacent categories is as anomalous as a spike in deletions."
"The detector does not require that operators delete. It requires that operators change behavior. Any change is a signal."
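The both-directions check in the speaker notes can be sketched as follows, assuming activity is tracked as posts per day; the threshold and data are illustrative:

```python
from statistics import mean, stdev

def behavior_change(baseline, current, threshold=3.0):
    """Flag deviation in EITHER direction: a deletion spike and a
    total posting freeze are both departures from baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

posting_baseline = [3, 4, 3, 2, 4, 3, 3, 5, 2, 3, 4, 3, 3, 4]  # posts per day

print(behavior_change(posting_baseline, 12))  # sudden posting spike
print(behavior_change(posting_baseline, 0))   # account goes silent
print(behavior_change(posting_baseline, 3))   # ordinary day
```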
Visual: Black slide.
The most informative signal is the one someone tried to remove. Deletion Detector: tools/deletion_detector/ MIT License. detector.py, no dependencies beyond Python.
Speaker notes: "Thank you. The detector is open source. It runs on any machine with Python 3.10. The demo mode requires no network access, no API keys, no accounts. Try it."
| Slides | Minutes | Section |
|---|---|---|
| 1-2 | 0:00-2:30 | The Iran hook |
| 3-4 | 2:30-5:00 | The deletion signal |
| 5 | 5:00-6:30 | The index |
| 6 | 6:30-9:00 | Live demo |
| 7-8 | 9:00-11:00 | Ethics and broader applications |
| 9-10 | 11:00-13:30 | Compound signals and adversarial case |
| 11 | 13:30-15:00 | Close |
Run python detector.py demo once before the talk to confirm output.

Format: 15-minute lightning talk
Audience: Crypto conferences, creative writing events, digital identity discussions
Required materials: Terminal with openssl installed, the scm7k key pair (or a demo key pair)
Visual: Black slide.
A KEY PAIR, NOT A PERSON Pseudonymous authorship in the age of AI-generated text
Speaker notes: "I am going to prove to you that I wrote a novel. I am not going to tell you who I am."
Visual: Three bullet points:
1. Pseudonymous authorship has no verification mechanism. 2. AI-generated text makes stylistic analysis unreliable. 3. Anyone can claim to be anyone on any platform.
Speaker notes: "Elena Ferrante has published for decades. Nobody has cryptographic proof that the next Elena Ferrante novel is by the same person. Satoshi Nakamoto published the Bitcoin whitepaper and vanished. Multiple people have claimed to be Satoshi. None have signed a message with the Genesis Block key."
"The Federalist Papers used 'Publius.' Kierkegaard wrote as invented authors. These pseudonyms relied on social trust: editors, publishers, consistent voice. None of them are independently verifiable. And now that AI can replicate any voice, stylistic analysis is no longer a reliable signal."
Visual: Simple diagram:
PRIVATE KEY --> signs --> MANUSCRIPT HASH PUBLIC KEY --> verifies --> SIGNATURE
Speaker notes: "The cryptography for this has existed for decades. Ed25519 key pairs. SHA-256 hashes. Detached signatures. The tools are pre-installed on every Mac and most Linux machines. The specification is trivial. The hard part was never technical. The hard part was: nobody wrote it down as a protocol for authors."
Visual: Six steps, numbered:
1. Generate Ed25519 key pair. Private key = your identity. 2. Hash your manuscript (SHA-256, canonical file order). 3. Write an attestation binding pseudonym + hash + public key. 4. Sign the attestation with your private key. 5. Distribute: public key, attestation, signature with the work. 6. Prove identity anytime: sign a message, post it, anyone verifies.
Speaker notes: Walk through each step briefly. Emphasize: "The private key is the identity. Not a username. Not a platform account. Not a writing style. The key. Possession of the private key is the sole proof of authorship. Lose it and you cannot prove you wrote your own book. Leak it and someone else can."
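Step 2's "canonical file order" can be sketched with the Python standard library; the directory layout and .txt glob below are assumptions for illustration, and the protocol spec (PROTOCOL.md) defines the actual canonicalization:

```python
import hashlib
from pathlib import Path

def manuscript_hash(directory):
    """SHA-256 over the manuscript files concatenated in sorted
    (canonical) filename order, so every verifier computes the
    same hash from the same files."""
    h = hashlib.sha256()
    for path in sorted(Path(directory).glob("*.txt")):
        h.update(path.read_bytes())
    return h.hexdigest()

# Hypothetical layout: ten chapter files, chapter_01.txt .. chapter_10.txt
# print(manuscript_hash("parallax_manuscript/"))
```

The sorted order matters: without a fixed ordering, two verifiers hashing the same ten files could disagree, and the attestation would be unverifiable.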
Visual: Three checkmarks:
[check] This manuscript was signed by this author [check] This new work is by the same person who wrote the previous one [check] This live statement was made by the author of this specific work
Speaker notes: "Three capabilities. Manuscript binding: this exact text was signed. Multi-work continuity: same key, same author. Live identity proof: sign a dated message, post it anywhere, anyone verifies with the public key."
"Notice what it does NOT prove: who you are. Where you live. Your gender, nationality, or any personal attribute. The protocol proves authorship. It does not prove identity."
Visual: The attestation text (abbreviated):
AUTHOR IDENTITY ATTESTATION I am the sole author of the work titled "PARALLAX." I publish under the name: scm7k Manuscript SHA-256: [hash] Public key SHA-256: [fingerprint]
Speaker notes: "I published a novel under the name scm7k. The novel is called PARALLAX. It is a speculative fiction thriller about prediction markets and reflexivity. The manuscript is ten text files. Their SHA-256 hash is printed in the book itself. The public key is printed in the book. The attestation and signature are distributed with the epub."
"If I publish a second novel, I sign a new attestation with the same private key. Anyone who verified the first attestation can confirm the second was signed by the same author. No publisher required. No platform required. No trust required."
Visual: Two scenarios:
KEY ROTATION (key not compromised): Sign a rotation notice with the OLD key. "My new key is [fingerprint]. This message is signed by my old key." Chain of trust is preserved. KEY COMPROMISE: Publish revocation certificate (signed with compromised key if available). If key is lost entirely: no cryptographic recourse. Social claim only.
Speaker notes: "The protocol handles key rotation cleanly. You sign a rotation notice with the old key. The chain is preserved. But if your key is compromised and you've lost it entirely, you cannot prove revocation cryptographically. This is a fundamental property: the key is the identity. There is no authority above the key."
"This is why the protocol recommends pre-signed revocation certificates. Generate one at key creation time. Store it separately. Publish it only if the key is compromised."
Visual: Comparison:
PGP/GPG: - Web of trust (requires social graph) - Key servers (centralized infrastructure) - Complex tooling (most users cannot operate) - Designed for encrypted communication SIGNED AUTHOR PROTOCOL: - No trust graph needed - No infrastructure needed - openssl only (pre-installed) - Designed specifically for authorship
Speaker notes: "PGP solves a different problem. It is a communication encryption system that happens to support signing. The Signed Author Protocol is a signing protocol designed for one use case: binding a pseudonymous identity to a body of work. It is deliberately minimal. The entire specification fits on two pages."
Visual: Switch to terminal.
Speaker notes: Live demo. Three commands. Under 60 seconds.
# Sign a message
echo -n "I am scm7k. Today is 2026-03-10. This talk is real." > message.txt
openssl pkeyutl -sign -inkey author_private.pem \
-in message.txt -out message.sig -rawin
# Show the signature (portable format)
base64 < message.sig
# Verify with the public key
openssl pkeyutl -verify -pubin -inkey author_public.pem \
-in message.txt -sigfile message.sig -rawin
"Signature Verified Successfully. I just proved, to anyone in this room or watching a recording, that the person standing here controls the private key that signed PARALLAX. I did not tell you my name. I did not tell you where I live. I told you nothing except: I hold the key."
[If using a demo key pair instead of the real scm7k key, say so. The mechanism is identical.]
Visual: Question:
If AI can write in any voice, how do you prove a human wrote something?
Speaker notes: "You don't. And that's the point. The Signed Author Protocol does not prove a human wrote the manuscript. It proves that the holder of a specific private key signed a specific manuscript. If the key holder used AI assistance, the protocol does not care. If the key holder is an AI, the protocol does not care."
"This sounds like a limitation. It is actually the correct design. Authorship in 2026 is not 'a human sat at a typewriter.' Authorship is accountability: who stands behind this work? The key holder stands behind it. That is the claim the protocol makes, and it is the only claim that is cryptographically verifiable."
Visual: Three concentric circles:
INNER: Crypto-native authors (already comfortable with keys) MIDDLE: Self-published authors (already comfortable with tooling) OUTER: Traditional publishing (requires cultural shift)
Speaker notes: "The protocol is live. The specification is CC0, public domain. Anyone can implement it. The natural first adopters are crypto-native authors who already understand key management. Self-published authors are next: they control their own distribution and can include the public key and attestation in their ebooks."
"Traditional publishing is the long game. A publisher including a Signed Author Protocol page alongside the copyright page would be a statement. It would say: we know who wrote this, and we can prove it, without revealing who they are."
Visual: Black slide.
The key is the identity. The signature is the proof. The person is irrelevant. Signed Author Protocol: tools/authorship_protocol/PROTOCOL.md CC0. No rights reserved. Use it freely.
Speaker notes: "Thank you. The full protocol specification is open. Take it. Use it. Improve it."
| Slides | Minutes | Section |
|---|---|---|
| 1-2 | 0:00-2:00 | The problem |
| 3-5 | 2:00-5:00 | The protocol |
| 6-7 | 5:00-7:00 | The scm7k experiment |
| 8 | 7:00-8:30 | Why not PGP |
| 9 | 8:30-11:00 | Live demo |
| 10-11 | 11:00-13:30 | AI authorship and adoption |
| 12 | 13:30-15:00 | Close |
Format: 2-hour interactive workshop Audience: 10-40 participants (scales with breakout groups) Facilitator requirements: Familiarity with prediction markets, comfortable running live demos, ability to moderate group discussion Participant requirements: Laptop with modern browser, no software installation needed
| Part | Duration | Format |
|---|---|---|
| Part 1: Presentation | 30 min | Lecture with slides |
| Part 2: Oracle Game | 30 min | Audience participation |
| Part 3: Reflexivity Simulator | 30 min | Hands-on exploration |
| Part 4: Apply the Framework | 30 min | Group exercise + discussion |
Required materials:

- tools/reflexivity_sim/index.html
- tools/reflexivity_sim/reflexivity.html
- tools/oracle_game/oracle.html
- python tools/deletion_detector/detector.py demo
- talk_reflexivity_30min.md (the 30-minute talk works as Part 1)

Use the first 20 slides from talk_reflexivity_30min.md with these modifications for a workshop setting:
By the end of Part 1, every participant should understand:
"You now have the framework. We are going to test it. First, you are going to be oracle validators."
Open tools/oracle_game/oracle.html on the projector.

Contract: "Rainfall recorded at Heathrow weather station on March 15, 2028, by 23:59 GMT."
Events: Met Office records show 0.2mm of precipitation at Heathrow at 14:22 GMT on March 15.
Expected result: near-unanimous YES. This round establishes the mechanic and shows that clear contracts are trivial.
Discussion (2 min): "This was easy. The data source was specified. The threshold was implicit (any rainfall). The measurement was objective. Now let's make it harder."
Contract: "Significant military action by a state actor against a sovereign nation initiated by 23:59:59 UTC Friday." Events:
Let participants vote. Show the running tally. Let the early votes influence the late votes. This is the point.
Discussion (3 min):
Contract: "The Parallax Turkey Stability Index will drop below 40 by end of trading day Thursday." Events:
Vote. Show the tally.
Discussion (3 min):
"In Round 1, you were measuring reality. In Round 2, you were negotiating reality. In Round 3, you were ratifying a reality that the market created. The mechanics were identical. The epistemological stakes were completely different."
Participants open tools/reflexivity_sim/index.html in their browsers. (Distribute the file via USB, local network, or URL. No server required; it is a single HTML file.)
Walk through as a group, everyone following along on their own screens:
Step 1: Baseline (2 min)
Step 2: Add media coverage (2 min)
Step 3: Add intelligence agency subscription (2 min)
Step 4: Add agents (2 min)
Step 5: Add oracle validators and government response (2 min)
Step 6: Measure the gap (2 min)
Participants explore on their own. Suggested experiments:
Open reflexivity.html. Trigger seed events (central bank resignation, bond payment missed). Watch capital flight cascades.

"What did you find? Did anyone find a parameter combination that prevented the loop from closing?" Collect observations. Note them on the whiteboard.
Split into groups of 3-5. Each group picks one real-world prediction system (not a prediction market; the framework applies more broadly):
Suggested systems:
Each group answers four questions (the Reflexivity Coefficient Framework from Appendix A):
Each group presents their analysis in 90 seconds. Facilitator draws connections between groups.
"PX-8891, the contract we opened with, is fictional. It is from a novel called PARALLAX. Every mechanism we discussed today is real. The reflexivity simulator, the oracle game, the deletion detector, these are tools extracted from the novel's conceptual framework. They work because the framework is grounded in real dynamics."
"The novel is by scm7k. It is available as an epub. The tools are open source."
"The question the novel asks, and the question you've been working on for two hours, is the same: what happens when a prediction system becomes good enough to be consumed by the people who can make the prediction come true? The answer is reflexivity. The answer has always been reflexivity. We just built systems fast enough to see it."
THE REFLEXIVITY COEFFICIENT FRAMEWORK
For any prediction system, answer four questions:
1. REFLEXIVITY COEFFICIENT (R)
Does the system's output change the thing it measures?
R < 1: Self-correcting. The system converges on truth.
R = 1: Indeterminate. System and reality co-evolve.
R > 1: Self-fulfilling. The system produces its own outcome.
What determines R in this system?
2. AGENT SHARE / AUTOMATION RATIO
What percentage of the system's dynamics are driven by
automated processes (algorithms, bots, AI agents)?
At what threshold does automation change the system's behavior?
3. ORACLE INTEGRITY
Who determines the ground truth that the system measures against?
How ambiguous are the resolution criteria?
What happens when the ground truth is genuinely undecidable?
4. DOWNSTREAM COUPLING
Who consumes this system's outputs?
What decisions are made based on them?
Do any consumers have the power to change the thing being measured?
How many steps from output to real-world consequence?
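The three regimes in question 1 can be demonstrated with a toy feedback loop, under the illustrative assumption that the event's probability drifts toward the market price with gain R while the market re-prices halfway toward that probability each step (a sketch for the workshop, not a model from the talk):

```python
def simulate(R, p0, steps=50, base=0.10):
    """Toy reflexive market: the price measures an event probability,
    but the probability itself responds to the price with gain R."""
    price = p0
    for _ in range(steps):
        # The system's output (price) changes the thing it measures.
        prob = min(max(base + R * (price - base), 0.0), 1.0)
        # The market converges halfway toward the perceived probability.
        price += 0.5 * (prob - price)
    return price

# Contract opens at $0.22; the true base rate of the event is 10%.
print(f"R=0.5 self-correcting: {simulate(0.5, 0.22):.2f}")  # settles at the base rate
print(f"R=1.5 self-fulfilling: {simulate(1.5, 0.22):.2f}")  # the price becomes the outcome
```

At R = 1 in this sketch, every price is its own fixed point: the system and reality co-evolve with no restoring force, which is the indeterminate regime in the framework.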
ORACLE GAME SCORECARD
Name: _______________
ROUND 1: _______________________________________________
Contract: _____________________________________________
My vote: [ ] YES [ ] NO
Confidence (1-5): ___
Consensus result: ___% YES ___% NO
Was I with consensus? [ ] YES [ ] NO
ROUND 2: _______________________________________________
Contract: _____________________________________________
My vote: [ ] YES [ ] NO
Confidence (1-5): ___
Did I change my vote after seeing early results? [ ] YES [ ] NO
Consensus result: ___% YES ___% NO
Was I with consensus? [ ] YES [ ] NO
ROUND 3: _______________________________________________
Contract: _____________________________________________
My vote: [ ] YES [ ] NO
Confidence (1-5): ___
Did I change my vote after seeing early results? [ ] YES [ ] NO
Consensus result: ___% YES ___% NO
Was I with consensus? [ ] YES [ ] NO
REFLECTION:
In which round did the early votes most influence your decision?
_______________________________________________
In which round did you feel least certain about what "truth" meant?
_______________________________________________
Parts 2 and 3 each have 2-3 minutes of slack built in. If Part 1 runs long, compress the oracle game debrief (Round 1 discussion can be shortened to 30 seconds). If Part 3 runs long, reduce free exploration time.