We are actively seeking collaborators who want to stress-test, extend, or dismantle the framework, and in particular researchers willing to attack specific open problems within it.
The mapping between replicator dynamics and optimization processes has been described informally. We need category theorists to formalize the structure-preserving maps between these domains and determine whether ROM constitutes a genuine functor or something weaker.
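One concrete instance of the informal mapping is the known equivalence between discrete-time replicator dynamics and the multiplicative-weights update with log-fitness payoffs. The sketch below is illustrative only (the function names and the three-strategy fitness vector are our own choices, not part of the framework); it shows the two updates coinciding numerically:

```python
import numpy as np

def replicator_step(x, f):
    """One discrete replicator update: x_i <- x_i * f_i / (mean fitness)."""
    fx = x * f
    return fx / fx.sum()

def mwu_step(x, payoff, eta):
    """Multiplicative-weights update: reweight by exp(eta * payoff), renormalize."""
    w = x * np.exp(eta * payoff)
    return w / w.sum()

f = np.array([1.0, 1.5, 2.0])          # fixed fitness for 3 strategies
x = np.ones(3) / 3                      # uniform initial population

# With payoff = log(f) and eta = 1, MWU reduces to the replicator update,
# since x * exp(log f) = x * f before normalization.
assert np.allclose(replicator_step(x, f), mwu_step(x, np.log(f), 1.0))

# Iterating the replicator update concentrates mass on the fittest strategy.
for _ in range(200):
    x = replicator_step(x, f)
print(np.round(x, 3))  # -> [0. 0. 1.]
```

Whether this pointwise correspondence lifts to a structure-preserving map between the two domains as a whole is exactly the question posed above.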
The framework derives governance legitimacy as a function of consent alignment. This needs adversarial testing against real-world political systems—particularly edge cases where high-alignment regimes are illegitimate or low-alignment systems persist stably.
The framework claims friction dynamics are substrate-independent—that the same formal machinery applies to biological, institutional, and computational systems. AI safety researchers are uniquely positioned to identify where this claim breaks down in artificial agent architectures.
The hedging paradox predicts that risk-mitigation instruments eventually generate the systemic risk they were designed to prevent. We need economists to identify market structures where this prediction fails—where hedging genuinely reduces systemic risk without recursive amplification.
The consciousness monograph argues for eliminative monism—that phenomenal consciousness is an artefact of optimization rather than a fundamental feature of reality. Philosophers of mind who can identify fatal objections to this position are particularly welcome.
ASCRI operates under Dissensus AI. For services, consulting, and organizational enquiries, visit dissensus.ai.
For academic correspondence, paper-specific questions, or data requests, use the email above. For press, partnerships, or institutional enquiries, contact Dissensus AI directly.