About

Adversarial Systems & Complexity Research Initiative

What We Study

ASCRI investigates dynamic networks of agents complex enough that formalized friction constraints make equilibria unreachable. These systems are adversarial not against external threats but against themselves—their own coordination costs, delegation failures, and alignment drift generate the structural conflicts that define their behaviour. Yet they persist. Not through consensus, but through dissensus: the ongoing negotiation of competing optimization targets that never fully converge.

The initiative formalizes this persistence through the Axiom of Consent framework, which provides a measurable friction function over three kernel variables—alignment, stakes, and entropy. The framework predicts where coordination will fail, why certain institutional arrangements collapse while others endure, and what the minimum viable conditions for delegation are across political, economic, and computational systems. We do not study how systems reach equilibrium. We study why they never do, and what they do instead.
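To make the kernel-variable idea concrete, here is a purely illustrative sketch in Python. The actual functional form of the friction function is not given in this text, so the weighting below, the 0-to-1 scaling of the variables, and the viability threshold are all assumptions introduced for illustration; only the three variable names come from the framework.

```python
from dataclasses import dataclass

@dataclass
class KernelState:
    """The three kernel variables, each scaled to [0, 1] (scaling is an assumption)."""
    alignment: float  # how closely the agents' optimization targets coincide
    stakes: float     # cost borne if coordination fails
    entropy: float    # unpredictability of the environment

def friction(state: KernelState) -> float:
    """Toy friction score: higher values predict coordination failure.

    Hypothetical form: misalignment (1 - alignment) amplified by the
    average of stakes and entropy. Not the framework's actual equation.
    """
    return (1.0 - state.alignment) * 0.5 * (state.stakes + state.entropy)

def delegation_viable(state: KernelState, threshold: float = 0.25) -> bool:
    """In this sketch, delegation holds while friction stays below a threshold."""
    return friction(state) < threshold
```

Even this toy version exhibits the qualitative behaviour described above: perfectly aligned agents generate no friction regardless of stakes, while misaligned agents in high-stakes, high-entropy conditions cross the viability threshold and delegation fails.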

Methodology

ASCRI is interdisciplinary by design, not by aspiration. The research programme spans political economy, computational finance, philosophy of mind, and AI safety—not because these are similar fields, but because friction dynamics are substrate-independent. The same formal machinery that predicts governance legitimacy also predicts cryptocurrency market responses to regulatory shocks. The same alignment function that models principal-agent delegation in democracies models reward misspecification in reinforcement learning systems.

This is not metaphor. The friction equation generates quantitative predictions that are testable against empirical data in each domain. Programme IV (Crypto Microstructure) validates the framework against high-frequency financial data. Programme I (Consent Mechanics) derives governance legitimacy as a function of consent alignment. Programme V (Process Philosophy) extends the framework to persistence conditions for any replicating system. Each programme serves as an independent stress-test of the formal apparatus.

The methodological commitment is to formal generality with empirical accountability. The framework must be abstract enough to apply across substrates and concrete enough to generate falsifiable predictions within each one.

Institutional Context

ASCRI is the research programme of Dissensus AI, a governance alignment research lab. The initiative publishes independently under its own name while operating within the organizational structure of Dissensus AI. Research outputs are published as preprints on Zenodo, arXiv, and SSRN, and submitted to peer-reviewed journals including Digital Finance (Springer), AI & Ethics (Springer), Synthese, and History and Philosophy of the Life Sciences (Springer Nature). All papers are open access under CC BY 4.0.

The open-access commitment is non-negotiable. Research on coordination friction in multi-agent systems should not itself be gated behind coordination-friction-generating paywalls. All data, code, and publications produced by ASCRI are made freely available.

Related Properties

dissensus.ai

Dissensus AI — the parent organization. Governance alignment research lab, organizational information, and services.

farzulla.org

Farzulla Research — academic profile of the principal investigator. Paper PDFs, citation information, and ORCID record.

resurrexi.io

Resurrexi Labs — research infrastructure. Cluster dashboards, experimental deployments, and computational resources.