Verification Architecture: Foundational Literature Map

This map organizes the literature informing the conceptual framework developed in this research program. It reflects selected anchor points rather than an exhaustive review. The work on verification architecture builds on established debates across epistemology, institutional theory, and socio-technical systems; the references below map those conversations, with emphasis on the conditions under which claims remain credible across time.

This page is designed as a compact grounding layer for future papers, scrolls, and related work within the TOLARENAI lattice.

Epistemology — Explanation, Uncertainty, Testimony

Core works addressing explanation, knowledge evaluation, testimony, distributed knowing, and the conditions under which belief formation remains reliable.

  • Luciano Floridi et al. (2018) — AI4People Framework. Establishes normative grounding for AI and epistemic responsibility.
  • Helen Longino (2002) — The Fate of Knowledge. Clarifies social conditions under which knowledge claims are evaluated.
  • Alvin Goldman (1999) — Knowledge in a Social World. Examines structures of trust, testimony, and distributed knowing.
  • Cailin O'Connor and James Owen Weatherall (2019) — The Misinformation Age. Explores network dynamics shaping belief formation and distortion.

Philosophy — Limits, Time, and Conditions of Knowledge

Works that frame temporality, uncertainty, decision, and responsibility as structural conditions rather than temporary inconveniences.

  • Martin Heidegger (1927) — Being and Time. Treats temporality as a condition underlying understanding and interpretation.
  • Søren Kierkegaard (1844) — The Concept of Anxiety. Positions uncertainty as a structural condition enabling decision and action.
  • Hans Jonas (1984) — The Imperative of Responsibility. Addresses ethical obligation under conditions of uncertainty and technological power.

Institutional and Governance — Credibility Across Time

Foundational texts on institutions, distributed governance, power, and the stabilization of expectations across complex systems.

  • Douglass North (1990) — Institutions, Institutional Change and Economic Performance. Shows institutions as structures that stabilize expectations and claims.
  • Elinor Ostrom (1990) — Governing the Commons. Demonstrates distributed governance and credibility without central authority.
  • Susan Strange (1988) — States and Markets. Examines power and credibility within global economic systems.

Socio-Technical Systems — AI, Information, and Mediation

Key sources on documentation, opacity, accountability, and the epistemic risks introduced by AI-mediated environments.

  • Timnit Gebru et al. (2021) — Datasheets for Datasets. Frames documentation as a condition for accountability and traceability.
  • Emily M. Bender et al. (2021) — On the Dangers of Stochastic Parrots. Clarifies limits of AI systems and risks to epistemic integrity.
  • Solon Barocas et al. (2023) — Fairness and Machine Learning. Examines system-level implications of opacity and decision processes.

Information, Archives, and Provenance — Record, Memory, Continuity

Texts connecting recordkeeping, classification, archival trust, and decision under uncertainty to continuity across time.

  • Geoffrey C. Bowker (2005) — Memory Practices in the Sciences. Shows classification and recordkeeping as conditions for knowledge continuity.
  • Luciana Duranti (2015) — Archives and Trust. Frames preservation of records as a foundation for accountability.
  • Leonard J. Savage (1954) — The Foundations of Statistics. Addresses decision-making under uncertainty and evidential limits.

Connection to the Verification Trilogy

The Verification Trilogy articulates a minimal framework for preserving credible claims across time.

  • BlockClaim — Establishes origin and assertion of claims.
  • TransferRecord — Preserves custody, continuity, and transformation.
  • WitnessLedger — Enables distributed verification and independent assessment.

Together, these elements formalize conditions under which claims remain inspectable, attributable, and evaluable within complex and evolving systems.
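The three elements above can be sketched as minimal data structures. This is an illustrative sketch only, not the trilogy's actual specification: the field names (`author`, `assertion`, `custody`), the hash-based digest, and the quorum threshold in `verified` are all assumptions introduced here to make the inspectable/attributable/evaluable roles concrete.

```python
from dataclasses import dataclass, field
from hashlib import sha256
import json

@dataclass(frozen=True)
class BlockClaim:
    """Origin and assertion of a claim (fields are illustrative, not canonical)."""
    author: str
    assertion: str
    timestamp: float

    def digest(self) -> str:
        # Stable content address so later records can refer to this exact claim.
        payload = json.dumps([self.author, self.assertion, self.timestamp])
        return sha256(payload.encode()).hexdigest()

@dataclass
class TransferRecord:
    """Ordered custody and transformation history for one claim."""
    claim_digest: str
    custody: list = field(default_factory=list)  # (holder, note) pairs, oldest first

    def transfer(self, holder: str, note: str) -> None:
        self.custody.append((holder, note))

@dataclass
class WitnessLedger:
    """Independent attestations, keyed by claim digest."""
    attestations: dict = field(default_factory=dict)

    def witness(self, claim: BlockClaim, witness_id: str) -> None:
        self.attestations.setdefault(claim.digest(), set()).add(witness_id)

    def verified(self, claim: BlockClaim, quorum: int = 2) -> bool:
        # A claim counts as verified once enough independent witnesses attest to it;
        # the quorum of 2 is an arbitrary placeholder.
        return len(self.attestations.get(claim.digest(), set())) >= quorum
```

In use, a claim is asserted once, its custody is appended to as it moves, and verification emerges only from independent witnesses, mirroring the separation of origin, continuity, and assessment described above.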