An essay on knowledge, attribution, and the infrastructure that connects them.
You write something. A paper, a note, an observation that took years to form. It goes out
into the world. Someone reads it, builds on it, transforms it into something new. That work travels
further still.
You never know where it lands. The connection between you and everything that grew from your
thinking—severed at the moment of publication. You are the root of a tree you cannot see.
This is how knowledge has always worked. Ideas flow outward, transformed and recombined, losing their
connection to origin. We accept this as natural—the price of contribution.
The current reality
Now consider what is actually happening. AI systems are trained on the sum of human knowledge—your
papers, your writing, your thinking. The models learn patterns from work you spent years developing.
They generate responses shaped by your contributions.
You receive nothing. No attribution. No compensation. No knowledge of whether your work mattered at all.
Researchers cite each other, but citations are symbolic. They don't pay rent. Platforms host your writing
and capture the value of aggregation. Journals charge readers for access to work you gave away. The
infrastructure of knowledge was built to serve institutions, not contributors.
The economics of extraction — on value capture in knowledge systems
Consider the path of a research paper. You spend months—perhaps years—developing an insight.
You write it up, submit it to a journal. The journal charges readers $30 to access it. Or
perhaps it's open access, and your institution pays $3,000 for publication.
Your paper is cited by others. Those citations help their careers, help the journal's impact
factor, help the platforms that index everything. Someone trains a language model on your
field. Your precise formulations become part of its weights.
At each step, value is created. At no step do you participate in it beyond the initial
transaction—which often costs you money.
This isn't a bug. It's the architecture. Current systems have no mechanism to trace value
back to origin, so they don't. The infrastructure wasn't built for attribution at scale.
What if knowledge remembered its origins?
Imagine a different architecture. Every piece of knowledge carries with it the complete chain of what it
built upon. Not as metadata that can be stripped away, but as cryptographic structure—inseparable from
the content itself.
When someone queries that knowledge, payment flows backward through the entire chain. Automatically. To
everyone who contributed to making it possible.
This is not a platform you upload to. It's a protocol that runs locally—on your machine, under your
control. Your second brain, actually yours.
You decide what to share. You can participate in the network while keeping everything private. You can
query others' knowledge without contributing your own. Or you can publish your foundational work and
earn perpetually as others build upon it.
Why local-first matters — on sovereignty and trust
Every platform asks you to trust them with your data. They promise to keep it safe, to use it
responsibly, to let you delete it when you want. These promises are architectural lies—the
data lives on their servers, governed by their incentives.
A local-first protocol inverts this. Your knowledge lives on your machine. The network is a
communication layer, not a storage layer. When you share something, you're serving it
directly. When you stop, it stops.
This changes everything about what "sharing" means. You're not giving something away to be
held by others. You're opening a door that you can close.
The protocol doesn't require trust in any central party. Cryptographic proofs replace
institutional promises. The math doesn't change its terms of service.
The layers of knowing
Knowledge isn't flat. It has structure—not layers stacked on top of each other, but a graph at the center
with contributions flowing in and insights flowing out. The protocol makes this structure explicit.
L2 is the core: your personal knowledge graph. Entities and relationships,
RDF-compatible, pluggable into the semantic web. This is where understanding actually lives—the mesh of
connections between concepts, people, ideas. Always private, always yours.
L0 and L3 are counterparts that interface with your graph from opposite
directions. L0 is contribution—documents, notes, research that feed into your understanding. L3 is
synthesis—insights that emerge from it, crystallizations of what you've learned that others might build
upon.
L1 extends outward from both. Extracted facts, claims, statistics that make your sources
and insights searchable, quotable, verifiable. The tendrils that let others find what you've created.
Every contribution and synthesis maintains cryptographic links through the graph. When you create an L3
insight, the sources that shaped your understanding are permanently, verifiably part of its provenance.
The chain traces back through your graph to every foundational contribution.
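The linkage can be sketched concretely. Below is a minimal Python illustration, not the protocol itself (the reference implementation is in Rust): the payload shapes are invented for the example, and only the root_L0L1 name comes from the spec. What it demonstrates is that provenance hashes are part of the content being hashed, so links to sources cannot be stripped without changing the insight's own identity.

```python
import hashlib
import json

def content_hash(payload: dict) -> str:
    """Address content by the SHA-256 of its canonical JSON serialization."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# L0: a foundational contribution (a document). Payload shape is hypothetical.
l0 = {"layer": "L0", "body": "Field notes on reef ecosystems, 2019-2023."}
l0_id = content_hash(l0)

# L1: an extracted claim, linked to its L0 source by hash.
l1 = {"layer": "L1", "claim": "Observed events doubled over the period.",
      "source": l0_id}
l1_id = content_hash(l1)

# L3: a published insight. Its provenance (the root_L0L1 list of
# foundational hashes) is inside the hashed content itself.
l3 = {"layer": "L3", "insight": "Recovery windows are shrinking.",
      "root_L0L1": [l0_id, l1_id]}
l3_id = content_hash(l3)

# Tampering with provenance is detectable: stripping a source
# changes the insight's content address.
stripped = dict(l3, root_L0L1=[])
assert content_hash(stripped) != l3_id
```

Because the root_L0L1 array sits inside the hashed payload, republishing the insight without its sources produces a different content address, and the original chain stays verifiable by anyone holding the hashes. (L2 never appears here: the graph that produced the synthesis stays on your machine.)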
On the privacy of L2 — why your graph stays yours
Your L2 represents how you understand the world—which entities you consider important, how
you resolve ambiguities, what relationships you see that others might not. This is valuable
intellectual work, but it's personal.
Two people reading the same papers will build different graphs. These differences are
features, not bugs. They represent genuine intellectual diversity.
The protocol keeps L2 private by design. It never leaves your machine, is never queryable by
others. Its value surfaces only when you create L3 insights—and then it's the insight that's
shared, not the underlying structure of your thinking.
This is how synthesis actually works. You don't share your entire mental model; you share the
conclusions you've drawn from it.
The paths
The protocol doesn't prescribe how you participate. It provides infrastructure; you decide what to build
on it.
Contribute foundations
Publish your research, writing, or domain expertise. Earn perpetually as others
build upon your work. The protocol routes value backward through derivation chains—you don't
need to keep producing to keep earning.
Synthesize privately
Build your own understanding without sharing anything. Query others' knowledge,
construct your L2 graph, develop insights—all locally, all private. Participate in the network
as a consumer.
Build applications
Create tools that interact with the protocol. Search interfaces, visualization
layers, specialized extractors. The network becomes infrastructure for knowledge applications.
Deploy agents
Connect AI systems that query knowledge and pay for access. Every query
triggers payment through the provenance chain. Your agents participate in fair exchange.
On building agentic businesses — a different model of work
Consider what this enables. You have years of expertise in a domain. You've published papers,
developed frameworks, accumulated knowledge that took a career to build.
Today, that knowledge feeds AI systems that compete with you. You're training your
replacement.
On the protocol, you publish your knowledge with provenance. AI agents that want to use your
domain expertise query it—and pay. Your years of work become infrastructure that generates
value continuously.
You can build agents of your own that combine your knowledge with others'. You can negotiate
with organizations about shared research. You can see exactly how your work is being used
and by whom.
This isn't about working harder. It's about building something once that participates in all
derivative value. A different relationship between effort and outcome.
The agentic future
AI systems will proliferate. They will query vast amounts of knowledge, synthesize constantly, create
derivative works at machine speed. This is not speculation; it's already happening.
The question is whether you're part of that exchange or outside it. Whether the knowledge you've
developed participates in what agents create, or simply gets absorbed without attribution.
The protocol provides standard infrastructure for AI-human knowledge exchange. Every query triggers
payment to all contributors in the provenance chain. Agents become participants in a fair economy, not
extractors from a commons.
Not extraction without attribution, but transaction with fair compensation.
This creates alignment. AI systems benefit from high-quality human knowledge. Humans benefit from AI
consumption of their work. The protocol mediates between them—ensuring that value flows both ways.
The deeper implication
What's really at stake is a model of work.
Today, you're paid for what your knowledge is worth in the moment—or more accurately, for what someone
will pay for it in the moment. A paper's value is its publication fee. A book's value is its advance.
A consultant's value is the billable hour.
But knowledge generates value over time, through chains of derivation you cannot see. Your foundational
insight becomes part of someone's synthesis, which becomes part of an AI's training, which shapes
decisions you'll never know about.
The protocol makes this value legible—and capturable. Contribute something foundational, and participate
in every derivative work, proportionally, forever.
This is not passive income. It's structural participation in the knowledge economy you helped build.
The 95/5 distribution — on how value flows
When someone queries an L3 insight, payment flows through the provenance chain. The
synthesizer who created the L3 keeps 5%—a fee for the work of synthesis. The remaining 95%
flows to foundational contributors: everyone whose L0 or L1 content was used.
This ratio is a design choice, enforced by the protocol. It cannot be negotiated away, cannot
be altered by platforms, cannot be captured by intermediaries. The math ensures that
foundational work is valued.
If your L0 document feeds an L3 through three different derivation paths, you receive
proportional payment through all three. If your work appears multiple times in a
provenance chain, your share increases accordingly.
The synthesizer still benefits—they receive the 5% fee plus any foundational shares they
hold. Creating valuable synthesis is rewarded. But not at the expense of those who made
synthesis possible.
The Protocol
Nodalync
Everything described above is implemented in a protocol specification and reference implementation. Open
source, cryptographically verifiable, running on Hedera for settlement.
The protocol is infrastructure. It specifies content addressing, provenance chains, payment channels, and
settlement—the minimal structure needed for fair knowledge exchange. Everything else is built on top.
Technical architecture — for developers
Content addressing: All content is referenced by SHA-256 hash. Tamper-proof,
verifiable, unique.
Provenance: Every piece of content maintains a cryptographic chain to its
sources. L3 insights carry the complete root_L0L1 array—every foundational contributor.
Local-first: Content lives on owner nodes. The network is a communication
layer (libp2p), not a storage layer. DHT for discovery, direct queries for content.
Payment channels: Off-chain payment channels for micropayments. Batch
settlement to Hedera for finality. Sub-second transactions, fraction-of-a-cent costs.
MCP integration: Model Context Protocol server for AI agent access. Budget
controls, provenance tracking, automatic payment.
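The payment-channel idea can be illustrated with a toy model. This sketch omits everything that makes real channels safe (signatures, channel state, dispute handling, the Hedera settlement contract); it only shows the batching economics, where many sub-cent payments become a single settlement transaction:

```python
from collections import defaultdict

class PaymentChannel:
    """Toy off-chain channel: micropayments accumulate locally, and only
    the net balances are submitted to the settlement layer in one batch."""

    def __init__(self):
        self.pending = defaultdict(float)
        self.settled = []

    def micropay(self, payee: str, amount: float) -> None:
        # Off-chain: a local ledger update, no on-chain transaction.
        self.pending[payee] += amount

    def settle(self) -> dict:
        # One on-chain batch covering all accumulated balances.
        batch = dict(self.pending)
        self.settled.append(batch)
        self.pending.clear()
        return batch

channel = PaymentChannel()
for _ in range(1000):
    channel.micropay("alice", 0.0001)  # 1000 sub-cent payments...
batch = channel.settle()               # ...one settlement transaction
```

A thousand $0.0001 payments cost one on-chain transaction at settlement time, which is what makes per-query micropayments economically viable.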
Design decisions — on why the protocol is shaped this way
No native token: The protocol uses HBAR directly. This eliminates token
bootstrapping complexity, leverages existing liquidity, and avoids regulatory ambiguity. The
focus is on proving the knowledge economics model, not creating a speculative asset.
L2 is always private: Your knowledge graph never leaves your machine. This
isn't a limitation—it's protection. Your unique perspective on how knowledge connects is
yours alone.
95/5 split is protocol-enforced: The distribution ratio isn't a setting or a
negotiation. It's cryptographic structure. This prevents value capture by intermediaries and
ensures foundational contributors always receive the majority.
Discovery is application-layer: The protocol provides hash-based lookup, not
search. Search engines, directories, and recommendation systems are built on top. This keeps
the protocol minimal and focused.
Current status — what exists today
Protocol specification v0.2.1 is complete. Reference implementation in Rust. CLI for
publishing, querying, and synthesis. MCP server for AI agent integration.
Settlement contract deployed on Hedera testnet. Payment channels operational. Provenance
verification working end-to-end.
What's next: mainnet deployment, improved L1 extraction (currently rule-based, AI extractors
planned), expanded documentation, community tooling.
Join the network
The infrastructure for fair knowledge exchange is ready. What you build on it is up to you.