Foxtrot Jellyfish

Protocols for agentic knowledge work

Open-source specifications and engines for AI systems that accumulate, coordinate, and survive tool migrations. Platform agnostic. Markdown native. Built to compound.

The hard problems in AI-augmented work aren't model quality. They're state, continuity, and coordination. We write the protocols so your knowledge survives every platform shift, every model upgrade, every context window.

What we believe

Protocols before products

Durable rules written in plain markdown. No SDK lock-in, no proprietary formats. If the protocol is right, the tooling writes itself.

Plug-and-play everything

Swap your LLM provider, your IDE, your orchestration layer. Anthropic, OpenAI, Ollama, Gemini—the adapter is an implementation detail, not an identity.

Your files, your machines

Git repos, markdown files, local-first. No call-home telemetry, no cloud dependency. Air-gapped and sovereign when your threat model requires it.

The bets we're placing

Convictions that shape every design decision. Not ideology—lessons from a year of building agentic systems in production, daily.

Platform agnostic
Universal adapter layers
Markdown as the interchange format
Micro-chats over mega-context
Agents as isolated processes
Git as the memory bus
Append-only accumulation
Privacy-preserving federation

The Substrate

Eight protocols that define how AI-augmented knowledge workspaces should behave. No code. No runtime. No dependencies. Think POSIX for knowledge work, or the Twelve-Factor App for AI agents.

01 Accumulation

Content is append-only and irreplaceable. Never overwrite history—the value is in the accumulation.
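A minimal sketch of the append-only rule, assuming a plain markdown log file; the `append_entry` helper and its heading format are illustrative, not part of the spec. The point is structural: the file is only ever opened in append mode, so history cannot be overwritten.

```python
from datetime import datetime, timezone
from pathlib import Path

def append_entry(log: Path, text: str) -> None:
    """Add a timestamped entry to the log; never truncate or rewrite it."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    # "a" mode makes the append-only rule a property of the code path itself
    with log.open("a", encoding="utf-8") as f:
        f.write(f"\n## {stamp}\n\n{text}\n")
```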

02 Session Continuity

Every session produces a resumable handoff. Continuity lives in files, not in model memory.
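One way a resumable handoff might be produced, sketched in Python; the `write_handoff` helper and its Done/Next section names are hypothetical, chosen only to show continuity living in a diffable file rather than in model memory.

```python
from pathlib import Path

def write_handoff(path: Path, done: list[str], next_steps: list[str]) -> None:
    """Persist session state as markdown so the next session resumes from files."""
    lines = ["# Handoff", "", "## Done"]
    lines += [f"- {item}" for item in done]
    lines += ["", "## Next"]
    lines += [f"- {item}" for item in next_steps]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
```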

03 Progressive Disclosure

Navigate, don't load. Layers of increasing detail so context stays lean and agents stay fast.
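The navigate-don't-load idea can be sketched as layered reads: pull in only as many layers of detail as the task needs. The `INDEX.md`/`SUMMARY.md`/`NOTES.md` layout below is an assumed example, not a prescribed structure.

```python
from pathlib import Path

def load_context(root: Path, topic: str, depth: int = 0) -> str:
    """Load index -> summary -> full notes, stopping at the requested depth."""
    # Hypothetical three-layer layout; deeper layers cost more context
    layers = ["INDEX.md", f"{topic}/SUMMARY.md", f"{topic}/NOTES.md"]
    parts = []
    for layer in layers[: depth + 1]:
        p = root / layer
        if p.exists():
            parts.append(p.read_text(encoding="utf-8"))
    return "\n\n".join(parts)
```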

04 Agent Isolation

Specialized work in isolated contexts with clear scope boundaries. One agent, one domain, one concern.

05 Learning Extraction

Experience becomes classified, reusable knowledge. The system teaches itself what matters.

06 Context Pressure

Preserve state before the context window fills. Graceful degradation, not catastrophic amnesia.
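A toy sketch of the pressure check: trigger a state-preserving checkpoint once usage crosses a fraction of the window, well before overflow. The 80% default is an arbitrary illustration, not a value from the spec.

```python
def should_checkpoint(used_tokens: int, budget: int, threshold: float = 0.8) -> bool:
    """Return True when it's time to persist state, before the window fills."""
    return used_tokens >= int(budget * threshold)
```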

07 Temporal Anchoring

Every entry has a when and a where. Timestamps and spatial context are non-negotiable metadata.
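The when-and-where requirement might look like this in practice; the field names and the machine/workspace split are illustrative assumptions.

```python
from datetime import datetime, timezone

def anchor(text: str, machine: str, workspace: str) -> dict:
    """Stamp an entry with its non-negotiable temporal and spatial metadata."""
    return {
        "when": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "where": {"machine": machine, "workspace": workspace},
        "text": text,
    }
```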

08 Feedback Loop

The system teaches itself what it cares about. Unsorted signal incubates into new categories organically.
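One hedged sketch of incubation: an unsorted tag is promoted to a category once it recurs past a threshold. The `crystallize` helper and its threshold are invented for illustration; the protocol itself does not prescribe a mechanism.

```python
from collections import Counter

def crystallize(tags: list[str], threshold: int = 3) -> list[str]:
    """Promote unsorted tags to categories once they recur often enough."""
    return sorted(t for t, n in Counter(tags).items() if n >= threshold)
```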

The Cortex

A reference implementation: a self-hosted Elixir/OTP orchestration engine that enforces Substrate protocols at runtime. Closer to an OS than a framework.

The Cortex is one possible engine. You don't need it. The Substrate works just as well inside a .cursor/rules/ directory, a CLAUDE.md, an AGENTS.md, a set of shell scripts, or a napkin.

Device & instance coordination

One human, many machines, many organizations, many legal entities. The emerging protocol for how sovereign AI systems share signal without merging secrets.

The handshake problem

Same person, different machines—laptop, tower, phone, work device

Same person, different organizations—personal projects vs. employer

Clearance-aware exchange—share what's safe, redact what isn't

Transport-agnostic—git, email, API, shared repo, whatever works

Ephemeral by default—signals don't persist in the transport layer

No cloud dependency—air-gapped instances can still participate
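The clearance-aware exchange above could be sketched as a simple outbound filter; the three-tier `public`/`internal`/`secret` ladder is an assumed example, not part of the protocol.

```python
def redact(signals: list[dict], clearance: str) -> list[dict]:
    """Share only signals at or below the recipient's clearance; drop the rest."""
    levels = {"public": 0, "internal": 1, "secret": 2}  # hypothetical tiers
    allowed = levels[clearance]
    return [s for s in signals if levels[s["level"]] <= allowed]
```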

How we work

This isn't theoretical. These protocols emerged from a year of daily agentic systems engineering—hundreds of sessions, thousands of voice captures, dozens of agent roles, across multiple machines and contexts.

Markdown all the way down

Agent roles, workspace rules, session handoffs, protocol specs. If it can be expressed as a markdown file, it should be. Portable. Diffable. Human-readable.

Lean context, tight scope

Micro-chats over mega-context windows. Each session is small, focused, and produces a durable artifact. The workspace remembers so the agent doesn't have to.

Organic domain growth

Start with one agent. As patterns cluster, new domains crystallize. The system teaches itself what it cares about through the feedback loop.

Systems consulting

Building an agentic workflow? Evaluating orchestration patterns? Trying to make your AI systems survive the next platform shift? Let's talk.