§1 · Executive Summary
Silmari is a portable, user-owned memory substrate for senior knowledge workers and the enterprises that hire them. The product captures three distinct layers of professional AI context — domain encoding, workflow calibration, and the artifact / capability layer — and makes that context portable across every AI client, every employer, and every part of life. The capture mechanism is MCP-native; the storage primitive is a folgezettel-style graph in which structure emerges from use rather than being imposed top-down; the federation mechanism is a coupling / decoupling protocol that lets an enterprise temporarily couple to an operator's substrate without ever taking ownership of it.
The pitch is to seed-stage investors. The round is $1.3M to $3M depending on the deck version: the v12 deck's visible copy shows $1.3M, the canonical v12 pitch-deck markdown references $3M, and the discrepancy is intentional, reflecting raise-size flexibility based on lead-investor terms. The runway is 24 months. The Series A exit criteria are: 500+ paying individual seats at approximately $150/mo blended (~$1M ARR individual), 20+ paying enterprise engagements on the coupling protocol, managed-cloud closed alpha shipped, and Forward Deployed Operator time-to-competence under 10 weeks across five or more hires.
The strategic asymmetry is this: every existing memory-adjacent player builds for either the developer (Mem0, Letta, Zep, Pinecone) or the consumer (ChatGPT Memory, Auren, Memories.ai). No one is building the prosumer / late-stage knowledge-worker substrate that travels with the operator across tools and employers. Silmari claims that unclaimed territory and pairs it with a 25-year founder-market-fit specialization loop, a Stanford-credentialed technical thesis, and a 1M-connection talent operator who can fill the Forward Deployed Operator bench from the displaced-knowledge-worker pool.
§2 · Canonical Four-Line Anchor
Used across every customer-facing surface for Audience C (the late-stage knowledge worker / FDO pipeline) and adapted with minimal edits for B2B and investor surfaces:
Today: you have billions of conversations with AI every day. They learn how you work, live, and play.
Tomorrow: you switch tools, change jobs, get replaced. It's all gone.
Until now.
SAI + Silmari keeps it. Across every AI. Across every employer. Forever.
The structural pattern is a four-beat narrative arc — today → tomorrow → break → resolution — that maps cleanly onto the deck's Slide 3 (Why Now) sequence. The arc was selected after VC-discourse research confirmed that no other VC or memory startup currently owns the cross-tool / cross-employer portability framing. Mem0 used "memory passport" once; Sequoia / Buhler bundled portability with identity rather than memory; Bessemer canonized "memory as new moat" without naming portability as the defensibility mechanism.
§3 · Full Deck Transcript with Slide-by-Slide Strategic Rationale
What follows is the verbatim visible copy of all eleven slides, paired, where recorded, with the presenter notes spoken alongside each slide and a strategic-rationale annotation explaining the choice of each copy element.
Slide 1 · Cover
Visible copy
Silmari. Personal, portable memory for the Agentic AI era.
- CATEGORY: AI + Human knowledge-work infrastructure
- STAGE: Seed raising $1.3M
- FOUNDER: Maceo Jourdan · me@maceojourdan.com · 602.510.9800
Presenter intent
Quiet open. Read the tagline, do not sell it. The spoken framing is: "Eleven slides. Three questions: what's the problem, how big is the company that solves it, who's the team. That's what you came here to decide. Let's go."
Strategic rationale
The cover headline went through ~12 candidates. The current line ("Personal, portable memory for the Agentic AI era") claims the portability angle that VC discourse research surfaced as unclaimed territory and inserts "Agentic AI" as the category cue VCs already pattern-match on. Earlier candidates ("Memory is the new moat. Silmari is the substrate.") were rejected for being too consensus-co-opting; the current line plants the personal+portable axis the deck argues throughout.
Slide 2 · Problem
Visible copy
THE PROBLEM. Every senior knowledge worker is building the most valuable asset of their career inside AI platforms they don't own.
2–5× productivity gap — calibrated vs fresh.
When a senior operator switches AI providers, changes jobs, or gets fired to be "replaced by AI," that asset is lost. None of the current platforms have any incentive to build the infrastructure to carry it.
At senior operator comp ($200K–$1M+), that gap is a $100K–$500K annual productivity loss per operator.
Presenter intent
"You, right now — if you switched from Claude to whatever Anthropic ships next, or your fund was acquired and you moved to a new firm with different AI tools, you'd lose months of calibration. That's the pain. For you it's annoying. For a senior litigator, a Series-A operator, a specialized ops consultant, it's six figures of productivity every single time."
Strategic rationale
The problem is framed as asset destruction rather than friction. The audience (seed VC) computes the dollar magnitude immediately when given comp band × productivity gap. The "2–5x productivity gap" is calibrated to defensible practitioner observation (Cursor-style usage data; Anthropic and OpenAI public statements about the productivity multipliers senior developers see when models are properly contextualized). The "platforms have no incentive" line directly mirrors Basis Set Ventures' Mem0-investment quote about lab incentive misalignment.
Slide 3 · Why Now
Visible copy
WHY NOW.
Context and memory may be the new moats. Switching costs in AI are already emotional. Tomorrow you switch tools, change jobs, or get replaced. It's all gone.
Until now.
Now, Silmari keeps it. Across every AI. Across every employer, work and play.
Supporting evidence (currently commented out in HTML, retained as fallback)
- Yann LeCun, Turing Award: "We are going to have AI systems that have humanlike and human-level intelligence, but they're not going to be built on LLMs." MIT Technology Review, January 22, 2026. $1.03B exit from Meta into AMI Labs.
- Richard Sutton, 2024 Turing Award: "They have the ability to predict what a person would say. They don't have the ability to predict what will happen." Dwarkesh Podcast, September 26, 2025.
- 76% of 475 AI researchers say scaling to AGI is unlikely. AAAI 2025 Presidential Panel.
Presenter intent
"Four beats. Let each one land. In 1998, Google organized a web that already existed. That made one company worth a trillion dollars. That's the scale of what happens when you solve the organizing layer over an exploding corpus. Today, people have billions of conversations with AI every day. Every conversation teaches the model something about how you work, live, and play. That's a bigger context corpus than the 1998 web was, by a lot, and it's growing every hour. Tomorrow you switch tools, change jobs, or get replaced. Everything you taught the AI is gone. That's the failure state every operator is already living with and nobody's preserving it. Until now. Silmari keeps it. Across every AI. Across every employer, every part of life. The Turing laureates and the AAAI field tell you LLMs aren't going to scale to AGI — the context layer is where durable value lives, and we're building it."
Strategic rationale
The opening sentence ("Context and memory may be the new moats") directly co-opts Bessemer's State of AI 2025 verbatim language, granting the deck Tier-1 VC authority in its first beat. The four-beat narrative arc is the deck's most rehearsed sequence and the moment the room either leans in or doesn't. The Google-1998 framing positions Silmari at the scale of a generation-defining infrastructure layer without claiming Silmari is Google. The supporting-evidence block is intentionally commented out of the visible deck to keep the slide breathing; it stays in the source as a fallback reference for room-by-room delivery.
Slide 4 · How Big
Visible copy
HOW BIG. Memory is the new moat. Silmari is the substrate.
Professional tier · envelope: ~150M global skilled professionals × 1% × $1,800/yr = $2.7B ARR at 1% penetration. $27B at 10%.
Consumer tier · horizon: 1B+ AI-using consumers × $60/yr tier = $60B+ TAM ceiling.
Customer-count table
| Segment | Count | Why they need it |
| --- | --- | --- |
| Architects | ~2M | Global · codes, specs, clients |
| Software engineers | ~30M | Global · codebases, decisions, debug context |
| Civil / Mech / Elec / Chem engineers | ~10M | Global · specs, compliance, calculations |
| Local inspectors | ~1M+ | Building, fire, health · jurisdictional code |
| Plumbers, electricians, trades | ~10M+ | Global · job history, supplier + customer context |
| Doctors, lawyers, consultants | ~30M+ | Case / client history, regulatory specifics |
| Teachers, researchers, analysts | ~80M+ | Lesson / study / project archives |
| Other office / knowledge workers | ~1B+ | Anyone letting an agent help daily |
Presenter intent
"The Zettelkasten is not a filing system — it's a research method. Luhmann used it to produce 90,000 cards and 70 books of academic research. We take that same method and turn it into research into a person — their work, their family, their play. The substrate that any AI agent bolts onto. Every human who lets an agent help them with their life eventually needs this. Architects need code references and client histories. Software engineers need codebases and decisions. Plumbers need supplier and customer context. Regular people need their agent to know them as well as their spouse does. The professional tier alone is 150 million people globally. At 1% penetration and $1,800 a year, that's a $2.7 billion business. At consumer scale, the ceiling is tens of billions. I'm not naming a TAM number — I'm telling you the substrate has to exist underneath every single one of these agents, and we're the ones shipping it method-faithful."
Strategic rationale
The headline "Memory is the new moat. Silmari is the substrate." performs three jobs simultaneously: (1) it co-opts Bessemer's authority quote; (2) it plants "substrate" as the elevation word the VC-discourse research surfaced as completely unclaimed by any current memory startup; (3) it positions Silmari beneath rather than alongside competitors. The customer-count table makes the TAM concrete without committing to a single TAM number — the deck explicitly refuses the "name a number" trap in favor of the envelope-arithmetic frame.
Slide 5 · Product
Visible copy
PRODUCT. Silmari memory molds and forms automatically as you work.
- 01 DOMAIN ENCODING — Silmari learns without getting in the way.
- 02 WORKFLOW CALIBRATION — Silmari works like human memory; ideas are encoded and surfaced automatically.
- 03 ARTIFACT / CAPABILITY — The layer no platform captures today: what you made, how, and why it was good.
Architecture (ASCII diagram, mirrored in HTML flow-diagram component)
```
INDIVIDUAL LEARNING MACHINE  ◄──── COUPLING / DECOUPLING PROTOCOL ────►  ENTERPRISE LEARNING MACHINE
(captures all three layers)                                              (workgroup-level federation)
Local-first, user-owned                                                  Multi-tenant, auth, audit
Travels between employers                                                Federates across people
                                            │
                                            ▼
                              FORWARD DEPLOYED OPERATOR (FDO)
```
Presenter intent
"Static markdown is dead. Vector stores are retrieval machines. Silmari is the only memory system that rearranges itself when you open it — the way thinking actually works. That's the category-level claim. Three layers sit inside that claim. Domain encoding: what you teach the model across hundreds of conversations without realizing it. Workflow calibration: your style, your decision patterns, encoded through repetition. Artifact layer: what you made, how you made it, and why it was good. That fourth one doesn't exist in any platform today. The coupling / decoupling protocol keeps individual and enterprise graphs separate by default. The FDO owns the accuracy number. Same pattern Palantir ran with forward-deployed engineers for twenty years."
Strategic rationale
The three-layer model is the strongest defensive position in the deck against the Aubakirova / Bornstein "filing cabinet" attack. The artifact / capability layer (Layer 03) is genuinely absent from every competing product reviewed in the VC-discourse research — Mem0 captures interactions; Letta captures conversation state with self-editing; Zep captures session memory; ChatGPT Memory captures user-stated facts; Memories.ai captures video. None capture provenance of artifacts plus the judgment that produced them. The coupling / decoupling protocol answers the "labs will eat it" attack: labs cannot eat what is structurally outside the lab boundary by design.
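To make the three-layer capture and the coupling / decoupling boundary concrete, here is a minimal data-model sketch. Every class, field, and identifier below is an illustrative assumption for this document, not Silmari's shipped schema or API:

```python
# Illustrative sketch only: how the three captured layers and a revocable coupling
# grant could be modeled. Names and fields are assumptions, not Silmari's API.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Layer(Enum):
    DOMAIN_ENCODING = "01"       # what the operator teaches across conversations
    WORKFLOW_CALIBRATION = "02"  # style and decision patterns encoded through repetition
    ARTIFACT_CAPABILITY = "03"   # what was made, how, and why it was good


@dataclass
class MemoryCard:
    folgezettel_id: str          # e.g. "12a3": the address records where the card grew from
    layer: Layer
    body: str
    edges: dict[str, list[str]] = field(default_factory=dict)  # typed edges to other cards


@dataclass
class CouplingGrant:
    """Enterprise access to an operator-owned substrate: bounded, auditable, revocable."""
    enterprise_id: str
    scope: set[Layer]            # which layers the workgroup may read
    expires: datetime
    revoked: bool = False

    def decouple(self) -> None:
        # Decoupling ends access; the substrate itself never changes owner.
        self.revoked = True
```

The point of the sketch is the ownership boundary: a grant gives a workgroup scoped, expiring, revocable read access, while the cards themselves stay with the operator.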
Slide 6 · Why We Win
Visible copy
WHY WE WIN. Seven questions. One column answers yes to all of them.
Seven-question moat (full matrix)
| Question | Embedding stores (Mem0 · Letta · Zep) | Proprietary sidecars (OpenAI · Anthropic · Google) | Personal context DBs (OpenBrain et al.) | Consulting / fractional | Silmari |
| --- | --- | --- | --- | --- | --- |
| Survives when the person leaves? | NO | NO | YES | NO | YES |
| Portable across AI clients? | NO | NO | YES | N/A | YES |
| Personal context (judgment, taste)? | NO | NO | YES | N/A | YES |
| Serves person and enterprise? | NO | NO | NO | NO | YES |
| Non-synthetic knowledge (anti-model-collapse)? | NO | NO | NO | YES | YES |
| Gets denser with use? | NO | NO | NO | NO | YES |
| Structure emerges from use? | NO | NO | NO | N/A | YES |
Only Silmari answers yes to all seven. That is the moat. Default alternative: paste your context into a new Claude tab and hope.
Presenter intent
"Seven questions. Every competitor answers no to at least two. We answer yes to all seven. That's the moat. The new rows at the bottom matter most. 'Gets denser with use' is the compounding-returns question every investor asks in the first ten minutes. Embedding stores and Notion-style memory accumulate linearly — each new entry is just more stuff. Silmari's folgezettel and typed edges mean every new card adds connections to existing cards. The graph density compounds. That's the moat that gets bigger the longer you use it. 'Structure emerges from use' is the Luhmann move. Everyone else imposes a schema, a folder tree, a tagging system. Silmari has no top-down taxonomy. The folgezettel grows out of where you placed each card. The structure writes itself."
Strategic rationale
The seven-row table was tuned specifically to defeat the Aubakirova / Bornstein April 2026 a16z essay, which attacks memory startups as filing cabinets that retrieve but do not learn. The two newest rows ("gets denser with use" and "structure emerges from use") are direct rebuttals: a filing cabinet cannot satisfy either. The table also pre-empts the lab-absorption attack by isolating proprietary sidecars (OpenAI, Anthropic, Google) in their own column and showing them failing six of seven properties.
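The "gets denser with use" row can be illustrated with a toy calculation of links per card as a graph grows. This is a sketch of the mechanism the presenter describes, not a product measurement; the linking rate is a hypothetical parameter:

```python
# Toy illustration of the "gets denser with use" row. In a flat, append-only store,
# each new entry adds a node and no links. In a folgezettel-style graph, each new card
# links to its placement parent plus a small fraction of the related cards already
# present, so links per card rise as the graph grows. The 1% rate is hypothetical.

def links_per_card(cards: int, link_rate: float = 0.01) -> float:
    nodes, edges = 0, 0
    for _ in range(cards):
        edges += 1 + int(link_rate * nodes)  # placement edge + typed edges to existing cards
        nodes += 1
    return edges / nodes

for size in (100, 500, 2000):
    print(f"{size:>5} cards -> {links_per_card(size):.1f} links per card")
# Output climbs (roughly 1.0, 3.0, 10.5), while a flat store stays at zero links
# per entry regardless of size; that difference is the compounding claim.
```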
Slide 7 · Team
Maceo Jourdan — Founder · Method and Thesis
- 2002–present: Live learning algorithm trading commodities and FX. 22k round turns per year on the S&P 500 E-mini at approximately 22% IRR. FX software business: 15k customers, $18M ARR.
- 2005–2014: Cross-device tracking plus funnel optimization. Specialization per funnel stage beat generalized attribution.
- 2014–2018: Operations turnaround. Barton Publishing team process outsold an EOS-hybrid implementation 30 to 1. TruDog doubled email revenue in 60 days.
- 2024–present: LLM systems routinely hit 87% accuracy on production workloads against a chain-compounded industry baseline near 36% (0.95 to the 20th power).
Landon Allen — Head of Talent / Recruiting
- 1M+ professional connections, one of the most connected talent operators in the network.
- Head of Recruiting at Splunk (NYSE: SPLK, acquired by Cisco for approximately $28B).
- Leadership at PayPal and early-stage Venmo.
- Currently Head of American Recruiting at Wero.
- Silmari's scaling primitive is the FDO bench. Landon is the operator who actually fills it.
Matt Richter, PhD — Technical Advisor / Architecture
- Stanford Professor of Physics — institutional technical credibility at the highest tier.
- 30 years in machine learning, spanning pre-LLM, CNN, and transformer eras.
- Semiconductor design and process expertise enabling hardware-ceiling arguments from first principles.
- Holds the technical thesis in front of infrastructure-VC technical diligence.
Strategic rationale
Three operators, three distinct failure modes closed. Method (Maceo's 25-year specialization loop); distribution (Landon's network and recruiting access); technical defensibility (Matt's Stanford and ML credentials). No member's primary contribution can be covered by the other two: remove any one and the deck falls. The composition was designed deliberately around investor-diligence failure modes rather than around skill coverage.
Slide 8 · Traction
Visible copy
TRACTION · We started selling pre-alpha. Distribution channels are open. The round funds conversion.
Sales — three open channels
- Oracle ISV Partner Program — In talks with leadership. Direct channel into Oracle's enterprise customer base; co-sell motion and validated-technology listings.
- Enterprise access — Active recruiting at Barracuda Networks, Intel, Archer Aviation, and Apple Security. Warm introductions into security-serious tech enterprises through Landon's recruiting book.
- Consulting base — 20k clients in Maceo's existing practice. First-wave Silmari individual-subscription pool is warm conversion, not a cold market.
Product and hiring
- Alpha live — ionos01 deployment, 15 MCP tools exposed, browser viewer at port 8788, fork of beads_rust with the semantic-edge-type whitelist patched in.
- Accuracy — 87% on specialized LLM workloads measured by domain-expert review against actual input targets, versus an industry chain-compounded baseline near 36% (0.95 to the 20th power).
- FDO supply — Landon's 1M+ network gives structural access to the displaced-senior-operator pool — the exact workforce the AI firing wave created.
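The chain-compounded baseline cited in the accuracy bullet above is reproducible in a few lines; the per-step accuracy and chain length are the deck's stated assumptions, not independently measured values:

```python
# Reproduces the "0.95 to the 20th power" baseline: if each step in a 20-step LLM
# chain is 95% accurate and errors compound multiplicatively, end-to-end accuracy
# falls to roughly 36%. Both inputs are the deck's stated assumptions.

per_step_accuracy = 0.95
chain_length = 20

end_to_end = per_step_accuracy ** chain_length
print(f"Chain-compounded baseline: {end_to_end:.1%}")      # ~35.8%, the "near 36%" figure

silmari_claimed = 0.87                                      # Slide 8 accuracy claim
print(f"Claimed uplift vs baseline: {silmari_claimed / end_to_end:.1f}x")
```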
Slide 9 · Business Model
Four revenue legs, one dominant at seed
- Primary at seed — individual subscription, Claude Code band. $100–$200 per month, $1,800 per year blended. Sits inside the existing premium-AI-tool spending category senior operators already pay every month. Not a new line item — a parallel one. High margin (no per-query model costs; Silmari is the substrate, not the inference).
- Enterprise coupling fees — per-seat or per-engagement access with FDO escalation.
- Managed cloud — multi-tenant hosted Silmari for teams who do not self-host.
- Enterprise features — SSO, audit, compliance, workgroups, SLAs.
- Optionality — FDO marketplace with take-rate. Later stage.
Slide 10 · The Ask
$3M · Seed · 24-month runway · SAFE, standard post-money, clean cap table
Use of funds — milestone-anchored
| Bucket | Allocation | Milestone |
| --- | --- | --- |
| Engineering (Maceo + 2–3 hires) | ~50% | Multi-tenant plus auth; managed-cloud alpha; FDO training substrate |
| First 5 senior FDOs (Landon-sourced) | ~20% | Hired from displaced-operator pool, dogfood-trained, deployed on 8–15 engagements |
| Ecosystem and open-source stewardship | ~10% | MCP community, agent-builder partnerships |
| Infrastructure and operations | ~10% | Hosted managed-cloud alpha, compliance readiness |
| Founder runway and buffer | ~10% | 24 months full-time plus reserve |
24-month exit criteria → Series A
- FDO time-to-competence under 10 weeks across 5+ hires (dogfood proof)
- 500+ paying seats at $150 per month blended ≈ $1M ARR individual
- 20+ paying enterprise engagements on the coupling protocol
- Managed cloud in closed alpha
Slide 11 · Vision
Visible copy
Memory and context are the new asset class of the Agentic AI era.
- Year 1–2: FDO bench deployed. Substrate proven at enterprise scale. First cohort of senior operators carries Silmari across employers.
- Year 3–5: Silmari is the default memory protocol for senior knowledge work. Enterprises evaluate operators partly on the quality of their folgezettel graph.
- Long horizon: Every senior operator's professional memory is theirs. Lives with them, compounds across their career, available to any enterprise they choose to couple with — bounded, auditable, revocable.
The learning loop finally compounds. If you think that's the direction this goes, I'd like you in this round.
§4 · VC Discourse on AI Agent Memory — Research Synthesis
This section synthesizes 26 sources researched in parallel by two PerplexityResearcher subagents on 2026-05-16: one focused exclusively on a16z (Andreessen Horowitz) partner writings, podcasts, and portfolio announcements; the other fanning out across Sequoia, Greylock, Bessemer, Madrona, Menlo, Felicis, Variant, Stratechery, Latent Space, plus the funding-announcement coverage of every named memory startup. The purpose was to identify exactly how venture capital talks about agent memory in 2025–2026 so Silmari's deck can either co-opt consensus language for authority or claim unclaimed territory for differentiation.
§4.1 · TL;DR — What the research changed
- "Memory layer" is consensus VC language. Pinecone planted the flag in 2023; Bessemer canonized it in State of AI 2025; every memory startup announcement since uses it. Not unclaimed territory. Using it equals table-stakes credibility.
- "Substrate" is unclaimed. No VC has used it as the category-elevation word. It signals foundational, multi-tenant, infrastructure-level — above "layer." Silmari claims this word.
- a16z is split three ways internally. The consumer team endorses memory layer ("open-ended memory layer" — Bryan Kim). The enterprise team endorses it ("memory layer for company context" — Wang and Kahl). The AI-infra team actively attacks the category ("harness companies" and "filing cabinets" — Aubakirova and Bornstein, April 2026).
- The Sequoia–Bessemer split matters. Sequoia views memory as agent-side (Letta-shape). Bessemer views it as application-side moat. Silmari sits between — agent-side capture, application-side defensibility.
- Cross-tool / cross-employer portability is the single biggest unclaimed VC territory — and it maps perfectly to Silmari's existing four-line anchor.
§4.2 · The Ten Most-Repeated Verbatim Phrases
| Rank | Phrase | Silmari status | Primary source attribution |
| --- | --- | --- | --- |
| 1 | "Memory layer" | Use — table-stakes | Pinecone (origin), Bessemer, Mem0/TechCrunch, Madrona, Crane VC, Memories.ai |
| 2 | "Long-term memory" | Use sparingly — saturated | Pinecone, Bessemer, Menlo, Felicis, Madrona |
| 3 | "Persistent memory" | Avoid — Sequoia bundled with identity, now murky | Sequoia/Buhler, Felicis, Cognee, Mem0 |
| 4 | "Stateful agents" / "stateless to stateful" | Neutral — strongest consensus narrative | Bessemer, Madrona, Letta, Plastic Labs |
| 5 | "Context engineering" | Use in body, not headline | Latent Space, Chroma, Bessemer, Mem0, Karpathy origin |
| 6 | "External memory" | Neutral | Menlo four-primitive framework, Felicis |
| 7 | "Memory and context as the new moats" | Steal — Bessemer authority quote | Bessemer State of AI 2025 |
| 8 | "Episodic / semantic / procedural memory" (taxonomy) | Use in product / architecture slide | Menlo, Mem0, Felicis, Bessemer, academic sources |
| 9 | "Memory passport" | Avoid — Mem0 used it first | Mem0 only |
| 10 | "Harness companies" / "filing cabinets" | Defuse — Aubakirova/Bornstein attack | a16z April 2026 |
§4.3 · The Competitive Set VCs Name Reflexively
- Mem0 — most-mentioned. Source coverage: Bessemer, Madrona, Crane VC comparison set, TechCrunch funding coverage, Latent Space context. Series A: $24M, October 2025, Basis Set lead.
- Letta / MemGPT — Felicis portfolio. Source coverage: Madrona, Variant context, all comparison pieces, Felicis investment essay.
- Zep — source coverage: Bessemer, Madrona, comparison pieces.
- Pinecone — original "long-term memory" flag-planter. Source coverage: a16z, Bessemer (implicit), Madrona, Menlo.
- Cognee — $7.5M seed via Pebblebed. Source coverage: Madrona, Pebblebed announcement, Memgraph blog.
- Supermemory — $2.6M seed via SF1.vc, Browder Capital, Cloudflare execs, Jeff Dean. 19-year-old founder positioning angle.
- Memories.ai — visual memory layer. Source coverage: Crane VC, Mem0 comparison set, Supermemory comparison set. Susa Ventures, Samsung Next, Fusion Fund, Seedcamp.
- Plastic Labs / Honcho — Variant Fund-only. "Shared user data layer" positioning. Still under-cited in landscape essays.
- LangMem (LangChain) — Bessemer, Madrona.
- ChatGPT Memory / OpenAI — Stratechery, Mem0 competitive set, Bessemer (incumbent threat).
Implication for Silmari positioning: when a VC reads the Silmari deck, this is the pattern-match shelf in their head. The deck must explicitly differentiate from at least the top four — Mem0, Letta, Zep, Pinecone. Slide 6's seven-question table does exactly this by name; that table is therefore load-bearing.
§4.4 · Consensus Framing Devices Across Multiple Funds
- "X is the new database" structure. Every fund treats memory as the missing infrastructure primitive in the post-LLM stack. Database analogies dominate.
- "Stateless to stateful" transition. Felicis, Variant, Cognee, Madrona, Mem0, Bessemer all use it. Strongest consensus narrative in the corpus.
- Memory as post-commoditization moat. Bessemer ("memory may be the new moats"), Basis Set ("memory is becoming their key moat"), Stratechery ("integration between model and harness is where true agent differentiation is found"). Consensus: as models commoditize, value migrates to the memory and context layer.
- TCP/IP and USB-C analogies. Sequoia uses TCP/IP for agent interactions and USB-C for MCP. Madrona uses MCP as TCP/IP for agents. Memory startups are now appropriating the frame ("memory passport" — Mem0; "universal memory API" — Supermemory).
- Four-primitive agent framework (Menlo). Reasoning + external memory + execution + planning. Gaining adoption across other funds.
- Employee onboarding analogy (a16z dominant). Agent without memory = new hire on day one. Agent with memory = experienced employee.
§4.5 · The Six Skeptic Attacks Silmari Must Defuse
| # | Attack | Primary source | Silmari counter |
| --- | --- | --- | --- |
| 1 | "Filing cabinet" — bigger storage is still storage | Aubakirova and Bornstein, a16z, April 2026 | Slide 6 rows "gets denser with use" and "structure emerges from use." Promote this defense in delivery. |
| 2 | "Vector DBs are not memory" — they store decontextualized text fragments, not understanding | Jeff Huber (Chroma) on Latent Space; Variant Fund defending Plastic Labs | Slide 5 architecture: relation engine plus folgezettel graph, not vector retrieval. Sharpen this in Q&A. |
| 3 | "RAG is dead" — retrieval is downstream of context engineering | Jeff Huber (Chroma) on Latent Space | Reframe Silmari as the context substrate, not a RAG add-on. The substrate is upstream of retrieval. |
| 4 | "Labs will eat it" — ChatGPT Memory absorbs the category | Implicit across multiple sources | MCP-native plus user-owned: labs become the integration target, not the competitor. The coupling protocol cannot be absorbed without abandoning the lab business model. |
| 5 | "Memory is brittle" — silent failure makes UX worse than no memory | Bessemer State of AI | 87% accuracy data on Slide 8. Alpha-live dogfooding. The substrate is observable and correctable. |
| 6 | "Memory is captured, not modeled" — interaction logs ≠ user model | Variant Fund defending Plastic Labs | Slide 5 three-layer model: encoding → calibration → artifact. Not logs. |
§4.6 · Unclaimed Positioning Territory (Silmari's Wedge)
- Cross-agent / cross-employer memory portability. No VC owns this. Mem0 used "memory passport" once; Sequoia talks "persistent identity" without extending to tool / job portability. Maps directly to Silmari's four-line anchor.
- "Substrate" vs "layer." Every VC uses "layer." Substrate is unclaimed and elevates.
- User-owned vs platform-owned. Variant Fund hints at it ("data locked in our brains, in our sole custody"); nobody headlines it.
- Knowledge-worker / FDO segment. All current VC framing is either developer-facing or consumer-facing. The prosumer / late-stage knowledge worker is unclaimed.
- Memory compounds / flywheel. Under-developed in VC essays. Slide 6's "gets denser with use" row claims this directly.
- "Closes the loop" — matches Maceo's brand voice. Nobody owns it in the memory context.
- L0 of the agent stack. Pinecone tried, did not land. Open.
§4.7 · Cite-able Verbatim Quotes (Triple-Verified)
- "Memory is becoming a core product primitive." — Bessemer, State of AI 2025
- "Context and memory may be the new moats." — Bessemer, State of AI 2025
- "Switching costs in AI may become almost emotional. When your product understands a user's world better than anything else, replacing it feels like starting over." — Bessemer
- "AI Memory is a category, not a feature." — Cognee
- "Agents without memory are toys." — Vasilije Markovic, Cognee
- "AI systems today don't fail because they aren't powerful enough. They fail because they don't remember." — Vasilije Markovic, Cognee
- "The memory hierarchy wrapped around a model." — Ben Thompson, Stratechery
- "Memory is becoming one of their key moats now that LLMs are getting commoditized." — Lan Xuezhao, Basis Set Ventures (Mem0 Series A lead)
- "Big AI labs building proprietary systems lack incentives for interoperability." — Mem0 investor framing (counter to lab-absorption attack)
- "Tribal knowledge capture" — a16z, Cui and Li, March 2026
Do NOT use: "Persistent memory is the biggest unsolved problem in AI" attributed to Konstantine Buhler. Could not be verified against any primary Sequoia source.
§4.8 · Where VCs Disagree
| Question | Sequoia | Greylock | Bessemer | Stratechery | a16z (split) |
| --- | --- | --- | --- | --- | --- |
| Memory: category or feature? | Pillar of agent infrastructure | Feature of Systems of Intelligence | Category — new moat | Memory hierarchy = constraint | Split — consumer/enterprise teams say yes; infra team says filing cabinet |
| Where does memory live? | Agent-side (Letta-shape) | App-side (data accumulation) | Separate infrastructure layer | Distributed across KV cache, host memory, SSD, databases | Should be parametric (weights / modules), per Aubakirova / Bornstein |
| Is memory defensible? | Identity is the moat, not memory | No — data accumulation is | Yes — emotional switching costs | Yes — moat-grade for model-layer companies | Only at parametric level |
| Will labs absorb it? | Implied yes | Not addressed | Open question | Yes — ChatGPT Memory is a moat play | Yes — labs should absorb memory into weights |
Silmari's hybrid position — agent-side capture (Sequoia / Felicis alignment) plus application-side defensibility (Bessemer alignment) plus cross-tool portability (a16z gap) plus user-owned sovereignty (Variant gap) — is uncontested in current VC framing.
§5 · Competitive Landscape (Detailed)
The Slide 6 matrix (four competitor columns plus Silmari) collapses a richer competitive landscape into investor-readable form. The full picture, for an AI agent doing diligence:
§5.1 · Embedding stores / harness companies
Mem0, Letta (MemGPT), Zep, Cognee, Supermemory. All build orchestration plus retrieval over vector stores, with varying memory-management sophistication. Letta adds self-editing memory via the MemGPT pattern. Mem0 positions as the "memory passport." Zep emphasizes session-graph memory. Cognee claims "AI Memory is a category, not a feature" and unifies relational, vector, and graph storage. Supermemory wraps a universal memory API. Aubakirova and Bornstein at a16z group these as "harness companies" — orchestration plus scaffolding around the context window. Silmari differs structurally: the substrate is a folgezettel graph in which structure emerges from use; retrieval is downstream, not the central primitive.
§5.2 · Proprietary sidecars from the labs
ChatGPT Memory, Claude Projects, Gemini Memory, Cursor's memory. Lab-owned memory features attached to a single client. Defensibility argument from the lab side: emotional switching cost (Bessemer's "replacing it feels like starting over"). Strategic vulnerability: memory dies the moment the user changes provider. Silmari's coupling-protocol architecture is the deliberate counter — labs become integration targets, not competitors.
§5.3 · Personal context databases
Plastic Labs / Honcho, OpenBrain, Personal.ai, second-brain tooling. User-state-management plays. Variant Fund's Plastic Labs investment most clearly defends user-owned memory ("data locked in our brains, in our sole custody"). Honcho captures a higher-order representation of the user; Silmari extends this to include the artifact / capability layer that personal-context DBs typically omit.
§5.4 · Consulting and fractional services
Traditional consulting, fractional CTO / Head of Product engagements, and forward-deployed engineer (Palantir-style) services. Knowledge captured but dies when the engagement ends. Silmari's FDO pattern keeps the substrate portable across engagements while the FDO retains it personally.
§5.5 · Underlying infrastructure (not direct competitors)
Pinecone, Weaviate, Chroma, Qdrant. Vector database primitives. Silmari is a consumer of vector storage where useful but is not in the vector-DB business. The substrate sits above this layer.
§6 · Strategic Positioning Summary
Brand identity: "The operator who closes the loop." Same specialization loop across four substrates for 25 years (FX → martech → operations → LLM accuracy). Voice: contrarian insurgent. Primary enemy for Audience C (late-stage knowledge worker pipeline): the LinkedIn career-coach industrial complex, AI courses selling tools-fluency, and SaaS notebooks with vendor lock-in. Primary enemy for B2B (Audience A and B): LLM mythology and hype-first vendors.
§6.1 · The Restructured Staircase (as of 2026-04-27)
Two parallel funnels with one infrastructure (Silmari) underneath:
- Primary funnel — Audience C (late-stage knowledge worker / FDO pipeline). Stage 0 free content; Stage 1a SAI + Silmari Subscription at $100–$200/mo blended; Stage 1b Premium Onboarding at $8,000 entry plus $200/mo Year 2+; Stage 2 Deployed FDO with revshare; Stage 3 Licensed FDO methodology network.
- Parallel funnel — Audiences A and B (B2B revenue floor). Shape 2 SMB Operator extraction at $36K all-in (10 weeks plus 90 days supervision); Shape 3 Mid-market AI Automation at $75K–$250K with FDO bench.
Time allocation: 50% Audience C content and cohorts; 50% B2B engagements. Year 1 revenue target $500K–$700K combined.
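One way to sanity-check the Year 1 target is to compose it from the two funnels' price points. The engagement and subscriber counts below are hypothetical planning inputs chosen only to illustrate the arithmetic; they are not commitments from the staircase itself:

```python
# Illustrative composition of the $500K-$700K Year 1 target from the two funnels.
# Counts are hypothetical; only the price points come from the staircase above.

year_one = {
    "Audience C subscriptions (avg $150/mo)": 100 * 150 * 12,  # hypothetical 100 subscribers
    "Premium onboarding ($8,000 entry)":       15 * 8_000,     # hypothetical 15 cohort seats
    "Shape 2 SMB extraction ($36K all-in)":     6 * 36_000,    # hypothetical 6 engagements
    "Shape 3 mid-market (low end $75K)":        1 * 75_000,    # hypothetical 1 engagement
}

for leg, revenue in year_one.items():
    print(f"{leg:<45} ${revenue:>9,.0f}")
print(f"{'Total (illustrative)':<45} ${sum(year_one.values()):>9,.0f}")
# Totals $591,000 under these assumed counts, inside the $500K-$700K band.
```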
§6.2 · The Audience-C Canonical Anchor
"Today: you have billions of conversations with AI every day. They learn how you work, live, and play. Tomorrow: you switch tools, change jobs, get replaced. It's all gone. SAI + Silmari keeps it. Across every AI. Across every employer. Forever."