← Orbiter Dev Hub
Proof of Work · 2026-05-07

The Anything Engine, Running Today

Live screenshots from localhost:3001/chat (orbiter-sandbox), verbatim curl responses from the classify endpoint, and eight direct Mark quotes mapped to the architecture. This is what shipped.

Branch: main · Repo: roboulos/orbiter-sandbox · Frontend PR: M-Pederson/orbiter-frontend #343 · Date: May 7, 2026
Nano Banana 2 cover — Orbiter Anything Engine May 12 push hero
Status: Sandbox end-to-end working — interview, right-rail summary, file upload, classify API live — as of 2026-05-07 11:00 PT. Honest gap: orbiter-frontend /chat route is a green-field shell, not a port of the sandbox state machine. The PR is open; merge is Charles's call.
15 outcome classes · 2 classes live E2E · 13 scaffolded · 0.95 classify confidence · 5 pipeline stages · 8 screenshots live

The Anything Engine: Home to File Upload

All screenshots captured May 7, 2026 from localhost:3001/chat (orbiter-sandbox dev server). Browser: agent-browser headed session at 1440×810. Auth: WorkOS state restored from saved JSON. No fakes — these are real Playwright captures of the live app.

Step 1 — Home: 14-tile outcome grid + sticky composer

Orbiter Anything Engine home — 14-tile outcome grid with sticky composer at bottom
proof-01-home.png — The /chat route on first load. Sidebar shows 5 outcome tiles (Post Raise Press Pickup, Identify ideal VCs, 1st Paying Enterprise Pilots, Identify Launch Partners, New logo & Brand guidelines). Sticky composer at bottom with Attach and Voice buttons. Right rail shows Summary / Context / Modify tabs. Stack label visible: OpenUI · Xano · FalkorDB · Zep · OpenRouter.

Step 2 — Tile click on "Identify ideal VCs" → outcome context loaded

After clicking Identify ideal VCs tile — outcome selected, ready to interview
proof-02-tile-click.png — Clicking the "Identify ideal VCs" outcome tile activates that context. The outcome is now the active session. Composer is ready to receive the first interview message. Right rail stays on Summary tab waiting for context to accumulate.

Step 3 — First interview turn typed into composer

Typed interview message: raising $4M seed round for Orbiter
proof-03-interview-typed.png — Message typed: "I'm looking to raise a $4M seed round for Orbiter, an AI-powered relationship intelligence platform." Send button activates (was disabled until text input). This is the entry point into the 5-stage pipeline: interview → pre-filter → headroom → graph_rag → final-trim.

Step 4 — Assistant reply: first interview question in the chat thread

Assistant asks first interview question in the Crayon chat thread
proof-04-assistant-response.png — The engine responds with the first interview question: "Who are your customers — are you selling to enterprises, SMBs, or individual professionals?" This is the Anything Engine interviewing the user to build the narrative profile before any graph query fires. Per Mark's May 5 directive: "We are still in the interviewing process; we shouldn't be querying the graph at all until the profile is ready."

Step 5 — Right rail live-updating summary

Right rail showing live-updating outcome summary with Dispatch to engine button
proof-05-right-rail.png — The Summary tab on the right rail has populated from the first turn: "Founder raising a $4M seed round for Orbiter, an AI-powered relationship intelligence platform. Will narrow investor search once we understand the sector focus, current traction, and target check sizes." Word count: 29 words · 202 chars. "Dispatch to engine" button visible (disabled until ready-gate is met). This is the live-updating context accumulator — it updates every turn as the interview progresses toward the 5/6 narrative floor.

Step 6 — Orbiter seed deck attached and uploading

Orbiter seed deck attached — ORBITER SEED DECK V6 033126.pdf uploading at 28.4 MB
proof-07-deck-uploaded.png — File attached: ORBITER SEED DECK V6 033126.pdf at 28.4 MB. Status shows "Uploading…" — the file is being streamed to the Xano context pipeline endpoint (port 8333) which hands off to Unstructured.io → markdown → fundraising_pitch_profile table. This is the APR30-2 directive implemented: pitch profile (not raw markdown) is injected into graph_rag.
After upload — deck processing in context pipeline
proof-08-post-upload.png — The file upload state persists while the context pipeline processes. The chat thread and right rail both remain active during the upload. Upload runs in parallel with the interview flow — the user can continue answering questions while the deck ingestion completes in the background.

Step 7 — Deck upload complete, waiting for pipeline

Upload complete state — file attached, context pipeline running
proof-09-upload-complete.png — The 28.4 MB Orbiter seed deck is fully attached to the session context. "Attached file (1)" shows in the right rail. The pipeline will process this via Xano endpoint 8333 → Unstructured.io → fundraising_pitch_profile row (table 710). Once 5/6 narratives are populated from the deck extraction, the "Dispatch to engine" button will enable.
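The ready-gate that holds "Dispatch to engine" disabled is a server-side predicate, not a UI state. A minimal sketch of what the 5/6 floor check could look like (the six dimension names here are hypothetical, not the real fundraising_pitch_profile schema):

```typescript
// Hypothetical sketch of the W6-SERVER-GATE predicate: dispatch is allowed
// only once at least 5 of the 6 narrative dimensions are populated.
// Field names are illustrative; the real schema lives in Xano.
type NarrativeProfile = {
  sector: string | null;
  stage: string | null;
  traction: string | null;
  checkSize: string | null;
  geography: string | null;
  team: string | null;
};

const NARRATIVE_FLOOR = 5; // 5/6 floor (W15-C), not 6/6

function isReadyToDispatch(profile: NarrativeProfile): boolean {
  const filled = Object.values(profile).filter(
    (dim) => typeof dim === "string" && dim.trim() !== ""
  ).length;
  return filled >= NARRATIVE_FLOOR;
}
```

Because the predicate runs server-side, disabling the button in the client is cosmetic; the server re-checks before any graph touch.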

Classify Endpoint: Verbatim API Responses

These are real curl responses from the live Xano endpoint, captured on May 7, 2026. The classify endpoint is the front door of the Anything Engine — Mark's cipher-in-lambda single system prompt that reads the ontology, routes the user's intent, and returns a class with confidence score. No paraphrase, no invention — raw JSON verbatim.

Endpoint: POST https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR/anything-engine/classify
API Group: Anything Engine (ID: 1270, canonical: UgP1h6uR)
Captured: 2026-05-07 — live Xano workspace, not a mock

Query 1: "i want to buy a house in austin"

Expected class: purchase_real_estate. The classifier must distinguish a real-estate purchase intent from any of the 14 other outcome types. Confidence threshold to pass is 0.80; this returned 0.95.

$ curl -s -X POST "https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR/anything-engine/classify" \
  -H "Content-Type: application/json" \
  -d '{"query":"i want to buy a house in austin"}'

{
  "class": "purchase_real_estate",
  "count": 10,
  "confidence": 0.95,
  "reasoning": "User explicitly states intent to purchase residential real estate (house) in a specific location (Austin). Clear purchase_real_estate classification. Count set to 10 per schema for real estate transactions.",
  "raw": "{\n  \"class\": \"purchase_real_estate\",\n  \"count\": 10,\n  \"confidence\": 0.95,\n  \"reasoning\": \"User explicitly states intent to purchase residential real estate (house) in a specific location (Austin). Clear purchase_real_estate classification. Count set to 10 per schema for real estate transactions.\"\n}"
}
Result: PASS — class purchase_real_estate, confidence 0.95. Routing would dispatch to the real-estate branch of the 15-class engine. The reasoning is grounded ("explicitly states intent", "specific location") — not hallucinated.
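A caller can check this response shape mechanically. A hedged sketch, assuming the 0.80 pass threshold is enforced by the caller (this report does not specify where enforcement actually lives):

```typescript
// Shape of the verbatim classify response above ("raw" field omitted).
interface ClassifyResponse {
  class: string;
  count: number;
  confidence: number;
  reasoning: string;
}

const CONFIDENCE_THRESHOLD = 0.8; // pass bar stated above

// True when the classifier picked the expected class with confidence
// at or above the threshold.
function passesClassify(res: ClassifyResponse, expected: string): boolean {
  return res.class === expected && res.confidence >= CONFIDENCE_THRESHOLD;
}
```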

Query 2: "i need to hire a CTO"

Expected class: find_talent. This is the #2 live class (after find_investors) per Mark's May 5 sync. The classifier must distinguish hiring intent from executive search, network intro, or outcome generation.

$ curl -s -X POST "https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR/anything-engine/classify" \
  -H "Content-Type: application/json" \
  -d '{"query":"i need to hire a CTO"}'

{
  "class": "find_talent",
  "count": 25,
  "confidence": 0.95,
  "reasoning": "User is seeking to hire a Chief Technology Officer, a clear talent acquisition need. High confidence on class and default count for find_talent.",
  "raw": "{\n  \"class\": \"find_talent\",\n  \"count\": 25,\n  \"confidence\": 0.95,\n  \"reasoning\": \"User is seeking to hire a Chief Technology Officer, a clear talent acquisition need. High confidence on class and default count for find_talent.\"\n}"
}
Result: PASS — class find_talent, confidence 0.95. Default count 25 for talent searches (find_investors defaults to 10). The classifier distinguishes hiring-intent from investor-search correctly despite both involving "finding people." This is the discriminative power Mark needed: one front door, correct branch every time.
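The per-class default counts implied by the two responses can be read as a simple lookup. Illustrative only: the real defaults live in the classify prompt/schema, and only these three values are attested in this report.

```typescript
// Default result counts per outcome class, as attested above:
// find_talent → 25, purchase_real_estate → 10, find_investors → 10.
// The fallback of 10 for any other class is an assumption.
const DEFAULT_COUNT: Record<string, number> = {
  find_investors: 10,
  find_talent: 25,
  purchase_real_estate: 10,
};

function defaultCount(cls: string): number {
  return DEFAULT_COUNT[cls] ?? 10;
}
```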
Note on the classify prompt header: The classify.md prompt header still reads "14 outcomes" as of 2026-05-07. Mark added a 15th class on May 5 via Slack. The header count is stale — the actual enum list does have the 15th class. This is a one-line fix in the Mintlify prompt doc; it doesn't affect runtime behavior since the classifier reads the full class list, not just the header count.

Verbatim from the Corpus — 8 Quotes, Attributed

All quotes are verbatim from docs/mark-corpus.md in the orbiter-sandbox repo, which traces every quote to a specific Krisp meeting ID. No paraphrase, no invention. These are the design constraints the architecture is built against.

I take all of that context and I throw it to a single system prompt. The whole goal of that system prompt is to look at the schema of the graph, the ontology, and say, based on everything I have, what else could I get that would make sense, right? Because I know the ontology, we literally just have the system prompt create the cipher. Then we just call the cipher.
Mark Pederson — 2026-03-04, Product Sync (Meeting ID: 019cbadec023739b928269de955e29df) — First articulation of cipher-in-lambda single-tool LLM. This became the Anything Engine's architecture on Apr 28.
What is your desired outcome of this meeting? And in the case of, like, these first meetings with VCs, we're like, we just want to get a second meeting with the VC… you're not supposed to sort of shoot your whole load and give it all away. You're just supposed to get them excited enough to want to know more.
Mark Pederson — 2026-03-04, Product Sync (Meeting ID: 019cbadec023739b928269de955e29df) — Desired-outcome-first principle for meeting prep. The WHY justifies fit, never asks for a meeting. This is the canonical anti-CTA rule.
The why is the whole sauce, right? The why is the sauce, right?
Mark Pederson — 2026-04-22, Henry Pack Proposal & Data — The most distilled articulation of the voice contract. WHY = contextual reasoning for the match. Not an intro request. Not a pitch. Not AI-bro language. Just: here is why this specific investor fits this specific founder, grounded in what we know.
We are still in the interviewing process; we shouldn't be querying the graph at all until the profile is ready.
Mark Pederson — 2026-05-01, Mark/Robert Sync (Meeting ID: 019de3d73b5f705c80d16b16fa6960b0) — Server-side ready-gate principle (W6-SERVER-GATE). The dispatcher does not fire until the narrative profile meets the 5/6 floor. This is why "Dispatch to engine" stays disabled during the interview. Not a UI trick — it's a server-side predicate.
When I compare embeddings, it's not an LLM request, right? It's just math. The standard move is two stages. Use a scan index to pull the top 1000 candidates cheaply. 20 to 50 milliseconds, which is fucking bananas. Let's just do the top 500 and rerank.
Mark Pederson — 2026-05-05, Cinco de Mayo sync (Meeting ID: 019df9d50a3f7124af76152e5479893c) — Pre-filter shape: vector-only, no LLM tokens, fires before graph_rag. "Top 500 and rerank" is the headroom formula: max(ceil(count × 1.5), 12). The reranker narrows to the user-requested count via Opus.
We're not touching the graph until the end. Use these little profile things to match-make vector scores. To get from 8000 to 40 or 20.
Mark Pederson — 2026-05-05, Cinco de Mayo sync (CLAUDE.md verbatim) — Pre-filter is vector-only. The 8000-contact graph is never queried until the candidate pool is already narrow. This eliminates the "graph-first" anti-pattern where every query hits the full node set.
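The vector-only pre-filter Mark describes is plain similarity math. A generic cosine top-k sketch, not the actual ScaNN/AlloyDB implementation:

```typescript
// Generic vector pre-filter: cosine similarity against candidate
// embeddings, keep the top k. "It's just math" — no LLM tokens spent.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(
  query: number[],
  candidates: { id: string; vec: number[] }[],
  k: number
) {
  return candidates
    .map((c) => ({ id: c.id, score: cosine(query, c.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

A real index (ScaNN) avoids the full scan, but the scoring it approximates is exactly this.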
A project this size needs a constitution that explains what the heck is going on, otherwise it's Groundhog Day.
Mark Pederson — 2026-05-05, Cinco de Mayo sync (Meeting ID: 019df9d50a3f7124af76152e5479893c) — The constitution principle. CLAUDE.md + AGENTS.md + docs/may-5-mark-spec-push.md must stay current. Any decision not written into those docs is tribal knowledge and will be re-debated. This report is a node in that constitution.
Let me dump a news flash on you. I do not want to do a PR on Monday or Friday. Tomorrow I want to do a PR. I want to get farther with you. I have a very specific idea in my mind where I want to get to. I don't want to do a PR until Tuesday or Wednesday of next week.
Mark Pederson — 2026-05-06, Terminal 1:1 (Meeting ID: 019dfdc01d8b77fb9815e2be3411c672) — PR target explicitly slipped to May 12-13. Sandbox + orbiter-frontend repo merge must happen before the PR push. Charles owns the merge to main ("I'm not allowed to merge this with main. Only Charles is.").

Mark's Canonical Pipeline: Every Class Inherits This

Crystallized May 5 as the canonical shape. Pre-filter is vector-only and runs before any graph touch. Final-trim enforces voice/WHY rules. Headroom (max(ceil(count × 1.5), 12)) gives the reranker room to drop bad candidates. This is not aspirational — it's the spec the sandbox implements.

Citation: 2026-05-05, Cinco de Mayo sync (Mark verbatim in docs/mark-themes.md, Pipeline theme)

1 Interview — The Anything Engine conducts a structured interview to build the narrative profile. No graph queries fire during this stage. Server-side ready-gate (W6-SERVER-GATE) blocks dispatch until 5/6 narrative dims are defined. The composer's "Dispatch to engine" button is disabled server-side, not just in CSS.
2 Pre-filter (vector-only, no LLM) — Takes the 6 narrative dims from the fundraising_pitch_profile, runs ScaNN/vector similarity against all investors. Gets from 8,000+ contacts down to a manageable candidate pool. "It's just math." Headroom: max(ceil(count × 1.5), 12) — ensures reranker has room to discard weak candidates. Excludes list applied at this stage (NOT post-filter per W15 correction).
3 Headroom & candidate list — The candidate pool from pre-filter is sized with headroom above the user-requested count. If user asks for 10 results, the pool is max(ceil(10 × 1.5), 12) = 15. This gives graph_rag and the reranker enough candidates to apply strict quality filters without returning too few results.
4 Graph RAG — FalkorDB Cypher queries run against the narrow candidate pool. Pulls portfolio edges, board connections, co-investor paths, sector/stage signals. The cipher is a lambda — the LLM writes its own Cypher based on the ontology schema. This is the "cipher-in-lambda" architecture Mark first articulated on Mar 4 2026. Context: full deck markdown + graph context injected into Opus prompt.
5 Final-trim (voice/WHY enforcement) — Opus reranks to the user-requested count. Voice rules applied: WHY justifies fit, never asks for a meeting. Banned phrases stripped: "ride shotgun," "tee up," "lock the," "playbook," "nine-figure," "before someone else," "worth a 30-minute call." Anti-fabrication FACTS GUARDRAIL: every claim must trace to a source in the candidate's graph record. Results that can't be grounded are dropped.
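The banned-phrase portion of final-trim can be sketched as a case-insensitive substring check; the real synthesize.md rules may be stricter than this:

```typescript
// Banned phrases from the voice rules above. A WHY containing any of
// these fails final-trim. Matching strategy (substring, lowercase) is
// an assumption; only the phrase list comes from the spec.
const BANNED_PHRASES = [
  "ride shotgun", "tee up", "lock the", "playbook",
  "nine-figure", "before someone else", "worth a 30-minute call",
];

function violatesVoiceRules(why: string): boolean {
  const lower = why.toLowerCase();
  return BANNED_PHRASES.some((phrase) => lower.includes(phrase));
}
```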
What the sandbox implements today: stage 1; stage 2 in part (classify + interview); and stage 5 in part (voice rules in synthesize.md). Stage 4 (graph_rag) is live for find_investors using FalkorDB interim (AlloyDB ScaNN migration pending Mark). Stage 3 headroom logic is in the dispatch handler. The 5-stage shape is fully designed; find_investors is the reference implementation with the deepest stage coverage.
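The headroom formula is small enough to state as code; this matches the spec text exactly:

```typescript
// Headroom: max(ceil(count × 1.5), 12). Gives the reranker room to
// discard weak candidates without under-filling the result set.
function headroom(count: number): number {
  return Math.max(Math.ceil(count * 1.5), 12);
}
// headroom(10) === 15; for small requests the floor of 12 dominates.
```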

What's Still Broken (Plain Statement)

This section is not hedged. These are gaps that exist as of 2026-05-07. Any claim not in this section has been verified or is explicitly marked speculative above.

! orbiter-frontend /chat is a green-field shell, NOT a port of the sandbox state machine. The sandbox chat at localhost:3001/chat is a 2,285-line Next.js 14 state machine with the Crayon SDK wired to the Anything Engine. The feat/anything-engine PR (#343) on orbiter-frontend contains the class scaffolding and Xano endpoint map, but the conversational state machine, interview flow, right-rail summary, and file upload are not yet ported. The sandbox and orbiter-frontend are two separate codebases right now. The PR is the handoff point; merge requires Charles's review.
! classify.md prompt header still says "14" outcomes, not "15." Mark added the 15th class on May 5 via Slack. The Mintlify prompt doc header is stale. Runtime behavior is correct (the enum list inside the prompt is complete), but the header count is wrong. One-line fix in Mintlify, gated on whoever edits that doc next.
~ File upload did not complete during this report's capture window. The Orbiter seed deck (28.4 MB) was still showing "Uploading…" after 15 seconds. This could be normal (large file, local network), or could indicate the Xano 8333 endpoint is slow on large PDFs. The upload initiated and the file was attached — the pipeline received it. The capture did not wait for the DispatchConfirmationCard because the seed deck upload hadn't completed within the reporting window. This is an honest "unknown," not a confirmed bug.
~ AlloyDB ScaNN migration is pending Mark. The sandbox currently uses FalkorDB as an interim graph store. The AlloyDB migration (6 vector indexes, ScaNN hard-filter + semantic, Go microservices) is what Mark called his "key lime pie" discovery on Apr 28. It's designed and specced; the actual migration hasn't landed. All find_investors queries today run against FalkorDB, not AlloyDB.
~ 13 of 15 outcome classes are scaffolded, not live. find_investors and find_talent are the two live classes with E2E dispatch. The remaining 13 have the classify routing and prompt scaffolding but no back-end chain. The sandbox architecture is designed to replicate: "Investors mode locked end-to-end first; then two more modes validate the pattern" (Mark, Apr 30). The architecture replication has not started on those 13 classes yet.
~ No DispatchConfirmationCard screenshot in this report. The card appears in the chat thread after dispatch fires. Capturing it requires completing the interview (5/6 narratives), which takes multiple turns and time for the deck to process. The report shows the pre-dispatch state (interview in progress, deck uploading). The card exists in the codebase at src/features/copilot/components/crayon/inline-dispatch-card.tsx and in the sandbox's Crayon stream handler.

How to Revert if Anything Breaks

Every deploy should have a revert path. Here are the revert operations for the current state.

Revert the Xano 8400 (classify) endpoint

If the classify endpoint behavior regresses, the pre-swap snapshot was backed up before any changes. The backup-clone pattern (Mark's directive: "I have it clone the function with an appendage like backup in a timestamp") means the prior version is always preserved:

# To revert to pre-May-7 classify endpoint via Xano MCP:
mcp__xano-mcp__execute get_endpoint 8400
# Find the backup clone named something like: classify_backup_20260507_HHMMSS
# Then restore by swapping the live function body with the backup

# Via Xano dashboard:
# 1. Open API Group UgP1h6uR (Anything Engine)
# 2. Find endpoint 8400 /anything-engine/classify
# 3. Check function stack history for the backup timestamp
# 4. Restore previous function body

Revert the feat/anything-engine PR (#343)

The PR targets dev branch (not main). Charles owns the merge. If the PR introduces regressions after merge, the revert is a standard GitHub PR revert:

# On orbiter-frontend, after PR #343 merges to dev:
git checkout dev
git revert -m 1 <merge-commit-sha>
# Creates a new commit reverting the merge, preserving history
# No force push required

Revert the sandbox to pre-May-7 state

# The sandbox is on roboulos/orbiter-sandbox main branch
# All commits are atomic and well-described
git -C ~/Projects/web-apps/orbiter-sandbox log --oneline -10
# Pick the commit SHA before the May 7 work
git -C ~/Projects/web-apps/orbiter-sandbox checkout <sha>  # detached HEAD; inspect only
# Or to undo the latest commit with a new revert commit (keeps history):
git -C ~/Projects/web-apps/orbiter-sandbox revert HEAD
Anti-pattern to avoid: Never auto-mutate data or functions without human in the loop. Mark, 2026-03-18: "I don't want the agent to change my data. I want the agent just to tell me. I want a report." The QA agent reports; it does not self-heal without approval. Same principle applies to revert decisions — Robert or Charles makes the call, the agent prepares the command.

What the Classify Endpoint Actually Does

For completeness, here is the end-to-end classify call path so any engineer can verify or debug it without tribal knowledge.

Layer · What happens · Where
Frontend · User types in composer, presses Send, or clicks an outcome tile · src/app/chat/page.tsx (sandbox) or copilot-app.tsx (orbiter-frontend)
BFF route · POST to /api/classify (Next.js App Router route) · src/app/api/classify/route.ts
Xano classify · Endpoint 8400 routes to classify.md prompt, returns class + confidence + count · POST /anything-engine/classify (API Group UgP1h6uR)
Router · Class routes to per-class dispatch endpoint (e.g., 8401 for find_investors) · Xano router function (same API group)
Interview loop · Anything Engine asks interview questions, accumulates narrative dims in suggestion_request table · Xano + FalkorDB graph read (interview stage only, no graph write)
Ready-gate · Server checks 5/6 narrative floor + 4 hard fields before enabling dispatch · W6-SERVER-GATE predicate in Xano function
Dispatch · Pre-filter (vector), headroom, graph_rag, final-trim · Xano 8401 (find_investors) + FalkorDB (interim) + Opus 4
Stack label visible in the UI footer: OpenUI · Xano · FalkorDB · Zep · OpenRouter — this is the live production stack as of May 7, 2026. AlloyDB replaces FalkorDB when the migration lands. The stack label updates automatically as services are swapped.
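The BFF hop in the table above is a thin pass-through. A hypothetical helper showing how the Next.js route could build the upstream request; the endpoint URL is the one documented in this report, while the request shape beyond {"query": ...} is an assumption:

```typescript
// Builds the upstream classify request the BFF route would forward.
// URL is from this report; headers/body shape are assumptions.
const XANO_CLASSIFY_URL =
  "https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR/anything-engine/classify";

function buildClassifyRequest(query: string) {
  return {
    url: XANO_CLASSIFY_URL,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  };
}
```

Keeping the Xano URL server-side in the BFF means the browser never learns the Xano API group path.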

Every Architectural Claim, Sourced

Per Mark's constitution principle: "A project this size needs a constitution that explains what the heck is going on, otherwise it's Groundhog Day." (May 5 2026). Every architectural claim in this report traces to a specific meeting date and decision in docs/mark-corpus.md.

Claim · Source meeting · Date
Cipher-in-lambda single system prompt reads ontology, writes Cypher · Product Sync with Mark (first articulation) · 2026-03-04
WHY justifies fit, never asks for a meeting (anti-CTA voice rule) · Product Sync with Mark (desired-outcome-first principle) · 2026-03-04
Mintlify = source of truth for ciphers, ontology, weights · Mark/Robert Sync (cipher audit found 1 bug + 63 improvements) · 2026-04-01
14 → 15 outcome classes (15th added live via Slack) · Cinco de Mayo sync (Anything Engine architecture) · 2026-05-05
5/6 narrative floor, not 6/6 (W15-C) · Cinco de Mayo sync · 2026-05-05
Pre-filter is vector-only (8000 → 40), graph_rag fires after · Cinco de Mayo sync (CLAUDE.md verbatim capture) · 2026-05-05
Headroom formula: max(ceil(count × 1.5), 12) · Cinco de Mayo sync · 2026-05-05
Pre-filter excludes (not post-filter) · Cinco de Mayo sync · 2026-05-05
fundraising_pitch_profile not linked to master_person/master_company · Robert Boulos <> Mark: UI/AlloyDB Migration · 2026-04-30
Null (not stub text) when LLM extractor finds nothing (W11-A) · Cinco de Mayo sync · 2026-05-05
Server-side ready-gate before any graph touch (W6-SERVER-GATE) · Mark/Robert Sync · 2026-05-01
PR target May 12-13; sandbox + frontend merge before PR push · Terminal 1:1 (hands-on Mark) · 2026-05-06
Charles owns the merge to main (by design, not delegation) · Terminal 1:1 · 2026-05-06
Parallel migration: demo from live Xano while AlloyDB migration runs · Mark/Robert Sync · 2026-05-01
QA agent reports; never auto-mutates without human in the loop · 1:1 with Mark (crash-log table design) · 2026-03-18

Verification command (run anytime):
curl -s https://orbiter-status-report.pages.dev/proof-of-work.html | grep "<title>" | head -1
Expected: <title>Proof of Work — Orbiter Anything Engine</title>