Natural language input → intent classification → specialized backend tool. One front door for every relationship outcome Orbiter can drive.
The Anything Engine is Orbiter's intent classifier and dispatch system. A user types any natural language query — "find me investors for our Series A" or "who at LSI should Henry meet?" — and the engine classifies it into one of 14 outcome types, then routes it to a specialized backend tool that runs the appropriate graph query, embedding search, and synthesis pipeline.
The classifier runs on Groq Llama 3.3 70B at near-instant latency. Every response includes a confidence score and reasoning trace, so the router knows when to confirm with the user before dispatching.
FalkorDB is interim; an AlloyDB + ScaNN swap is pending. Same Cypher pattern, higher throughput, six vector indexes per investor.
Every query collapses into one of these canonical classes. The classifier returns the class name, confidence (0–1), and reasoning. LIVE = tool implemented and deployed. PENDING = classified correctly, backend not yet wired.
Canonical API group 1270 (`UgP1h6uR`), base URL `https://xh2o-yths-38lt.n7c.xano.io/api:UgP1h6uR`:

| ID | Method | Path | Purpose |
|---|---|---|---|
| 8400 | POST | `/classify` | Run intent classifier; returns class + confidence + reasoning |
| 8399 | POST | `/dispatch` | Full pipeline: classify → embed → query → synthesize → Crayon cards |
| 8401 | POST | `/find-investors` | Dedicated investor-search endpoint (reference implementation) |
**Request**

```http
POST /api:UgP1h6uR/classify
Content-Type: application/json

{
  "query": "We're raising a $3M seed. Who should we talk to at LSI?",
  "user_id": 15
}
```

**Response**

```json
{
  "class": "find_investors",
  "count": 1,
  "confidence": 0.97,
  "reasoning": "The query explicitly mentions fundraising context ('raising a $3M seed') and requests investor introductions scoped to a specific event (LSI). This maps directly to find_investors with high confidence."
}
```
When `confidence` < 0.75, surface the classification to the user and ask for confirmation before routing to the tool. This prevents mis-dispatches on ambiguous queries like "who should I talk to?" without enough context.
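The confirmation gate above can be sketched as a small routing helper. This is illustrative, not the production API; the names `route` and `RoutingDecision` are assumptions:

```typescript
// Shape of a classifier result, per the response example above.
interface Classification {
  class: string;
  confidence: number; // 0–1
  reasoning: string;
}

// Mirrors the 0.75 floor described in the text.
const CONFIRM_THRESHOLD = 0.75;

type RoutingDecision =
  | { action: "dispatch"; tool: string }
  | { action: "confirm"; prompt: string };

function route(c: Classification): RoutingDecision {
  if (c.confidence >= CONFIRM_THRESHOLD) {
    return { action: "dispatch", tool: c.class };
  }
  // Below the floor: surface the guess and ask before running any tool.
  return {
    action: "confirm",
    prompt: `It sounds like you want "${c.class}". Should I run that?`,
  };
}
```

A query classified at 0.97 dispatches straight through; a vague one at 0.5 produces a confirmation card instead.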
**Request**

```http
POST /api:UgP1h6uR/dispatch
Content-Type: application/json

{
  "query": "Find seed-stage VCs who invest in AI infrastructure. $3M round.",
  "user_id": 15,
  "context": {
    "pitch_deck_text": "We build...",
    "live_event_id": 3,
    "zep_thread_id": "thread_abc123"
  }
}
```

**Response**

```json
{
  "class": "find_investors",
  "confidence": 0.94,
  "results": [
    {
      "master_person_id": 1847,
      "name": "Kai Nguyen",
      "firm": "Gradient Ventures",
      "fit_score": 0.91,
      "rationale": "Gradient led two AI-infra seed rounds in 2024...",
      "why": "Thesis match on AI tooling + infra. Check size $1–5M. 3 portfolio companies in adjacent space.",
      "draft_outreach": "Subject: AI infra seed — Orbiter intro via [name]..."
    }
  ],
  "mem_used": true,
  "process_id": "proc_7c2f4d"
}
```
Groq Llama 3.3 70B — chosen for near-zero latency (<300ms) at high accuracy on intent classification tasks. Temperature 0.1 to minimize variability.
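Groq exposes an OpenAI-compatible chat-completions API, so the classifier call body might look like the sketch below. The model id `llama-3.3-70b-versatile` and the `response_format` JSON mode are assumptions based on Groq's public API; the system prompt here is a stand-in for the real Mintlify-hosted prompt:

```typescript
// Build the chat-completion request body for the classifier.
// temperature 0.1 matches the low-variance setting noted above.
function buildClassifyPayload(query: string, systemPrompt: string) {
  return {
    model: "llama-3.3-70b-versatile", // assumed Groq id for Llama 3.3 70B
    temperature: 0.1,
    response_format: { type: "json_object" as const }, // force raw JSON
    messages: [
      { role: "system" as const, content: systemPrompt },
      { role: "user" as const, content: query },
    ],
  };
}
```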
Structured JSON with `class` (enum of 14), `confidence` (float 0–1), `count` (number of intents detected), and `reasoning` (plain text).
`count` indicates how many distinct intents were detected. When `count > 1`, the dispatcher either sequences multiple tool calls or surfaces a clarification prompt to the user. The primary class with the highest confidence dispatches first.
Multi-intent example:

```json
{
  "class": "find_investors",
  "count": 2,
  "confidence": 0.88,
  "reasoning": "Query has two intents: find investors (primary) and research the lead investor from their last round (secondary). Dispatching find_investors first, queuing research_person."
}
```
The classifier prompt is stored in an editable Mintlify doc — never buried in TypeScript. Key elements:
- Instructs the model to return `count > 1` rather than hallucinating a single class for multi-intent queries.
- Output is parsed with `json_decode`, stripping Markdown code fences first (Groq wraps responses in a fenced `json` block at a <1% rate).

All business logic, prompt assembly, graph queries, and synthesis live in Xano endpoints. Xano is the source of truth for pipeline behavior.
Investor profiles, vector embeddings (six per investor), and relationship graph data. AlloyDB ScaNN handles hard filters + semantic search in a single SQL call.
UI only + a thin backend-for-frontend. Routes call Xano and stream SSE to the Crayon SDK. Zero business logic in Next.js route handlers.
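"Thin" here means the route handler only proxies and formats; the sketch below shows a Server-Sent Events frame helper of the kind such a handler would use when streaming Xano output to the Crayon SDK. The helper name and event names are illustrative:

```typescript
// Format a payload as a Server-Sent Events frame. A Next.js route
// handler would write these frames to the response stream; no business
// logic lives in the route itself.
function sseFrame(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}
```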
Every dispatch call checks Zep for prior user context. On turn 2 with a vague follow-up query ("show me more like the last one"), the memory layer provides the missing context needed for the classifier to produce a confident routing decision.
Each user session has a Zep thread. The dispatcher fetches `thread.get_user_context` before classifying. Context includes recent queries, dispatched classes, and entities mentioned (companies, people, sectors).
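The enrichment step can be sketched as a pure merge, independent of the Zep SDK call itself. The `ZepContext` shape and `enrichQuery` name are assumptions; the real `thread.get_user_context` payload may differ:

```typescript
// Assumed shape of the per-thread context described above.
interface ZepContext {
  recentQueries: string[];
  dispatchedClasses: string[];
  entities: string[]; // companies, people, sectors
}

// Prepend remembered context to a vague follow-up so the classifier has
// enough signal to route confidently. mem_used is true only when the
// context actually contributed.
function enrichQuery(query: string, ctx: ZepContext | null) {
  if (!ctx || ctx.recentQueries.length === 0) {
    return { prompt: query, mem_used: false };
  }
  const memo =
    `Earlier: ${ctx.recentQueries.join("; ")} | Entities: ${ctx.entities.join(", ")}`;
  return { prompt: `${memo}\n\nCurrent query: ${query}`, mem_used: true };
}
```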
`mem_used` flag: the response includes `"mem_used": true` when Zep context influenced the classification or tool parameters, letting the UI surface a "remembered from earlier" indicator.
| Tool | Status | Graph Pattern | Notes |
|---|---|---|---|
| `find_investors` | LIVE E2E | VC_Firm + Angel, portfolio/co-inv hops, score < 0.85 | Reference implementation. Zep wired. |
| `find_talent` | LIVE E2E | Person labels, role/company/skill edges | Returns ranked candidate cards. |
| `find_customers` | LIVE E2E | Company + Person, sector/stage filters | BD targets with warm-path drafts. |
| `research_person` | LIVE E2E | Person enrichment + entity graph | Deep bio, investments, board seats. |
| remaining 10 | PENDING | TBD per tool | Classified correctly; backends in backlog. |
Swap FalkorDB (interim) for AlloyDB with 6 ScaNN vector indexes per investor: sector, stage, check_size, geography, signal, founder_fit. Single SQL call combines hard filters + semantic similarity.
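A sketch of the single-SQL-call pattern follows. Table and column names are hypothetical; `<=>` is pgvector's cosine-distance operator, which AlloyDB's ScaNN index can serve. This only builds the statement, it does not claim to be the production query:

```typescript
// Hard filters (stage, check size) + semantic similarity on the sector
// vector in one statement, ordered by nearest neighbor.
// Parameters: $1 = query embedding, $2 = stage, $3 = round size.
function investorSearchSQL(): string {
  return `
    SELECT master_person_id, name, firm,
           1 - (sector_vec <=> $1) AS sector_fit
    FROM investors
    WHERE stage = $2
      AND check_min <= $3 AND check_max >= $3
    ORDER BY sector_vec <=> $1
    LIMIT 20`;
}
```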
The 14 class names above are the canonical set. UI labels, classifier prompt, and dispatcher all must use the exact same strings. Canonical lock is pending sync with Mark.
Each tool has a minimum required context spec. `find_investors` requires pitch context (deck or description). `find_talent` requires a JD or role context. The dispatcher surfaces a context-gap card when the floor is not met.
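The floor check can be expressed as a lookup plus a diff. The spec table below is an illustrative subset and the key names (`pitch_deck_text` aside, which appears in the dispatch example above) are assumptions:

```typescript
// Minimum-context spec per tool (illustrative subset).
const CONTEXT_FLOOR: Record<string, string[]> = {
  find_investors: ["pitch_deck_text"], // deck or description
  find_talent: ["jd_text"],            // assumed key for JD/role context
};

// Return the missing keys; empty means the floor is met and the tool
// can dispatch, otherwise the UI shows a context-gap card.
function missingContext(
  tool: string,
  context: Record<string, unknown>
): string[] {
  const required = CONTEXT_FLOOR[tool] ?? [];
  return required.filter((k) => !(k in context) || !context[k]);
}
```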
The classifier already routes all 14 classes correctly. Building backends for the remaining 10 follows the find_investors reference pattern: Cypher template → Groq synthesis → Crayon card schema.