emcp wraps any MCP server and transforms raw tool responses into typed, pixel-accurate, cross-server context — so your AI agents stop guessing.
Connect Figma, Notion, or Slack to an LLM through MCP and the model gets unstructured prose. No coordinates, no hex colors, no hierarchy. Broken automations every time.
"Frame 'Dashboard' contains a button labeled Submit, blue color, top right area" // No coordinates — "top right" is not x=1142, y=48 // No hex colors — "blue" is not #0D99FF // No parent-child tree // No confidence scores // No cross-server links
{ "id": "btn-submit", "type": "BUTTON", "spatial": { "x": 1142, "y": 48 }, "style": { "fill": "#0D99FF" }, "parentId": "frame-dashboard", "confidence": { "overall": 0.97 }, "schema": "emcp/v1" }
emcp wraps transparently — no changes to your existing MCP setup.
Ships with the most common MCP servers. Write your own in ~20 lines.
Single server, multi-server cross-linking, or custom adapters.
```typescript
import { EnhancedMCP } from '@odin_ssup/emcp'

const client = new EnhancedMCP({ enrichment: 'full', output: 'json' })

const ctx = await client.process({
  toolName: 'figma_get_file',
  serverId: 'figma',
  content: rawMCPResponse,
})

console.log(ctx.nodes)              // typed EnrichedNode[], keyed by ID
console.log(ctx.meta.avgConfidence) // 0.97

const pixels = await client.getPixelManifest() // sorted by z-index
const xml = await client.getXML()              // hierarchical tree
```
```typescript
const ctx = await client.processMany([
  { toolName: 'figma_get_file', serverId: 'figma', content: figmaResponse },
  { toolName: 'notion-fetch', serverId: 'notion', content: notionResponse },
  { toolName: 'slack_messages', serverId: 'slack', content: slackResponse },
  { toolName: 'github_list_issues', serverId: 'github', content: githubResponse },
])

// Fuzzy Jaccard similarity auto-links similar names across servers:
// "Dashboard" ↔ "Dashboard Page" ↔ "dashboard-main" (threshold: 0.4)
ctx.nodes['frame-dashboard'].linkedNodes
// [{ nodeId: 'notion:page-dashboard', source: 'notion', linkType: 'related' }]
```
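To make the threshold concrete, here is a sketch using plain token-set Jaccard similarity. emcp's fuzzy variant may tokenize or match differently; this only illustrates why names like "Dashboard" and "dashboard-main" clear a 0.4 threshold.

```typescript
// Plain token-set Jaccard: |A ∩ B| / |A ∪ B| over lowercased name tokens.
// Illustrative only — not emcp's actual implementation.
function jaccard(a: string, b: string): number {
  const tokens = (s: string) => new Set(s.toLowerCase().split(/[\s_-]+/))
  const A = tokens(a)
  const B = tokens(b)
  const inter = [...A].filter(t => B.has(t)).length
  const union = new Set([...A, ...B]).size
  return inter / union
}

console.log(jaccard('Dashboard', 'Dashboard Page')) // 0.5 → above 0.4, linked
console.log(jaccard('Dashboard', 'dashboard-main')) // 0.5 → above 0.4, linked
console.log(jaccard('Dashboard', 'Settings'))       // 0   → not linked
```

A set-based measure like this is order-insensitive, which is why hyphenated and title-cased variants of the same name still link.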
```typescript
import type { EMCPAdapter, EnrichedNode } from '@odin_ssup/emcp'
import { EnhancedMCP } from '@odin_ssup/emcp'

// Define this to match your server's raw response shape
type MyResponseType = { id: string; name: string }

export class MyAdapter implements EMCPAdapter {
  name = 'my-server'
  version = '1.0.0'

  canHandle(toolName: string) {
    return toolName.includes('my_server')
  }

  async parse(toolName: string, response: unknown): Promise<EnrichedNode[]> {
    const raw = response as MyResponseType
    return [{
      id: raw.id,
      name: raw.name,
      source: 'my-server',
      type: 'CONTAINER',
      confidence: { spatial: 0.1, semantic: 0.8, overall: 0.5 },
    }]
  }
}

const client = new EnhancedMCP({ adapters: [new MyAdapter()] })
```
```typescript
import { EnhancedMCP } from '@odin_ssup/emcp'

// Process large batches incrementally — no memory spike
const client = new EnhancedMCP()

for await (const ctx of client.processStream(responses)) {
  console.log(`Nodes so far: ${ctx.meta.totalNodes}`)
  // ctx is cumulative — each yield adds to the previous state
  // pipe ctx.nodes to your LLM incrementally
}

// Also accepts an AsyncIterable (e.g. a live MCP event stream)
async function* liveSource() {
  for (const r of batch) yield r
}

for await (const ctx of client.processStream(liveSource())) { /* ... */ }
```
```typescript
import { EnhancedMCP } from '@odin_ssup/emcp'

// Compress context to fit a 4,000-token LLM window
const client = new EnhancedMCP({
  maxTokens: 4000,
  fuzzyLinkingThreshold: 0.5, // default 0.4
})

await client.processMany(responses)

// getJSON / getXML auto-compress if over budget.
// Drop order: decorative → tertiary → secondary → structural → primary (last)
const json = await client.getJSON() // guaranteed ≤4000 tokens

// Errors are collected, never silently swallowed
const ctx = await client.getContext()
if (ctx.errors?.length) {
  ctx.errors.forEach(e => console.warn(e.adapter, e.message))
}
```
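The drop order can be sketched as a tier-by-tier loop: remove the least important tier, re-check the budget, and stop as soon as the context fits. The tier labels and token-cost field below are assumptions for illustration, not emcp's actual internals.

```typescript
// Hypothetical node shape: a tier label plus an estimated token cost.
type TieredNode = { id: string; tier: string; tokens: number }

// Least important tiers are dropped first; 'primary' only as a last resort.
const dropOrder = ['decorative', 'tertiary', 'secondary', 'structural', 'primary']

function compress(nodes: TieredNode[], budget: number): TieredNode[] {
  const total = (ns: TieredNode[]) => ns.reduce((sum, n) => sum + n.tokens, 0)
  let kept = [...nodes]
  for (const tier of dropOrder) {
    if (total(kept) <= budget) break   // already under budget — stop dropping
    kept = kept.filter(n => n.tier !== tier) // drop the whole tier at once
  }
  return kept
}

const sample: TieredNode[] = [
  { id: 'a', tier: 'primary', tokens: 2000 },
  { id: 'b', tier: 'decorative', tokens: 3000 },
  { id: 'c', tier: 'secondary', tokens: 1500 },
]
console.log(compress(sample, 4000).map(n => n.id)) // → ["a", "c"]
```

Dropping whole tiers (rather than individual nodes) keeps the surviving context structurally coherent: either all decorative detail is present or none of it is.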
Per-field scores so your agent never has to guess about data quality.
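As a sketch of how an agent might act on those scores: the confidence shape below mirrors the adapter example above (`spatial` / `semantic` / `overall`), but the gating function and its 0.9 threshold are illustrative, not part of the emcp API.

```typescript
// Mirrors the per-field confidence object from the adapter example above.
type Confidence = { spatial: number; semantic: number; overall: number }
type ScoredNode = { id: string; confidence: Confidence }

// Hypothetical gate: only trust coordinates from nodes whose spatial
// confidence clears the threshold, so the agent never positions
// elements based on low-quality spatial data.
function trustedForLayout(nodes: ScoredNode[], minSpatial = 0.9): ScoredNode[] {
  return nodes.filter(n => n.confidence.spatial >= minSpatial)
}

const nodes: ScoredNode[] = [
  { id: 'btn-submit', confidence: { spatial: 0.97, semantic: 0.95, overall: 0.97 } },
  { id: 'page-notes', confidence: { spatial: 0.1, semantic: 0.8, overall: 0.5 } },
]
console.log(trustedForLayout(nodes).map(n => n.id)) // → ["btn-submit"]
```

The point of per-field scores is exactly this kind of selective trust: a Notion page with low spatial confidence can still be used for semantic reasoning while being excluded from layout decisions.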