Built-in tool suites for data shaping, semantic discovery, workflow orchestration, persistent memory, safety checks, background execution, reusable skills, deliverables, and feedback loops.
Filter, sort, group, pivot, join, aggregate, and fan out over tool outputs with deterministic operations that keep structured data out of the prompt.
Search all your integrations by intent, plan CTC-verified workflows, navigate capabilities interactively, execute with demux.
Refract data with natural language, navigate latent spaces, generate charts, bridge types between systems, auto-paginate large results.
Semantic code analysis — extract structural facts, query dependencies and patterns, verify changes align with stated goals using deterministic reasoning.
Multi-tier prompt injection detection with confidence scoring. Three independent methods compose into a single verdict.
Higher-order workflows — pass named skills or inline plans as parameters to orchestrate multi-step pipelines with conditional routing, human approval gates, and variable references between steps.
Persistent symbolic memory that survives across sessions. Store facts, query knowledge, define constraints, traverse graphs — natural language or raw Prolog. All facts encrypted at rest.
Generate, analyze, and model numeric data. Sequences, sampling, statistics, correlation, regression, sliding windows — fully deterministic.
Managed view over your active cached data. See what cache_ref values are live, inspect their schema and contents, track time-to-expiry.
Durable handles for long-running MCP work. Poll promoted jobs, inspect recent background activity, fetch finished results, or cancel stale tasks without rerunning the original call.
Search, retrieve, and register persistent work products such as reports, exports, charts, and packaged datasets without exposing the underlying execution details.
Expand concepts into rich conceptual landscapes. Depth mode decomposes, breadth mode finds cross-domain isomorphisms, bridge mode connects two ideas.
Forge reusable skills, temper existing ones, browse the catalog, and invoke saved skills directly from the Toolsmith surface.
Schedule tasks with time or event triggers so recurring and event-driven work can run without staying inline in the current turn.
Every DataGrout server exposes these tools automatically alongside your integration tools. Connect over MCP or JSONRPC — whichever your agent speaks — and call any tool through the same connection you use for everything else. No extra setup, no separate API keys.
// Discover the right tool, call it, transform the result — all via MCP
{
  "method": "tools/call",
  "params": {
    "name": "data-grout@1/discovery.discover@1",
    "arguments": {
      "goal": "find customers with unpaid invoices",
      "limit": 5
    }
  }
}
Every tool response includes a cache_ref in its metadata. Pass that reference to subsequent tools instead of re-sending the full payload — your agent can filter, refract, chart, and export data while it stays server-side, never re-entering the LLM context window.
Cached data is encrypted at rest (AES-256-GCM) and expires automatically after 10 minutes of inactivity.
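A follow-up call can pass the cache_ref instead of the raw rows. A sketch of what that chaining looks like — the frame.filter tool name, the argument shape, and the cache_ref value here are hypothetical illustrations, not confirmed identifiers; check your server's tool listing for the real ones:

// Filter a cached result server-side, without re-sending the payload
{
  "method": "tools/call",
  "params": {
    "name": "data-grout@1/frame.filter@1",
    "arguments": {
      "input": { "cache_ref": "cr_example" },
      "where": { "status": "unpaid" }
    }
  }
}

The structured rows never re-enter the prompt; only the small reference travels through the agent.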
Both protocols are first-class. MCP is the standard for agent-to-tool communication; JSONRPC gives you direct programmatic access for toolsmithing, custom agents, or backend integrations. Same tools, same results, same cache_ref chaining — pick the protocol that fits your stack.
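The same discovery call can be issued over plain JSONRPC from any backend. A minimal Python sketch under stated assumptions: the endpoint URL is a placeholder you would replace with your own server's address, and the envelope simply follows the generic JSON-RPC 2.0 shape around the tools/call method shown earlier.

```python
import json
import urllib.request

# Placeholder — substitute your DataGrout server's JSONRPC endpoint.
ENDPOINT = "https://example.invalid/jsonrpc"

def build_call(name, arguments, request_id=1):
    """Wrap a tools/call request in a JSON-RPC 2.0 envelope."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

def send(envelope):
    """POST the envelope and return the decoded JSON-RPC response."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(envelope).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

envelope = build_call(
    "data-grout@1/discovery.discover@1",
    {"goal": "find customers with unpaid invoices", "limit": 5},
)
# send(envelope)  # network call — run against a live server
```

Because both protocols hit the same tools, the cache_ref values in the responses are interchangeable: a reference produced over JSONRPC can be consumed by a later MCP call, and vice versa.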
Tools with no AI premium (Data, Frame, Math) execute entirely in-process — no LLM, no network call. Tools that use AI (Prism, Warden, Discovery) get progressively more efficient: the first call generates and verifies, every repeat call runs the cached result deterministically. Your system gets faster and cheaper the more it operates.