AI Agent
Pro+

The AI Agent builder creates autonomous software agents that reason about problems, invoke tools to interact with the real world, and take multi-step actions to accomplish goals. Unlike simple chat wrappers that respond to a single prompt, AI agents operate in loops -- observing, thinking, acting, and reflecting until the task is complete.
SKYCOT generates the complete agent stack: a runtime engine with configurable reasoning patterns, a typed tool registry, pluggable memory providers, a guardrail pipeline for safety and compliance, and full observability with tracing and cost tracking. You describe what your agent should do and SKYCOT handles the engineering.
Agents built with SKYCOT can operate as standalone services behind a REST API, as chat widgets embedded in your application, as webhook-driven automations, or as MCP servers that other agents can call. The architecture supports single-agent setups for focused tasks and multi-agent orchestration for complex workflows.
Capabilities
Every AI Agent build includes these core capabilities. Configure each in your agent definition to match your use case.
Reasoning Patterns
Choose how your agent thinks. SKYCOT supports multiple reasoning strategies that can be composed depending on task complexity.
- ReAct (Reason + Act) -- interleave chain-of-thought with tool calls for step-by-step problem solving
- Plan-Execute -- generate a full plan up front, then execute each step with optional re-planning on failure
- Reflection -- the agent critiques its own output and iterates until quality thresholds are met
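To make the ReAct pattern concrete, here is a minimal sketch of the observe-think-act loop, with the model and tool stubbed out. All names here (`runReActLoop`, `ModelStep`, the stub model) are illustrative, not the generated runtime's API:

```typescript
// Minimal ReAct-style loop (illustrative only; the generated runtime is richer).
// The "model" is a stub that either requests a tool call or gives a final answer.
type ModelStep =
  | { kind: "tool"; tool: string; input: string; thought: string }
  | { kind: "answer"; text: string; thought: string };

type Tool = (input: string) => string;

function runReActLoop(
  model: (history: string[]) => ModelStep,
  tools: Record<string, Tool>,
  task: string,
  maxSteps = 5
): string {
  const history: string[] = [`Task: ${task}`];
  for (let i = 0; i < maxSteps; i++) {
    const step = model(history);                       // Reason
    history.push(`Thought: ${step.thought}`);
    if (step.kind === "answer") return step.text;      // Done
    const observation = tools[step.tool](step.input);  // Act
    history.push(`Observation: ${observation}`);       // Observe, then loop
  }
  throw new Error("Step budget exhausted");
}

// Stub model: look the order up once, then answer from the observation.
const stubModel = (history: string[]): ModelStep =>
  history.some((h) => h.startsWith("Observation:"))
    ? { kind: "answer", text: "Order ORD-1 is shipped.", thought: "I have the status." }
    : { kind: "tool", tool: "lookup-order", input: "ORD-1", thought: "Need order status." };

const answer = runReActLoop(
  stubModel,
  { "lookup-order": () => "status=shipped" },
  "Where is ORD-1?"
);
```

The key property is the bounded loop: the agent keeps interleaving thoughts and tool observations until it either answers or exhausts its step budget.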
Multi-Agent Orchestration
Scale from a single agent to a coordinated team. Each pattern suits a different complexity level.
- Single Agent -- one agent with a tool belt, suitable for focused tasks
- Supervisor -- a manager agent delegates subtasks to specialist workers and aggregates results
- Handoff -- agents transfer conversation control to each other based on detected intent
- Pipeline -- agents execute sequentially, each transforming the output of the previous stage
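The Pipeline pattern is the simplest to sketch: each stage is an async agent that transforms the previous stage's output. The helper below is a hypothetical illustration (stages are stubbed with plain string transforms, not LLM calls):

```typescript
// Pipeline orchestration sketch (hypothetical helper, not SKYCOT's actual API):
// each stage consumes the previous stage's output and returns its own.
type Stage = (input: string) => Promise<string>;

async function runPipeline(stages: Stage[], input: string): Promise<string> {
  let current = input;
  for (const stage of stages) {
    current = await stage(current); // each agent transforms the prior result
  }
  return current;
}

// Example stages, both stubbed: "extract" then "summarise".
const extract: Stage = async (text) => text.toUpperCase();
const summarise: Stage = async (text) => `SUMMARY: ${text.slice(0, 9)}`;

const pipelineResult = runPipeline([extract, summarise], "quarterly revenue grew 12%");
```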
Tool Integration
Agents interact with the outside world through tools. SKYCOT generates a typed tool registry with runtime validation.
- MCP (Model Context Protocol) -- connect to any MCP-compatible server for extensible tool access
- REST APIs -- call external services with typed request/response schemas and retry logic
- Databases -- read and write to PostgreSQL via Drizzle ORM with parameterised queries
- File system -- read, write, and transform files in sandboxed storage
- Web search -- retrieve and summarise live web results
- Code execution -- run generated code in isolated sandboxes with timeout enforcement
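The common thread across these integrations is the typed registry: every tool validates its input at runtime before executing. A hand-rolled sketch of that idea (the generated registry uses Zod schemas and permission scopes; the names below are illustrative):

```typescript
// Sketch of a typed tool registry with runtime validation (illustrative only).
type ToolDef<I, O> = {
  name: string;
  validate: (raw: unknown) => I; // throws on malformed input
  execute: (input: I) => O;
};

const registry = new Map<string, ToolDef<any, any>>();

function registerTool<I, O>(def: ToolDef<I, O>): void {
  registry.set(def.name, def);
}

function invokeTool(name: string, raw: unknown): unknown {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.execute(tool.validate(raw)); // validate before dispatch
}

registerTool({
  name: "add",
  validate: (raw) => {
    const r = raw as { a?: unknown; b?: unknown };
    if (typeof r.a !== "number" || typeof r.b !== "number") {
      throw new Error("add expects { a: number, b: number }");
    }
    return { a: r.a, b: r.b };
  },
  execute: ({ a, b }) => a + b,
});

const sum = invokeTool("add", { a: 2, b: 3 });
```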
Memory System
Agents remember context across turns and sessions. SKYCOT generates pluggable memory providers.
- Conversation memory -- full message history with automatic summarisation for long threads
- Knowledge base -- vector-indexed document store for RAG-style retrieval during reasoning
- Episodic memory -- persisted records of past tasks and outcomes for learning across sessions
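A rough sketch of the conversation-memory behaviour: once the thread exceeds a `summariseAfter` threshold, older turns are collapsed into a running summary. The class and the stub summariser below are hypothetical (the real provider calls an LLM to summarise):

```typescript
// Conversation-memory sketch: after `summariseAfter` messages, older turns are
// folded into a summary so the context stays bounded (summariser stubbed).
class ConversationMemory {
  private summary = "";
  private messages: string[] = [];

  constructor(
    private summariseAfter: number,
    private summarise: (msgs: string[]) => string
  ) {}

  add(message: string): void {
    this.messages.push(message);
    if (this.messages.length > this.summariseAfter) {
      // Keep only the most recent message verbatim; fold the rest away.
      const older = this.messages.splice(0, this.messages.length - 1);
      this.summary = this.summarise(
        this.summary ? [this.summary, ...older] : older
      );
    }
  }

  // What the agent actually sees: summary first, then recent messages.
  context(): string[] {
    return this.summary
      ? [`Summary: ${this.summary}`, ...this.messages]
      : [...this.messages];
  }
}

const memory = new ConversationMemory(3, (msgs) => `${msgs.length} msgs condensed`);
["hi", "order ORD-1?", "it shipped", "thanks"].forEach((m) => memory.add(m));
```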
Guardrails
Safety and compliance controls wrap every agent invocation, running before and after each LLM call.
- Input validation -- reject malformed or off-topic requests before they reach the model
- PII detection -- scan inputs and outputs for personally identifiable information
- Content moderation -- block harmful, toxic, or policy-violating content
- Spend limits -- cap token usage per request, per session, and per billing period
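The spend-limit guardrail can be pictured as a pair of counters checked before each request. This is a minimal in-memory sketch with hypothetical names; the generated pipeline persists counters per session and billing period:

```typescript
// Spend-limit sketch: cap tokens per request and per day (in-memory counters;
// illustrative only, not the generated guardrail implementation).
class SpendLimiter {
  private usedToday = 0;

  constructor(
    private maxPerRequest: number,
    private maxPerDay: number
  ) {}

  // Returns true if the request may proceed, and records its usage.
  authorize(requestTokens: number): boolean {
    if (requestTokens > this.maxPerRequest) return false;
    if (this.usedToday + requestTokens > this.maxPerDay) return false;
    this.usedToday += requestTokens;
    return true;
  }
}

const limiter = new SpendLimiter(4_000, 10_000);
const ok = limiter.authorize(3_000);     // within both limits
const tooBig = limiter.authorize(5_000); // exceeds the per-request cap
```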
Observability
Full visibility into agent behaviour. Every reasoning step, tool call, and decision is traced.
- Trace viewer -- step-by-step execution timeline with input/output for each action
- Cost dashboard -- real-time token usage tracking per agent, per tool, per session
- Token tracking -- detailed breakdown of prompt vs completion tokens across models
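Conceptually, tracing just means appending a structured event for every reasoning step and tool call so a viewer can render a timeline. A minimal sketch (the `Tracer` class and event shape are illustrative, not the generated observability API):

```typescript
// Trace-recording sketch: each reasoning step and tool call becomes an event
// that a trace viewer could render as a step-by-step timeline.
type TraceEvent = {
  step: number;
  kind: "reasoning" | "tool-call";
  name: string;
  input: unknown;
  output: unknown;
};

class Tracer {
  private events: TraceEvent[] = [];

  record(kind: TraceEvent["kind"], name: string, input: unknown, output: unknown): void {
    this.events.push({ step: this.events.length + 1, kind, name, input, output });
  }

  // Compact one-line-per-event view of the execution.
  timeline(): string[] {
    return this.events.map((e) => `#${e.step} [${e.kind}] ${e.name}`);
  }
}

const tracer = new Tracer();
tracer.record("reasoning", "plan", "Where is ORD-1?", "need order status");
tracer.record("tool-call", "lookup-order", { orderId: "ORD-1" }, { status: "shipped" });
```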
Use Case Templates
Select a starting template during the build wizard. Each template pre-configures reasoning patterns, tools, and guardrails for the domain. You can customise everything after generation.
- Customer Support -- answer tickets, look up orders, process refunds, and escalate to humans when needed
- Research Assistant -- search the web, summarise papers, cross-reference sources, and compile structured reports
- Workflow Automation -- orchestrate multi-step business processes across APIs, databases, and notification channels
- Data Analysis -- query databases, run calculations, generate charts, and narrate insights in plain language
- Code Assistant -- read codebases, generate patches, run tests, and open pull requests with explanations
- Content Generation -- draft articles, social posts, and marketing copy with brand voice enforcement and fact-checking
- Sales Assistant -- qualify leads, draft outreach emails, update CRM records, and schedule follow-up tasks
- HR & Recruiting -- screen resumes, schedule interviews, send offer letters, and track candidate pipelines
- DevOps & Monitoring -- monitor infrastructure, triage alerts, run diagnostics, and execute remediation playbooks
- Custom -- describe any agent behaviour and SKYCOT builds it from first principles using the custom fallback
What SKYCOT Generates
A complete AI Agent build produces the following components, all type-safe, tested, and ready to deploy.
- Runtime Engine -- core agent loop with reasoning pattern, tool dispatch, and retry logic
- Tool Registry -- typed tool definitions with Zod-validated inputs/outputs and permission scopes
- Memory Providers -- pluggable conversation, knowledge base, and episodic memory stores
- Guardrail Pipeline -- pre/post-processing middleware for safety, PII, moderation, and spend limits
- tRPC API Routes -- type-safe endpoints for agent invocation, session management, and history
- Chat Widget -- React component with streaming responses, tool-call visualisation, and file uploads
- Webhook Handler -- inbound webhook receiver for triggering agent runs from external events
- MCP Server -- Model Context Protocol server exposing your agent as a tool to other agents
- Eval Suite -- Vitest-based evaluation harness with golden datasets and regression tracking
Code Examples
Below are representative snippets from a generated AI Agent project. All code is fully editable after generation.
Agent Configuration
```ts
// src/agents/support-agent/config.ts
import { defineAgent } from "@/lib/agent-runtime";

export const supportAgent = defineAgent({
  name: "customer-support",
  model: "claude-sonnet-4-6",
  reasoning: "react",
  orchestration: "single",
  systemPrompt: `You are a customer support agent for Acme Corp.
You can look up orders, process refunds, and escalate to humans.
Always verify the customer's identity before taking actions.`,
  tools: [
    "lookup-order",
    "process-refund",
    "send-email",
    "escalate-to-human",
  ],
  memory: {
    conversation: { maxTokens: 8_000, summariseAfter: 20 },
    knowledge: { collection: "support-docs", topK: 5 },
  },
  guardrails: {
    input: ["pii-detection", "topic-filter"],
    output: ["pii-redaction", "tone-check"],
    spend: { maxTokensPerRequest: 4_000, maxTokensPerDay: 500_000 },
  },
});
```

Tool Definition
```ts
// src/agents/support-agent/tools/lookup-order.ts
import { defineTool } from "@/lib/agent-runtime";
import { z } from "zod";
import { db } from "@/server/db";
import { orders } from "@/server/db/schema/orders";
import { eq } from "drizzle-orm";

export const lookupOrder = defineTool({
  name: "lookup-order",
  description: "Look up an order by ID and return its status, items, and shipping info.",
  parameters: z.object({
    orderId: z.string().describe("The order ID to look up (e.g., ORD-12345)"),
  }),
  returns: z.object({
    id: z.string(),
    status: z.enum(["pending", "shipped", "delivered", "cancelled"]),
    items: z.array(z.object({ name: z.string(), quantity: z.number() })),
    shippingAddress: z.string(),
    trackingUrl: z.string().nullable(),
  }),
  execute: async ({ orderId }) => {
    const order = await db.query.orders.findFirst({
      where: eq(orders.id, orderId),
      with: { items: true },
    });
    if (!order) throw new Error(`Order ${orderId} not found`);
    return {
      id: order.id,
      status: order.status,
      items: order.items.map((i) => ({ name: i.name, quantity: i.qty })),
      shippingAddress: order.shippingAddress,
      trackingUrl: order.trackingUrl,
    };
  },
});
```

Guardrail Setup
```ts
// src/agents/support-agent/guardrails/pii-detection.ts
import { defineGuardrail } from "@/lib/agent-runtime";

export const piiDetection = defineGuardrail({
  name: "pii-detection",
  phase: "input",
  description: "Detect and flag PII in user messages before processing.",
  execute: async ({ message }) => {
    const patterns = [
      { label: "SSN", regex: /\b\d{3}-\d{2}-\d{4}\b/ },
      { label: "Credit Card", regex: /\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b/ },
      { label: "Email", regex: /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/ },
    ];
    const detected = patterns
      .filter(({ regex }) => regex.test(message))
      .map(({ label }) => label);
    if (detected.length > 0) {
      return {
        action: "flag",
        reason: `PII detected: ${detected.join(", ")}. Redacting before processing.`,
        transformedMessage: message.replace(
          /\b\d{3}-\d{2}-\d{4}\b/g,
          "[SSN REDACTED]"
        ),
      };
    }
    return { action: "pass" };
  },
});
```

API Usage
```bash
# Invoke the agent via the tRPC-compatible REST endpoint
curl -X POST https://your-app.vercel.app/api/trpc/agent.invoke \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -d '{
    "json": {
      "agentId": "customer-support",
      "sessionId": "sess_abc123",
      "message": "What is the status of order ORD-98765?"
    }
  }'
```

AI Agent vs AI Wrapper
Both archetypes build AI-powered applications, but they serve different complexity levels. Choose the one that matches your needs.
| Feature | AI Wrapper | AI Agent |
|---|---|---|
| Primary purpose | Chat interface, RAG, content generation | Autonomous multi-step task execution |
| Reasoning | Single LLM call per request | Multi-step reasoning loops (ReAct, Plan-Execute) |
| Tool use | Optional (RAG retrieval) | Core capability (multiple tools per turn) |
| Multi-agent | Not supported | Supervisor, handoff, and pipeline patterns |
| Memory | Conversation history | Conversation + knowledge base + episodic |
| Guardrails | Basic content filtering | Full pipeline (PII, moderation, spend limits) |
| Observability | Token tracking | Trace viewer, cost dashboard, step-level logging |
| Best for | Chatbots, Q&A, document search | Support agents, workflow automation, research |
| Tier requirement | Pro+ | Pro+ |
Frequently Asked Questions
What models can my agent use?
Agents can use any model available on your SKYCOT tier. Sonnet 4.6 is the default for cost-efficiency. Pro+ users can select Opus 4.6 for complex reasoning tasks. The model is configured per-agent in the agent config, and you can change it at any time.
Can I connect my agent to external APIs?
Yes. Tools are the primary integration mechanism. Define a tool with a Zod schema for inputs and outputs, implement the execute function calling any REST API, and the agent runtime handles invocation, retries, and error reporting automatically.
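As a rough sketch of the retry behaviour described above (the helper, the injected fetcher, and the stub endpoint are all illustrative, not the generated runtime's code):

```typescript
// REST call with retry logic, sketched with an injected fetcher so the example
// is self-contained; a real tool would use global fetch against your API.
type Fetcher = (url: string) => Promise<{ ok: boolean; json: () => Promise<unknown> }>;

async function callWithRetry(
  fetcher: Fetcher,
  url: string,
  maxAttempts = 3
): Promise<unknown> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetcher(url);
      if (!res.ok) throw new Error(`HTTP error for ${url}`);
      return await res.json();
    } catch (err) {
      lastError = err; // back off and retry (delay omitted for brevity)
    }
  }
  throw lastError;
}

// Stub fetcher that fails once, then succeeds, to exercise the retry path.
let calls = 0;
const flakyFetcher: Fetcher = async () => {
  calls++;
  if (calls === 1) return { ok: false, json: async () => ({}) };
  return { ok: true, json: async () => ({ status: "shipped" }) };
};

const payload = callWithRetry(flakyFetcher, "https://api.example.com/orders/ORD-1");
```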
How does the memory system work?
SKYCOT generates three memory providers. Conversation memory stores the current thread with automatic summarisation for long contexts. Knowledge base memory uses vector embeddings over your documents for RAG-style retrieval. Episodic memory persists task outcomes across sessions so the agent can learn from past interactions.
What guardrails are included by default?
Every generated agent includes input validation (schema and topic filtering), PII detection and redaction, content moderation, and configurable token spend limits per request and per day. You can add custom guardrails by implementing the defineGuardrail interface.
How is this different from the AI Wrapper archetype?
AI Wrapper builds simple chat interfaces and RAG pipelines with a single LLM call per request. AI Agent builds autonomous systems that reason over multiple steps, call tools, orchestrate sub-agents, and maintain rich memory. Choose AI Wrapper for chatbots and Q&A; choose AI Agent for automation, research, and multi-step workflows.
Can I deploy my agent as an MCP server?
Yes. SKYCOT generates an MCP server implementation alongside your agent. This means other MCP-compatible tools (including Claude Desktop and other SKYCOT agents) can invoke your agent as a tool, enabling composable agent networks.