Model Context Protocol (MCP)
Vreko integrates with AI coding assistants through the Model Context Protocol (MCP), enabling seamless context sharing for snapshots, risk detection, and learning capture.
Local MCP (Available Now)
100% Local, 100% Private, Free
Vreko's MCP integration works entirely on your machine. No cloud connectivity required. No data sent to Vreko servers.
Privacy Guarantee: Local MCP runs entirely offline. No internet connection required. Your code never leaves your device.
Features:
- ✅ Snapshot management
- ✅ Task tracking with learnings
- ✅ Code validation
- ✅ Pattern enforcement
- ✅ Risk analysis (local)
Available to: All tiers (Free, Pro, Team, Enterprise)
Recommended Setup
Zero Config for Most Users: The VS Code extension auto-configures MCP via SSE. Claude Desktop users can install with one command.
IDE Extension (Recommended)
No MCP configuration needed. The Vreko extension handles everything:
- Install the extension from VS Code Marketplace
- The extension auto-starts the Vreko service
- MCP connects via SSE to localhost:8765
- All AI tools in your IDE automatically have Vreko protection
# Or install via CLI
code --install-extension MarcelleLabs.vreko-vscode
CLI Configuration (Recommended)
Install the Vreko CLI and configure Claude Desktop:
npm install -g @vreko/cli
vreko tools configure --claude
Then restart Claude Desktop. Vreko appears in your MCP servers list.
For Claude Code: Use claude-sync to generate integration files enriched with workspace intelligence:
vreko claude-sync
Unlock Pro features:
vreko login
Manual Setup (Advanced)
For Power Users: Manual configuration is only needed if auto-setup doesn't work or you need custom settings.
Continue
Configure in .continue/config.json:
{
"experimental": {
"modelContextProtocolServers": [
{
"name": "vreko",
"command": "vreko",
"args": ["mcp", "--stdio"]
}
]
}
}
SOPR MCP Open-Source Library
Vreko's MCP integration is powered by an internal implementation of the Service-Oriented Protocol Router (SOPR) pattern.
For teams who want to reuse that architecture directly, we publish the core server framework as an open-source NPM package:
- Library: @vreko-oss/sopr-mcp
- Source: vreko-oss/vreko-mcp
This library gives you:
- A SOPR-based MCP server factory (createSOPRServer) built around the same protocol → registry → tools → services layering used in Vreko
- First-class support for mode-based tools instead of a large, flat list of commands
- Extension points for routers, telemetry, and resilience, so you can plug in your own transport and analytics
Tip: Start with the library docs in the repository's docs/ folder:
- docs/architecture.mdx – how the SOPR MCP server is wired internally
- docs/usage-sopr-mcp.mdx – how to build your own MCP server on top of it
- docs/diagrams.mdx – visual diagrams of the architecture and request flow
If you like how Vreko behaves inside your editor today, @vreko-oss/sopr-mcp is the foundation you can reuse to build similar MCP-powered workflows for your own tools.
Default Surface (Minimal)
Core Commands (5):
- PERCEIVE: pulse – workspace health check
- REASON: advise, guide – get recommendations and concrete plans
- ACT: vreko, check – task and validation
- REFLECT: snap_learn, snap_end – capture learnings
What you get:
- Clean, focused tool list for AI assistants
- Fast tool discovery and invocation
- Anthropic MCP best-practice compliance
- All essential Vreko workflows
Full Surface (Advanced)
Set VREKO_MCP_INTELLIGENCE_SURFACE=full to enable advanced tools:
Additional Tools (11):
- snap_violation – Report code violations for pattern learning
- learning_gc – Learning lifecycle management
- Intelligence Layer (5 tools): External context integration
  - vreko_validate_change, vreko_get_risk_score, vreko_query_patterns, vreko_get_context, vreko_suggest_rollback
- Learning Intelligence (5 tools): Pattern-based learning system
  - intelligence.capture, intelligence.patterns, intelligence.insights, intelligence.explain, intelligence.outcome
Enabling Full Surface:
// In your MCP config (e.g., ~/.cursor/mcp.json)
{
"mcpServers": {
"vreko": {
"command": "npx",
"args": ["@vreko/cli", "mcp", "--stdio"],
"env": {
"VREKO_MCP_INTELLIGENCE_SURFACE": "full"
}
}
}
}
Use full surface when:
- You need proactive risk assessment with external context (GitHub, Sentry, Context7)
- You want pattern-based learning and detection
- You're debugging complex production issues
- You need violation tracking and lifecycle management
1. pulse - Workspace Vitals (PERCEIVE)
Quick health check of workspace state before taking action.
Parameters:
{
record_change?: string; // Optional file path to record as change
}
Wire Response:
🫀|pulse:elevated|cpm:18|pressure:45|risk:M|changes:7|action:monitor
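Wire responses are compact pipe-delimited strings: a tool/status prefix followed by key:value fields. A minimal sketch of parsing them on the client side (the field names come from the pulse example; the parser itself is illustrative, not Vreko's code):

```typescript
// Illustrative parser for Vreko's compact wire format.
// Format: <prefix>|key:value|key:value|...
interface WireResponse {
  prefix: string;                  // tool/status glyph, e.g. the pulse prefix
  fields: Record<string, string>;  // parsed key:value pairs
}

function parseWire(line: string): WireResponse {
  const [prefix, ...rest] = line.split("|");
  const fields: Record<string, string> = {};
  for (const part of rest) {
    const sep = part.indexOf(":");
    if (sep > 0) fields[part.slice(0, sep)] = part.slice(sep + 1);
  }
  return { prefix, fields };
}

const vitals = parseWire("🫀|pulse:elevated|cpm:18|pressure:45|risk:M|changes:7|action:monitor");
// vitals.fields.risk is "M"; vitals.fields.action is "monitor"
```

The same shape works for the other wire responses in this document, since they all follow the prefix-plus-pairs convention.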
2. advise - Get Recommendations (REASON)
Get AI-powered advice on next steps based on current workspace state.
Parameters:
{
task?: string; // What you're about to do
files?: string[]; // Files you plan to modify
intent?: 'implement' | 'refactor' | 'debug' | 'test' | 'deploy';
}
Wire Response:
🧠|conf:65|rec:review|warn:2|learn:3|viol:1|hint:check_error_handling
3. vreko - Universal Entry Point (ACT)
Start tasks, get context, or quick check. This is the primary tool AI assistants should call first.
Parameters:
{
mode?: 'start' | 'check' | 'context'; // or legacy: 's' | 'c' | 'x'
task?: string; // Task description (for start mode)
files?: string[]; // Files to work on
keywords?: string[]; // Keywords for learning retrieval
intent?: 'implement' | 'debug' | 'refactor' | 'review' | 'explore';
thorough?: boolean; // Enable 7-layer validation (check mode)
compact?: boolean; // Use compact wire format
goal?: { // Goal for task completion validation
metric: 'bundle' | 'performance' | 'coverage';
target: number;
unit: string;
};
}
Modes:
- start (or s): Start a task, creates snapshot, loads learnings
- check (or c): Quick validation of files
- context (or x): Get current context without starting a task
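The legacy single-letter aliases map one-to-one onto the canonical modes. A hypothetical normalization helper (not Vreko's internal code) makes the mapping explicit:

```typescript
// Hypothetical helper mapping the legacy aliases ('s' | 'c' | 'x')
// to the canonical vreko modes.
type VrekoMode = "start" | "check" | "context";

const LEGACY_ALIASES: Record<string, VrekoMode> = {
  s: "start",
  c: "check",
  x: "context",
};

function normalizeMode(mode: string): VrekoMode {
  // Canonical names pass through unchanged.
  if (mode === "start" || mode === "check" || mode === "context") return mode;
  const canonical = LEGACY_ALIASES[mode];
  if (!canonical) throw new Error(`Unknown vreko mode: ${mode}`);
  return canonical;
}
```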
Usage Example:
// AI assistant starts a task
await mcp.call('vreko', {
mode: 'start',
task: 'Refactor authentication module',
files: ['src/auth.ts', 'src/config.ts'],
intent: 'refactor'
});
4. check - Code Validation (ACT)
Validate code against patterns, run builds, check for issues.
Parameters:
{
mode?: 'quick' | 'full' | 'patterns' | 'build' | 'impact' | 'circular'
| 'docs' | 'learnings' | 'architecture' | 'trace' | 'security'
| 'coverage' | 'orphans' | 'health';
files?: string | string[]; // Files to check
diff?: 'staged' | 'changed' | 'uncommitted'; // Auto-detect from git
code?: string; // Code to validate (for patterns mode)
tests?: boolean; // Run tests
compact?: boolean; // Use compact wire format
}
Modes (14 total):
- quick: Fast TypeScript + lint check (default) ⚡ RECOMMENDED
- full: Comprehensive 7-layer validation 🎯 PRODUCTION READY
- patterns: Pattern-only style compliance
- build: Build verification (runs pnpm build)
- impact: Change impact analysis with risk scoring
- circular: Circular dependency detection
- docs: Documentation freshness check
- learnings: Learning tier maintenance and stats
- architecture: Layer dependency validation
- trace: ❌ NOT YET IMPLEMENTED – returns an error message. Use quick or full instead.
- security: Secret detection and threat scanning
- coverage: Test coverage analysis
- orphans: Find orphan files and skipped tests
- health: MCP server diagnostics
Trace Mode Unavailable: The check tool's trace mode is not yet implemented in v0.1.0. Please use quick or full mode instead. Trace mode is planned for a future release.
5. snap_end - Complete Task (REFLECT)
Complete a task and capture learnings.
Parameters:
{
outcome?: 'completed' | 'abandoned' | 'blocked';
learnings?: string[]; // Key learnings from this task
notes?: string; // Additional completion notes
efficiency?: { // Your estimate of session efficiency
saved?: string; // Tokens saved (e.g., '~15K')
prevented?: string; // Mistakes avoided (e.g., '2 - wrong layer')
helped?: string; // What context helped (e.g., 'auth patterns')
};
survey?: { // Optional self-assessment (helps improve Vreko)
patterns_used?: number; // How many patterns applied
pitfalls_avoided?: number; // How many mistakes avoided
helpfulness?: number; // Rating 1-5
unhelpful_count?: number; // Count of unhelpful suggestions
};
compact?: boolean; // Use compact wire format
}
Wire Response:
📦|E|status:OK|learn:2L|files:3F|lines:+45-12
6. snap_learn - Capture Learning (REFLECT)
Capture mid-session learnings for future reference. Use this mid-task for immediate learnings (vs snap_end for end-of-task summary).
Parameters:
{
trigger: string; // REQUIRED: What situation triggers this learning
action: string; // REQUIRED: What to do when triggered
type?: 'pattern' | 'pitfall' | 'efficiency' | 'discovery' | 'workflow';
source?: string; // Where this learning originated
compact?: boolean;
}
Learning Types:
- pattern: Something that worked well ("When doing X, always do Y")
- pitfall: Mistake to avoid ("Never do X because Y happens")
- efficiency: Token/time optimization ("Use X instead of Y")
- discovery: New codebase knowledge ("File X handles Y")
- workflow: Process improvement ("Better way to do X")
Wire Response:
📦|L|status:OK|id:learn_abc123|type:pattern
Usage Example:
await mcp.call('snap_learn', {
trigger: 'modifying auth middleware',
action: 'always validate session before token refresh',
type: 'pitfall'
});
7. snap_violation - Report Violation (REFLECT)
Report a mistake for pattern learning. Use for actual bugs/errors (vs snap_learn pitfall for potential mistakes).
Parameters:
{
type: string; // REQUIRED: Violation category (e.g., 'silent_catch')
file: string; // REQUIRED: File where violation occurred
description: string; // REQUIRED: What went wrong
reason?: string; // Why it happened (optional)
prevention: string; // REQUIRED: How to prevent in future
compact?: boolean;
}
Auto-Escalation:
- 1st time: Recorded and tracked
- 3rd time: Promoted to pattern (prevents future occurrences)
- 5th time: Flagged for automation
Wire Response:
📦|V|status:OK|type:silent_catch|count:3|promote:PROMOTED
Usage Example:
await mcp.call('snap_violation', {
type: 'silent_catch',
file: 'src/auth.ts',
description: 'Catch block swallowed error without logging',
reason: 'Rushed implementation, forgot logging',
prevention: 'Always log in catch blocks with context'
});
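The auto-escalation thresholds above can be sketched as a simple per-type counter. This illustrates the documented 1st/3rd/5th behavior only; it is not Vreko's internal implementation:

```typescript
// Illustrative sketch of violation auto-escalation: recorded on first report,
// promoted to a pattern at the 3rd occurrence, flagged for automation at the 5th.
type Escalation = "recorded" | "promoted_to_pattern" | "flagged_for_automation";

const violationCounts = new Map<string, number>();

function recordViolation(type: string): Escalation {
  const count = (violationCounts.get(type) ?? 0) + 1;
  violationCounts.set(type, count);
  if (count >= 5) return "flagged_for_automation";
  if (count >= 3) return "promoted_to_pattern";
  return "recorded";
}
```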
Intelligence Layer Tools (Advanced)
Proactive Intelligence: These tools enable AI assistants to evaluate changes before they're made by aggregating external context from GitHub, Sentry, and documentation sources. Available on all tiers when external MCP servers are configured.
The Intelligence Layer transforms Vreko from reactive protection to proactive prevention:
Before: AI makes change → Vreko snapshots it → User restores if broken
After: AI proposes change → Vreko evaluates against context → Informed decision before damage
Prerequisites
Intelligence Layer tools require external MCP server connections:
- GitHub MCP (@modelcontextprotocol/server-github) – Commit history, PR discussions, issue references
- Context7 MCP – API documentation validation, deprecation warnings
- Sentry MCP (@modelcontextprotocol/server-sentry) – Error tracking, stacktraces, failure patterns
Optional Feature: Intelligence Layer tools work alongside core tools but require additional MCP server setup. Core Vreko functionality works without these integrations.
1. vreko_validate_change - Proactive Change Validation
Validate file changes against aggregated external context before they're applied.
Parameters:
{
files: Array<{ // Files to validate
path: string;
content?: string; // New content (optional)
diff?: string; // Git diff (optional)
}>;
context?: {
intent?: string; // Why this change is being made
relatedIssues?: string[]; // GitHub issue references
};
}
Response:
{
validationResult: {
overallRisk: number; // 0-10 risk score
riskLevel: 'low' | 'medium' | 'high' | 'critical';
findings: Array<{
file: string;
type: 'security' | 'deprecation' | 'pattern_violation' | 'error_prone';
severity: 'low' | 'medium' | 'high' | 'critical';
message: string;
source: 'github' | 'sentry' | 'context7' | 'local';
confidence: number; // 0-100
}>;
recommendation: 'proceed' | 'review' | 'block';
context: {
recentErrors?: number; // From Sentry
relatedCommits?: number; // From GitHub
deprecationWarnings?: number; // From Context7
};
};
}
Wire Response:
🔍|V|score:45|level:medium|findings:3|rec:review|src:github+sentry
Usage Example:
// AI assistant validates before applying changes
const validation = await mcp.call('vreko_validate_change', {
files: [{
path: 'src/auth.ts',
content: '// proposed new content...'
}],
context: {
intent: 'Add JWT token refresh logic',
relatedIssues: ['#123']
}
});
if (validation.validationResult.recommendation === 'block') {
console.warn('High risk detected - review required');
}
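The relationship between overallRisk, riskLevel, and recommendation can be read as a threshold mapping. The cut-off values below are illustrative assumptions, not Vreko's documented thresholds:

```typescript
// Hypothetical mapping from a 0-10 overallRisk score to the riskLevel and
// recommendation fields of the validation response. Thresholds are assumed.
type RiskLevel = "low" | "medium" | "high" | "critical";
type Recommendation = "proceed" | "review" | "block";

function classifyRisk(overallRisk: number): { riskLevel: RiskLevel; recommendation: Recommendation } {
  if (overallRisk >= 8) return { riskLevel: "critical", recommendation: "block" };
  if (overallRisk >= 6) return { riskLevel: "high", recommendation: "block" };
  if (overallRisk >= 3) return { riskLevel: "medium", recommendation: "review" };
  return { riskLevel: "low", recommendation: "proceed" };
}
```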
2. vreko_get_risk_score - Weighted Risk Assessment
Get comprehensive risk score with weighted factors from multiple sources.
Parameters:
{
files: string[]; // File paths to analyze
includeContext?: boolean; // Include full context details
}
Response:
{
riskScore: number; // 0-10 weighted score
riskLevel: 'low' | 'medium' | 'high' | 'critical';
factors: Array<{
source: 'github' | 'sentry' | 'context7' | 'local';
weight: number; // 0-1 contribution to score
score: number; // 0-10 for this factor
reason: string;
}>;
recommendation: string;
confidence: number; // 0-100 overall confidence
}
Wire Response:
🎯|R|score:67|level:high|factors:4|conf:85|rec:snapshot_first
Usage Example:
const risk = await mcp.call('vreko_get_risk_score', {
files: ['src/payment.ts', 'src/api.ts'],
includeContext: true
});
console.log(`Risk: ${risk.riskLevel} (${risk.riskScore}/10)`);
risk.factors.forEach(f => {
console.log(`- ${f.source}: ${f.score} (weight: ${f.weight})`);
});
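Given the response shape above (each factor carries a 0-1 weight and a 0-10 score), the overall score can be sketched as a weighted average. The normalization by total weight is an assumption about the scoring model, not Vreko's documented formula:

```typescript
// Sketch: aggregating a weighted risk score from per-source factors,
// matching the factor shape in the vreko_get_risk_score response.
interface RiskFactor {
  source: "github" | "sentry" | "context7" | "local";
  weight: number; // 0-1 contribution to score
  score: number;  // 0-10 for this factor
}

function aggregateRisk(factors: RiskFactor[]): number {
  const totalWeight = factors.reduce((sum, f) => sum + f.weight, 0);
  if (totalWeight === 0) return 0;
  const weighted = factors.reduce((sum, f) => sum + f.weight * f.score, 0);
  // Normalize and clamp to the 0-10 scale.
  return Math.min(10, weighted / totalWeight);
}

const score = aggregateRisk([
  { source: "sentry", weight: 0.4, score: 8 },
  { source: "github", weight: 0.3, score: 6 },
  { source: "local",  weight: 0.3, score: 4 },
]);
// score ≈ 6.2
```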
3. vreko_query_patterns - Pattern Database Query
Query Vrekoβs patterns, violations, and learnings database.
Parameters:
{
query: string; // Search query
type?: 'pattern' | 'violation' | 'learning' | 'all';
filters?: {
files?: string[]; // Filter by file patterns
severity?: string[]; // Filter by severity
since?: string; // ISO date - patterns since date
};
limit?: number; // Max results (default: 10)
}
Response:
{
results: Array<{
type: 'pattern' | 'violation' | 'learning';
id: string;
trigger: string; // When this applies
action: string; // What to do
occurrences: number; // How many times seen
lastSeen: string; // ISO date
confidence: number; // 0-100
relatedFiles: string[];
}>;
total: number;
query: string;
}
Wire Response:
🔍|P|results:7|type:violation|query:auth|conf:92
Usage Example:
// Check for known auth-related pitfalls
const patterns = await mcp.call('vreko_query_patterns', {
query: 'authentication',
type: 'violation',
filters: { severity: ['high', 'critical'] }
});
patterns.results.forEach(p => {
console.log(`⚠️ ${p.trigger} → ${p.action}`);
});
4. vreko_get_context - Aggregated External Context
Get comprehensive context from all integrated external sources.
Parameters:
{
files: string[]; // Files to get context for
sources?: Array<'github' | 'sentry' | 'context7' | 'local'>; // Filter sources
depth?: 'shallow' | 'deep'; // Context depth
}
Response:
{
context: {
github?: {
recentCommits: Array<{...}>;
relatedPRs: Array<{...}>;
issues: Array<{...}>;
};
sentry?: {
recentErrors: Array<{...}>;
errorFrequency: number;
affectedUsers: number;
};
context7?: {
apiDocs: Array<{...}>;
deprecations: Array<{...}>;
versionInfo: {...};
};
local?: {
snapshots: number;
changeVelocity: number;
aiDetectionSignals: Array<{...}>;
};
};
aggregatedAt: string; // ISO timestamp
sources: string[]; // Active sources
}
Wire Response:
🔍|C|sources:3|commits:12|errors:5|docs:8|depth:deep
Usage Example:
const context = await mcp.call('vreko_get_context', {
files: ['src/auth.ts'],
sources: ['github', 'sentry'],
depth: 'deep'
});
if (context.context.sentry?.errorFrequency > 10) {
console.warn('High error rate detected in this file');
}
5. vreko_suggest_rollback - Intelligent Restore Suggestions
Get restore suggestions based on failure patterns and context.
Parameters:
{
reason?: string; // Why restore is needed
files?: string[]; // Specific files to consider
includeAnalysis?: boolean; // Include detailed analysis
}
Response:
{
suggestions: Array<{
snapshotId: string;
timestamp: string;
reason: string;
confidence: number; // 0-100
filesAffected: string[];
riskOfRestore: 'low' | 'medium' | 'high';
relatedContext: {
commitsBetween?: number;
errorsSince?: number;
};
}>;
recommended?: string; // Recommended snapshot ID
analysis?: { // If includeAnalysis: true
currentState: {...};
targetState: {...};
impact: {...};
};
}
Wire Response:
🔄|RB|suggestions:3|recommended:snap_abc|conf:88|risk:low
Usage Example:
const restore = await mcp.call('vreko_suggest_rollback', {
reason: 'Critical error after auth refactor',
files: ['src/auth.ts'],
includeAnalysis: true
});
const best = restore.suggestions[0];
console.log(`Suggest restoring to: ${best.snapshotId}`);
console.log(`Confidence: ${best.confidence}%`);
console.log(`Risk: ${best.riskOfRestore}`);
Setup External MCP Servers
# Install GitHub MCP server
npm install -g @modelcontextprotocol/server-github
# Configure in MCP settings
# Add to your AI assistant's MCP config:
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["@modelcontextprotocol/server-github"],
"env": {
"GITHUB_TOKEN": "ghp_your_token_here"
}
}
}
}
# Install Sentry MCP server
npm install -g @modelcontextprotocol/server-sentry
# Configure in MCP settings
{
"mcpServers": {
"sentry": {
"command": "npx",
"args": ["@modelcontextprotocol/server-sentry"],
"env": {
"SENTRY_AUTH_TOKEN": "your_token_here",
"SENTRY_ORG": "your-org",
"SENTRY_PROJECT": "your-project"
}
}
}
}
# Install Context7 MCP (requires subscription)
npm install -g @context7/mcp-server
# Configure in MCP settings
{
"mcpServers": {
"context7": {
"command": "npx",
"args": ["@context7/mcp-server"],
"env": {
"CONTEXT7_API_KEY": "your_api_key"
}
}
}
}
Integration Architecture
AI Assistant
    │
    ▼
Vreko MCP Server (Local)
    │
    ├──► GitHub MCP ──► GitHub API
    ├──► Sentry MCP ──► Sentry API
    ├──► Context7 MCP ──► Context7 API
    │
    ▼
Aggregated Context
    │
    ▼
Risk Engine (Weighted Scoring)
    │
    ▼
Intelligence Response
Privacy Note: External MCP servers make API calls using YOUR tokens/credentials. Vreko orchestrates these calls but doesn't store or transmit your credentials.
Backend MCP (Planned)
Coming Soon: Backend MCP features are currently in development for Pro, Team, and Enterprise plans.
What's Planned
Backend MCP will add cloud-powered capabilities to your local MCP integration:
Cloud Backup
Automatically sync snapshots to encrypted cloud storage with configurable retention.
Advanced AI Scoring
Cloud-based AI risk scoring analyzes code for security risks with higher accuracy.
Team Collaboration
Share snapshots and policies across your team with granular access controls.
Privacy Notice
Privacy & Consent: Backend MCP requires explicit consent before sending data to Vreko servers. Metadata (file paths, risk scores, timestamps) is uploaded with end-to-end encryption. File contents are never sent unless you explicitly enable cloud backup.
What's Uploaded (with Backend MCP):
- ✅ Snapshot metadata: File paths, sizes, hashes (encrypted)
- ✅ Risk analysis results: Severity scores, violation types (no code content)
- ✅ Session metadata: Session names, durations, tag data
- ✅ Policy configurations: File policies, team .vrekorc settings
What's Never Uploaded:
- ❌ File contents (unless cloud backup is explicitly enabled)
- ❌ API keys or secrets (always redacted)
- ❌ Personally identifiable information (PII is sanitized)
- ❌ Source code snippets (only metadata and hashes)
Planned Backend Tools
cloud_backup (Planned)
Upload a snapshot to encrypted cloud storage.
Parameters:
{
snapshotId: string;
workspacePath: string;
retention?: number; // days (default: tier-based)
}
cloud_restore (Planned)
Restore a snapshot from cloud storage to your local machine.
guardian_ai_score (Planned)
Analyze code using cloud-based AI risk scoring for enhanced risk detection.
Key Differences from Local analyze_risk:
- Cloud-based ML model (higher accuracy)
- Confidence scores and explanations
- 🔧 Suggested fixes for violations
- 🚀 Faster analysis for large codebases (parallel processing)
Real-Time Intelligence Channel (Planned)
Coming Soon: The Channel Server is currently in development. This feature will provide real-time intelligence push notifications to your AI assistant sessions.
What It Does
The Channel Server creates a second MCP connection that pushes Vreko intelligence alerts in real-time to your AI coding assistant, eliminating the need for manual snap_pulse checks.
Key Capabilities
Proactive Risk Warnings
Inline alerts when modifying fragile files with historical context and violation patterns.
Live Session Monitoring
Automatic push notifications for risk detection, snapshot creation, and learning discoveries.
Inline Acknowledgments
Respond to alerts with the vreko_acknowledge tool – acknowledge risks or create snapshots instantly.
Zero Configuration
Runs alongside your existing MCP server with no additional setup required.
Architecture
Vreko Daemon (vrekod)
    ↓ broadcasts events via IPC
Channel Server (vreko mcp --channel)
    ↓ pushes via MCP protocol
Your AI Assistant Session
    ↓ displays inline
Real-time intelligence alerts
Usage Example
Once released, you'll enable it via .mcp.json:
{
"mcpServers": {
"vreko": {
"type": "stdio",
"command": "npx",
"args": ["--yes", "@vreko/cli", "mcp", "--stdio", "${workspacePath}"],
"instructions": "Use snap_begin, snap_pulse, snap_learn, snap_end for session management."
},
"vreko-channel": {
"type": "stdio",
"command": "npx",
"args": ["--yes", "@vreko/cli", "mcp", "--channel", "${workspacePath}"]
}
}
}
Or launch Claude Code with:
claude --channels vreko-channel
Privacy & Performance
100% Local: The Channel Server runs entirely on your machine. No data is transmitted – all intelligence flows through local Unix sockets (macOS/Linux) or named pipes (Windows).
Performance Characteristics:
- ✅ Sub-50ms alert delivery from service to AI session
- ✅ Uses existing service infrastructure (no new connections)
- ✅ Workspace-scoped connection pooling prevents resource contention
- ✅ Circuit breaker pattern ensures graceful degradation
New Tool: vreko_acknowledge
The Channel Server introduces a new tool for responding to intelligence alerts:
Parameters:
{
action: "acknowledged" | "creating_snapshot" | "proceeding_anyway";
}
Actions:
- "acknowledged" – I see the risk and will proceed carefully
- "creating_snapshot" – Creating a snapshot before proceeding (recommended for high-risk changes)
- "proceeding_anyway" – Accept the risk and continue without snapshot
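A client could pick an action from the alert's risk level, following the guidance that high-risk changes should snapshot first. The risk-level values and decision rule here are illustrative assumptions, not part of the vreko_acknowledge contract:

```typescript
// Hypothetical client-side policy for choosing a vreko_acknowledge action.
type AckAction = "acknowledged" | "creating_snapshot" | "proceeding_anyway";
type AlertRisk = "low" | "medium" | "high" | "critical";

function chooseAck(riskLevel: AlertRisk, acceptRisk: boolean): AckAction {
  // Snapshot first for high-risk changes, as the docs recommend.
  if (riskLevel === "high" || riskLevel === "critical") return "creating_snapshot";
  // Otherwise either accept the risk outright or proceed carefully.
  return acceptRisk ? "proceeding_anyway" : "acknowledged";
}
```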
Availability
Estimated Release: Q2 2026. The Channel Server extends existing MCP infrastructure – no breaking changes required. Available to all tiers (Free, Pro, Team, Enterprise).
Configuration for All Supported AI Assistants
Auto-Configuration: Vreko automatically configures MCP for most AI assistants when you install the VS Code extension. The configurations below are provided for manual setup or troubleshooting.
Supported AI Assistants
Vreko supports 11 AI assistants with automatic MCP configuration:
- Claude Desktop – Anthropic's desktop app
- Cursor – AI-first code editor
- Windsurf – Codeium's AI editor
- Qoder – AI coding assistant
- VS Code – With MCP extensions
- Continue – Open-source AI assistant
- Cline – Claude-powered coding assistant
- Zed – High-performance editor
- Roo Code – AI coding companion
- Aider – AI pair programmer (CLI)
- Gemini/Antigravity – Google's AI assistant
Multi-Workspace Architecture
How It Works: Each workspace gets its own dedicated MCP server process. If you have 3 Cursor windows open with different projects, Vreko spawns 3 separate MCP servers, all connecting to one shared service for coordination.
Example:
Cursor Workspace A → vreko mcp --stdio --workspace /path/to/project-a → Process #1 → local service IPC
Cursor Workspace B → vreko mcp --stdio --workspace /path/to/project-b → Process #2 → local service IPC
Cursor Workspace C → vreko mcp --stdio --workspace /path/to/project-c → Process #3 → local service IPC
    ↓
~/.vreko/service.sock (shared service, workspace-scoped sessions)
Benefits:
- ✅ Isolation – Changes in one workspace don't affect others
- ✅ Performance – Each server only watches its own files
- ✅ Reliability – If one crashes, others keep working
- ✅ Scalability – No cross-workspace coordination overhead
Manual Configuration by Client
Transport Modes: Vreko MCP supports multiple transport modes:
- --stdio (recommended): Direct local communication via standard input/output
- --sse: Local HTTP SSE server for multi-client connections
- shim: Bridge mode that proxies stdio to a remote SSE server at vreko-mcp.fly.dev
Use --stdio for single-client local development. Use --sse when running multiple MCP clients simultaneously.
Understanding Transport Modes
Vreko's MCP server can operate in different transport modes:
--stdio Mode (Recommended)
Direct local communication using standard input/output streams. This is the default and recommended mode for all local development.
vreko mcp --stdio --workspace /path/to/project
Characteristics:
- ✅ 100% local, no network requests
- ✅ Fastest response time (no network latency)
- ✅ Works completely offline
- ✅ Full privacy guarantee
- ✅ Supported by all major AI assistants
Use this when: You want local, private MCP functionality (99% of use cases).
--sse Mode (Multi-Client)
Local HTTP Server-Sent Events server for multi-client connections on localhost:8765.
# Start service with SSE transport
vreko service start
# SSE is available at http://localhost:8765/events
Characteristics:
- ✅ Multiple clients can connect simultaneously
- ✅ Persistent connections without stdin overhead
- ✅ Unified authentication with Vreko account
- ✅ Works with any SSE-compatible MCP client
Configuration:
Set the SSE port (default: 8765):
export VREKO_SSE_PORT=8765
Connecting to SSE:
Configure your MCP client to connect to the SSE endpoint:
{
"mcpServers": {
"vreko": {
"url": "http://localhost:8765/events",
"transport": "sse",
"headers": {
"Authorization": "Bearer sk_live_xxx"
}
}
}
}
Authentication:
SSE transport supports two authentication methods:
1. API Key (recommended): Use your Vreko API key (sk_live_xxx or sk_test_xxx)
   - Validated against api.vreko.dev
   - Provides user context (tier, permissions) for rate limiting
2. Local Token: Automatically uses your CLI login session
   - Retrieved from ~/.vreko/mcp-token
   - No additional configuration needed
Use this when: Running multiple AI assistants simultaneously or need persistent connections.
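The two authentication paths can be sketched as a simple fallback chain: prefer an explicit API key, otherwise read the local CLI token. Only the key prefixes and the token path come from the docs; the function shape is hypothetical:

```typescript
// Sketch: resolving an SSE auth token for the Authorization header.
// 1. Explicit Vreko API key (sk_live_xxx / sk_test_xxx)
// 2. Fallback: local token from the CLI login session (~/.vreko/mcp-token)
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

function resolveAuthToken(apiKey?: string): string | undefined {
  if (apiKey && (apiKey.startsWith("sk_live_") || apiKey.startsWith("sk_test_"))) {
    return apiKey; // API key path: validated server-side against api.vreko.dev
  }
  const tokenPath = join(homedir(), ".vreko", "mcp-token");
  if (existsSync(tokenPath)) {
    return readFileSync(tokenPath, "utf8").trim(); // local CLI session token
  }
  return undefined; // not authenticated
}
```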
shim Mode (Advanced)
A stdio-to-SSE bridge that proxies local stdio requests to a remote Server-Sent Events endpoint at https://vreko-mcp.fly.dev.
vreko mcp shim --workspace /path/to/project
Characteristics:
- Connects to remote Fly.io server
- 🌐 Requires internet connectivity
- ⚠️ May encounter 404 errors if server not deployed
- 🔌 Intended for stdio-only clients that need remote features
Use this when: You specifically need to connect stdio-only clients (like certain AI assistants) to a remote SSE-enabled MCP server.
Common Mistake: If you see https://vreko-mcp.fly.dev in your AI assistant's MCP connection status with a 404 error, you likely have shim in your config when you meant to use --stdio.
Check your MCP config file and replace:
"args": ["mcp", "shim", "--workspace", "..."]
with:
"args": ["mcp", "--stdio", "--workspace", "..."]
1. Claude Desktop
Recommended: Use the CLI: vreko tools configure --claude
Manual configuration below is only needed for custom setups.
Config File Location:
- macOS:
~/Library/Application Support/Claude/claude_desktop_config.json - Windows:
%APPDATA%\Claude\claude_desktop_config.json - Linux:
~/.config/Claude/claude_desktop_config.json
Manual Configuration:
{
"mcpServers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio",
"--workspace",
"/path/to/your/project"
]
}
}
}
Note: Claude Desktop doesn't support the ${workspaceFolder} variable. You must use a hardcoded path or omit --workspace (Vreko will use the current directory).
2. Cursor
Recommended: Install the Vreko VS Code extension. It auto-configures MCP via SSE - no manual setup needed.
Config File Locations (for manual setup):
- Project-level:
.cursor/mcp.jsonin your project root - Global:
~/.cursor/mcp.json
Manual Configuration:
{
"mcpServers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio",
"--workspace",
"${workspaceFolder}"
]
}
}
}
Cursor Multi-Workspace: Cursor automatically replaces ${workspaceFolder} with the active workspace path, so each window gets its own MCP server instance.
3. Windsurf
Recommended: Install the Vreko VS Code extension. It auto-configures MCP via SSE.
Config File Location (for manual setup): ~/.codeium/windsurf/mcp_config.json
Manual Configuration:
{
"mcpServers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio",
"--workspace",
"${workspaceFolder}"
]
}
}
}
4. Qoder
Config File Locations:
- macOS:
~/Library/Application Support/Qoder/SharedClientCache/extension/local/mcp.json - Windows:
%APPDATA%\Qoder\mcp.json - Linux:
~/.config/Qoder/mcp.json - Project-level:
.qoder-mcp-config.jsonin your project root
Configuration:
{
"mcpServers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio",
"--workspace",
"${workspaceFolder}"
]
}
}
}
5. VS Code
Config File Location: .vscode/mcp.json in your project root
Configuration:
{
"servers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio",
"--workspace",
"${workspaceFolder}"
]
}
}
}
Note: VS Code uses "servers" instead of "mcpServers" in the config.
6. Continue
Config File Location: ~/.continue/config.json
Configuration:
{
"experimental": {
"modelContextProtocolServers": [
{
"name": "vreko",
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio"
]
}
]
}
}
Continue Structure: Continue uses a unique array-based structure under experimental.modelContextProtocolServers.
7. Cline
Config File Location: ~/.cline/mcp.json
Configuration:
{
"mcpServers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio",
"--workspace",
"${workspaceFolder}"
]
}
}
}
8. Zed
Config File Location: ~/.config/zed/settings.json
Configuration:
{
"context_servers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio"
]
}
}
}
Note: Zed uses "context_servers" instead of "mcpServers".
9. Roo Code
Config File Location: ~/.roo-code/mcp.json
Configuration:
{
"mcpServers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio",
"--workspace",
"${workspaceFolder}"
]
}
}
}
10. Aider (CLI)
Config File Location: ~/.aider/mcp.yaml
Configuration (YAML):
servers:
vreko:
command: npx
args:
- "@vreko/cli"
- mcp
- --stdio
Aider Format: Aider uses YAML instead of JSON for its configuration.
11. Gemini/Antigravity
Config File Location: ~/.gemini/settings.json
Configuration:
{
"context_servers": {
"vreko": {
"command": "npx",
"args": [
"@vreko/cli",
"mcp",
"--stdio"
]
}
}
}
Troubleshooting MCP Connections
Status Bar Indicators
The Vreko VS Code extension shows status in the status bar using the 📦 prefix. Setup gates appear first if any precondition is unmet:
| Display | Meaning | Action |
|---|---|---|
| 📦 Vreko | Connected and watching | None needed |
| $(warning) Install Vreko CLI | CLI not found | Click to install |
| $(sync~spin) Starting Vreko... | Daemon starting | Wait (auto-resolves) |
| $(key) Sign in to Vreko | Not authenticated | Click to sign in |
| $(folder) Initialize workspace | Workspace not initialized | Click to run init |
| $(plug) Connect AI tool | MCP not configured | Click to auto-configure |
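The gate ordering in the table can be read as a first-unmet-precondition check: the status bar shows the first gate that fails, and the connected state only when all pass. A sketch of that logic (the boolean state shape is an assumption, not the extension's actual API):

```typescript
// Illustrative first-unmet-gate evaluation for the status bar text.
interface SetupState {
  cliInstalled: boolean;
  daemonRunning: boolean;
  signedIn: boolean;
  workspaceInitialized: boolean;
  mcpConfigured: boolean;
}

function statusBarText(s: SetupState): string {
  if (!s.cliInstalled) return "$(warning) Install Vreko CLI";
  if (!s.daemonRunning) return "$(sync~spin) Starting Vreko...";
  if (!s.signedIn) return "$(key) Sign in to Vreko";
  if (!s.workspaceInitialized) return "$(folder) Initialize workspace";
  if (!s.mcpConfigured) return "$(plug) Connect AI tool";
  return "📦 Vreko"; // connected and watching
}
```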
Common Issue: Status Bar Still Shows a Gate State
Symptom: You resolved a setup step (e.g. installed the CLI, configured MCP) but the status bar still shows the gate message.
Why this happens: The status bar refreshes when Vreko detects a relevant change, such as a service state transition. If the service was already running and stable, no event fires immediately after you complete a step.
Fix: Click the status bar item again; it re-evaluates all gates. If that doesn't work, reload VS Code (⌘+Shift+P → Developer: Reload Window).
Common Issue: MCP Tools Work but Service Shows Disconnected
Symptom: Your AI assistant can call vreko tools, but the status bar shows $(sync~spin) Starting Vreko... or the service appears offline.
What's happening: The MCP server process is running independently of the service socket connection. MCP tool calls can still succeed if the server process is alive, even when the service socket is temporarily broken.
Solution:
- Run ⌘+Shift+P → Vreko: MCP Reconnect to force a fresh service connection.
- If reconnection fails, run ⌘+Shift+P → Vreko: MCP Diagnose to see the actual state.
- As a last resort, run ⌘+Shift+P → Vreko: MCP Reset, then reload VS Code.
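Because the two health signals are independent, four combinations are possible. A sketch of the resulting decision table (the action for the symptomatic state comes from the steps above; the actions for the other degraded states are illustrative assumptions):

```python
def classify_health(mcp_process_alive: bool, service_socket_ok: bool) -> str:
    """Illustrative sketch: the MCP server process and the service
    socket fail independently, so each combination needs a
    different response. Not Vreko's actual diagnostic logic."""
    if mcp_process_alive and service_socket_ok:
        return "healthy"
    if mcp_process_alive and not service_socket_ok:
        # The symptom described above: tool calls succeed,
        # but the service shows disconnected.
        return "run Vreko: MCP Reconnect"
    if service_socket_ok:
        # Assumed remedy: the service is up, the MCP server is not.
        return "restart your AI assistant to respawn the MCP server"
    return "run Vreko: MCP Reset, then reload VS Code"

print(classify_health(True, False))
```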
Checking MCP Process Status
See all running MCP servers:
# Check running Vreko MCP processes
ps aux | grep "vreko mcp"
Expected output for 3 workspaces:
user 12345 node /path/to/cli/dist/index.js mcp --stdio --workspace /Users/you/project-a
user 12346 node /path/to/cli/dist/index.js mcp --stdio --workspace /Users/you/project-b
user 12347 node /path/to/cli/dist/index.js mcp --stdio --workspace /Users/you/project-c
Check service socket:
ls -la ~/.vreko/service.sock
If the socket doesn't exist, the service isn't running. Start it with:
vreko service start --no-detach
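The same check can be scripted. A minimal sketch that assumes only the socket path documented above (service_socket_status is a hypothetical helper, not part of the CLI):

```python
import stat
from pathlib import Path

def service_socket_status(home: Path) -> str:
    """Classify the Vreko service socket state.

    Mirrors the manual `ls -la ~/.vreko/service.sock` check above;
    the path is taken from this page."""
    sock = home / ".vreko" / "service.sock"
    if not sock.exists():
        return "missing: run `vreko service start --no-detach`"
    if stat.S_ISSOCK(sock.stat().st_mode):
        return "ok"
    return "exists but is not a socket"

print(service_socket_status(Path.home()))
```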
Multi-Workspace Troubleshooting
Problem: One workspace's MCP works, others don't.
Cause: Each workspace has its own MCP server process with a different workspace path.
Solution:
1. Verify each workspace has the correct path:
   # Check running processes
   ps aux | grep "vreko mcp"
   # Look for --workspace arguments
   # Each should point to its respective project root
2. Check project-level configs:
   # In each project
   cat .cursor/mcp.json
   cat .qoder-mcp-config.json
   cat .vscode/mcp.json
3. Ensure the workspaceFolder variable is used:
   - Good: --workspace with ${workspaceFolder}
   - Bad: a hardcoded path like /path/to/project
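The good/bad distinction in the last step can be linted mechanically. A sketch that assumes the config shape shown in the JSON examples on this page (check_workspace_arg is a hypothetical helper):

```python
def check_workspace_arg(config: dict) -> str:
    """Flag hardcoded --workspace values in a vreko server entry.

    Hypothetical lint mirroring the good/bad examples above;
    the config shape matches the JSON snippets on this page."""
    args = config.get("mcpServers", {}).get("vreko", {}).get("args", [])
    if "--workspace" not in args:
        return "no --workspace argument"
    i = args.index("--workspace")
    if i + 1 >= len(args):
        return "--workspace given without a value"
    value = args[i + 1]
    if value == "${workspaceFolder}":
        return "ok: uses ${workspaceFolder}"
    return f"warning: hardcoded path {value!r}"

bad = {"mcpServers": {"vreko": {"args":
       ["mcp", "--stdio", "--workspace", "/path/to/project"]}}}
print(check_workspace_arg(bad))
```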
Session Not Linking (snap_begin / snap_end)
Symptom: snap_begin returns a session briefing, but snap_end responds with "No active session to close" or diagnostics showing workspace: default.
Root cause: The MCP server is not connecting to your local service. Session state lives in the local service (~/.vreko/service.sock) and requires the MCP server to run locally on your machine.
Diagnostic checklist:
- Confirm the service socket exists:
  ls -la ~/.vreko/service.sock
  vreko service status
  If missing:
  vreko service start --no-detach
- Check the workspace shown in diagnostics: when snap_end fails it prints a workspace: field. If it shows default instead of your actual project path, the MCP server received no workspace parameter.
  Fix: ensure your MCP config passes the workspace path:
  "args": ["mcp", "--stdio", "--workspace", "${workspaceFolder}"]
- Confirm the MCP binary runs locally: the Vreko MCP server must run as a local process on your machine, not as a remote proxy.
  Verify which process is serving your AI assistant:
  # Check running vreko processes
  ps aux | grep vreko
  You should see a local vreko mcp process, not a network connection to a remote host.
- Rebuild the MCP server bundle (if the binary is stale):
  pnpm --filter @vreko/cli build
  Then reload your AI assistant.
- Reset and reconnect:
  ⌘+Shift+P → Vreko: MCP Reset
  ⌘+Shift+P → Vreko: MCP Reconnect
Why this happens: Session operations (snap_begin, snap_end) are workspace-scoped and require direct access to the local service socket. If the MCP server process runs remotely or proxies requests to a cloud endpoint, it cannot reach ~/.vreko/service.sock on your machine.
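The workspace: default symptom follows directly from argument handling: with no --workspace flag there is nothing to scope the session to. A hypothetical sketch of that fallback (not Vreko's actual parser):

```python
def resolve_workspace(args: list) -> str:
    """Sketch of why diagnostics show `workspace: default`:
    without a --workspace argument the MCP server has no project
    path to scope the session to. Hypothetical logic matching
    the symptom described above."""
    if "--workspace" in args:
        i = args.index("--workspace")
        if i + 1 < len(args):
            return args[i + 1]
    return "default"  # session state cannot be linked to your project

print(resolve_workspace(["mcp", "--stdio"]))
print(resolve_workspace(["mcp", "--stdio", "--workspace", "/Users/you/project-a"]))
```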
CLI Not Installed
Symptom: Status bar shows $(warning) Install Vreko CLI.
Solution:
- Install the Vreko CLI globally:
  npm install -g @vreko/cli
- Verify the installation:
  vreko --version
  npx @vreko/cli --version
- Restart your AI assistant (Cursor, Qoder, etc.)
Configuration Priority
Vreko looks for MCP configuration in this order:
1. Project-level config (highest priority)
   - .cursor/mcp.json
   - .qoder-mcp-config.json
   - .vscode/mcp.json
   - .windsurf/mcp.json
2. Global config
   - ~/.cursor/mcp.json
   - ~/Library/Application Support/Qoder/.../mcp.json
   - etc.
Best Practice: Use project-level configs for multi-workspace setups. This ensures each workspace gets its own configuration without conflicts.
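The lookup order can be expressed as a first-hit search. A sketch using only the paths listed on this page (Qoder's global path is elided in the original, so it is omitted here rather than guessed; find_mcp_config is a hypothetical helper):

```python
from pathlib import Path
from typing import Optional

# Search order taken from this page: project-level before global.
PROJECT_CONFIGS = [".cursor/mcp.json", ".qoder-mcp-config.json",
                   ".vscode/mcp.json", ".windsurf/mcp.json"]
GLOBAL_CONFIGS = ["~/.cursor/mcp.json"]

def find_mcp_config(project_root: Path) -> Optional[Path]:
    """Return the first MCP config found, or None."""
    for rel in PROJECT_CONFIGS:
        candidate = project_root / rel
        if candidate.exists():
            return candidate
    for raw in GLOBAL_CONFIGS:
        candidate = Path(raw).expanduser()
        if candidate.exists():
            return candidate
    return None
```

Because project-level paths are checked first, two workspaces with their own .vscode/mcp.json never collide, which is the rationale behind the best-practice note above.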
Auto-Configuration
How it works:
- Install the Vreko VS Code extension
- The extension auto-detects installed AI assistants
- It silently configures MCP for each one
- A toast confirms protection is active
No manual configuration needed!
Supported for auto-configuration:
- ✅ Claude Desktop
- ✅ Cursor
- ✅ Windsurf
- ✅ Qoder
- ✅ VS Code
- ✅ Continue
- ✅ Cline
- ✅ Zed
- ✅ Roo Code
Manual configuration needed:
- ⚠️ Aider (CLI tool, no auto-detect)
- ⚠️ Gemini (less common, may need manual setup)
To manually trigger auto-configuration:
Command Palette → Vreko: Configure MCP
Status bar timing: After running Configure MCP, the status bar may briefly continue showing $(plug) Connect AI tool. This clears automatically on the next service event. If it persists after a few seconds, click the status bar item to force a re-check.
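The auto-detect step can be pictured as checking for each assistant's footprint on disk. In this sketch the marker directories are illustrative assumptions, not the extension's actual detection logic:

```python
from pathlib import Path

# Illustrative marker paths only: the extension's real detection
# logic and paths are not documented here, so these are assumptions.
DEFAULT_MARKERS = {
    "Cursor": "~/.cursor",
    "Claude Desktop": "~/Library/Application Support/Claude",
}

def detect_assistants(markers: dict) -> list:
    """Return the assistants whose marker directory exists on disk."""
    return sorted(name for name, raw in markers.items()
                  if Path(raw).expanduser().exists())

print(detect_assistants(DEFAULT_MARKERS))
```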
Validation Commands
Check your MCP configuration:
# Scan for all AI assistants
vreko mcp scan
# Validate configurations
vreko mcp validate
# Repair broken configurations
vreko mcp repair
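In the spirit of vreko mcp validate, a config entry can be sanity-checked with a few structural rules. A sketch whose checks are assumptions based on the config examples on this page, not the CLI's actual validation rules:

```python
import json

def validate_vreko_entry(text: str, key: str = "mcpServers") -> list:
    """Rough structural validation of a vreko MCP config file.

    Hypothetical checks in the spirit of `vreko mcp validate`;
    pass key="context_servers" for Zed-style configs."""
    try:
        config = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    vreko = config.get(key, {}).get("vreko")
    if vreko is None:
        problems.append(f"no vreko entry under {key!r}")
    else:
        if "command" not in vreko:
            problems.append("missing 'command'")
        if "--stdio" not in vreko.get("args", []):
            problems.append("args missing '--stdio'")
    return problems

good = '{"mcpServers": {"vreko": {"command": "npx", "args": ["mcp", "--stdio"]}}}'
print(validate_vreko_entry(good))  # [] means no problems found
```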
VS Code setup commands (status bar gate actions):
Vreko: Install CLI → Install the Vreko CLI
Vreko: Start Service → Start the background service
Vreko: Sign In → Authenticate with Vreko
Vreko: Initialize Workspace → Initialize this workspace
Vreko: Configure MCP → Auto-configure MCP for your AI tools
VS Code MCP commands:
Vreko: MCP Diagnose → Show connection status
Vreko: MCP Reconnect → Force reconnection
Vreko: MCP Reset → Reset configuration state
Vreko: MCP Validate → Check all configurations
Vreko: MCP Status → Quick status check
Next Steps
- Install Vreko → Get started in 2 minutes
- CLI Reference → Use Vreko from the terminal
- How It Works → How Vreko detects AI changes
- Troubleshooting → Common issues and fixes