MCP Server Documentation
Connect your AI assistant (Claude Code, Claude Desktop, Cursor, OpenCode) to SiteGulp as a remote MCP server. Your LLM can then crawl and understand any website without needing a local Playwright setup.
How It Works
1. Configure Once
Add SiteGulp to your AI tool's MCP config with your API key.
2. LLM Creates Project
Your AI calls create_or_get_project for the target website — one project per site.
3. Crawl & Read
AI starts a crawl, waits for completion, then reads all page descriptions.
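The three steps above can be sketched as a single driver function. This is a minimal illustration, not a client implementation: `call_tool(name, arguments)` stands in for however your MCP client performs a `tools/call` round-trip, and the response key names (`project_id`, `crawl_id`, `status`, `pages`) follow the example responses later on this page.

```python
import time

def crawl_and_summarize(call_tool, url: str, prompt: str, interval: float = 10):
    """Run steps 2-3: create (or reuse) the project, crawl, read results.

    `call_tool(name, arguments)` is a stand-in for the MCP tools/call
    round-trip; response keys follow the examples on this page.
    """
    # One project per site; create_or_get_project is idempotent.
    project = call_tool("create_or_get_project", {"url": url})
    crawl = call_tool("start_crawl", {
        "project_id": project["project_id"],
        "name": "Initial crawl",
        "prompt": prompt,
    })
    # Poll until the crawl finishes, then read every page description.
    while True:
        status = call_tool("get_crawl_status", {"crawl_id": crawl["crawl_id"]})
        if status["status"] in ("completed", "failed"):
            break
        time.sleep(interval)
    return call_tool("get_crawl_summary", {"crawl_id": crawl["crawl_id"]})
```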
Suggested system prompt for your AI assistant:
You have access to the SiteGulp MCP server for exploring websites.
When you need to understand a website:
1. Call create_or_get_project with the URL — create ONE project per website, not one per task.
2. Call start_crawl with a descriptive prompt about what to focus on.
3. Poll get_crawl_status every 10 seconds until status is "completed".
4. Call get_crawl_summary to read all discovered pages and their descriptions.
Do not create duplicate projects for the same URL — always check if one exists first.
Available Tools
create_or_get_project
Create a project for a website URL, or return the existing one. Idempotent — safe to call repeatedly. Create one project per website you work with.
Parameters: url (required), name (optional), description (optional)
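On the wire, every tool call is a JSON-RPC 2.0 `tools/call` request, as the cURL section on this page also shows. A small illustrative helper for building the request body (the helper itself is hypothetical; the method and parameter shapes are from this page):

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 tools/call request body for the MCP endpoint."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# create_or_get_project is idempotent: the same URL returns the same project.
body = tool_call(1, "create_or_get_project", {"url": "https://example.com"})
print(json.dumps(body))
```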
list_projects
List all projects in your account with their last crawl status.
Parameters: limit (optional, default 20)
start_crawl
Start an AI-powered crawl of a project. The crawler navigates the site, takes screenshots, and generates descriptions of every page. Optionally generates Playwright automation scripts.
Parameters: project_id (required), name (required), prompt (required), include_images (optional, default true), include_videos (optional, default false), include_descriptions (optional, default true), include_design_improvements (optional, default false), include_browser_console_logs (optional, default true), viewport_desktop (optional, default true), viewport_tablet (optional, default false), viewport_mobile (optional, default false), record_script (optional, default false), record_script_language (optional: javascript|typescript|python, default javascript)
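The defaults above can be collected into one overridable dict so a caller only names what differs. The parameter names and default values are taken from the list above; the helper itself is just a sketch:

```python
# Defaults exactly as listed above; override only what a given crawl needs.
START_CRAWL_DEFAULTS = {
    "include_images": True,
    "include_videos": False,
    "include_descriptions": True,
    "include_design_improvements": False,
    "include_browser_console_logs": True,
    "viewport_desktop": True,
    "viewport_tablet": False,
    "viewport_mobile": False,
    "record_script": False,
    "record_script_language": "javascript",
}

def start_crawl_args(project_id: str, name: str, prompt: str, **overrides) -> dict:
    """Merge the three required parameters with defaults and overrides."""
    unknown = set(overrides) - set(START_CRAWL_DEFAULTS)
    if unknown:
        raise ValueError(f"unknown start_crawl options: {sorted(unknown)}")
    return {"project_id": project_id, "name": name, "prompt": prompt,
            **START_CRAWL_DEFAULTS, **overrides}

args = start_crawl_args("proj_abc123", "Mobile audit",
                        "Check navigation on small screens",
                        viewport_mobile=True)
```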
get_crawl_status
Check the progress of a running crawl. Poll until status is "completed" or "failed".
Parameters: crawl_id (required)
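A polling helper matching the suggested 10-second interval might look like this. `get_status` stands in for whatever function issues the `get_crawl_status` tool call; the timeout guard is an illustrative addition, not a server feature:

```python
import time

def wait_for_crawl(get_status, crawl_id: str, interval: float = 10,
                   timeout: float = 600) -> str:
    """Poll until the crawl reports "completed" or "failed".

    `get_status(crawl_id)` must return the get_crawl_status result dict.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status(crawl_id)["status"]
        if status in ("completed", "failed"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"crawl {crawl_id} still {status!r} after {timeout}s")
        time.sleep(interval)
```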
get_crawl_summary
Get text descriptions of all crawled pages — URL, title, AI description, page type, forms, links. Everything needed to understand a site, no screenshots required.
Parameters: crawl_id (required), max_pages (optional, default 100)
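Because the summary is plain text data, it is easy to reshape for a prompt. A hypothetical formatter, assuming the `pages`/`url`/`title`/`description` keys shown in the workflow example on this page:

```python
def render_summary(summary: dict) -> str:
    """Turn a get_crawl_summary result into a compact text outline."""
    lines = []
    for page in summary.get("pages", []):
        lines.append(f"{page['url']}: {page.get('title', '(untitled)')}")
        if page.get("description"):
            lines.append(f"    {page['description']}")
    return "\n".join(lines)

outline = render_summary({"pages": [
    {"url": "/", "title": "Home", "description": "Landing page with hero and CTA"},
    {"url": "/pricing", "title": "Pricing"},
]})
```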
Authentication
The MCP server uses your SiteGulp API key (Bearer token). Generate one from your Settings page.
Authorization: Bearer sk_your_api_key_here
initialize and tools/list work without auth
Your client can discover server capabilities before authenticating.
All tool calls require a valid API key
Include the Authorization header in your MCP client config.
Keep your key secret
Never commit API keys to version control. Use environment variables.
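For example, load the key from an environment variable at startup. The variable name `SITEGULP_API_KEY` is only a suggestion here, not something the server requires:

```python
import os

def auth_header() -> dict:
    """Build the Authorization header from the environment, never from source.

    SITEGULP_API_KEY is an illustrative variable name; use whatever fits
    your deployment.
    """
    key = os.environ.get("SITEGULP_API_KEY", "")
    if not key.startswith("sk_"):
        raise RuntimeError("Set SITEGULP_API_KEY to your sk_... API key")
    return {"Authorization": f"Bearer {key}"}
```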
Client Configuration
Claude Code
~/.claude/settings.json
Add under "mcpServers". Restart Claude Code after saving.
{
"mcpServers": {
"sitegulp": {
"type": "http",
"url": "http://localhost:3001/api/mcp",
"headers": {
"Authorization": "Bearer sk_YOUR_API_KEY_HERE"
}
}
}
}
Claude Desktop
claude_desktop_config.json
Requires mcp-remote package. Install with: npm install -g mcp-remote
{
"mcpServers": {
"sitegulp": {
"command": "npx",
"args": [
"mcp-remote",
"http://localhost:3001/api/mcp",
"--header",
"Authorization: Bearer sk_YOUR_API_KEY_HERE"
]
}
}
}
Cursor
.cursor/mcp.json (project) or ~/.cursor/mcp.json (global)
Add under "mcpServers" in your Cursor MCP config.
{
"mcpServers": {
"sitegulp": {
"url": "http://localhost:3001/api/mcp",
"headers": {
"Authorization": "Bearer sk_YOUR_API_KEY_HERE"
}
}
}
}
OpenCode
opencode.json or ~/.config/opencode/config.json
Add under "mcp" section in your OpenCode config.
{
"mcp": {
"servers": {
"sitegulp": {
"type": "remote",
"url": "http://localhost:3001/api/mcp",
"headers": {
"Authorization": "Bearer sk_YOUR_API_KEY_HERE"
}
}
}
}
}
cURL (manual test)
Verify your setup from the command line
Use this to test the MCP server directly.
# Initialize (no auth needed)
curl -X POST http://localhost:3001/api/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}'
# List tools (no auth needed)
curl -X POST http://localhost:3001/api/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'
# Create a project (auth required)
curl -X POST http://localhost:3001/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk_YOUR_API_KEY_HERE" \
-d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"create_or_get_project","arguments":{"url":"https://example.com"}}}'
Typical LLM Workflow
// Step 1: Create or find project
{"method":"tools/call","params":{"name":"create_or_get_project","arguments":{"url":"https://myapp.example.com","name":"My App"}}}
// → {"project_id": "proj_abc123", "status": "created", ...}
// Step 2: Start a crawl (with optional content options + script recording)
{"method":"tools/call","params":{"name":"start_crawl","arguments":{"project_id":"proj_abc123","name":"Initial crawl","prompt":"Document all pages, forms, and navigation patterns","include_images":true,"include_descriptions":true,"include_browser_console_logs":true,"record_script":true,"record_script_language":"javascript"}}}
// → {"crawl_id": "crl_xyz789", "status": "pending", ...}
// Step 3: Poll until done (every ~10s)
{"method":"tools/call","params":{"name":"get_crawl_status","arguments":{"crawl_id":"crl_xyz789"}}}
// → {"status": "running", "progress_percent": 42, "pages_found": 18, ...}
// Step 4: Read results once complete
{"method":"tools/call","params":{"name":"get_crawl_summary","arguments":{"crawl_id":"crl_xyz789"}}}
// → {"pages": [{"url":"/", "title":"Home", "description":"Landing page with hero and CTA..."}, ...]}
Endpoint Reference
/api/mcp
Main MCP endpoint. Send JSON-RPC 2.0 requests here.
/api/mcp
Returns server info and available tools (unauthenticated).
Ready to get started?
Generate an API key, add the config to your AI tool, and start exploring websites.