Model Context Protocol

Using the MCP server for Hyperbrowser integration.

Overview

The MCP server provides a standardized interface for AI models to access Hyperbrowser's web automation capabilities. This server implementation supports key functions like web scraping, structured data extraction, and web crawling.

You can see the MCP server code at https://github.com/hyperbrowserai/mcp.

Installation

Prerequisites

  • Node.js (v14 or later)

  • npm or yarn package manager

Setup

  1. Install and run the MCP server for Hyperbrowser:

npx hyperbrowser-mcp

Configuration

Client Setup

Configure your MCP client to connect to the Hyperbrowser MCP server:

{
  "mcpServers": {
    "hyperbrowser": {
      "command": "npx",
      "args": ["hyperbrowser-mcp"],
      "env": {
        "HYPERBROWSER_API_KEY": "your-api-key"
      }
    }
  }
}

Alternative Setup Using Shell Script

For clients that don't support the env field (like Cursor):

{
  "mcpServers": {
    "hyperbrowser": {
      "command": "bash",
      "args": ["/path/to/hyperbrowser-mcp/run_server.sh"]
    }
  }
}

Edit run_server.sh to include your API key:

#!/bin/bash
export HB_API_KEY="your-api-key"
npx hyperbrowser-mcp
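
If you want to drive the server from your own code instead of a desktop MCP client, the official MCP TypeScript SDK can spawn hyperbrowser-mcp over stdio. The sketch below is illustrative and assumes the @modelcontextprotocol/sdk package is installed; the Client, StdioClientTransport, and listTools names come from that SDK, not from Hyperbrowser.

// Minimal sketch: spawn hyperbrowser-mcp over stdio and list its tools
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["hyperbrowser-mcp"],
  env: { HYPERBROWSER_API_KEY: "your-api-key" },
});

const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Expect tools such as scrape_webpage, extract_structured_data, and crawl_webpages
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));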

Tools

Scrape Webpage

Retrieves content from a specified URL in various formats.

Method: scrape_webpage

Parameters:

  • url: string - The URL to scrape

  • outputFormat: string[] - Desired output formats (markdown, html, links, screenshot)

  • apiKey: string (optional) - API key for authentication

  • sessionOptions: object (optional) - Browser session configuration

Example:

{
  "url": "https://example.com",
  "outputFormat": ["markdown", "screenshot"],
  "sessionOptions": {
    "useStealth": true,
    "acceptCookies": true
  }
}
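
Through an MCP client, the same payload is passed as the tool's arguments. A minimal sketch, continuing from the client created in the Configuration section; the content-block handling reflects the standard MCP tool result shape, not a Hyperbrowser-specific format.

// Sketch: invoke scrape_webpage with the example payload above
const result = await client.callTool({
  name: "scrape_webpage",
  arguments: {
    url: "https://example.com",
    outputFormat: ["markdown", "screenshot"],
    sessionOptions: { useStealth: true, acceptCookies: true },
  },
});

// MCP tools return a list of content blocks; print any text blocks (e.g. the markdown)
const blocks = (result.content ?? []) as Array<{ type: string; text?: string }>;
for (const block of blocks) {
  if (block.type === "text") console.log(block.text);
}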

Extract Structured Data

Extracts data from webpages according to a specified schema.

Method: extract_structured_data

Parameters:

  • urls: string[] - List of URLs to extract data from (supports wildcards)

  • prompt: string - Instructions for extraction

  • schema: object (optional) - JSON schema for the extracted data

  • apiKey: string (optional) - API key for authentication

  • sessionOptions: object (optional) - Browser session configuration

Example:

{
  "urls": ["https://example.com/products/*"],
  "prompt": "Extract product name, price, and description",
  "schema": {
    "type": "object",
    "properties": {
      "name": { "type": "string" },
      "price": { "type": "number" },
      "description": { "type": "string" }
    }
  },
  "sessionOptions": {
    "useStealth": true
  }
}
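
The same call made through an MCP client might look like the sketch below. The assumption that the extraction comes back as JSON text in a text content block is illustrative, so check the actual tool output in your client.

// Sketch: invoke extract_structured_data and parse the returned JSON
const extraction = await client.callTool({
  name: "extract_structured_data",
  arguments: {
    urls: ["https://example.com/products/*"],
    prompt: "Extract product name, price, and description",
    schema: {
      type: "object",
      properties: {
        name: { type: "string" },
        price: { type: "number" },
        description: { type: "string" },
      },
    },
    sessionOptions: { useStealth: true },
  },
});

// Assumption: the first text block contains the extracted data as a JSON string
const textBlock = ((extraction.content ?? []) as Array<{ type: string; text?: string }>)
  .find((b) => b.type === "text");
if (textBlock?.text) {
  const data = JSON.parse(textBlock.text);
  console.log(data);
}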

Crawl Webpages

Navigates through multiple pages on a website, optionally following links.

Method: crawl_webpages

Parameters:

  • url: string - Starting URL for crawling

  • outputFormat: string[] - Desired output formats

  • followLinks: boolean - Whether to follow page links

  • maxPages: number (default: 10) - Maximum pages to crawl

  • ignoreSitemap: boolean (optional) - Skip using the site's sitemap

  • apiKey: string (optional) - API key for authentication

  • sessionOptions: object (optional) - Browser session configuration

Example:

{
  "url": "https://example.com",
  "outputFormat": ["markdown", "links"],
  "followLinks": true,
  "maxPages": 5,
  "sessionOptions": {
    "acceptCookies": true
  }
}
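
Called through an MCP client, a crawl works the same way as the other tools; a brief sketch (result handling mirrors the scrape_webpage example above):

// Sketch: crawl up to 5 pages starting from https://example.com
const crawl = await client.callTool({
  name: "crawl_webpages",
  arguments: {
    url: "https://example.com",
    outputFormat: ["markdown", "links"],
    followLinks: true,
    maxPages: 5,
    sessionOptions: { acceptCookies: true },
  },
});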

Session Options

All tools support these common session configuration options (a combined example follows the list):

  • useStealth: boolean - Makes browser detection more difficult

  • useProxy: boolean - Routes traffic through proxy servers

  • solveCaptchas: boolean - Automatically solves CAPTCHA challenges

  • acceptCookies: boolean - Automatically handles cookie consent popups
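
Putting the flags together, a sessionOptions object can be passed inside the arguments of any tool call. The values below are illustrative; enable only the options you need.

// Sketch: a combined sessionOptions object reused across tool calls
const sessionOptions = {
  useStealth: true,     // make automated-browser detection more difficult
  useProxy: true,       // route traffic through proxy servers
  solveCaptchas: true,  // attempt to solve CAPTCHA challenges automatically
  acceptCookies: true,  // dismiss cookie consent popups
};

await client.callTool({
  name: "scrape_webpage",
  arguments: { url: "https://example.com", outputFormat: ["markdown"], sessionOptions },
});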

With the Hyperbrowser MCP, Claude can browse the web!