Extract

Extract data from pages using AI

The Extract API allows you to get data in a structured format for any provided URLs with a single call.

For detailed usage, check out the Extract API Reference.

Hyperbrowser exposes endpoints for starting an extract request and for getting its status and results. By default, extraction is handled asynchronously: you first start the job and then poll its status until it completes. However, our SDKs provide a single function that handles the whole flow and returns the data once the job is completed.
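
If you prefer to manage the flow yourself, you can call the REST endpoints directly. The sketch below (TypeScript, using fetch) starts a job and then polls its status with the endpoints documented later on this page; the two-second polling interval is just an illustrative choice.

// Minimal sketch of the manual flow: start an extract job, then poll until it finishes
const API_KEY = process.env.HYPERBROWSER_API_KEY!;

async function extractManually() {
  // Start the job (POST /api/extract)
  const startRes = await fetch("https://app.hyperbrowser.ai/api/extract", {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-api-key": API_KEY },
    body: JSON.stringify({
      urls: ["https://hyperbrowser.ai"],
      prompt: "Extract the product name and an overview of the product.",
    }),
  });
  const { jobId } = await startRes.json();

  // Poll the job status (GET /api/extract/{jobId}) until it completes or fails
  while (true) {
    const statusRes = await fetch(`https://app.hyperbrowser.ai/api/extract/${jobId}`, {
      headers: { "x-api-key": API_KEY },
    });
    const job = await statusRes.json();
    if (job.status === "completed") return job.data;
    if (job.status === "failed") throw new Error(job.error ?? "Extract job failed");
    await new Promise((resolve) => setTimeout(resolve, 2000)); // wait 2s before polling again
  }
}

extractManually().then((data) => console.log(JSON.stringify(data, null, 2)));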

Installation

Node:

npm install @hyperbrowser/sdk dotenv zod

or

yarn add @hyperbrowser/sdk dotenv zod

Python:

pip install hyperbrowser python-dotenv pydantic

Usage

Node:

import { Hyperbrowser } from "@hyperbrowser/sdk";
import { config } from "dotenv";
import { z } from "zod";

config();

const client = new Hyperbrowser({
  apiKey: process.env.HYPERBROWSER_API_KEY,
});

const main = async () => {
  const schema = z.object({
    productName: z.string(),
    productOverview: z.string(),
    keyFeatures: z.array(z.string()),
    pricing: z.array(
      z.object({
        plan: z.string(),
        price: z.string(),
        features: z.array(z.string()),
      })
    ),
  });

  // Handles both starting and waiting for extract job response
  const result = await client.extract.startAndWait({
    urls: ["https://hyperbrowser.ai"],
    prompt:
      "Extract the product name, an overview of the product, its key features, and a list of its pricing plans from the page.",
    schema: schema,
  });

  console.log("result", JSON.stringify(result, null, 2));
};

main();

Python:

import os
import json
from typing import List
from dotenv import load_dotenv
from hyperbrowser import Hyperbrowser
from hyperbrowser.models import StartExtractJobParams
from pydantic import BaseModel


# Load environment variables from .env file
load_dotenv()

# Initialize Hyperbrowser client
client = Hyperbrowser(api_key=os.getenv("HYPERBROWSER_API_KEY"))


class PricingSchema(BaseModel):
    plan: str
    price: str
    features: List[str]


class ExtractSchema(BaseModel):
    product_name: str
    product_overview: str
    key_features: List[str]
    pricing: List[PricingSchema]


def main():
    result = client.extract.start_and_wait(
        params=StartExtractJobParams(
            urls=["https://hyperbrowser.ai"],
            prompt="Extract the product name, an overview of the product, its key features, and a list of its pricing plans from the page.",
            schema=ExtractSchema,
        )
    )
    print("result:", json.dumps(result.data, indent=2))


main()

Start Extract Job

curl -X POST https://app.hyperbrowser.ai/api/extract \
    -H 'Content-Type: application/json' \
    -H 'x-api-key: <YOUR_API_KEY>' \
    -d '{
        "urls": ["https://hyperbrowser.ai"],
        "prompt": "Extract the product name, an overview of the product, its key features, and a list of its pricing plans from the page.",
        "schema": {
          "type": "object",
          "properties": {
            "productName": {
              "type": "string"
            },
            "productOverview": {
              "type": "string"
            },
            "keyFeatures": {
              "type": "array",
              "items": {
                "type": "string"
              }
            },
            "pricing": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "plan": {
                    "type": "string"
                  },
                  "price": {
                    "type": "string"
                  },
                  "features": {
                    "type": "array",
                    "items": {
                      "type": "string"
                    }
                  }
                },
                "required": [
                  "plan",
                  "price",
                  "features"
                ]
              }
            }
          },
          "required": [
            "productName",
            "productOverview",
            "keyFeatures",
            "pricing"
          ]
        }
    }'

Get Extract Job Status and Data

curl https://app.hyperbrowser.ai/api/extract/{jobId} \
    -H 'x-api-key: <YOUR_API_KEY>'

You can configure the extract request with the following parameters:

  • urls - A required list of URLs to extract data from. To allow crawling from any URL in the list, add /* to the end of it (https://hyperbrowser.ai/*). This will crawl other pages on the site with the same origin and find relevant pages to use as extraction context.

  • schema - A strict JSON schema defining how you want the returned data structured. Providing one gives the best results; if omitted, we will try to generate one automatically from the prompt.

  • prompt - A prompt describing how you want the data structured, along with any other guiding instructions for the extraction.

  • maxLinks - The maximum number of links to look at when crawling (URLs ending in /*) for any given URL. We automatically try to pick the links most relevant to the extraction from those we find.

  • waitFor - A delay in milliseconds to wait after the page loads before initiating the scrape that gathers data for the extraction. This is useful for letting dynamic content fully render, and for giving CAPTCHAs time to be detected when solveCaptchas is set to true in the sessionOptions. A sketch combining these parameters follows this list.
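
The sketch below is a minimal illustration using the Node SDK's startAndWait that combines a crawling URL (trailing /*) with the maxLinks and waitFor parameters described above; it assumes both are accepted as top-level request parameters, mirroring the list above.

import { Hyperbrowser } from "@hyperbrowser/sdk";
import { z } from "zod";

const client = new Hyperbrowser({ apiKey: process.env.HYPERBROWSER_API_KEY });

const main = async () => {
  const schema = z.object({
    productName: z.string(),
    pricingPlans: z.array(z.string()),
  });

  const result = await client.extract.startAndWait({
    urls: ["https://hyperbrowser.ai/*"], // trailing /* allows crawling same-origin pages
    prompt: "Extract the product name and a list of its pricing plan names.",
    schema: schema,
    maxLinks: 10, // consider at most 10 crawled links per URL
    waitFor: 2000, // wait 2 seconds after page load so dynamic content can render
  });

  console.log(JSON.stringify(result, null, 2));
};

main();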

You can provide a schema, a prompt, or both. For best results, provide both: the schema should define exactly how you want the extracted data formatted, and the prompt should carry any information that can help guide the extraction. If no schema is provided, we will try to generate one automatically from the prompt.

For the Node SDK, you can pass in a Zod schema for ease of use, or a plain JSON schema. For the Python SDK, you can pass in a Pydantic model or a plain JSON schema.

Ensure that the root level of the schema is type: "object".
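
If you prefer not to use Zod, here is a minimal sketch of passing a plain JSON schema to the Node SDK instead; it assumes startAndWait accepts a raw JSON Schema object for schema, mirroring the cURL example above, and keeps the root level as type "object".

import { Hyperbrowser } from "@hyperbrowser/sdk";

const client = new Hyperbrowser({ apiKey: process.env.HYPERBROWSER_API_KEY });

const main = async () => {
  const result = await client.extract.startAndWait({
    urls: ["https://hyperbrowser.ai"],
    prompt: "Extract the product name and an overview of the product.",
    // Plain JSON schema: note that the root level is type "object"
    schema: {
      type: "object",
      properties: {
        productName: { type: "string" },
        productOverview: { type: "string" },
      },
      required: ["productName", "productOverview"],
    },
  });

  console.log(JSON.stringify(result, null, 2));
};

main();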

Response

The Start Extract Job POST /extract endpoint returns a jobId in the response, which can be used to get information about the job in subsequent requests.

{
    "jobId": "962372c4-a140-400b-8c26-4ffe21d9fb9c"
}

The Get Extract Job GET /extract/{jobId} endpoint will return the following data:

{
  "jobId": "962372c4-a140-400b-8c26-4ffe21d9fb9c",
  "status": "completed",
  "data": {
    "pricing": [
      {
        "plan": "Free",
        "price": "$0",
        "features": [
          "3,000 Credits Included",
          "5 Concurrent Browsers",
          "7 Days Data Retention",
          "Basic Stealth Mode"
        ]
      },
      {
        "plan": "Startup",
        "price": "$30 / Month",
        "features": [
          "18,000 Credits Included",
          "25 Concurrent Browsers",
          "30 Day Data Retention",
          "Auto Captcha Solving",
          "Basic Stealth Mode"
        ]
      },
      {
        "plan": "Scale",
        "price": "$100 / Month",
        "features": [
          "60,000 Credits Included",
          "100 Concurrent Browsers",
          "30 Day Data Retention",
          "Auto Captcha Solving",
          "Advanced Stealth Mode"
        ]
      },
      {
        "plan": "Enterprise",
        "price": "Custom",
        "features": [
          "Volume discounts available",
          "Premium Support",
          "HIPAA/SOC 2",
          "250+ Concurrent Browsers",
          "180+ Day Data Retention",
          "Auto Captcha Solving",
          "Advanced Stealth Mode"
        ]
      }
    ],
    "keyFeatures": [
      "Run headless browsers to automate tasks like web scraping, testing, and form filling.",
      "Use browsers to scrape and structure web data at scale for analysis and insights.",
      "Integrate with AI agents to enable browsing, data collection, and interaction with web apps.",
      "Automatically solve captchas to streamline automation workflows.",
      "Operate browsers in stealth mode to bypass bot detection and stay undetected.",
      "Manage browser sessions with logging, debugging, and secure resource isolation."
    ],
    "productName": "Hyperbrowser",
    "productOverview": "Hyperbrowser is a platform for running and scaling headless browsers in secure, isolated containers. Built for web automation and AI-driven use cases."
  }
}

The status of an extract job can be one of pending, running, completed, or failed. The response can also include an optional error field with an error message if an error was encountered.
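
As an illustration, a caller can branch on these fields before using the extracted data. The helper below is a hypothetical sketch assuming the job object carries the status, data, and error fields shown above.

// Hypothetical helper: branch on the job fields described above
function handleExtractJob(job: { status: string; data?: unknown; error?: string }) {
  if (job.status === "completed") {
    console.log("Extracted data:", JSON.stringify(job.data, null, 2));
  } else if (job.status === "failed") {
    console.error("Extract job failed:", job.error);
  } else {
    console.log("Job still in progress:", job.status); // "pending" or "running"
  }
}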

Session Configurations

Node:

import { Hyperbrowser } from "@hyperbrowser/sdk";
import { config } from "dotenv";
import { z } from "zod";

config();

const client = new Hyperbrowser({
  apiKey: process.env.HYPERBROWSER_API_KEY,
});

const main = async () => {
  const schema = z.object({
    productName: z.string(),
    productOverview: z.string(),
    keyFeatures: z.array(z.string()),
    pricing: z.array(
      z.object({
        plan: z.string(),
        price: z.string(),
        features: z.array(z.string()),
      })
    ),
  });

  const result = await client.extract.startAndWait({
    urls: ["https://hyperbrowser.ai"],
    prompt:
      "Extract the product name, an overview of the product, its key features, and its pricing plans from the page.",
    schema: schema,
    // include sessionOptions
    sessionOptions: {
      useProxy: true,
      solveCaptchas: true,
    },
  });

  console.log("result", JSON.stringify(result, null, 2));
};

main();

Python:

import os
import json
from typing import List
from dotenv import load_dotenv
from hyperbrowser import Hyperbrowser
from hyperbrowser.models import StartExtractJobParams, CreateSessionParams
from pydantic import BaseModel


load_dotenv()


client = Hyperbrowser(api_key=os.getenv("HYPERBROWSER_API_KEY"))


class PricingSchema(BaseModel):
    plan: str
    price: str
    features: List[str]


class ExtractSchema(BaseModel):
    product_name: str
    product_overview: str
    key_features: List[str]
    pricing: List[PricingSchema]


def main():
    result = client.extract.start_and_wait(
        params=StartExtractJobParams(
            urls=["https://hyperbrowser.ai"],
            prompt="Extract the product name, an overview of the product, its key features, and a list of its pricing plans from the page.",
            schema=ExtractSchema,
            # include session_options
            session_options=CreateSessionParams(use_proxy=True, solve_captchas=True),
        )
    )
    print("result:", json.dumps(result.data, indent=2))


main()

Hyperbrowser's CAPTCHA solving and proxy features require being on a paid plan.

Using a proxy and solving CAPTCHAs will slow down the page scraping in the extract job, so use them only if necessary.

sessionOptions - Options for the session used to execute the extract job.

To see the full schema, check out the API Reference.

You can also provide configurations for the session that will be used to execute the extract job, just as you would when creating a new session itself. These can include using a proxy or solving CAPTCHAs. To see all the available session parameters, check out the Session Parameters page or the API Reference.

For a full reference on the extract endpoint, check out the API Reference.
