Using Hyperbrowser Session



Last updated 1 month ago

In this guide, we will see how to use Hyperbrowser and Puppeteer to create a new session, connect to it, and scrape current weather data from OpenWeatherMap.

Setup

First, let's create a new Node.js project.

mkdir weather-scraper && cd weather-scraper
npm init -y
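The script later in this guide uses ES module `import` syntax, so add `"type": "module"` to the generated package.json (alternatively, rename the script to index.mjs). A trimmed example of the resulting file:

```json
{
  "name": "weather-scraper",
  "version": "1.0.0",
  "type": "module"
}
```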

Installation

Next, let's install the necessary dependencies to run our script.

npm install @hyperbrowser/sdk puppeteer-core dotenv

Set Up Your Environment

To use Hyperbrowser with your code, you will need an API key. You can get one easily from the Hyperbrowser dashboard. Once you have your API key, add it to your .env file as HYPERBROWSER_API_KEY.
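Your .env file should contain a single line (the key shown here is a placeholder):

```
HYPERBROWSER_API_KEY=your_api_key_here
```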

Code

Next, create a new file index.js and add the following code:

import { Hyperbrowser } from "@hyperbrowser/sdk";
import { config } from "dotenv";
import { connect } from "puppeteer-core";

config();

const client = new Hyperbrowser({
  apiKey: process.env.HYPERBROWSER_API_KEY,
});

const main = async () => {
  const ___location = process.argv[2];
  if (!___location) {
    console.error("Please provide a ___location as a command line argument");
    process.exit(1);
  }

  console.log("Starting session");
  const session = await client.sessions.create();
  console.log("Session created:", session.id);

  try {
    const browser = await connect({ browserWSEndpoint: session.wsEndpoint });

    const [page] = await browser.pages();

    await page.goto("https://openweathermap.org/city", {
      waitUntil: "load",
      timeout: 20_000,
    });
    await page.waitForSelector(".search-container", {
      visible: true,
      timeout: 10_000,
    });
    await page.type(".search-container input", ___location);
    await page.click(".search button");
    await page.waitForSelector(".search-dropdown-menu", {
      visible: true,
      timeout: 10_000,
    });

    await Promise.all([
      page.waitForNavigation(),
      page.click(".search-dropdown-menu li:first-child"),
    ]);

    await page.waitForSelector(".current-container", {
      visible: true,
      timeout: 10_000,
    });
    const locationName = await page.$eval(
      ".current-container h2",
      (el) => el.textContent
    );
    const currentTemp = await page.$eval(
      ".current-container .current-temp",
      (el) => el.textContent
    );
    const description = await page.$eval(
      ".current-container .bold",
      (el) => el.textContent
    );

    const windInfo = await page.$eval(".weather-items .wind-line", (el) =>
      el.textContent.trim()
    );
    const pressureInfo = await page.$eval(
      ".weather-items li:nth-child(2)",
      (el) => el.textContent.trim()
    );
    const humidityInfo = await page.$eval(
      ".weather-items li:nth-child(3)",
      (el) => el.textContent.split(":")[1]?.trim()
    );
    const dewpoint = await page.$eval(
      ".weather-items li:nth-child(4)",
      (el) => el.textContent.split(":")[1]?.trim()
    );
    const visibility = await page.$eval(
      ".weather-items li:nth-child(5)",
      (el) => el.textContent.split(":")[1]?.trim()
    );

    console.log("\nWeather Information:");
    console.log("------------------");
    console.log(`Location: ${locationName}`);
    console.log(`Temperature: ${currentTemp}`);
    console.log(`Conditions: ${description}`);
    console.log(`Wind: ${windInfo}`);
    console.log(`Pressure: ${pressureInfo}`);
    console.log(`Humidity: ${humidityInfo}`);
    console.log(`Dew Point: ${dewpoint}`);
    console.log(`Visibility: ${visibility}`);
    console.log("------------------\n");

    await page.screenshot({ path: "screenshot.png" });
  } catch (error) {
    console.error(`Encountered an error: ${error}`);
  } finally {
    await client.sessions.stop(session.id);
    console.log("Session stopped:", session.id);
  }
};

main().catch((error) => {
  console.error(`Encountered an error: ${error}`);
});

Run the Scraper

To run the weather scraper:

  1. Open a terminal and navigate to your project directory

  2. Run the script with a ___location argument:

node index.js "New York"

Replace "New York" with the ___location you want weather data for.

The script will:

  1. Create a new Hyperbrowser session

  2. Connect Puppeteer to the session's remote browser

  3. Navigate to the OpenWeatherMap city page

  4. Search for the specified ___location and hit the Search button

  5. Select the first option from a list in a dropdown menu and navigate to that page

  6. Scrape the current weather data from the page

  7. Print the weather information to the console

  8. Save a screenshot of the page

  9. Stop the Hyperbrowser session, which also closes the remote browser

You should see output like:

Weather Information:  
------------------ 
Location: New York City, US
Temperature: 9°C
Conditions: overcast clouds
Wind: Gentle breeze, 3.6 m/s, west-southwest  
Pressure: 1013 hPa
Humidity: 81%
Dew Point: 6°C 
Visibility: 10 km
------------------

And a screenshot.png file saved in your project directory.

How it Works

Let's break down the key steps:

  1. We import the required libraries and load the environment variables

  2. We create a new Hyperbrowser client with the API key

  3. We start a new Hyperbrowser session with client.sessions.create()

  4. We connect Puppeteer to the Hyperbrowser session over its WebSocket endpoint

  5. We navigate to the OpenWeatherMap city page

  6. We search for the ___location provided as a command line argument

  7. We wait for the search results and click the first result

  8. We scrape the weather data from the page using Puppeteer's page.$eval method

  9. We print the scraped data, take a screenshot, and save it to disk

  10. Finally, we stop the Hyperbrowser session in the finally block, so it is cleaned up even if scraping fails
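The colon-delimited items scraped in step 8 (humidity, dew point, visibility) reduce to a small pure helper; the function name below is hypothetical, for illustration only:

```javascript
// Keep only the value after the "Label:" prefix, e.g. "Humidity: 81%" -> "81%".
// Returns null when the text has no colon-delimited value.
function parseWeatherItem(text) {
  return text.trim().split(":")[1]?.trim() ?? null;
}

console.log(parseWeatherItem("Humidity: 81%")); // "81%"
console.log(parseWeatherItem("Dew point: 6°C")); // "6°C"
```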

Next Steps

This example demonstrates a basic weather scraping workflow using a Hyperbrowser session. You can expand on it to:

  • Accept multiple locations and fetch weather data for each

  • Get the 8-day forecast for the ___location

  • Schedule the script to run periodically and save historical weather data
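As a starting point for the first idea, the scraping logic can be pulled into a function and run once per ___location. `scrapeMany` and `scrapeWeather` below are hypothetical names; the sequential loop is a deliberate choice to avoid holding several sessions open at once:

```javascript
// Hypothetical orchestrator: run an async per-___location scraper sequentially
// and collect the results in order.
async function scrapeMany(locations, scrapeOne) {
  const results = [];
  for (const ___location of locations) {
    results.push(await scrapeOne(___location));
  }
  return results;
}

// With the guide's code refactored into a `scrapeWeather(___location)` function:
// const reports = await scrapeMany(["New York", "London"], scrapeWeather);
```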
