
Webhook Notifications

Get notified instantly when a scraping job completes or fails. Pass a webhookUrl when creating a job and Scrapernode will POST the results to your endpoint — no polling required.

How It Works

When you create a job with a webhookUrl, Scrapernode stores the URL alongside the job. Once the job finishes (successfully or with an error), Scrapernode sends a POST request to your URL with a JSON payload describing the outcome.

  1. Create a job with webhookUrl

     Include a webhookUrl field in your POST /api/jobs/create request body.

  2. Scrapernode processes the job

     The job runs asynchronously. You don't need to poll for status.

  3. Receive the webhook

     When the job completes or fails, Scrapernode POSTs a JSON payload to your URL with the job outcome.

Quickstart

Add webhookUrl to any job creation request. That's it.

Create a job with a webhook (bash):
curl -X POST https://actions.scrapernode.com/api/jobs/create \
  -H "Authorization: Bearer sn_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "scraperId": "linkedin-profiles",
    "inputs": [
      { "url": "https://linkedin.com/in/satyanadella" }
    ],
    "webhookUrl": "https://your-server.com/webhooks/scrapernode"
  }'

When the job finishes, your endpoint receives:

Webhook POST to your endpoint (JSON):
POST /webhooks/scrapernode HTTP/1.1
Content-Type: application/json

{
  "event": "job.completed",
  "jobId": "k57a8b3c9d0e1f2g3h4",
  "scraperId": "linkedin-profiles",
  "resultCount": 1,
  "creditsUsed": 5,
  "completedAt": 1709568045000
}

Webhook Payload

Every webhook is a POST request with Content-Type: application/json. Your endpoint should return a 2xx status to acknowledge receipt.

| Field | Type | Description |
| --- | --- | --- |
| event | string | "job.completed" or "job.failed" |
| jobId | string | The ID of the completed/failed job. |
| scraperId | string | Scraper slug (e.g. linkedin-profiles). |
| resultCount | integer | Number of results scraped. 0 on failure. |
| creditsUsed | integer | Credits consumed. 0 on failure (credits are refunded). |
| error | string? | Error message. Only present when event is job.failed. |
| completedAt | integer | Unix timestamp (ms) when the job finished. |
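On the receiving side, a handler only needs to parse the body, sanity-check it, and return a 2xx. A minimal sketch (the validation rules are illustrative; field names come from the table above):

```python
import json

# All fields documented as always present; "error" is optional (failures only).
REQUIRED_FIELDS = {"event", "jobId", "scraperId", "resultCount",
                   "creditsUsed", "completedAt"}

def handle_webhook(raw_body: bytes) -> int:
    """Parse a webhook body and return the HTTP status to respond with."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400  # malformed body: a 4xx tells Scrapernode not to retry
    if not isinstance(payload, dict) or not REQUIRED_FIELDS <= payload.keys():
        return 400
    if payload["event"] == "job.failed":
        print("job failed:", payload.get("error"))
    return 200  # 2xx acknowledges receipt
```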

Event Types

job.completed

Sent when the job finishes successfully. Use the jobId to fetch results via GET /api/jobs/results?jobId=...

job.completed payload (JSON):
{
  "event": "job.completed",
  "jobId": "k57a8b3c9d0e1f2g3h4",
  "scraperId": "linkedin-profiles",
  "resultCount": 2,
  "creditsUsed": 10,
  "completedAt": 1709568045000
}
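The follow-up results call can be built the same way. A sketch of constructing the GET /api/jobs/results request from the jobId in the payload (the API key is a placeholder):

```python
import urllib.parse
import urllib.request

def build_results_request(api_key, job_id):
    """Build the GET /api/jobs/results request for a completed job."""
    query = urllib.parse.urlencode({"jobId": job_id})
    return urllib.request.Request(
        f"https://actions.scrapernode.com/api/jobs/results?{query}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_results_request("sn_your_api_key", "k57a8b3c9d0e1f2g3h4")
# results = json.load(urllib.request.urlopen(req))  # uncomment to fetch
```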

job.failed

Sent when the job fails. The error field contains the failure reason. Reserved credits are refunded automatically.

job.failed payload (JSON):
{
  "event": "job.failed",
  "jobId": "k57a8b3c9d0e1f2g3h4",
  "scraperId": "linkedin-profiles",
  "resultCount": 0,
  "creditsUsed": 0,
  "error": "Data source temporarily unavailable",
  "completedAt": 1709568045000
}
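Since both event types arrive at the same endpoint, handlers typically branch on the event field. A small illustrative dispatcher (the return strings are placeholders for your own logic):

```python
def dispatch(payload: dict) -> str:
    """Route the two documented event types; unknown events are ignored."""
    if payload["event"] == "job.completed":
        # next step: GET /api/jobs/results?jobId=<jobId>
        return f"fetch results for {payload['jobId']}"
    if payload["event"] == "job.failed":
        # creditsUsed is 0 here: reserved credits were refunded automatically
        return f"alert: {payload.get('error', 'unknown error')}"
    return "ignored"
```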

Retry Behavior

Webhook delivery is best-effort with automatic retries for server errors.

| Response | Behavior |
| --- | --- |
| 2xx | Success — delivery complete. |
| 4xx | Client error — not retried. Check your endpoint. |
| 5xx | Server error — retried up to 3 times with exponential backoff (1s, 2s, 4s). |
| Timeout | No response within 10 seconds — treated as a server error and retried. |
| Network error | Connection refused, DNS failure, etc. — retried like 5xx. |

Webhooks are best-effort. If all retries fail, the webhook is dropped silently. The job itself is not affected — it still completes normally. Always use GET /api/jobs/get as the source of truth for job status.
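The table above can be summarized as a small decision function. A sketch (None models a timeout or network error; behavior for codes the docs don't cover, like 3xx, is an assumption):

```python
BACKOFF_SECONDS = [1, 2, 4]  # documented retry schedule: up to 3 attempts

def should_retry(status):
    """Given the endpoint's HTTP status (or None for timeout/network error),
    return whether Scrapernode's documented policy would retry delivery."""
    if status is None:
        return True            # timeout or network error: retried like 5xx
    if 200 <= status < 300:
        return False           # delivered
    if 400 <= status < 500:
        return False           # client error: never retried
    if status >= 500:
        return True            # server error: retried with backoff
    return False               # 3xx etc.: not covered by the docs (assumed no retry)
```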

Security

Webhook URLs must use the https:// or http:// protocol. Other protocols are rejected at job creation time.

Verifying Webhook Origin

To confirm a webhook came from Scrapernode, use the jobId from the payload to call GET /api/jobs/get?jobId=... with your API key. If the job exists and belongs to you, the webhook is authentic.

Use HTTPS in production. While http:// URLs are allowed (useful for local development and testing), always use HTTPS for production webhook endpoints.
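The origin check above can be sketched as a small function. The fetch step is injected as a callable so the logic is testable without network access; fetch_job stands in for a real authenticated call to GET /api/jobs/get?jobId=... that returns None when the job doesn't exist or isn't yours:

```python
def webhook_is_authentic(payload: dict, fetch_job) -> bool:
    """Return True if the jobId in the payload resolves to one of your jobs."""
    job_id = payload.get("jobId")
    if not job_id:
        return False
    # fetch_job should call GET /api/jobs/get with your API key and
    # return the job object, or None on a 404 / authorization failure.
    return fetch_job(job_id) is not None
```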

Using with n8n

Webhooks pair perfectly with n8n's Webhook Trigger node, giving you a push-based alternative to polling.

  1. Add a Webhook node in n8n

     Create a new workflow and add a "Webhook" trigger node. Set the HTTP method to POST. Copy the webhook URL (production or test).

  2. Create a job with the webhook URL

     Use the Scrapernode API or a Scrapernode scraper node to create a job. Paste the n8n webhook URL as the webhookUrl parameter.

  3. Process the results

     When the job completes, n8n receives the payload. Use an IF node to check the event type, then use a Scrapernode Jobs → Get Results node (or an HTTP Request to GET /api/jobs/results) to fetch the scraped data.

n8n webhook flow (JSON):
// n8n Webhook Trigger receives this payload:
{
  "event": "job.completed",
  "jobId": "k57a8b3c9d0e1f2g3h4",
  "scraperId": "linkedin-profiles",
  "resultCount": 2,
  "creditsUsed": 10,
  "completedAt": 1709568045000
}

// Then use the jobId to fetch results:
// GET /api/jobs/results?jobId=k57a8b3c9d0e1f2g3h4
Why webhooks over polling? The Scrapernode n8n nodes support a "Wait for Completion" polling mode, but it holds an n8n worker slot for the entire duration. Webhooks free up the worker immediately and notify your workflow only when there's something to process.

Testing Webhooks

Use a request inspection tool to test webhooks during development.

  1. Get a test URL

     Use a service like webhook.site, RequestBin, or ngrok to get a public URL that logs incoming requests.

  2. Create a job with the test URL

     Pass your test URL as the webhookUrl when creating a job via the API.

  3. Inspect the payload

     Once the job completes, check your inspection tool to see the exact JSON payload Scrapernode sent.

Test with webhook.site (bash):
curl -X POST https://actions.scrapernode.com/api/jobs/create \
  -H "Authorization: Bearer sn_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "scraperId": "linkedin-profiles",
    "inputs": [{ "url": "https://linkedin.com/in/satyanadella" }],
    "webhookUrl": "https://webhook.site/your-unique-id"
  }'

FAQ

Does the webhook include the scraped data?

No. The webhook payload contains job metadata (status, result count, credits used). Use the jobId to fetch the actual results via GET /api/jobs/results.

What happens if my endpoint is down?

Scrapernode retries up to 3 times with exponential backoff. If all retries fail, the webhook is dropped. The job still completes normally — webhooks never block job processing.

Can I change the webhook URL after creating a job?

No. The webhook URL is set at job creation time and cannot be changed afterwards.

Is there a webhook signature for verification?

Not currently. To verify a webhook came from Scrapernode, call GET /api/jobs/get with the jobId and your API key to confirm the job exists.

Can I use localhost URLs?

Yes, http:// URLs are accepted. Use a tunnel like ngrok to expose your local server for testing.

What's the timeout for webhook delivery?

Your endpoint must respond within 10 seconds. Responses slower than that are treated as timeouts and retried.
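One common way to stay well inside the 10-second window is to acknowledge immediately and do the real work (fetching results, writing to your database) in a background worker. A stdlib-only sketch of that pattern:

```python
import queue
import threading

pending = queue.Queue()  # payloads waiting to be processed

def handle_webhook_fast(payload: dict) -> int:
    """Enqueue the payload and return 200 right away, well under 10 s."""
    pending.put(payload)
    return 200

def worker():
    while True:
        payload = pending.get()
        # ...fetch results with payload["jobId"], store them, etc...
        pending.task_done()

threading.Thread(target=worker, daemon=True).start()
```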