Use Case
Drop a single HTTP Request node into any n8n workflow to scrape LinkedIn, Instagram, TikTok, Twitter/X, YouTube, and more. Per-result credits, structured JSON output, zero scraper maintenance.
Challenges teams face without Scrapernode
Setting up Brightdata, Apify, or PhantomBuster in n8n means configuring OAuth, pagination, webhook callbacks, and custom headers. Each tool needs a different HTTP Request node pattern.
You wrote a Puppeteer script inside n8n's Code node to scrape profiles. It worked for a week. Then the site updated its markup and now you're debugging JavaScript at midnight.
n8n has nodes for Slack, Google Sheets, and Airtable, but nothing for extracting structured data from LinkedIn, Instagram, or TikTok. You're stuck with raw HTTP or Code nodes.
Scraping APIs return results asynchronously. In n8n, that means building Wait → HTTP → IF → Loop patterns to poll for job completion. Every workflow has the same boilerplate.
How Scrapernode solves this
POST to create a job, GET to fetch results. Same node config for LinkedIn, Instagram, TikTok, Twitter/X, YouTube, Facebook, Indeed, Glassdoor, Yelp, GitHub, and Crunchbase. Copy-paste between workflows.
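The two calls can be sketched as plain request builders, mirroring what the n8n HTTP Request node sends. This is a minimal sketch: the endpoint paths come from the pattern above, but the base URL and the payload field names (`scraper`, `urls`, `api_key`) are assumptions for illustration, not the documented API shape.

```python
import json

API_BASE = "https://scrapernode.example.com"  # placeholder base URL

def build_create_job_request(scraper_slug, urls, api_key):
    """Build the POST /api/jobs/create request; only the slug changes per platform."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/jobs/create",
        "headers": {"Content-Type": "application/json"},
        # Assumed payload fields -- check the Scrapernode docs for exact names.
        "body": json.dumps({"scraper": scraper_slug, "urls": urls, "api_key": api_key}),
    }

def build_results_request(job_id, api_key):
    """Build the GET /api/jobs/results request; identical shape for every scraper."""
    return {
        "method": "GET",
        "url": f"{API_BASE}/api/jobs/results?job_id={job_id}&api_key={api_key}",
    }
```

Because the node configuration is just these two requests, switching from `linkedin-profiles` to `tiktok-profiles` means changing a single string.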
Scrapernode returns flat, structured JSON: names, titles, follower counts, post content. Use n8n's Set node to extract fields, IF node to filter, and merge directly into your downstream tools.
When platforms change their markup, Scrapernode updates the scraper. Your n8n workflow keeps running. No more midnight debugging sessions for broken Code nodes.
Copy these n8n patterns to get started in minutes
The simplest Scrapernode + n8n pattern. Trigger, scrape, store. Works for any platform; just change the scraper slug.
Manual or schedule trigger
Kick off manually or run on a schedule (daily, hourly, etc.)
HTTP Request: create job
POST /api/jobs/create with scraper slug, URLs, and API key
Wait node
Pause for 30-60 seconds while Scrapernode processes the job
HTTP Request: get results
GET /api/jobs/results returns structured JSON array
Set node: extract fields
Pull out the fields you need: name, title, bio, followers, etc.
Google Sheets / Airtable
Append rows to your tracking sheet or base
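The Wait and get-results steps above amount to a simple polling loop. A minimal sketch, with the HTTP Request node replaced by an injected `fetch_results` callable so it runs standalone; the `status`/`results` response keys are assumptions, not the confirmed API schema.

```python
import time

def poll_for_results(fetch_results, attempts=10, delay=0.0):
    """Poll until the job reports completion, then return the result rows."""
    for _ in range(attempts):
        response = fetch_results()
        if response.get("status") == "completed":
            return response["results"]
        time.sleep(delay)  # stands in for the Wait node's 30-60 s pause
    raise TimeoutError("job did not complete in time")

# Simulated responses: still running on the first check, done on the second.
responses = iter([
    {"status": "running"},
    {"status": "completed", "results": [{"name": "Ada Lovelace", "title": "Engineer"}]},
])
rows = poll_for_results(lambda: next(responses))
```

In n8n the same loop is the Wait → HTTP Request → IF cycle; the helper just makes the termination condition explicit.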
Enrich the same list of people across LinkedIn, Twitter, and Instagram, then merge results per person.
Trigger
Webhook receives a list of people with social URLs
Split by platform
Code node groups URLs by platform (LinkedIn, Twitter, Instagram)
Parallel Scrapernode jobs
Three HTTP Request nodes fire simultaneously, one per platform
Wait and collect
Wait for all three jobs to complete, fetch results from each
Merge node
Combine results per person using name or URL as the join key
Output
Write enriched profiles to Airtable, CRM, or send via webhook response
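The split and merge steps above can be sketched as two small functions: one groups URLs by platform (the Code node), the other joins each platform's results back per person using the URL as the key (the Merge node). Field names like `urls` and `url` are illustrative assumptions, not the actual Scrapernode schema.

```python
def group_urls_by_platform(people):
    """Mimic the Code node: build one URL list per platform, ready for three jobs."""
    grouped = {}
    for person in people:
        for platform, url in person["urls"].items():
            grouped.setdefault(platform, []).append(url)
    return grouped

def merge_by_url(people, platform_results):
    """Mimic the Merge node: attach each platform's row using URL as the join key."""
    by_url = {row["url"]: row for rows in platform_results.values() for row in rows}
    merged = []
    for person in people:
        enriched = {"name": person["name"]}
        for platform, url in person["urls"].items():
            enriched[platform] = by_url.get(url)  # None if that job returned nothing
        merged.append(enriched)
    return merged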
Scrape LinkedIn profiles, then conditionally scrape additional platforms based on the results.
Scrape LinkedIn profiles
Initial enrichment with linkedin-profiles scraper
IF: has Twitter URL?
Check if the LinkedIn profile includes a Twitter/X handle
Scrape Twitter profiles
For leads with Twitter handles, run twitter-profiles scraper
IF: follower count > 1000?
Filter for leads with significant social presence
Tag as high-value
Mark qualifying leads as high-priority in your CRM with the social context attached
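The two IF nodes in this conditional pattern reduce to two predicates. A sketch under assumed field names: `twitter_url` and `followers` are hypothetical output fields of the linkedin-profiles and twitter-profiles scrapers, not a documented schema.

```python
def has_twitter_url(linkedin_profile):
    """First IF node: route only profiles that expose a Twitter/X handle."""
    return bool(linkedin_profile.get("twitter_url"))

def tag_high_value(twitter_profile, threshold=1000):
    """Second IF node: mark leads whose follower count clears the threshold."""
    if twitter_profile.get("followers", 0) > threshold:
        return {**twitter_profile, "priority": "high"}
    return twitter_profile
```

In the workflow, `tag_high_value`'s true branch is where the CRM update node sits.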
The scrapers most relevant to this use case
Connect your scraped data to your favorite tools
Auto-sync results to spreadsheets
Real-time delivery to any endpoint
Programmatic access for developers
Connect to 1000+ apps
Download in standard formats
Common questions about n8n