FetchSERP Community Node for n8n
================================
This community node lets you access the full FetchSERP API from your n8n workflows. FetchSERP provides SEO, SERP, scraping, and domain-intelligence endpoints that help you build automations around keyword research, backlink analysis, and on-page data gathering.
---
Installation · Credentials · Operations · Usage · Version History
Installation
------------

Follow the n8n community nodes installation guide, then install the package:
```bash
npm install n8n-nodes-fetchserp
```
Self-hosted n8n users: make sure the directory containing this package is referenced by the `N8N_CUSTOM_EXTENSIONS` environment variable (or use the in-app Community Nodes ➞ Install UI in recent n8n versions).
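On a self-hosted instance, the setup might look like the sketch below; the directory path is only an example, so point it at wherever you keep custom nodes:

```bash
# Example paths only – adjust to your own custom-nodes directory.
mkdir -p ~/.n8n/custom-nodes
cd ~/.n8n/custom-nodes
npm install n8n-nodes-fetchserp

# Tell n8n where to find the package, then (re)start it.
export N8N_CUSTOM_EXTENSIONS=~/.n8n/custom-nodes
n8n start
```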
Credentials
-----------

Create a new FetchSERP API credential in n8n and paste your API token.
Optional: change the base URL if you use a custom FetchSERP domain.
Operations
----------

The node exposes 20 FetchSERP endpoints:
| Operation (internal value) | Description |
|---------------------------|-------------|
| Get Backlinks (get_backlinks) | Retrieve backlinks for a given domain |
| Get Domain Emails (get_domain_emails) | Find emails mentioned on pages of a domain |
| Get Domain Info (get_domain_info) | WHOIS, DNS, SSL, tech stack |
| Get Keywords Search Volume (get_keywords_search_volume) | Monthly volume for keywords |
| Get Keywords Suggestions (get_keywords_suggestions) | Autocomplete & related keywords |
| Get Long-Tail Keywords (get_long_tail_keywords) | AI-generated long-tails for a seed keyword |
| Get Moz Analysis (get_moz_analysis) | Domain Authority, Page Authority, etc. |
| Check Page Indexation (check_page_indexation) | Whether pages rank for a keyword |
| Get Domain Ranking (get_domain_ranking) | SERP positions for a domain & keyword |
| Scrape Webpage (scrape_webpage) | Raw HTML without JS |
| Scrape Domain (scrape_domain) | Crawl a domain up to N pages |
| Scrape Webpage JS (scrape_webpage_js) | Execute custom JS on a page |
| Scrape Webpage JS & Proxy (scrape_webpage_js_proxy) | Same as above via geo-proxy |
| Get SERP Results (get_serp_results) | Structured SERP JSON (titles, links, etc.) |
| Get SERP HTML (get_serp_html) | Raw SERP HTML |
| Get SERP AI Mode (get_serp_ai_mode) | AI Overview & AI-generated answer |
| Get SERP Text (get_serp_text) | Extracted text-only SERP |
| Get User Info (get_user_info) | Remaining credits & plan info |
| Get Webpage AI Analysis (get_webpage_ai_analysis) | AI summary of page content |
| Get Webpage SEO Analysis (get_webpage_seo_analysis) | SEO checklist for a page |
Usage
-----

The node keeps the UI minimal so you can pass any existing or future parameters without updating the package.
1. Select an operation in the dropdown.
2. Query Parameters (JSON) – provide GET/querystring parameters as JSON.
3. For the two POST endpoints, also fill Request Body (JSON).
**Domain info of example.com**
```json
{
  "domain": "example.com"
}
```
**Scrape a page with JS**
```jsonc
// Query Parameters (JSON)
{
  "url": "https://example.com"
}

// Request Body (JSON)
{
  "url": "https://example.com",
  "js_script": "return document.title"
}
```
After execution, the node returns the raw JSON from FetchSERP, so you can continue parsing it with Merge, Set, IF, etc.
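For example, a Code node (JavaScript) placed right after the FetchSERP node could reshape the output; the `results` key below is hypothetical, since the actual payload shape depends on the operation you ran:

```js
// n8n Code node ("Run Once for All Items") – a minimal sketch, not part of this package.
// "results" is a hypothetical key; inspect your operation's raw output first.
return $input.all().map((item) => ({
  json: {
    resultCount: Array.isArray(item.json.results) ? item.json.results.length : 0,
    raw: item.json, // keep the untouched FetchSERP payload for downstream nodes
  },
}));
```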
Version History
---------------

* 0.1.1 — initial public release
---
Made with ❤️ by Olivier — PRs & issues welcome.