# n8n-nodes-firecrawl-tool

n8n node for the Firecrawl v2 API - web scraping, crawling, and data extraction for workflows and AI agents.

```bash
npm install n8n-nodes-firecrawl-tool
```

An n8n community node for Firecrawl v2 API - a powerful web scraping, crawling, and data extraction tool. This node works both as a standard workflow node and as an AI tool for use with n8n's AI Agent and MCP Trigger nodes.
## Features

- Scrape: Extract content from single webpages in multiple formats (markdown, HTML, summary, screenshots)
- Crawl: Recursively scrape entire websites with intelligent navigation
- Map: Quickly discover all URLs on a website
- Search: Search the web and optionally scrape results
- Extract: Use AI to extract structured data from webpages
- AI Tool Compatible: Full support for use as an AI agent tool with comprehensive descriptions
- Caching: Built-in caching support for improved performance
- Actions: Perform clicks, scrolls, and other interactions before scraping
## Installation

### Community Nodes (Recommended)

1. In n8n, go to Settings > Community Nodes
2. Search for `n8n-nodes-firecrawl-tool`
3. Click Install
### Manual Installation

```bash
npm install n8n-nodes-firecrawl-tool
```

Then restart your n8n instance.
### Development Installation

```bash
# Clone the repository
git clone https://github.com/jezweb/n8n-nodes-firecrawl-tool.git
cd n8n-nodes-firecrawl-tool

# Install dependencies and build
npm install
npm run build
```
## Setup

### Get a Firecrawl API Key
1. Visit [firecrawl.dev](https://firecrawl.dev)
2. Sign up for an account
3. Navigate to your dashboard to get your API key
### Configure Credentials in n8n
1. In n8n, go to Credentials > New
2. Search for "Firecrawl API"
3. Enter your API key
4. (Optional) Change the API host if using a self-hosted instance
5. Save the credentials
## Usage

### As a Workflow Node
1. Add the "Firecrawl Tool" node to your workflow
2. Select your Firecrawl API credentials
3. Choose an operation (Scrape, Crawl, Map, Search, or Extract)
4. Configure the operation parameters
5. Execute the workflow
### As an AI Agent Tool
1. Add an "AI Agent" or "MCP Trigger" node to your workflow
2. Add the "Firecrawl Tool" node
3. Connect the Firecrawl Tool to the AI Agent's tool input
4. The AI will automatically use the tool based on the descriptions provided
## Operations

### Scrape
Extract content from a single webpage.
Parameters:
- URL: The webpage to scrape
- Formats: Output formats (markdown, HTML, summary, screenshot, links)
- Options: Cache duration, wait time, content filtering, actions, and more

Example Use Cases:
- Extract article content for analysis
- Capture screenshots for monitoring
- Get structured data from product pages
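
A hypothetical Scrape configuration might look like the sketch below. The field names are illustrative only; the exact option labels in the node UI may differ:

```json
{
  "operation": "scrape",
  "url": "https://docs.firecrawl.dev",
  "formats": ["markdown", "screenshot"],
  "options": {
    "waitFor": 2000,
    "onlyMainContent": true
  }
}
```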
### Crawl
Recursively scrape an entire website or subdomain.
Parameters:
- URL: Starting point for the crawl
- Limit: Maximum pages to crawl
- Max Depth: How deep to crawl from the starting URL
- Smart Crawl Prompt: Natural language guidance for the crawler
- Wait for Completion: Whether to wait for results or get a job ID

Example Use Cases:
- Index an entire documentation site
- Extract all blog posts from a website
- Create a knowledge base from a company website
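
A hypothetical Crawl configuration (field names illustrative; the node's actual parameter names may differ):

```json
{
  "operation": "crawl",
  "url": "https://example.com/blog",
  "limit": 100,
  "maxDepth": 3,
  "prompt": "Only crawl blog posts, skip tag and archive pages",
  "waitForCompletion": true
}
```

With Wait for Completion disabled, the node would return a job ID instead of the crawled pages, which suits long crawls polled by a later workflow step.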
### Map
Quickly discover all URLs on a website.
Parameters:
- URL: The website to map
- Limit: Maximum URLs to return
- Search: Filter URLs by term
- Include Subdomains: Whether to include subdomain URLs

Example Use Cases:
- Site structure analysis
- Finding specific page types
- SEO audits
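
A hypothetical Map configuration (field names illustrative), filtering discovered URLs to those matching "pricing":

```json
{
  "operation": "map",
  "url": "https://example.com",
  "limit": 500,
  "search": "pricing",
  "includeSubdomains": false
}
```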
### Search
Search the web and optionally scrape the results.
Parameters:
- Query: Search terms
- Sources: Web, news, and/or images
- Scrape Results: Whether to extract content from results
- Location: Geographic location for results

Example Use Cases:
- Market research
- Competitive analysis
- Content aggregation
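
A hypothetical Search configuration (field names illustrative) combining web and news sources with result scraping enabled:

```json
{
  "operation": "search",
  "query": "n8n automation news",
  "sources": ["web", "news"],
  "scrapeResults": true,
  "location": "Australia"
}
```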
### Extract
Extract structured data from webpages using AI.
Parameters:
- URLs: Pages to extract from
- Extraction Prompt: Natural language description of what to extract
- Schema: Optional JSON schema for structured output

Example Use Cases:
- Product data extraction
- Contact information gathering
- Automated form filling
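
A hypothetical Extract configuration (field names illustrative) pairing a natural language prompt with a JSON schema so the AI returns consistently shaped output:

```json
{
  "operation": "extract",
  "urls": ["https://example.com/products/widget"],
  "prompt": "Extract the product name, price, and availability",
  "schema": {
    "type": "object",
    "properties": {
      "name": {"type": "string"},
      "price": {"type": "number"},
      "inStock": {"type": "boolean"}
    },
    "required": ["name", "price"]
  }
}
```

Without a schema, the extraction prompt alone still works, but the output structure is left to the model.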
## AI Tool Usage
This node is designed to work seamlessly with AI agents. Each operation and parameter includes detailed descriptions that help AI models understand when and how to use the tool.
Example AI Prompts:
- "Get the content from docs.firecrawl.dev"
- "Find all URLs on example.com"
- "Search for recent news about n8n automation"
- "Extract product prices from these e-commerce pages"
## Advanced Features

### Caching
All scrape operations support caching with the `maxAge` parameter. Cached results are returned instantly if they are younger than the specified age, reducing API calls and improving performance.
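
For example, a scrape that accepts cached results up to 24 hours old could be configured as below. Field names are illustrative; `maxAge` is assumed to be in milliseconds, so 86400000 corresponds to 24 hours:

```json
{
  "operation": "scrape",
  "url": "https://example.com/news",
  "options": {
    "maxAge": 86400000
  }
}
```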
### Actions

Perform interactions before scraping:
```json
[
  {"type": "wait", "milliseconds": 1000},
  {"type": "click", "selector": "button.load-more"},
  {"type": "scroll", "direction": "down"},
  {"type": "screenshot", "fullPage": true}
]
```

### Smart Crawl Prompts
Use natural language prompts to guide crawling:
- "Only crawl blog posts from 2024"
- "Focus on product pages under /shop"
- "Avoid PDF files and image galleries"
## Rate Limits
Please refer to Firecrawl's rate limit documentation for current limits based on your plan.
## Troubleshooting

### Common Issues
1. API Key Invalid: Ensure your API key is correctly entered in the credentials
2. Rate Limit Exceeded: Upgrade your Firecrawl plan or add delays between requests
3. Timeout Errors: Increase the wait time for dynamic content or use actions
4. Empty Results: Check if the site requires authentication or has anti-bot measures
### Debug Mode
Enable n8n's execution details to see the full API requests and responses for debugging.
## Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)

## License

MIT - see LICENSE file for details.
## Support

- Issues: GitHub Issues
- Discussions: n8n Community Forum
- Firecrawl Docs: docs.firecrawl.dev
## Changelog

See CHANGELOG.md for version history and updates.
## Author

Jeremy Dawes - Jezweb
## Acknowledgments

- Firecrawl for the excellent API
- n8n for the workflow automation platform
- The n8n community for inspiration and support