Command-line interface for Firecrawl. Scrape, crawl, and extract data from any website directly from your terminal.
## Installation

```bash
npm install -g firecrawl-cli
```
If you are using an AI agent like Claude Code, you can install the skill with:

```bash
npx skills add firecrawl/cli
```
## Quick Start

Just run a command - the CLI will prompt you to authenticate if needed:

```bash
firecrawl https://example.com
```
On first run, you'll be prompted to authenticate:
```
🔥 firecrawl cli
Turn websites into LLM-ready data
Welcome! To get started, authenticate with your Firecrawl account.
1. Login with browser (recommended)
2. Enter API key manually
Tip: You can also set FIRECRAWL_API_KEY environment variable
Enter choice [1/2]:
```
## Authentication

```bash
# Interactive (prompts automatically when needed)
firecrawl
```
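As the first-run prompt notes, you can also skip interactive authentication by exporting your API key. A minimal sketch (replace the placeholder with your own key):

```bash
# Authenticate non-interactively via the environment (placeholder key)
export FIRECRAWL_API_KEY=fc-YOUR-API-KEY
firecrawl https://example.com
```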
### Self-Hosted / Local Development
For self-hosted Firecrawl instances or local development, use the --api-url option:

```bash
# Use a local Firecrawl instance (no API key required)
firecrawl --api-url http://localhost:3002 scrape https://example.com

# Or set via environment variable
export FIRECRAWL_API_URL=http://localhost:3002
firecrawl scrape https://example.com

# Self-hosted with API key
firecrawl --api-url https://firecrawl.mycompany.com --api-key fc-xxx scrape https://example.com
```

When using a custom API URL (anything other than https://api.firecrawl.dev), authentication is automatically skipped, allowing you to use local instances without an API key.
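You can also set the URL once with the config command (covered below) instead of passing --api-url on every call:

```bash
# Configure a custom API URL via config (see the Config section below)
firecrawl config --api-url http://localhost:3002
```

---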
## Commands
### Scrape
Extract content from any webpage in various formats.
```bash
# Basic usage (outputs markdown)
firecrawl https://example.com
firecrawl scrape https://example.com

# Get raw HTML
firecrawl https://example.com --html
firecrawl https://example.com -H

# Multiple formats (outputs JSON)
firecrawl https://example.com --format markdown,links,images

# Save to file
firecrawl https://example.com -o output.md
firecrawl https://example.com --format json -o data.json --pretty
```

#### Scrape Options
| Option | Description |
| ------------------------ | ------------------------------------------------------- |
| -f, --format | Output format(s), comma-separated |
| -H, --html | Shortcut for --format html |
| --only-main-content | Extract only main content (removes navs, footers, etc.) |
| --wait-for | Wait time before scraping (for JS-rendered content) |
| --screenshot | Take a screenshot |
| --include-tags | Only include specific HTML tags |
| --exclude-tags | Exclude specific HTML tags |
| -o, --output | Save output to file |
| --pretty | Pretty print JSON output |
| --timing | Show request timing info |

#### Available Formats
| Format | Description |
| ------------ | -------------------------- |
| markdown | Clean markdown (default) |
| html | Cleaned HTML |
| rawHtml | Original HTML |
| links | All links on the page |
| screenshot | Screenshot as base64 |
| json | Structured JSON extraction |

#### Examples
```bash
# Extract only main content as markdown
firecrawl https://blog.example.com --only-main-content

# Wait for JS to render, then scrape
firecrawl https://spa-app.com --wait-for 3000

# Get all links from a page
firecrawl https://example.com --format links

# Screenshot + markdown
firecrawl https://example.com --format markdown --screenshot

# Extract specific elements only
firecrawl https://example.com --include-tags article,main

# Exclude navigation and ads
firecrawl https://example.com --exclude-tags nav,aside,.ad
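
# Combine options (sketch): JS-heavy page, main content only, saved to a file
firecrawl https://docs.example.com --only-main-content --wait-for 2000 -o docs.md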
```

---
### Search
Search the web and optionally scrape content from search results.
```bash
# Basic search
firecrawl search "firecrawl web scraping"

# Limit results
firecrawl search "AI news" --limit 10

# Search news sources
firecrawl search "tech startups" --sources news

# Search images
firecrawl search "landscape photography" --sources images

# Multiple sources
firecrawl search "machine learning" --sources web,news,images

# Filter by category (GitHub, research papers, PDFs)
firecrawl search "web scraping python" --categories github
firecrawl search "transformer architecture" --categories research
firecrawl search "machine learning" --categories github,research

# Time-based search
firecrawl search "AI announcements" --tbs qdr:d   # Past day
firecrawl search "tech news" --tbs qdr:w          # Past week

# Location-based search
firecrawl search "restaurants" --location "San Francisco,California,United States"
firecrawl search "local news" --country DE

# Search and scrape results
firecrawl search "firecrawl tutorials" --scrape
firecrawl search "API documentation" --scrape --scrape-formats markdown,links

# Output as pretty JSON
firecrawl search "web scraping" --json -p
```

#### Search Options
| Option | Description |
| ---------------------------- | ------------------------------------------------------------------------------------------- |
| --limit | Maximum results (default: 5, max: 100) |
| --sources | Comma-separated: web, images, news (default: web) |
| --categories | Comma-separated: github, research, pdf |
| --tbs | Time filter: qdr:h (hour), qdr:d (day), qdr:w (week), qdr:m (month), qdr:y (year) |
| --location | Geo-targeting (e.g., "Germany", "San Francisco,California,United States") |
| --country | ISO country code (default: US) |
| --timeout | Timeout in milliseconds (default: 60000) |
| --ignore-invalid-urls | Exclude URLs invalid for other Firecrawl endpoints |
| --scrape | Enable scraping of search results |
| --scrape-formats | Scrape formats when --scrape enabled (default: markdown) |
| --only-main-content | Include only main content when scraping (default: true) |
| -o, --output | Save to file |
| --json | Output as compact JSON (use -p for pretty JSON) |

#### Examples
```bash
# Research a topic with recent results
firecrawl search "React Server Components" --tbs qdr:m --limit 10

# Find GitHub repositories
firecrawl search "web scraping library" --categories github --limit 20

# Search and get full content
firecrawl search "firecrawl documentation" --scrape --scrape-formats markdown -p -o results.json

# Find research papers
firecrawl search "large language models" --categories research -p

# Search with location targeting
firecrawl search "best coffee shops" --location "Berlin,Germany" --country DE

# Get news from the past week
firecrawl search "AI startups funding" --sources news --tbs qdr:w --limit 15
```

---
### Map
Quickly discover all URLs on a website without scraping content.
```bash
# List all URLs (one per line)
firecrawl map https://example.com

# Output as JSON
firecrawl map https://example.com --json

# Search for specific URLs
firecrawl map https://example.com --search "blog"

# Limit results
firecrawl map https://example.com --limit 500
```

#### Map Options
| Option | Description |
| --------------------------- | --------------------------------- |
| --limit | Maximum URLs to discover |
| --search | Filter URLs by search query |
| --sitemap | include, skip, or only |
| --include-subdomains | Include subdomains |
| --ignore-query-parameters | Dedupe URLs with different params |
| --timeout | Request timeout |
| --json | Output as JSON |
| -o, --output | Save to file |

#### Examples
```bash
# Find all product pages
firecrawl map https://shop.example.com --search "product"

# Get sitemap URLs only
firecrawl map https://example.com --sitemap only

# Save URL list to file
firecrawl map https://example.com -o urls.txt

# Include subdomains
firecrawl map https://example.com --include-subdomains --limit 1000
```
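Because map prints one URL per line by default, its output pipes cleanly into scrape. A sketch of a map-then-scrape loop (the site, search term, and limit are illustrative):

```bash
# Map a site, then scrape each discovered URL to its own markdown file (sketch)
firecrawl map https://example.com --search "blog" --limit 50 |
while IFS= read -r url; do
  firecrawl "$url" --only-main-content -o "$(echo "$url" | sed 's/[^a-zA-Z0-9]/_/g').md"
done
```

---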
### Crawl
Crawl multiple pages from a website.
```bash
# Start a crawl (returns job ID)
firecrawl crawl https://example.com

# Wait for crawl to complete
firecrawl crawl https://example.com --wait

# With progress indicator
firecrawl crawl https://example.com --wait --progress

# Check crawl status
firecrawl crawl <job-id>

# Limit pages
firecrawl crawl https://example.com --limit 100 --max-depth 3
```

#### Crawl Options
| Option | Description |
| --------------------------- | ---------------------------------------- |
| --wait | Wait for crawl to complete |
| --progress | Show progress while waiting |
| --limit | Maximum pages to crawl |
| --max-depth | Maximum crawl depth |
| --include-paths | Only crawl matching paths |
| --exclude-paths | Skip matching paths |
| --sitemap | include, skip, or only |
| --allow-subdomains | Include subdomains |
| --allow-external-links | Follow external links |
| --crawl-entire-domain | Crawl entire domain |
| --ignore-query-parameters | Treat URLs with different params as same |
| --delay | Delay between requests |
| --max-concurrency | Max concurrent requests |
| --timeout | Timeout when waiting |
| --poll-interval | Status check interval |

#### Examples
```bash
# Crawl blog section only
firecrawl crawl https://example.com --include-paths /blog,/posts

# Exclude admin pages
firecrawl crawl https://example.com --exclude-paths /admin,/login

# Crawl with rate limiting
firecrawl crawl https://example.com --delay 1000 --max-concurrency 2

# Deep crawl with high limit
firecrawl crawl https://example.com --limit 1000 --max-depth 10 --wait --progress

# Save results
firecrawl crawl https://example.com --wait -o crawl-results.json --pretty
```

---
### Credit Usage
```bash
# Show credit usage
firecrawl credit-usage

# Output as JSON
firecrawl credit-usage --json --pretty
```

---
### Agent
Run an AI agent that autonomously browses and extracts structured data from the web based on natural language prompts.
> Note: Agent tasks typically take 2 to 5 minutes to complete, and sometimes longer for complex extractions. Use sparingly and consider --max-credits to limit costs.

```bash
# Basic usage (returns job ID immediately)
firecrawl agent "Find the pricing plans for Firecrawl"

# Wait for completion
firecrawl agent "Extract all product names and prices from this store" --wait

# Focus on specific URLs
firecrawl agent "Get the main features listed" --urls https://example.com/features

# Use structured output with JSON schema
firecrawl agent "Extract company info" --schema '{"type":"object","properties":{"name":{"type":"string"},"employees":{"type":"number"}}}'

# Load schema from file
firecrawl agent "Extract product data" --schema-file ./product-schema.json --wait

# Check status of an existing job
firecrawl agent <job-id>
firecrawl agent <job-id> --wait
```

#### Agent Options
| Option | Description |
| --------------------------- | ------------------------------------------------------------- |
| --urls | Comma-separated URLs to focus extraction on |
| --model | spark-1-mini (default, cheaper) or spark-1-pro (accurate) |
| --schema | JSON schema for structured output (inline JSON string) |
| --schema-file | Path to JSON schema file for structured output |
| --max-credits | Maximum credits to spend (job fails if exceeded) |
| --status | Check status of existing agent job |
| --wait | Wait for agent to complete before returning results |
| --poll-interval | Polling interval in seconds when waiting (default: 5) |
| --timeout | Timeout in seconds when waiting (default: no timeout) |
| -o, --output | Save output to file |
| --json | Output as JSON format |
| --pretty | Pretty print JSON output |

#### Examples
```bash
# Research task with timeout
firecrawl agent "Find the top 5 competitors of Notion and their pricing" --wait --timeout 300

# Extract data with cost limit
firecrawl agent "Get all blog post titles and dates" --urls https://blog.example.com --max-credits 100 --wait

# Use higher accuracy model for complex extraction
firecrawl agent "Extract detailed technical specifications" --model spark-1-pro --wait --pretty

# Save structured results to file
firecrawl agent "Extract contact information" --schema-file ./contact-schema.json --wait -o contacts.json --pretty

# Check job status without waiting
firecrawl agent abc123-def456-... --json

# Poll a running job until completion
firecrawl agent abc123-def456-... --wait --poll-interval 10
```
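The --schema-file option takes a path to a JSON Schema document, the same format the inline --schema example above uses. As a sketch, a file like the product-schema.json referenced above might look like this (the field names are illustrative, not prescribed by the CLI):

```bash
# Write an illustrative JSON Schema to product-schema.json (field names are just an example)
cat > product-schema.json <<'EOF'
{
  "type": "object",
  "properties": {
    "products": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "price": { "type": "number" }
        }
      }
    }
  }
}
EOF

# Then pass it to the agent (as documented above)
firecrawl agent "Extract product data" --schema-file ./product-schema.json --wait
```

---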
### Config
```bash
# View current configuration
firecrawl config

# Configure with custom API URL
firecrawl config --api-url https://firecrawl.mycompany.com
firecrawl config --api-url http://localhost:3002 --api-key fc-xxx
```

Shows authentication status and stored credentials location.
---
### Login / Logout
```bash
# Login
firecrawl login
firecrawl login --method browser
firecrawl login --method manual
firecrawl login --api-key fc-xxx

# Login to self-hosted instance
firecrawl login --api-url https://firecrawl.mycompany.com
firecrawl login --api-url http://localhost:3002 --api-key fc-xxx

# Logout
firecrawl logout
```

---
## Global Options
These options work with any command:
| Option | Description |
| --------------------- | ------------------------------------------------------ |
| --status | Show version, auth, concurrency, and credits |
| -k, --api-key | Use specific API key |
| --api-url | Use custom API URL (for self-hosted/local development) |
| -V, --version | Show version |
| -h, --help | Show help |

### Status
```bash
firecrawl --status
```

```
🔥 firecrawl cli v1.0.2
● Authenticated via stored credentials
Concurrency: 0/100 jobs (parallel scrape limit)
Credits: 500,000 / 1,000,000 (50% left this cycle)
```

---
## Output Handling
### Output Destinations
```bash
# Output to stdout (default)
firecrawl https://example.com

# Pipe to another command
firecrawl https://example.com | head -50

# Save to file
firecrawl https://example.com -o output.md

# JSON output
firecrawl https://example.com --format links --pretty
```

### Output Formats
- Single format: Outputs raw content (markdown text, HTML, etc.)
- Multiple formats: Outputs JSON with all requested data
```bash
# Raw markdown output
firecrawl https://example.com --format markdown

# JSON output with multiple formats
firecrawl https://example.com --format markdown,links,images
```

---
## Tips & Tricks
### Scrape Multiple URLs
```bash
# Using a loop
for url in https://example.com/page1 https://example.com/page2; do
  firecrawl "$url" -o "$(echo $url | sed 's/[^a-zA-Z0-9]/_/g').md"
done

# From a file
cat urls.txt | xargs -I {} firecrawl {} -o {}.md
```

### Combine with Other Tools
```bash
# Extract links and process with jq
firecrawl https://example.com --format links | jq '.links[].url'

# Convert to PDF (with pandoc)
firecrawl https://example.com | pandoc -o document.pdf

# Search within scraped content
firecrawl https://example.com | grep -i "keyword"
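
# A further combination (sketch): keep only off-site links, assuming the same .links[].url shape as the jq example above
firecrawl https://example.com --format links | jq -r '.links[].url' | grep -v '^https://example.com'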
```

### CI/CD Usage
```bash
# Set API key via environment
export FIRECRAWL_API_KEY=${{ secrets.FIRECRAWL_API_KEY }}
firecrawl crawl https://docs.example.com --wait -o docs.json

# Use self-hosted instance
export FIRECRAWL_API_URL=${{ secrets.FIRECRAWL_API_URL }}
firecrawl scrape https://example.com -o output.md
```

---
## Telemetry
The CLI collects anonymous usage data during authentication to help improve the product:
- CLI version, OS, and Node.js version
- Detected development tools (e.g., Cursor, VS Code, Claude Code)
No command data, URLs, or file contents are collected via the CLI.
To disable telemetry, set the environment variable:
```bash
export FIRECRAWL_NO_TELEMETRY=1
```

---
## Documentation
For more details, visit the [Firecrawl Documentation](https://docs.firecrawl.dev).