A tool to spider a website and check for broken links, resource load errors, and script errors.
```
npm install spider.js
```
#### Node Module
```js
var spider = require( "spider.js" );
spider( options );
```
#### Command line
```
spiderjs --url=http://example.com [, option1 ] [, option2 ]
```
#### Node Module
```js
var spider = require( "spider.js" );
spider( {
    url: "http://example.com",
    ignore: "error.html",
    redirectError: false
} );
```
#### Command line
```
spiderjs --url=http://example.com --ignore=error.html --redirectError=false
```
#### Gulp.js
```js
var gulp = require( "gulp" );
var spider = require( "spider.js" );

gulp.task( "spider", function() {
    spider( {
        url: "http://localhost:8000",
        ignore: "error.html"
    } );
} );
```
#### url ( Required )
Type: `String`
Default value: `'http://localhost'`

A valid URL for the website to spider. The URL may be local or remote.
#### ignore
Type: `String`
Default value: `''`

A string that will be used to create a regex for excluding URLs from being spidered.
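Since the option is described as a string turned into a regex, a minimal sketch of how that matching presumably works (the `shouldSkip` helper is hypothetical, not part of spider.js):

```javascript
// Hypothetical illustration: build a RegExp from the `ignore`
// string and skip any URL it matches (assumption about internals).
var ignore = "error.html";
var ignoreRegex = new RegExp( ignore );

function shouldSkip( url ) {
    return ignoreRegex.test( url );
}

// "http://example.com/error.html" would be skipped;
// "http://example.com/index.html" would still be spidered.
```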
#### output
Type: `String`
Default value: `false`

A file to which the test log and results are written.
#### clientError
Type: `Boolean`
Default value: `true`

Whether or not to check for 4XX (client) errors.
#### redirectError
Type: `Boolean`
Default value: `true`

Whether or not to check for 3XX (redirect) errors.
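As a rough illustration of the status classes these two flags toggle reporting for, a sketch (the helper names are hypothetical, not spider.js APIs):

```javascript
// Hypothetical classifiers for the HTTP status classes covered by
// redirectError (3XX) and clientError (4XX).
function isRedirectError( status ) {
    return status >= 300 && status < 400; // 3XX redirects
}

function isClientError( status ) {
    return status >= 400 && status < 500; // 4XX client errors
}
```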
#### resourceError
Type: `Boolean`
Default value: `true`

Whether or not to check for resource load errors.
#### scriptError
Type: `Boolean`
Default value: `true`

Whether or not to check for script errors.
#### linkOutputLimit
Type: `Number`
Default value: `10`

The maximum number of pages to show per link. This helps prevent excessive output when a link appears on every page of a site, such as in a header or footer.
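A sketch of the truncation this limit presumably implies (the `truncatePages` helper is hypothetical, not part of spider.js):

```javascript
// Hypothetical truncation: list at most `limit` pages per broken
// link and summarise the remainder in a single line.
function truncatePages( pages, limit ) {
    if ( pages.length <= limit ) {
        return pages;
    }
    return pages.slice( 0, limit ).concat(
        "...and " + ( pages.length - limit ) + " more pages"
    );
}
```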