A [fs.ReadStream](https://nodejs.org/api/fs.html#fs_class_fs_readstream) that supports seeking to arbitrary locations within a file.
```
npm install fs-readstream-seek
```

Note that this stream is _only_ appropriate for files where positioned
reads are supported. For abstract filesystem objects where you wish
to do ordered asynchronous reads without specifying position (for
example, FIFO devices), use fs.ReadStream instead.
```js
const ReadStream = require('fs-readstream-seek')
const s = new ReadStream('some-filename.db')
s.seek(123)
s.once('data', chunk => {
  console.log('the data at position 123 is %s', chunk)
})
```
Everything on fs.ReadStream is supported, plus:
* `stream.seek(n)` Seek to a position in the file. If the position is
  within the portion of the file that has already been read into
  memory, no new read is triggered, and the in-memory buffer is
  updated. If the position is beyond the end of the buffer, or before
  the beginning of the buffer, then the buffer is discarded and a new
  `fs.read()` is made at the appropriate location.
* `stream.readPos` Read-only indication of where in the file the next
  `read()` will occur at. This is always updated when `stream.seek(n)`
  is called. Note that this is _not_ the position where
  the current buffer in a 'data' event was found, but rather the
  position where the _next_ data chunk will be read from. You can,
  however, get that value by subtracting the chunk length from the
  `stream.readPos` value.
  ```javascript
  stream.on('data', chunk => {
    console.error('position=%d data=%j',
                  stream.readPos - chunk.length,
                  chunk.toString())
  })
  ```
* `stream.filePos` Read-only indication of where the read buffer is
  currently filled up to, and thus where the next `fs.read()` will
  occur within the file. This may be updated by `stream.seek(n)`, if
  necessary, and will naturally increase as more data is pulled into
  the buffer. (See the sketch after this list.)
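
For example, here is a minimal sketch of how these properties interact
(assuming `'some-filename.db'` exists and is larger than the position
seeked to):

```js
const ReadStream = require('fs-readstream-seek')
const s = new ReadStream('some-filename.db')

s.once('data', chunk => {
  // readPos points at the byte the *next* chunk will come from, so the
  // chunk just received started at readPos - chunk.length.
  console.log('chunk started at %d', s.readPos - chunk.length)
  console.log('buffer is filled up to %d', s.filePos)

  // Seeking outside the buffered region discards the buffer and causes
  // a new fs.read() at the new location.
  s.seek(1024 * 1024)
  console.log('next read will return data from %d', s.readPos)
})
```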
By convention, when a Readable stream emits an 'end' event, it is an
indication that no more data will be made available. Thus `'end'` is
always a single-time event per-stream. Likewise, `close` and `open`
events on `fs` streams are generally unique in the lifetime of a
stream.
However, when you seek to a new location within a file, it resets the
EOF handling. If the end of the file was read into the buffer, and
thus automatically closed, then it will be re-opened if necessary when
your program calls `stream.seek(n)`.
So you can do this to read a file and print to stdout repeatedly:
```js
const ReadStream = require('fs-readstream-seek')
const s = new ReadStream('some-filename.txt')
s.on('end', _ => {
  s.seek(0)
})
s.on('data', c => {
  process.stdout.write(c)
})
```
In this case, `end` will be emitted every time the stream gets to the
end of the data. When `s.seek(0)` is called, the file is re-opened
and starts reading from the beginning again.
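
A small variation on the example above (same placeholder filename):
count the `end` events and only seek back once, so the file is printed
exactly twice and the stream then stays ended:

```js
const ReadStream = require('fs-readstream-seek')
const s = new ReadStream('some-filename.txt')
let passes = 0

s.on('data', c => process.stdout.write(c))

s.on('end', _ => {
  // 'end' fires at the end of every pass; seek back only after the
  // first one, so the second pass is the last.
  if (++passes < 2) s.seek(0)
})
```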
Because it's a very common convention, 'end' and 'close' events
cause a `readable.pipe(writable)` chain to be disassembled. If this
is a thing that your program will be triggering by seek()-ing
backwards in the file after it has emitted `'end'`, then you are
strongly advised _not_ to `pipe()` that data anywhere, and instead
consume it directly using `'data'` events or `read()` method calls.
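
As a sketch of that advice (the file names here are placeholders):
forward `'data'` chunks to the destination by hand instead of piping,
so the repeated `'end'` events never unhook anything:

```js
const fs = require('fs')
const ReadStream = require('fs-readstream-seek')

const src = new ReadStream('some-filename.db')
const dest = fs.createWriteStream('some-output.txt')
let passes = 0

// Not src.pipe(dest): the 'end' emitted before a backwards seek() would
// disassemble a pipe() chain. Hand the chunks over directly instead.
src.on('data', chunk => dest.write(chunk))

src.on('end', _ => {
  // The 'data' handler keeps working across the seek; once we are done
  // seeking, finish the destination ourselves.
  if (++passes < 2) src.seek(0)
  else dest.end()
})
```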