Performance testing library
```
npm install kelonio
```

Kelonio also works in the browser (as long as you use a tool like Webpack or Browserify), and it comes with built-in reporters for the following test frameworks, without any direct dependency on them:

* Jest
* Karma
* Mocha
For simple, one-off checks, like in the console or a script, use the measure function:
```typescript
import { measure } from "kelonio";
import axios from "axios";

measure(() => axios.get("http://www.httpbin.org/get"))
    .then(measurement => console.log(`Mean: ${measurement.mean} ms`));
```
By default, the check is repeated 100 times, but you can customize this.
If you measure a function that returns a promise,
Kelonio will automatically measure the time until it's resolved as well.
The resulting measurement exposes various stats,
like mean time, maximum time, and standard deviation.
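To make those stats concrete, here is a rough sketch, in plain TypeScript with no kelonio dependency, of what such a measurement loop computes. The helper name `timeIt` is invented for illustration; kelonio's internals may differ.

```typescript
// Illustrative sketch (not kelonio's actual implementation): run the task
// N times (default 100, matching kelonio's default), awaiting any returned
// promise, then derive summary statistics from the samples.
async function timeIt(
    task: () => void | Promise<void>,
    iterations = 100,
): Promise<{ mean: number; max: number; standardDeviation: number }> {
    const durations: number[] = [];
    for (let i = 0; i < iterations; i++) {
        const start = performance.now();
        await task(); // if the task returns a promise, resolution time counts too
        durations.push(performance.now() - start); // milliseconds
    }
    const mean = durations.reduce((a, b) => a + b, 0) / durations.length;
    const max = Math.max(...durations);
    const variance =
        durations.reduce((a, b) => a + (b - mean) ** 2, 0) / durations.length;
    return { mean, max, standardDeviation: Math.sqrt(variance) };
}
```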
For aggregating results from multiple measurements,
create a Benchmark and use its record method to store the state:
```typescript
import { Benchmark, Criteria } from "kelonio";

const benchmark = new Benchmark();
await benchmark.record("RegExp#test", () => /o/.test("Hello World"));
await benchmark.record("String#indexOf", () => "Hello World!".indexOf("o") > -1);

const fastest = benchmark.find(Criteria.Fastest);
console.log(`Fastest: ${fastest?.description} with mean ${fastest?.mean} ms`);
// Fastest: String#indexOf with mean 0.004199049999999999 ms
```
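The find call above can be pictured as a simple selection over the recorded measurements. This standalone sketch (the names `Recorded` and `findFastest` are invented for illustration, not kelonio's API) shows the idea behind Criteria.Fastest:

```typescript
// Standalone sketch of the idea behind find(Criteria.Fastest):
// choose the recorded measurement with the lowest mean.
interface Recorded {
    description: string;
    mean: number;
}

function findFastest(records: Recorded[]): Recorded | undefined {
    return records.reduce<Recorded | undefined>(
        (best, r) => (best === undefined || r.mean < best.mean ? r : best),
        undefined,
    );
}

const fastest = findFastest([
    { description: "RegExp#test", mean: 0.0104 },
    { description: "String#indexOf", mean: 0.0042 },
]);
console.log(fastest?.description); // String#indexOf
```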
For aggregating results inside of a test framework,
use the default benchmark instance and its record method.
Click to expand an example:
Example: Jest

Jest doesn't currently expose a way to get each individual test's name while running, so you have to provide a description to record().

```typescript
import { benchmark } from "kelonio";
import axios from "axios";

describe("An HTTP client", () => {
    it("can send GET requests", async () => {
        await benchmark.record(
            ["HTTP client", "GET"],
            () => axios.get("http://www.httpbin.org/get"),
        );
    }, 30_000);

    it("can send POST requests", async () => {
        await benchmark.record(
            ["HTTP client", "POST"],
            () => axios.post("http://www.httpbin.org/post"),
            { iterations: 10, meanUnder: 10 },
        );
    }, 30_000);
});
```
```
FAIL ./index.test.ts (16.576s)
  An HTTP client
    √ can send GET requests (8332ms)
    × can send POST requests (508ms)

  ● An HTTP client › can send POST requests

    Mean time of 49.43073600000001 ms exceeded threshold of 10 ms

Test Suites: 1 failed, 1 total
Tests:       1 failed, 1 passed, 2 total
Snapshots:   0 total
Time:        18.296s

- - - - - - - - - - - - - - - - - Performance - - - - - - - - - - - - - - - - -
HTTP client:
  GET:
    83.25152 ms (+/- 58.77542 ms) from 100 iterations
  POST:
    49.43074 ms (+/- 2.39217 ms) from 10 iterations
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
```
The first time on each line is the mean duration, and the +/- time is the margin of error at a 95% confidence level.
Example: Mocha

The Mocha reporter can automatically infer the descriptions from the test names, but you're still free to pass additional descriptions to record(), such as if one test performs several different measurements.

```typescript
import { benchmark } from "kelonio";
import axios from "axios";

describe("An HTTP client", () => {
    it("can send GET requests", async function (this: Mocha.Test) {
        this.timeout(30_000);
        await benchmark.record(() => axios.get("http://www.httpbin.org/get"));
    });

    it("can send POST requests", async function (this: Mocha.Test) {
        this.timeout(30_000);
        await benchmark.record(
            () => axios.post("http://www.httpbin.org/post"),
            { iterations: 10, meanUnder: 10 },
        );
    });
});
```
```
  An HTTP client
    √ can send GET requests
    1) can send POST requests

  1 passing (8332ms)
  1 failing

  1) An HTTP client
       can send POST requests:
     Error: Mean time of 49.43073600000001 ms exceeded threshold of 10 ms

- - - - - - - - - - - - - - - - - Performance - - - - - - - - - - - - - - - - -
An HTTP client:
  can send GET requests:
    83.25152 ms (+/- 58.77542 ms) from 100 iterations
  can send POST requests:
    49.43074 ms (+/- 2.39217 ms) from 10 iterations
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
```
The first time on each line is the mean duration, and the +/- time is the margin of error at a 95% confidence level.
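As a rough illustration of where such a +/- figure can come from: under a normal approximation, the margin of error at 95% confidence is the critical value 1.96 times the standard error of the mean. This is a sketch of the general statistical formula; kelonio's exact method may differ (e.g. by using a t-distribution for small samples).

```typescript
// Margin of error at ~95% confidence, assuming a normal approximation:
// z * (standard deviation / sqrt(number of samples)), with z ≈ 1.96.
function marginOfError(standardDeviation: number, iterations: number): number {
    return 1.96 * (standardDeviation / Math.sqrt(iterations));
}

// More iterations shrink the interval for the same spread:
console.log(marginOfError(2, 4));   // 1.96
console.log(marginOfError(2, 100)); // ≈ 0.392
```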
Refer to the examples folder for sample projects
that integrate Kelonio with different test frameworks.
* All items that can be imported from "kelonio" and their public attributes.
* The location of reporter modules:
  * `node_modules/kelonio/out/plugin/jestReporter.js`
  * `node_modules/kelonio/out/plugin/jestReporterSetup.js`
  * `node_modules/kelonio/out/plugin/karmaReporter.js`
  * `node_modules/kelonio/out/plugin/karmaReporterSetup.js`
  * `node_modules/kelonio/out/plugin/mochaReporter.js`
* Benchmark.js:
  * Test functions must be written in `function` style (rather than as arrow functions) because they need access to `this`, which is not accounted for by `@types/benchmark`.
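To illustrate the `this` issue in general terms: arrow functions capture `this` lexically and cannot be rebound by a framework calling `fn.call(context)`, whereas `function` expressions can. This is a minimal standalone demo of that language behavior, unrelated to Benchmark.js's actual API:

```typescript
// A framework-style runner that invokes the callback with a bound context.
function runWithContext(fn: (this: { timeout: number }) => number): number {
    return fn.call({ timeout: 30_000 });
}

// A `function` expression sees the rebound `this`; an arrow function would
// instead capture the enclosing scope's `this` and never see the context.
const viaFunction = runWithContext(function (this: { timeout: number }) {
    return this.timeout;
});
console.log(viaFunction); // 30000
```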
* Nanobench:
  * Requires defining tests in its own framework.
  * The CLI can only handle JavaScript code, so in a TypeScript project, you either have to compile the tests in addition to the main source or you have to use `ts-node` (which appears to degrade the performance results).
  * No typings available for TypeScript.
* Matcha:
  * Requires defining tests in its own framework.
  * The CLI can only handle JavaScript code, so in a TypeScript project, you either have to compile the tests in addition to the main source or you have to use `ts-node` (which appears to degrade the performance results).