## Setup
```sh
yarn add --dev jest-environment-puppeteer @telefonica/acceptance-testing
```

In your jest acceptance config file:

```js
module.exports = {
    // ...
    globalSetup: 'jest-environment-puppeteer/setup',
    globalTeardown: 'jest-environment-puppeteer/teardown',
    testEnvironment: 'jest-environment-puppeteer',
    // ...
};
```

Then create a `jest-puppeteer.config.js` file in your project that uses the config provided by this package:

```js
const config = require('@telefonica/acceptance-testing/jest-puppeteer.config.js');

module.exports = config;
```

This makes your tests run inside a dockerized Chromium when they run headless or in CI, and in a local Chromium (the one bundled with Puppeteer) when you run them with a UI, for example while debugging.
If you want to autostart a server before running the acceptance tests, you can configure it in your project's `package.json` as follows:

```json
{
    "acceptanceTests": {
        "devServer": {
            // This is the command that starts your dev server and the port where it runs
            "command": "yarn dev",
            "port": 3000
        },
        "ciServer": {
            // The same for the CI server (typically a production build)
            "command": "yarn start",
            "port": 3000
        }
    }
}
```
Additionally, you can include `host`, `protocol` and `path` parameters. The `path` will be used to check whether the server is ready:

```json
{
    "acceptanceTests": {
        "ciServer": {
            "command": "yarn dev",
            "host": "127.0.0.1",
            "port": 3000,
            "path": "api/health",
            "protocol": "https"
        }
    }
}
```
The command can be overridden by setting the `ACCEPTANCE_TESTING_SERVER_COMMAND` environment variable. For example:

```sh
ACCEPTANCE_TESTING_SERVER_COMMAND="yarn start" yarn test-acceptance
```
#### protocol

Type: `string` (`https`, `http`, `tcp`, `socket`), defaults to `tcp`, or `http` if `path` is set.

To wait for an HTTP or TCP endpoint before considering the server running, include `http` or `tcp` as protocol. Must be used in conjunction with `port`.
#### path

Type: `string`, defaults to `null`.

Path to the resource to wait for activity on before considering the server running. Must be used in conjunction with `host` and `port`.
GitHub Actions example:

```yaml
jobs:
    build:
        runs-on: self-hosted-novum
        container: docker.tuenti.io/service-inf/web-builder:pptr10.4-1.0.0
```
Important: you must use the same docker image version, and remember to update it in your CI config whenever you update the `@telefonica/acceptance-testing` package. This is the best way to make sure CI uses the same dockerized Chromium version that developers use on their laptops. Otherwise, the screenshot test snapshots may not match.
Example test:

```ts
import {openPage, screen, serverHostName} from '@telefonica/acceptance-testing';

test('example screenshot test', async () => {
    const page = await openPage({path: '/foo'});
    await screen.findByText('Some text in the page');
    expect(await page.screenshot()).toMatchImageSnapshot();
});
```
Just run:

```sh
yarn test-acceptance
```

or with UI:

```sh
yarn test-acceptance --ui
```
Important: the `test-acceptance` script needs a valid `jest.acceptance.config.js` file in your repo to work. That file should be configured with `jest-environment-puppeteer` as described previously. If for some reason you need a different jest config file name, you can manually set up the scripts in your `package.json`:

```json
"test-acceptance": "HEADLESS=1 jest --config your-jest-config.js",
"test-acceptance-ui": "jest --config your-jest-config.js",
```

Just take into account that `jest-environment-puppeteer` must always be configured in your jest config file. Also note that tests run in UI mode by default unless you set the `HEADLESS=1` env var.
You can intercept and mock requests in your acceptance tests with the `interceptRequest` function:
```ts
import {openPage, screen, interceptRequest} from '@telefonica/acceptance-testing';

test('example screenshot test', async () => {
    const imageSpy = interceptRequest((req) => req.url().endsWith('.jpg'));
    imageSpy.mockReturnValue({
        status: 200,
        contentType: 'image/jpeg',
        body: myMockedJpeg,
    });
    const page = await openPage({path: '/foo'});
    expect(imageSpy).toHaveBeenCalled();
});
```
To mock JSON API endpoints you can use `interceptRequest` too, but we also provide `createApiEndpointMock`, a more convenient wrapper over `interceptRequest`:
```ts
import {openPage, screen, createApiEndpointMock} from '@telefonica/acceptance-testing';

test('example screenshot test', async () => {
    const api = createApiEndpointMock({origin: 'https://my-api-endpoint.com'});
    const getSpy = api.spyOn('/some-path').mockReturnValue({a: 1, b: 2});
    const postSpy = api.spyOn('/other-path', 'POST').mockReturnValue({c: 3});
    const page = await openPage({path: '/foo'});
    expect(getSpy).toHaveBeenCalled();
    await page.click(await screen.findByRole('button', {name: 'Send'}));
    expect(postSpy).toHaveBeenCalled();
});
```
By default, every mocked response will have a `200` status code. If you want to mock any other status code:
```ts
import {openPage, screen, createApiEndpointMock} from '@telefonica/acceptance-testing';

test('example screenshot test', async () => {
    const api = createApiEndpointMock({origin: 'https://my-api-endpoint.com'});
    const postSpy = api
        .spyOn('/other-path', 'POST')
        .mockReturnValue({status: 500, body: {message: 'Internal error'}});
    const page = await openPage({path: '/foo'});
    await page.click(await screen.findByRole('button', {name: 'Send'}));
    expect(postSpy).toHaveBeenCalled();
});
```
- `createApiEndpointMock` automatically mocks CORS response headers and preflight (`OPTIONS`) requests for you.
- Both `interceptRequest` and `createApiEndpointMock` return a jest mock function.
You can also use globs for API paths and origins if you need.
Some examples:
```ts
// any origin (default)
createApiEndpointMock({origin: '*'});

// any port
createApiEndpointMock({origin: 'https://example.com:*'});

// any domain
createApiEndpointMock({origin: 'https://*:3000'});

// any subdomain
createApiEndpointMock({origin: 'https://*.example.com:3000'});

// any second level path
api.spyOn('/some/*/path');

// accept any params
api.spyOn('/some/path?*');

// accept any value in specific param
api.spyOn('/some/path?param=*');
```
:information_source: We use the glob-to-regexp lib
internally.
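The glob semantics above can be sketched roughly as follows. This is only an illustrative approximation (the library delegates to glob-to-regexp internally), but it shows the intent: `*` is a wildcard, everything else matches literally:

```typescript
// Rough sketch of how the origin/path globs above could be matched.
// Illustrative only: the library uses glob-to-regexp internally.
const globToRegExp = (glob: string): RegExp => {
    // escape regexp special chars, except '*' which becomes a wildcard
    const escaped = glob.replace(/[.+^${}()|[\]\\?]/g, '\\$&');
    return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
};

globToRegExp('https://*.example.com:3000').test('https://api.example.com:3000'); // true
globToRegExp('/some/*/path').test('/some/nested/path'); // true
globToRegExp('/some/path?param=*').test('/some/path?param=other'); // true
```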
:warning: Headless acceptance tests run in a dockerized Chromium, so you can't use localhost as origin. The origin will depend on the docker configuration and host OS. For simplicity, we recommend using `*` as the origin for tests that mock local APIs (eg. Next.js apps).
### Uploading files

Due to a Puppeteer bug or limitation, when Chromium is dockerized, the file to upload must exist in both the host and the container at the same path.

A helper function, `prepareFile`, is provided to facilitate this:

```js
await elementHandle.uploadFile(prepareFile('/path/to/file'));
```
### Code coverage

Set the `ACCEPTANCE_TESTING_COLLECT_COVERAGE` environment variable to enable coverage collection, or run with the `--coverage` flag.

The code must be instrumented with nyc, babel-plugin-istanbul or any istanbul-compatible tool.
After each test, the coverage information will be collected by reading the `window.__coverage__` object from the opened page.
To collect coverage from your backend, you must create an endpoint that serves the coverage information and specify it in the `coverageUrls` property in your config. The library will make a GET request to each URL and save the json report from the response as a file. The default value is `[]`.

The backend coverage will be collected after all the tests in the suite have run.

The response must be a JSON with the following structure: `{coverage: data}`.
Example route in Next.js to serve coverage information:

```ts
import {NextResponse} from 'next/server';

export const GET = (): NextResponse => {
    const coverage = (globalThis as any).__coverage__;
    if (coverage) {
        return NextResponse.json({coverage});
    }
    return NextResponse.json({error: 'Not found'}, {status: 404});
};

export const dynamic = 'force-dynamic';
```
The coverage information will be saved as json files. To change the destination folder, set the `coveragePath` property in your config. The default value is `reports/coverage-acceptance`. The json files will be stored inside `coveragePath`.
Example config:

```json
{
    "acceptanceTests": {
        "coveragePath": "coverage/acceptance",
        "coverageUrls": ["http://localhost:3000/api/coverage"]
    }
}
```
After running the tests, you can use a tool like nyc to generate the coverage reports.
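For example, a `package.json` script that turns the collected json files into an HTML report. The `--temp-dir` value assumes the default `reports/coverage-acceptance` output folder; adjust it if you changed `coveragePath`:

```json
{
    "scripts": {
        "coverage-report": "nyc report --reporter=html --temp-dir reports/coverage-acceptance"
    }
}
```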
### Debugging

If you see an acceptance test failing without any apparent reason, it could be caused by an unhandled error in the browser. You can inspect it by adding a listener to the `pageerror` event:
```ts
page.on('pageerror', (err) => {
    console.log('Unhandled browser error:', err);
    process.emit('uncaughtException', err);
});
```
Page errors can be ignored by setting the `ACCEPTANCE_TESTING_IGNORE_PAGE_ERRORS` environment variable. Do not enable this by default, as it could hide legitimate errors in your tests.
If your desktop environment uses Wayland, you may see the following error when running the tests with the `--ui` flag:

```
Error: Jest: Got error running globalSetup - /home/pladaria/bra/mistica-web/node_modules/jest-environment-puppeteer/setup.js, reason: ErrorEvent {
    "error": [Error: socket hang up],
    "message": "socket hang up",
    ...
```
To work around this issue, you can install a newer Chrome in the repo where you are using the acceptance-testing library:

- From the repo root: `npx @puppeteer/browsers install chrome@stable`
- Remove the chrome installed by puppeteer: `rm -rf node_modules/puppeteer/.local-chromium/linux-901912/chrome-linux`
- Move the downloaded chrome to the expected location: `mv chrome/linux-`
- Cleanup. Remove the `chrome` folder from the repo root: `rm -rf chrome`
Note that this browser will only be used when running the tests with the `--ui` flag. In headless mode, the dockerized Chromium will be used.
If you need additional logs to debug the acceptance-testing library, you can set the `ACCEPTANCE_TESTING_DEBUG` environment variable or run the acceptance-testing command with the `--debug` flag.
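For example, mirroring the env-var pattern shown earlier (`yarn test-acceptance` assumes the default script name in your project):

```sh
ACCEPTANCE_TESTING_DEBUG=1 yarn test-acceptance
```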