An MCP server enabling LLMs to write integration tests through live test environment interaction

Write complex integration tests with AI: AI assistants see your live page structure, execute code in the test, and iterate until the tests work
- Quick Start
- Why Testing MCP
- What Testing MCP Does
- Installation
- Configure MCP Server
- Connect From Tests
- MCP Tools
- Context and Available APIs
- Multi-Client Architecture
- CLI Commands
- Environment Variables
- FAQ
- How It Works
Step 1: Install
```bash
npm install -D testing-mcp
```
Step 2: Configure Model Context Protocol (MCP) server (e.g., in Claude Desktop config):
```json
{
  "testing-mcp": {
    "command": "npx",
    "args": ["-y", "testing-mcp@latest"]
  }
}
```
Step 3: Connect from your test:
```ts
import { render, screen, fireEvent } from "@testing-library/react";
import { connect } from "testing-mcp";

it("your test", async () => {
  render(<App />); // your component under test
  await connect({
    context: { screen, fireEvent },
  });
}, 600000); // 10 minute timeout for AI interaction
```
Step 4: Run with MCP enabled:
Prompt:
```
Please run the persistent test in the directory examples/react-jest:
TESTING_MCP=true RTL_SKIP_AUTO_CLEANUP=true npm test test/App.test.tsx
Then, use the testing-mcp tool to write the test by following these steps:
1. Click the button displaying "count is 0".
2. Verify that the button text changes to "count is 1".
3. Write the test code to a file.
```
Now your AI assistant can see the page structure, execute code in the test, and help you write assertions.
Traditional test writing is slow and frustrating:
- Write → Run → Read errors → Guess → Repeat - endless debugging cycles
- Add console.log statements manually - slow feedback loop
- AI assistants can't see your test state - you must describe everything
- Must manually explain available APIs - AI generates invalid code
Testing MCP solves this by giving AI assistants live access to your test environment:
- AI sees actual page structure (DOM), console logs, and rendered output
- AI executes code directly in tests without editing files
- AI knows exactly which testing APIs are available (screen, fireEvent, etc.)
- You iterate faster with real-time feedback instead of blind guessing
View live page structure snapshots, console logs, and test metadata through MCP tools. No more adding temporary console.log statements or running tests repeatedly.
Execute JavaScript/TypeScript directly in your running test environment. Test interactions, check page state, or run assertions without modifying test files.
Automatically collects and exposes available testing APIs (like screen, fireEvent, waitFor) with type information and descriptions. AI assistants know exactly what's available and generate valid code on the first try.
```ts
await connect({
  context: { screen, fireEvent, waitFor },
  contextDescriptions: {
    screen: "React Testing Library screen with query methods",
    fireEvent: "Function to trigger DOM events",
  },
});
```
Reliable WebSocket connections with session tracking, reconnection support, and automatic cleanup. Multiple tests can connect simultaneously.
Automatically disabled in continuous integration (CI) environments. The connect() call becomes a no-op when TESTING_MCP is not set, so your tests run normally in CI and production.
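The guard described above can be sketched roughly as follows. This is a hypothetical illustration of the no-op behavior, not the actual testing-mcp internals; the exact CI detection the package uses is an assumption here.

```typescript
// Hypothetical sketch: decide whether connect() should open the bridge.
// The real testing-mcp package may detect CI differently.
function shouldBridge(env: Record<string, string | undefined>): boolean {
  if (!env.TESTING_MCP) return false; // not opted in: connect() is a no-op
  if (env.CI) return false;           // disabled in CI environments
  return true;                        // local opt-in: open the WebSocket bridge
}
```

With this shape, `connect()` can return immediately in CI without touching the network, so the same setup file is safe everywhere.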
Built specifically for AI assistants and the Model Context Protocol. Provides structured metadata, clear tool descriptions, and predictable responses optimized for AI understanding.
Run multiple MCP clients simultaneously (Claude Desktop, Cursor, VS Code, etc.) without port conflicts. The daemon architecture automatically manages connections and port allocation.
Install dependencies and build the project before launching the MCP server or consuming the client helper.
```bash
npm install -D testing-mcp
# or
yarn add -D testing-mcp
# or
pnpm add -D testing-mcp
```
Node 18+ is required because the project uses ES modules and the WebSocket API.
Add the MCP server to your AI assistant's configuration (e.g., Claude Desktop, VSCode, etc.):
```json
{
  "testing-mcp": {
    "command": "npx",
    "args": ["-y", "testing-mcp@latest"]
  }
}
```
The server automatically discovers and connects to the bridge daemon, which manages WebSocket connections on dynamically assigned ports.
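Auto-discovery can be sketched as reading the registry file described later in this README (~/.testing-mcp/bridge.json). This helper is hypothetical; field names follow the registry example shown in the FAQ, but the real lookup logic lives inside the package.

```typescript
// Hypothetical sketch of registry-based daemon discovery.
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

function discoverDaemon(): { wsPort: number; token: string } | null {
  try {
    const raw = readFileSync(
      join(homedir(), ".testing-mcp", "bridge.json"),
      "utf8"
    );
    const registry = JSON.parse(raw);
    // wsPort/token match the registry example later in this README.
    return { wsPort: registry.wsPort, token: registry.token };
  } catch {
    return null; // no registry: caller can start the daemon or fall back
  }
}
```

Because both adapters and test clients read the same file, neither side needs a hard-coded port.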
Import the client helper in a Jest or Vitest setup hook to expose the page state to the MCP server.
Example Jest setup file (setupFilesAfterEnv):
```ts
// jest.setup.ts
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";

const timeout = 10 * 60 * 1000; // 10 minutes

if (process.env.TESTING_MCP) {
  jest.setTimeout(timeout);
}

afterEach(async () => {
  if (!process.env.TESTING_MCP) return;
  const state = expect.getState();
  await connect({
    filePath: state.testPath,
    context: {
      userEvent,
      screen,
      fireEvent,
    },
  });
}, timeout);
```
connect() can also be called directly inside a test file:
```ts
// example.test.tsx
import { render, screen, fireEvent, waitFor } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";

it(
  "logs the dashboard state",
  async () => {
    render(<Dashboard />); // your component under test
    await connect({
      filePath: import.meta.url,
      context: {
        screen,
        fireEvent,
        userEvent,
        waitFor,
      },
      // Optional: provide descriptions to help LLMs understand the APIs
      contextDescriptions: {
        screen: "React Testing Library screen with query methods",
        fireEvent: "Synchronous event triggering function",
        userEvent: "User interaction simulation library",
        waitFor: "Async utility for waiting on conditions",
      },
    });
  },
  1000 * 60 * 10 // 10 minute timeout
);
```
Set TESTING_MCP=true locally to enable the bridge. The helper no-ops when the variable is missing or the tests run in continuous integration.
> If an afterEach hook automatically clears the DOM before connect() runs, set RTL_SKIP_AUTO_CLEANUP=true.
Once connected, your AI assistant can use these tools:
| Tool | Purpose | When to Use |
| ---------------------- | ------------------------------------------------------ | --------------------------------------------------- |
| get_current_test_state | Fetch current page structure, console logs, and APIs | Inspect what's rendered and what APIs are available |
| execute_test_step | Run JavaScript/TypeScript code in the test environment | Trigger interactions, check state, run assertions |
| finalize_test | Remove connect() call and clean up test file | After test is complete and working |
| list_active_tests | Show all connected tests with timestamps | See which tests are available |
| get_generated_code | Extract code blocks inserted by the helper | Audit what code was added |
Returns the current test state including:
- Page structure snapshot: Current rendered HTML (DOM)
- Console logs: Captured console output
- Test metadata: Test file path, test name, session ID
- Available context: List of all APIs/variables available in execute_test_step, including their types, signatures, and descriptions
Response includes availableContext field:
```json
{
  "availableContext": [
    {
      "name": "screen",
      "type": "object",
      "description": "React Testing Library screen object"
    },
    {
      "name": "fireEvent",
      "type": "function",
      "signature": "(element, event) => ...",
      "description": "Function to trigger DOM events"
    }
  ]
}
```
Executes JavaScript/TypeScript code in the connected test client. The code can use any APIs listed in the availableContext field from get_current_test_state.
Best Practice: Always call get_current_test_state first to check which APIs are available before using execute_test_step.
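For illustration, a call to execute_test_step might look like the following. The argument schema shown here is an assumption for readability, not taken from the tool's actual input spec:

```json
{
  "tool": "execute_test_step",
  "arguments": {
    "code": "fireEvent.click(screen.getByText('count is 0')); return screen.getByRole('button').textContent;"
  }
}
```

The code string may use only the names exposed via the context option, which is exactly what get_current_test_state reports.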
Inject testing utilities so AI knows what's available:
The connect() function accepts a context object that exposes APIs to the test execution environment. This allows AI assistants to know exactly what APIs are available when generating code.
```ts
await connect({
  context: {
    screen, // React Testing Library queries
    fireEvent, // DOM event triggering
    userEvent, // User interaction simulation
    waitFor, // Async waiting utility
  },
});
```
Provide descriptions for each context key to help AI understand what's available:
Provide descriptions for each context key to help AI understand what's available:

```ts
await connect({
  context: {
    screen,
    fireEvent,
    waitFor,
    customHelper: async (text: string) => {
      const button = screen.getByText(text);
      fireEvent.click(button);
      await waitFor(() => {});
    },
  },
  contextDescriptions: {
    screen: "Query methods like getByText, findByRole, etc.",
    fireEvent: "Trigger DOM events: click, change, etc.",
    waitFor: "Wait for assertions: waitFor(() => expect(...).toBe(...))",
    customHelper: "async (text: string) => void - Clicks button by text",
  },
});
```
How it works: The client collects metadata (name, type, function signature) for each context key. When AI calls get_current_test_state, it receives the full list of available APIs with their metadata, enabling accurate code generation.
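The metadata collection step can be sketched as below. This is a simplified, hypothetical helper (name and details are assumptions, not the package's source); it mirrors the availableContext shape shown earlier in this README.

```typescript
// Hypothetical sketch of per-key context metadata collection.
type ContextMeta = {
  name: string;
  type: string;
  signature?: string;
  description?: string;
};

function buildContextMetadata(
  context: Record<string, unknown>,
  descriptions: Record<string, string> = {}
): ContextMeta[] {
  return Object.entries(context).map(([name, value]) => {
    const meta: ContextMeta = { name, type: typeof value };
    if (typeof value === "function") {
      // Derive a rough signature from the function's declared arity.
      const params = Array.from({ length: value.length }, (_, i) => `arg${i}`);
      meta.signature = `(${params.join(", ")}) => ...`;
    }
    if (descriptions[name]) meta.description = descriptions[name];
    return meta;
  });
}
```

A real implementation would likely also serialize defaults and handle classes, but the core idea is this name/type/signature triple.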
Testing MCP v0.4.0 introduces a Daemon + Adapter architecture that allows multiple MCP clients to work simultaneously without port conflicts.
```
┌───────────────────────────────────┐
│ MCP Client A (Claude Desktop)     │
│                                   │
│ testing-mcp serve (Adapter A) ────┼──┐
└───────────────────────────────────┘  │
                                       │
┌───────────────────────────────────┐  │
│ MCP Client B (Cursor)             │  │
│                                   │  │
│ testing-mcp serve (Adapter B) ────┼──┼── RPC ──► Bridge Daemon
└───────────────────────────────────┘  │           (Single Instance)
                                       │                │
┌───────────────────────────────────┐  │                │
│ MCP Client C (VS Code)            │  │                │
│                                   │  │                │
│ testing-mcp serve (Adapter C) ────┼──┘                │
└───────────────────────────────────┘                   │
                                          ┌─────────────┴───────────┐
                                          │ Test Client             │
                                          │ await connect()         │
                                          │ (Auto-discovers port)   │
                                          └─────────────────────────┘
```
| Component | Description |
| ----------------- | ----------------------------------------------------------------------------------------------------- |
| Bridge Daemon | Single background process that manages WebSocket connections from tests. Automatically assigns ports. |
| MCP Adapter | Lightweight stdio MCP server that each client spawns. Communicates with daemon via RPC. |
| Registry File | ~/.testing-mcp/bridge.json - Contains daemon port and auth token for auto-discovery. |
Test clients automatically discover the daemon's WebSocket port by reading the registry file. No manual port configuration required:
`ts`
// Port auto-discovered from ~/.testing-mcp/bridge.json
await connect({
context: { screen, fireEvent },
});
The daemon starts automatically when needed. For manual control:
```bash
# Start daemon manually
testing-mcp bridge
```
CLI Commands
```bash
testing-mcp [command] [options]

Commands:
  serve          Run as MCP adapter via stdio (default)
  bridge         Start the bridge daemon
  bridge stop    Stop the running daemon
  bridge status  Show daemon status

Options:
  --help, -h     Show this help message
  --version, -v  Show version number
```
Usage Examples
```bash
# Run as MCP server (for MCP client configuration)
testing-mcp

# Start the bridge daemon (for multi-client support)
testing-mcp bridge

# Check daemon status
testing-mcp bridge status
# Output:
#   Status: Running
#   PID: 12345
#   WebSocket: ws://127.0.0.1:53718
#   RPC: ws://127.0.0.1:53719
#   Version: 0.4.0
#   Uptime: 5m 32s
#   Connections: 2

# Stop the daemon
testing-mcp bridge stop
```
Environment Variables
- TESTING_MCP: When set to true, enables the WebSocket bridge to the MCP server. Leave unset to disable (automatically disabled in CI environments).
- TESTING_MCP_PORT: Overrides the WebSocket port for test clients. In most cases, this is not needed as ports are auto-discovered from the daemon registry.

Port Resolution Order
The connect() function resolves the WebSocket port in this order:
1. Explicit port option: connect({ port: 3001 })
2. Environment variable: TESTING_MCP_PORT=3001
3. Registry file: Auto-discovered from ~/.testing-mcp/bridge.json
4. Default fallback: 3001

FAQ
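The resolution order can be sketched as a pure function. This is an illustrative simplification of connect()'s behavior, not the library's actual source; the registry field name wsPort follows the registry example in this README.

```typescript
// Simplified sketch of the four-step port resolution order.
interface RegistryFile {
  wsPort: number;
}

function resolvePort(
  options: { port?: number },
  env: Record<string, string | undefined>,
  registry: RegistryFile | null
): number {
  if (options.port !== undefined) return options.port;           // 1. explicit option
  if (env.TESTING_MCP_PORT) return Number(env.TESTING_MCP_PORT); // 2. environment variable
  if (registry) return registry.wsPort;                          // 3. registry file
  return 3001;                                                   // 4. default fallback
}
```

In practice you rarely reach step 4: the daemon writes the registry file on startup, so step 3 almost always resolves.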
Why does testing-mcp fail to start in Cursor IDE?
If you see that testing-mcp fails to start in Cursor IDE, you can check detailed logs:
In Cursor IDE: Go to Output > MCP:user-testing-mcp to see detailed error information.
This will show you the exact error messages and help diagnose startup issues.
What if the port is already in use?

With the new daemon architecture (v0.4.0+), port conflicts are automatically resolved. The daemon uses dynamic port allocation (port=0), so it always finds an available port.

If you're using an older version or manual port configuration:
1. Upgrade to v0.4.0+ for automatic port management
2. Or kill the process using the port:
```bash
# macOS/Linux
lsof -ti:3001 | xargs kill -9
```

Can I use multiple MCP clients at the same time?
Yes! The daemon architecture (v0.4.0+) supports multiple MCP clients:
- Claude Desktop, Cursor, VS Code can all connect at the same time
- Each adapter connects to the shared daemon via RPC
- No port conflicts - the daemon handles all connections
Can I run tests in watch mode?
Testing MCP currently supports only one WebSocket connection per test at a time.
When your MCP client runs the same test command multiple times (like in watch mode), each run creates a new WebSocket connection. This can cause conflicts and unexpected behavior.
Recommendation: Run tests individually without watch mode when using TESTING_MCP=true.

Why do my tests time out?

If tests with TESTING_MCP=true time out quickly, you need to increase the test timeout. AI assistants need time to inspect state and write tests - usually 5+ minutes minimum.
Set timeout in your test:
```ts
it("your test", async () => {
  render(<App />); // your component under test
  await connect({ context: { screen, fireEvent } });
}, 600000); // 10 minutes = 600000ms
```

Should I call connect() from a setup file?

Yes, if your tests don't automatically clear the DOM between tests.
By placing connect() in an afterEach hook in your setup file, you can make testing completely non-invasive and easier for automated test writing.

Example Jest setup file (setupFilesAfterEnv):

```ts
// jest.setup.ts
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";

const timeout = 10 * 60 * 1000; // 10 minutes

if (process.env.TESTING_MCP) {
  jest.setTimeout(timeout);
}

afterEach(async () => {
  if (!process.env.TESTING_MCP) return;
  const state = expect.getState();
  await connect({
    filePath: state.testPath,
    context: {
      userEvent,
      screen,
      fireEvent,
    },
  });
}, timeout);
```

Example Vitest setup file (setupFiles):

```ts
// vitest.setup.ts
// vitest.setup.ts
import { beforeEach, afterEach, expect } from "vitest";
import { screen, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { connect } from "testing-mcp";

const timeout = 10 * 60 * 1000; // 10 minutes

beforeEach((context) => {
if (!process.env.TESTING_MCP) return;
Object.assign(context.task, {
timeout,
});
});
afterEach(async () => {
if (!process.env.TESTING_MCP) return;
const state = expect.getState();
await connect({
filePath: state.testPath,
context: {
userEvent,
screen,
expect,
fireEvent,
},
});
}, timeout);
```

Important: This approach only works if your afterEach hooks don't automatically remove the DOM (e.g., you're not calling cleanup() before connect()).

Where is the registry file stored?
The registry file stores daemon connection info for auto-discovery:
| Platform | Path |
| ----------- | ---------------------------------------- |
| macOS/Linux | ~/.testing-mcp/bridge.json |
| Windows | %LOCALAPPDATA%\testing-mcp\bridge.json |

Example registry content:

```json
{
"pid": 12345,
"wsPort": 53718,
"rpcPort": 53719,
"token": "abc123...",
"startedAt": "2024-01-15T10:30:00.000Z",
"version": "0.4.0",
"protocol": 1
}
```

How It Works
Testing MCP uses a Daemon + Adapter architecture for robust multi-client support:
Communication Flow
```
┌──────────────────┐    ┌──────────────────┐    ┌──────────────────┐
│  Node.js Test    │    │  Bridge Daemon   │    │     LLM/MCP      │
│    Process       │    │   (Singleton)    │    │     Client       │
└────────┬─────────┘    └────────┬─────────┘    └────────┬─────────┘
         │                       │                       │
         │                       │    ┌──────────────────┤
         │                       │    │   MCP Adapter    │
         │                       ├────┤   (per client)   │
         │                       │RPC └──────────────────┘
         │                       │                       │
         │ 1. await connect()    │                       │
         ├──────────────────────►│                       │
         │ (Auto-discovers port) │                       │
         │                       │                       │
         │ 2. WebSocket: "ready" │   3. MCP Tool Call    │
         │ {dom, logs, context}  │   (Stdio/JSON-RPC)    │
         ├──────────────────────►│◄──────────────────────┤
         │                       │                       │
         │ 4. "connected"        │ 5. RPC: getCurrentState
         │ {sessionId}           │◄──────────────────────┤
         │◄──────────────────────┤                       │
         │                       │ 6. Returns state      │
         │ Test waits...         ├──────────────────────►│
         │                       │                       │
         │                       │ 7. RPC: sendExecute   │
         │ 8. "execute"          │◄──────────────────────┤
         │ {code, executionId}   │                       │
         │◄──────────────────────┤                       │
         │                       │                       │
         │ Runs code with context│                       │
         │                       │                       │
         │ 9. "executed"         │                       │
         │ {result, newState}    │ 10. Returns result    │
         ├──────────────────────►├──────────────────────►│
         │                       │                       │
         │                       │ 11. finalize_test     │
         │ 12. "close"           │◄──────────────────────┤
         │◄──────────────────────┤ (Adapter edits file)  │
         │                       │                       │
         │ Test completes        │ 13. Returns success   │
         │                       ├──────────────────────►│
         ▼                       ▼                       ▼
```

Components
| Component | Responsibility |
| ----------------- | ----------------------------------------------------------------------------------- |
| Bridge Daemon | Singleton process managing WebSocket connections, session state, and code execution |
| MCP Adapter | Per-client stdio MCP server that forwards tool calls to daemon via RPC |
| Registry File | Stores daemon port/token for auto-discovery by adapters and test clients |
| Test Client | connect() function that establishes a WebSocket to the daemon |

| Communication | Protocol | Purpose |
| ---------------- | -------------- | -------------------------- |
| Test ↔ Daemon | WebSocket | State sync, code execution |
| Adapter ↔ Daemon | WebSocket RPC | Tool call forwarding |
| Client ↔ Adapter | Stdio JSON-RPC | MCP protocol |
1. No port conflicts: Daemon uses dynamic port allocation
2. Multi-client support: Multiple AI assistants can connect simultaneously
3. Auto-discovery: Test clients find daemon automatically via registry
4. Graceful lifecycle: Daemon starts on-demand, can be managed manually
5. Security: Token-based authentication between components
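The token check behind point 5 can be sketched as follows. This is a hypothetical illustration (the daemon's actual handshake and wire format are not documented here); it only shows the general idea of comparing a presented token against the registry token without leaking timing information.

```typescript
// Hypothetical sketch of token verification between adapter and daemon.
function authorize(registryToken: string, presentedToken: string): boolean {
  if (registryToken.length !== presentedToken.length) return false;
  // Constant-time comparison: examine every character regardless of mismatches.
  let diff = 0;
  for (let i = 0; i < registryToken.length; i++) {
    diff |= registryToken.charCodeAt(i) ^ presentedToken.charCodeAt(i);
  }
  return diff === 0;
}
```

Because the registry file is only readable by the local user, the token effectively restricts daemon access to processes running under the same account.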
License
MIT