<h1><div align="center"> <img alt="pipecat react" width="500px" height="auto" src="https://raw.githubusercontent.com/pipecat-ai/pipecat-client-web/main/pipecat-react.png"> </div></h1>
## Install

```bash
npm install @pipecat-ai/client-js @pipecat-ai/client-react
```
## Quick Start

Instantiate a `PipecatClient` instance and pass it down to the `PipecatClientProvider`. Render the `PipecatClientAudio` component to have audio output set up automatically.
```tsx
import { PipecatClient } from "@pipecat-ai/client-js";
import { PipecatClientAudio, PipecatClientProvider } from "@pipecat-ai/client-react";

const client = new PipecatClient({
  transport: myTransportType.create(), // replace with your transport instance
});

render(
  <PipecatClientProvider client={client}>
    <MyApp />
    <PipecatClientAudio />
  </PipecatClientProvider>
);
```
We recommend connecting the client in response to a button click, so here's a minimal implementation to get started:
```tsx
import { usePipecatClient } from "@pipecat-ai/client-react";

const MyApp = () => {
  const client = usePipecatClient();

  return <button onClick={() => client?.connect()}>Connect</button>;
};
```
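Once connected, the same client instance can disconnect again. Below is a minimal sketch of a connect/disconnect toggle combining `usePipecatClient` with the `usePipecatClientTransportState` hook documented further down; it assumes no extra parameters need to be passed to `connect()`, which depends on how your transport is configured.

```tsx
import {
  usePipecatClient,
  usePipecatClientTransportState,
} from "@pipecat-ai/client-react";

const ConnectToggle = () => {
  const client = usePipecatClient();
  const transportState = usePipecatClientTransportState();

  // Assumption: "disconnected" is the idle transport state.
  const isIdle = transportState === "disconnected";

  return (
    <button onClick={() => (isIdle ? client?.connect() : client?.disconnect())}>
      {isIdle ? "Connect" : "Disconnect"}
    </button>
  );
};
```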
## Components

### PipecatClientProvider

The root component for providing Pipecat client context to your application.
#### Props
- client (PipecatClient, required): A singleton instance of PipecatClient.
```jsx
<PipecatClientProvider client={client}>
  {/* Child components */}
</PipecatClientProvider>
```
### PipecatClientAudio

Creates a new `<audio>` element to play audio output from the bot.

#### Props

No props.

```jsx
<PipecatClientAudio />
```
### PipecatClientVideo

Creates a new `<video>` element that renders either the bot or local participant's video track.

#### Props

- participant ("local" | "bot"): Defines which participant's video track is rendered.
- fit ("contain" | "cover", optional): Defines whether the video should be fully contained or cover the box. Default: 'contain'.
- mirror (boolean, optional): Forces the video to be mirrored, if set.
- onResize(dimensions: object) (function, optional): Triggered whenever the video's rendered width or height changes. Returns the video's native width, height and aspectRatio.
```jsx
<PipecatClientVideo
  participant="local"
  fit="cover"
  mirror
  onResize={({ aspectRatio, height, width }) => {
    console.log("Video dimensions changed:", { aspectRatio, height, width });
  }}
/>
```
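A common use of `onResize` is keeping the surrounding box sized to the video's native aspect ratio. A small sketch using only the documented props (the wrapping `<div>` and state handling are illustrative):

```jsx
import { useState } from "react";
import { PipecatClientVideo } from "@pipecat-ai/client-react";

function BotVideo() {
  const [aspectRatio, setAspectRatio] = useState(16 / 9);

  return (
    <div style={{ aspectRatio }}>
      <PipecatClientVideo
        participant="bot"
        onResize={({ aspectRatio }) => setAspectRatio(aspectRatio)}
      />
    </div>
  );
}
```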
### PipecatClientCamToggle

This is a stateful headless component that exposes the user's camEnabled state and an onClick handler to toggle the state.

#### Props

- onCamEnabledChanged(enabled: boolean) (function, optional): Triggered when the user's camEnabled state changes.
- disabled (boolean, optional): Disables the cam toggle.

```jsx
<PipecatClientCamToggle>
  {({ disabled, isCamEnabled, onClick }) => (
    <button disabled={disabled} onClick={onClick}>
      {isCamEnabled ? "Disable camera" : "Enable camera"}
    </button>
  )}
</PipecatClientCamToggle>
```
### PipecatClientMicToggle

This is a stateful headless component that exposes the user's micEnabled state and an onClick handler to toggle the state.

#### Props

- onMicEnabledChanged(enabled: boolean) (function, optional): Triggered when the user's micEnabled state changes.
- disabled (boolean, optional): Disables the mic toggle.

```jsx
<PipecatClientMicToggle>
  {({ disabled, isMicEnabled, onClick }) => (
    <button disabled={disabled} onClick={onClick}>
      {isMicEnabled ? "Mute microphone" : "Unmute microphone"}
    </button>
  )}
</PipecatClientMicToggle>
```
### VoiceVisualizer

Renders a visual representation of audio input levels on a `<canvas>` element.

#### Props

- participantType (string, required): The participant type to visualize audio for.
- backgroundColor (string, optional): The background color of the canvas. Default: 'transparent'.
- barColor (string, optional): The color of the audio level bars. Default: 'black'.
- barCount (number, optional): The number of bars to render. Default: 5.
- barGap (number, optional): The gap between bars in pixels. Default: 12.
- barLineCap ('round' | 'square', optional): The line cap for each bar. Default: 'round'.
- barOrigin ('bottom' | 'center' | 'top', optional): The origin from which the bars grow to full height. Default: 'center'.
- barWidth (number, optional): The width of each bar in pixels. Default: 30.
- barMaxHeight (number, optional): The maximum height of each bar at full volume, in pixels. Default: 120.
```jsx
<VoiceVisualizer
  participantType="bot"
  backgroundColor="white"
  barColor="black"
  barGap={1}
  barWidth={4}
  barMaxHeight={24}
/>
```
## Hooks

### usePipecatClient

Provides access to the PipecatClient instance originally passed to PipecatClientProvider.
```jsx
import { usePipecatClient } from "@pipecat-ai/client-react";

function MyComponent() {
  const pcClient = usePipecatClient();
}
```
### useRTVIClientEvent

Allows subscribing to RTVI events.

It is advised to wrap handlers with useCallback to keep the handler reference stable.

#### Arguments

- event (RTVIEvent, required)
- handler (function, required)
```jsx
import { useCallback } from "react";
import { RTVIEvent, TransportState } from "@pipecat-ai/client-js";
import { useRTVIClientEvent } from "@pipecat-ai/client-react";

function EventListener() {
  useRTVIClientEvent(
    RTVIEvent.TransportStateChanged,
    useCallback((transportState: TransportState) => {
      console.log("Transport state changed to", transportState);
    }, [])
  );
}
```
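The same pattern applies to other events. As a sketch (assuming the BotReady event exposed on `RTVIEvent` in `@pipecat-ai/client-js`), a component can flip local state once the bot is ready:

```jsx
import { useCallback, useState } from "react";
import { RTVIEvent } from "@pipecat-ai/client-js";
import { useRTVIClientEvent } from "@pipecat-ai/client-react";

function BotReadyIndicator() {
  const [botReady, setBotReady] = useState(false);

  // Assumption: RTVIEvent.BotReady fires when the bot is ready to interact.
  useRTVIClientEvent(
    RTVIEvent.BotReady,
    useCallback(() => setBotReady(true), [])
  );

  return <span>{botReady ? "Bot ready" : "Waiting for bot…"}</span>;
}
```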
### usePipecatClientCamControl

Allows you to control the user's camera state.
```jsx
import { usePipecatClientCamControl } from "@pipecat-ai/client-react";

function CustomCamToggle() {
  const { enableCam, isCamEnabled } = usePipecatClientCamControl();
}
```
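A custom toggle built on this hook might look like the sketch below; it assumes `enableCam` accepts a boolean, mirroring the headless PipecatClientCamToggle component above.

```jsx
import { usePipecatClientCamControl } from "@pipecat-ai/client-react";

function CustomCamToggle() {
  const { enableCam, isCamEnabled } = usePipecatClientCamControl();

  return (
    <button onClick={() => enableCam(!isCamEnabled)}>
      {isCamEnabled ? "Disable camera" : "Enable camera"}
    </button>
  );
}
```

The usePipecatClientMicControl hook below can be used the same way with `enableMic` / `isMicEnabled`.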
### usePipecatClientMicControl

Allows you to control the user's microphone state.
```jsx
import { usePipecatClientMicControl } from "@pipecat-ai/client-react";

function CustomMicToggle() {
  const { enableMic, isMicEnabled } = usePipecatClientMicControl();
}
```
### usePipecatClientMediaDevices

Manage and list available media devices.
```jsx
import { usePipecatClientMediaDevices } from "@pipecat-ai/client-react";

function DeviceSelector() {
  const {
    availableCams,
    availableMics,
    selectedCam,
    selectedMic,
    updateCam,
    updateMic,
  } = usePipecatClientMediaDevices();

  return (
    <>
      <select
        name="cam"
        onChange={(ev) => updateCam(ev.target.value)}
        value={selectedCam?.deviceId}
      >
        {availableCams.map((cam) => (
          <option key={cam.deviceId} value={cam.deviceId}>
            {cam.label}
          </option>
        ))}
      </select>
      <select
        name="mic"
        onChange={(ev) => updateMic(ev.target.value)}
        value={selectedMic?.deviceId}
      >
        {availableMics.map((mic) => (
          <option key={mic.deviceId} value={mic.deviceId}>
            {mic.label}
          </option>
        ))}
      </select>
    </>
  );
}
```
### usePipecatClientMediaTrack

Access audio and video tracks.

#### Arguments

- trackType ("audio" | "video", required)
- participantType ("bot" | "local", required)
```jsx
import { usePipecatClientMediaTrack } from "@pipecat-ai/client-react";

function MyTracks() {
  const localAudioTrack = usePipecatClientMediaTrack("audio", "local");
  const botAudioTrack = usePipecatClientMediaTrack("audio", "bot");
}
```
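The hook returns the underlying MediaStreamTrack (it may be null until a track is available), so wiring it to a media element is up to you. A sketch, assuming you want to play the bot's audio through your own `<audio>` element instead of PipecatClientAudio:

```jsx
import { useEffect, useRef } from "react";
import { usePipecatClientMediaTrack } from "@pipecat-ai/client-react";

function BotAudio() {
  const audioRef = useRef(null);
  const botAudioTrack = usePipecatClientMediaTrack("audio", "bot");

  useEffect(() => {
    if (!audioRef.current || !botAudioTrack) return;
    // Wrap the track in a MediaStream so the <audio> element can play it.
    audioRef.current.srcObject = new MediaStream([botAudioTrack]);
  }, [botAudioTrack]);

  return <audio autoPlay ref={audioRef} />;
}
```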
### usePipecatClientTransportState

Returns the current transport state.
```jsx
import { usePipecatClientTransportState } from "@pipecat-ai/client-react";

function ConnectionStatus() {
  const transportState = usePipecatClientTransportState();
}
```
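As a usage sketch, the returned state string can drive a simple status indicator; the specific values checked here ("connected" and "ready") are assumptions about the TransportState union in `@pipecat-ai/client-js`:

```jsx
import { usePipecatClientTransportState } from "@pipecat-ai/client-react";

function ConnectionStatus() {
  const transportState = usePipecatClientTransportState();

  // Assumption: "connected" and "ready" indicate an active session.
  const isConnected = ["connected", "ready"].includes(transportState);

  return <span>{isConnected ? "Connected" : transportState}</span>;
}
```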