# React hooks for Unith AI digital humans

A React hooks library for building complex digital human experiences that run on Unith AI.
Before using this library, you need a Unith AI account, a digital human, and your API key. You can create an account here in minutes!
Install the package in your project with your package manager:
```shell
npm install @unith-ai/react
# or
yarn add @unith-ai/react
# or
pnpm add @unith-ai/react
```
This library provides React hooks for integrating Unith AI digital humans into your React applications.
The useConversation hook manages the digital human conversation state and provides methods to control the session.
```jsx
import { useConversation } from "@unith-ai/react";

function MyComponent() {
  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
  });

  // Use conversation methods and state
}
```
#### Configuration
The hook accepts a configuration object with the following properties:
##### Required Parameters
- `orgId` - Your organization ID
- `headId` - The digital human head ID to use
- `apiKey` - API key for authentication
##### Optional Parameters
- `mode` - Conversation mode (default: `"default"`)
- `language` - Language code for the conversation (default: browser language)
- `allowWakeLock` - Prevent the screen from sleeping during the conversation (default: `true`)
- `microphoneProvider` - Provider for the microphone: `"azure" | "eleven_labs"`
- `microphoneEvents` - Callbacks for microphone events
  - `onMicrophoneSpeechRecognitionResult` ({ transcript: string }) - Called when the microphone recognises your user's speech and returns a transcript. The microphone doesn't automatically commit/send the user's text to our AI, so call the `sendMessage` method with the transcript yourself.
  - `onMicrophoneStatusChange` ({ status }) - Called when the microphone status changes
    - `status` `"ON" | "OFF" | "PROCESSING"` - Shows the current status of the microphone.
  - `onMicrophoneError` ({ message: string }) - Called when the microphone encounters an error, with the error message.
- `elevenLabsOptions` `ElevenLabsOptions` - Custom options to customize the behavior of the ElevenLabs STT provider (see the sketch after this list)
  - `noiseSuppression` `Boolean`
  - `vadSilenceThresholdSecs` `Number`
  - `vadThreshold` `Number`
  - `minSpeechDurationMs` `Number`
  - `minSilenceDurationMs` `Number`
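To make these options concrete, here is a minimal sketch of a microphone-enabled configuration; the callback bodies and the `elevenLabsOptions` values are illustrative placeholders, not recommended defaults:

```jsx
import { useConversation } from "@unith-ai/react";

function MicrophoneConfiguredChat() {
  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
    microphoneProvider: "eleven_labs",
    microphoneEvents: {
      // Transcripts are not sent automatically, so forward them yourself.
      onMicrophoneSpeechRecognitionResult: ({ transcript }) => {
        conversation.sendMessage(transcript);
      },
      onMicrophoneStatusChange: ({ status }) => {
        console.log("Microphone status:", status); // "ON" | "OFF" | "PROCESSING"
      },
      onMicrophoneError: ({ message }) => {
        console.error("Microphone error:", message);
      },
    },
    // Illustrative values - tune them for your own audio environment.
    elevenLabsOptions: {
      noiseSuppression: true,
      vadSilenceThresholdSecs: 1.5,
      vadThreshold: 0.5,
      minSpeechDurationMs: 250,
      minSilenceDurationMs: 500,
    },
  });

  // Render and start the digital human as shown in the examples below.
  return null;
}
```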
#### Returned Values
The hook returns an object containing methods and state:
##### Methods
- `startDigitalHuman(element, options?)` - Initialize and start the digital human
  - `element` `HTMLElement` - DOM element where the video will be rendered
  - `options` `Partial` - Optional event callbacks
  - Returns: `Promise` - The user ID
- `getBackgroundVideo()` - Retrieve the idle background video URL
  - Returns: `Promise` - Video URL
- `startSession()` - Start the conversation session and begin audio & video playback
  - Returns: `Promise`
- `sendMessage(text)` - Send a text message to the digital human
  - `text` `string` - Message text to send
  - Returns: `Promise`
- `toggleMute()` - Toggle the mute status of the audio output (see the sketch after this list)
  - Returns: `number | undefined` - New volume (0 for muted, 1 for unmuted)
- `keepSession()` - Send a keep-alive event to prevent session timeout
  - Returns: `Promise`
- `toggleMicrophone()` - Toggle the microphone status from OFF to ON and vice versa
  - Returns: `Promise`
- `getUserId()` - Get the current user's ID
  - Returns: `string | undefined`
- `endSession()` - End the conversation session and clean up resources
  - Returns: `Promise`
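As a rough sketch of how a few of these methods can be wired to UI controls (the component, button layout, and handler names below are illustrative, and assume the digital human has already been started):

```jsx
function ConversationControls({ conversation }) {
  // `conversation` is the object returned by useConversation.
  const handleToggleMute = () => {
    const volume = conversation.toggleMute(); // 0 = muted, 1 = unmuted
    console.log("New volume:", volume);
  };

  return (
    <div>
      <button onClick={handleToggleMute}>
        {conversation.isMuted ? "Unmute" : "Mute"}
      </button>
      <button onClick={() => conversation.toggleMicrophone()}>
        Toggle microphone
      </button>
      <button onClick={() => conversation.endSession()}>End session</button>
    </div>
  );
}
```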
##### State
- `status` `"connecting" | "connected" | "disconnecting" | "disconnected"` - Current WebSocket connection status
- `isConnected` `boolean` - True if status is `"connected"`
- `isDisconnected` `boolean` - True if status is `"disconnected"`
- `isNotConnected` `boolean` - True if status is not `"connected"`
- `sessionStarted` `boolean` - True if the session has been started
- `mode` `"listening" | "speaking" | "thinking" | "stopping"` - Current conversation mode
- `isSpeaking` `boolean` - True if mode is `"speaking"`
- `messages` `MessageEventData[]` - Array of conversation messages
- `suggestions` `string[]` - Array of suggestion strings
- `messageCounter` `number` - Count of messages sent
- `userId` `string | null` - Current user's unique identifier
- `headInfo` `ConnectHeadType | null` - Information about the digital human
  - `name` `string` - Digital human head name
  - `phrases` `string[]` - Array of phrases set during digital human creation
  - `language` `string` - Language code set during digital human creation
  - `avatar` `string` - Static image URL for the digital human
- `microphoneAccess` `boolean` - True if microphone access was granted
- `isMuted` `boolean` - True if audio output is muted
- `timeOutWarning` `boolean` - True when the session timeout warning is active
- `timeOutBanner` `boolean` - True when the session has timed out
- `capacityError` `boolean` - True if a capacity error occurred
#### Basic Usage

```jsx
import { useConversation } from "@unith-ai/react";
import { useRef, useEffect } from "react";
function DigitalHumanChat() {
const videoRef = useRef(null);
const conversation = useConversation({
orgId: "your-org-id",
headId: "your-head-id",
apiKey: "your-api-key",
});
useEffect(() => {
if (videoRef.current) {
conversation.startDigitalHuman(videoRef.current, {
onConnect: ({ userId, headInfo, microphoneAccess }) => {
console.log("Connected:", userId);
},
onMessage: ({ timestamp, sender, text, visible }) => {
console.log("Message:", text);
},
onError: ({ message, endConversation, type }) => {
console.error("Error:", message);
},
});
}
}, []);
const handleSendMessage = () => {
conversation.sendMessage("Hello!");
};
const handleStartSession = () => {
conversation.startSession();
};
  return (
    <div>
      <div ref={videoRef} />
      {conversation.isConnected && !conversation.sessionStarted && (
        <button onClick={handleStartSession}>Start session</button>
      )}
      {conversation.sessionStarted && (
        <button onClick={handleSendMessage}>Send message</button>
      )}
    </div>
  );
}
```

#### Advanced Usage
```jsx
import { useConversation } from "@unith-ai/react";
import { useRef, useEffect, useState } from "react";

function AdvancedChat() {
const videoRef = useRef(null);
const [inputText, setInputText] = useState("");
const [micStatus, setMicStatus] = useState("OFF")
const conversation = useConversation({
orgId: "your-org-id",
headId: "your-head-id",
apiKey: "your-api-key",
mode: "default",
language: "en-US",
microphoneProvider: "azure",
microphoneEvents: {
onMicrophoneSpeechRecognitionResult: ({ transcript }) => {
console.log(transcript);
},
onMicrophoneStatusChange: ({ status }) => {
setMicStatus(status);
},
onMicrophoneError: ({ message }) => {
console.log(message)
}
},
});
useEffect(() => {
if (videoRef.current) {
conversation.startDigitalHuman(videoRef.current, {
onConnect: ({ userId, headInfo, microphoneAccess }) => {
console.log("Connected with user ID:", userId);
console.log("Digital human:", headInfo.name);
},
onMessage: ({ timestamp, sender, text, visible }) => {
console.log(`[${sender}] ${text}`);
},
onSpeakingStart: () => {
console.log("Digital human started speaking");
},
onSpeakingEnd: () => {
console.log("Digital human finished speaking");
},
onSuggestions: ({ suggestions }) => {
console.log({suggestions});
},
onTimeoutWarning: () => {
console.log("Session will timeout soon");
},
onTimeout: () => {
console.log("Session timed out");
},
onError: ({ message, type }) => {
if (type === "toast") {
alert(message);
}
},
});
}
}, []);

const handleSendMessage = async (e) => {
e.preventDefault();
if (inputText.trim()) {
await conversation.sendMessage(inputText);
setInputText("");
}
};
const handleKeepSession = () => {
conversation.keepSession();
};
  return (
    <div>
      <div ref={videoRef} />
      <p>Status: {conversation.status}</p>
      <p>Mode: {conversation.mode}</p>
      {conversation.isSpeaking && <p>Digital human is speaking...</p>}
      {conversation.isConnected && !conversation.sessionStarted && (
        <button onClick={() => conversation.startSession()}>Start session</button>
      )}
      {conversation.timeOutWarning && (
        <div>
          <p>Your session will timeout soon</p>
          <button onClick={handleKeepSession}>Keep session</button>
        </div>
      )}
      {conversation.sessionStarted && (
        <form onSubmit={handleSendMessage}>
          <input value={inputText} onChange={(e) => setInputText(e.target.value)} />
          <button type="submit">Send</button>
        </form>
      )}
      <h3>Messages ({conversation.messageCounter})</h3>
      {conversation.messages.map(
        (msg, index) =>
          msg.visible && (
            <p key={index}>
              {msg.sender}: {msg.text} ({msg.timestamp.toLocaleTimeString()})
            </p>
          )
      )}
    </div>
  );
}
```

#### Message Structure
Messages in the conversation follow this structure:
```typescript
interface MessageEventData {
timestamp: Date;
sender: "user" | "ai";
text: string;
visible: boolean;
}
```

#### Event Callbacks
When calling `startDigitalHuman`, you can pass event callbacks:

- `onConnect` ({ userId, headInfo, microphoneAccess }) - Called when the WebSocket connection is established
  - `userId` `String` - Unique identifier for the user's session.
  - `headInfo` `ConnectHeadType` - Object with data about the digital human.
    - `name` `String` - Digital human head name
    - `phrases` `String[]` - Array of phrases set during digital human creation.
    - `language` `String` - Language code set during digital human creation.
    - `avatar` `String` - Static image URL for the digital human.
  - `microphoneAccess` `Boolean` - True if microphone access was granted, false otherwise.
- `onDisconnect` () - Called when the connection is closed
- `onStatusChange` ({ status }) - Called when the connection status changes
  - `status` `"connecting" | "connected" | "disconnecting" | "disconnected"` - Shows the current WebSocket connection status.
- `onMessage` ({ timestamp, sender, text, visible }) - Called when the WebSocket receives a message or sends a response.
  - `timestamp` `Date` - Timestamp when the message was received/sent
  - `sender` `"user" | "ai"` - Shows who the message came from.
  - `text` `String` - Message text
  - `visible` `Boolean` - Flag that you can use to control the visibility of a message. Sometimes a message arrives before the video response starts playing; in such cases this is usually false. Listen for the `onSpeakingStart` event to change visibility when the video response starts playing (see the sketch after this list).
- `onMuteStatusChange` - Called when the mute status changes
- `onSpeakingStart` - Called when the digital human starts speaking
- `onSpeakingEnd` - Called when the digital human finishes speaking
- `onSuggestions` ({ suggestions }) - Invoked when the system generates or updates query suggestions.
  - `suggestions` `String[]` - A list of suggested query strings.
- `onTimeout` - Called when the session times out due to inactivity
- `onTimeoutWarning` - Called before the session times out. This event warns you that the customer's session is about to end; call the `keepSession` method to extend it.
- `onKeepSession` - Called when a keep-alive request is processed
- `onError` - Called when an error occurs
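For example, here is one possible sketch of deferring message display until the video response starts playing, using the `visible` flag and `onSpeakingStart` described above (the pending-message buffer is an illustrative pattern, not part of the library):

```jsx
import { useConversation } from "@unith-ai/react";
import { useEffect, useRef, useState } from "react";

function ChatWithDeferredMessages() {
  const videoRef = useRef(null);
  const pendingRef = useRef([]);
  const [visibleMessages, setVisibleMessages] = useState([]);

  const conversation = useConversation({
    orgId: "your-org-id",
    headId: "your-head-id",
    apiKey: "your-api-key",
  });

  useEffect(() => {
    if (!videoRef.current) return;
    conversation.startDigitalHuman(videoRef.current, {
      onMessage: (message) => {
        if (message.visible) {
          setVisibleMessages((prev) => [...prev, message]);
        } else {
          // Hold back messages that arrive before the video response plays.
          pendingRef.current.push(message);
        }
      },
      onSpeakingStart: () => {
        // Reveal held-back messages once the digital human starts speaking.
        setVisibleMessages((prev) => [...prev, ...pendingRef.current]);
        pendingRef.current = [];
      },
    });
  }, []);

  return (
    <div>
      <div ref={videoRef} />
      {visibleMessages.map((msg, i) => (
        <p key={i}>
          {msg.sender}: {msg.text}
        </p>
      ))}
    </div>
  );
}
```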
#### Error Handling

Handle errors using the `onError` callback:

```jsx
conversation.startDigitalHuman(videoRef.current, {
onError: ({ message, endConversation, type }) => {
if (type === "toast") {
// Show toast notification
showToast(message);
if (endConversation) {
// End the session so it can be restarted
conversation.endSession();
}
} else if (type === "modal") {
// Show modal dialog
showModal(message);
}
},
});
```

#### Background Video
Retrieve the background video URL for welcome screens:
```jsx
const { getBackgroundVideo } = useConversation({
orgId: "your-org-id",
headId: "your-head-id",
apiKey: "your-api-key",
});

useEffect(() => {
async function loadBackgroundVideo() {
const videoUrl = await getBackgroundVideo();
// Use videoUrl for your background/welcome screen
}
loadBackgroundVideo();
}, []);
```

#### TypeScript Support
Full TypeScript types are included with the library. Import types as needed:
```typescript
import { useConversation } from "@unith-ai/react";
import type {
HeadConfigOptions,
MessageEventData,
Status,
Mode,
} from "@unith-ai/react";
```
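For instance, a small sketch of a typed helper built on these exports (the helper itself is illustrative):

```typescript
import type { MessageEventData } from "@unith-ai/react";

// Illustrative helper: format a single conversation message for display.
function formatMessage(message: MessageEventData): string {
  const time = message.timestamp.toLocaleTimeString();
  return `[${time}] ${message.sender}: ${message.text}`;
}
```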
#### Best Practices

1. Call `startSession()` after user interaction - This ensures the audio context is properly initialized, especially on mobile browsers
2. Handle the listening mode - Only send messages when `mode === "listening"` to avoid interrupting the digital human (see the sketch after this list)
3. Clean up on unmount - The hook automatically calls `endSession()` on unmount, but you can call it manually if needed
4. Use `keepSession()` - Respond to `onTimeoutWarning` by calling `keepSession()` to extend the session
5. Handle errors gracefully - Always implement the `onError` callback to handle connection and capacity errors
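As a sketch of practices 2 and 4 together, assuming a `conversation` object and `videoRef` set up as in the earlier examples (the helper name is illustrative):

```jsx
// Practice 2: only send when the digital human is listening.
const sendIfListening = (text) => {
  if (conversation.mode === "listening") {
    conversation.sendMessage(text);
  }
};

// Practice 4: extend the session when warned about an upcoming timeout.
conversation.startDigitalHuman(videoRef.current, {
  onTimeoutWarning: () => {
    conversation.keepSession();
  },
});
```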