Official n8n node for Palatine Speech API: transcription, diarization, sentiment analysis, summarization and more
`npm install n8n-nodes-palatine-speech`

> Designed for seamless integration of the Palatine Speech API into n8n workflows.
This is an n8n community node that integrates Palatine Speech into your workflows and enables audio processing tasks such as transcription, diarization, sentiment analysis, and summarization.
* Supported Operations
* Installation
* Credentials
* Workflow Example
* Use Cases
* Compatibility
* Useful Resources
* Keywords
* License
* Support
> Looking for a Russian version? Here
For details on each operation, see the Palatine Speech documentation. You can also open the documentation by clicking the operation title below.
* Full list of supported file types
* Full list of supported languages
Speech-to-Text (STT) converts audio/video into a written transcript.
It supports multiple languages and can automatically detect the language spoken in the recording.
It is well-suited for calls, interviews, lectures, and any other recordings where you need an accurate text version of what was said.
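If you post-process the transcript in an n8n Code node, a minimal sketch could look like this. It assumes the Transcribe output exposes `text` and `language` fields; these names are assumptions, so inspect the node's real output first.

```js
// n8n Code node ("Run Once for All Items") placed after the Transcribe step.
// The `text` and `language` field names are assumptions — check the actual output.
return $input.all().map((item) => {
  const transcript = String(item.json.text ?? '').trim();
  return {
    json: {
      language: item.json.language ?? 'unknown',
      transcript,
      wordCount: transcript ? transcript.split(/\s+/).length : 0,
    },
  };
});
```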
Speaker diarization separates a recording into speaker segments and identifies who is speaking in each segment. This is useful for meetings and interviews, where keeping the conversation structure by speaker matters.
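As an example, a Code node can turn diarization output into a readable, speaker-labelled script. This is a sketch only: the `segments` array and its `speaker`, `start`, and `text` fields are assumed names, not taken from the API reference.

```js
// Hypothetical post-processing: render diarization segments as a labelled script.
// `segments`, `speaker`, `start`, and `text` are assumed field names.
const segments = $input.first().json.segments ?? [];
const lines = segments.map(
  (s) => `[${Number(s.start ?? 0).toFixed(1)}s] Speaker ${s.speaker}: ${s.text}`
);
return [{ json: { script: lines.join('\n') } }];
```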
Determines the emotional tone of speech in audio/video (and can also analyze text). The result is a ranked list of sentiment classes with probabilities, where the first item is the most likely: Very Negative, Negative, Neutral, Positive, Very Positive.
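A typical follow-up step is to act on the top-ranked class. The sketch below is a hypothetical Code node that assumes the result arrives as a `sentiments` array of `{ label, probability }` objects (assumed names) and maps the dominant class to a support-ticket priority.

```js
// Assumed shape: json.sentiments = [{ label, probability }, ...], most likely first.
const ranked = $input.first().json.sentiments ?? [];
const top = ranked[0] ?? { label: 'Neutral', probability: 0 };

// Illustrative mapping of the dominant sentiment to a support-ticket priority.
const priorityByLabel = {
  'Very Negative': 'urgent',
  Negative: 'high',
  Neutral: 'normal',
  Positive: 'low',
  'Very Positive': 'low',
};

return [{
  json: {
    sentiment: top.label,
    confidence: top.probability,
    ticketPriority: priorityByLabel[top.label] ?? 'normal',
  },
}];
```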
Generates a structured summary from audio/video (or from already available text). In addition to built-in scenarios such as meeting_summary, it supports user_prompt — you can provide a custom prompt for the LLM to produce output in the structure you need (bullet points, decisions and action items, risks, Q&A, etc.).
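If you build the prompt dynamically, a Code node can assemble it before the Summarize step. The sketch below only prepares a `prompt` field; how you wire it into the user_prompt option depends on your expression mapping, and the incoming `title` field is an assumed example.

```js
// Assemble a custom summarization prompt in a Code node placed before Summarize.
// Pass it to the user_prompt option with an expression such as {{ $json.prompt }}.
// The incoming `title` field is an assumed example.
const item = $input.first().json;
const prompt = [
  'Summarize the recording as short bullet points.',
  'List decisions and action items grouped by owner and due date.',
  `Meeting title: ${item.title ?? 'untitled'}.`,
].join(' ');
return [{ json: { ...item, prompt } }];
```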
1. In your n8n instance, go to Settings → Community Nodes → Install
2. Enter: n8n-nodes-palatine-speech
3. Click Install
> ⚠️ Make sure the environment variable `N8N_COMMUNITY_PACKAGES_ENABLED=true` is set.
1. Go to Credentials → + Create
2. Find Palatine Speech API
3. Fill in the fields:
* API Key — available in your Palatine Speech dashboard
* Base URL — default is https://api.palatine.ru
1. Webhook → Receive an audio file
2. Config → Configure parameters
3. Palatine Speech → Transcribe the file
4. Create record → Create a table record
5. Telegram → Send the result to a chat
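For step 5, a small Code node can shape the Telegram message text. The sketch below assumes the Transcribe output has a `text` field (an assumption) and trims long transcripts to stay under Telegram's 4096-character message limit.

```js
// Build the Telegram message text from the Transcribe output.
// The `text` field name is an assumption about the node's output.
const transcript = String($input.first().json.text ?? '');
const preview = transcript.length > 3500 ? `${transcript.slice(0, 3500)}…` : transcript;
return [{ json: { message: `New transcript:\n\n${preview}` } }];
```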
* Meeting summaries \
For meeting and interview recordings, it is recommended to use Summarize (meeting_summary): it produces a brief summary, a list of decisions, and action items grouped by owner and due date; the result can then be sent to the team chat if needed. \
For non-standard requests, specify Prompt in Summarize and provide the required instruction, for example: "Additionally, structure the agreements by deadlines and owners."
* Lecture/webinar notes \
Session recording → Transcribe → generate a full transcript. \
Save the resulting text alongside the session materials.
* Automatic subtitles for video \
Extract the audio track → Transcribe + Diarize → convert the result to SRT/VTT and attach it to the video (see the SRT sketch after this list). \
Transcribe produces the transcript, while Diarize provides speaker segmentation for multi-speaker recordings.
* Customer support assistant \
Processing voice messages with Sentiment Analysis helps determine the emotional tone of the request. \
Based on the result, tickets can be created in the CRM and a priority can be assigned.
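For the subtitles scenario, a Code node can convert segments into SRT before attaching the file to the video. This is a sketch only: the `segments`, `start`, `end`, `speaker`, and `text` names are assumptions about the node output, not documented fields.

```js
// Minimal SRT builder (sketch). Assumes segments with numeric `start` / `end`
// seconds, a `speaker` id, and `text` — field names are not confirmed.
function toTimestamp(seconds) {
  const ms = Math.max(0, Math.round(seconds * 1000));
  const h = String(Math.floor(ms / 3600000)).padStart(2, '0');
  const m = String(Math.floor((ms % 3600000) / 60000)).padStart(2, '0');
  const s = String(Math.floor((ms % 60000) / 1000)).padStart(2, '0');
  const frac = String(ms % 1000).padStart(3, '0');
  return `${h}:${m}:${s},${frac}`;
}

const segments = $input.first().json.segments ?? [];
const srt = segments
  .map((seg, i) =>
    [
      i + 1,
      `${toTimestamp(seg.start)} --> ${toTimestamp(seg.end)}`,
      `Speaker ${seg.speaker}: ${seg.text}`,
      '',
    ].join('\n')
  )
  .join('\n');
return [{ json: { srt } }];
```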
This node was tested with n8n v1.39.1 and later.
* Palatine Speech documentation
* n8n Community Nodes guide
* Official n8n GitHub
n8n-community-node-package, n8n, palatine, speech-to-text, transcribe, transcription, stt, audio, ai, automation, voice-to-text, speech-recognition, audio-transcription, audio2text, audio-processing, diarization, speaker-diarization, speaker-segmentation, summarization, audio-summarization, sentiment-analysis, emotion-detection, tone-analysis