# tulayngmamamo

MCP server enabling bidirectional communication between Claude Code CLI and OpenAI Codex CLI. AI assistants can delegate tasks (research, code review) to each other and receive responses.

## Features
- AI-to-AI Communication: Claude and Codex can send messages to each other
- Queue-First Delivery: Fire-and-forget messaging for both Claude and Codex
- Agent Personas: Architect (critical code review) and Oracle (debugging/root cause analysis) personas
- Conversation Tracking: SQLite-backed message history
- memorantado Integration: Sync conversation summaries to knowledge graph
- Unlimited Connections: Multiple Claude Code clients can connect simultaneously
## Installation

```bash
# Global install
npm install -g tulayngmamamo
tulayngmamamo
```

### From Source
```bash
git clone https://github.com/alfredosdpiii/tulayngmamamo.git
cd tulayngmamamo
npm install
npm run build
npm start
```

Server runs at `http://127.0.0.1:3790`.

## MCP Configuration
Add to your `.mcp.json` or MCP client configuration.

### HTTP Mode

Start the server first (`tulayngmamamo` or `npx tulayngmamamo`), then configure your MCP client:

```json
{
"mcpServers": {
"tulayngmamamo": {
"type": "http",
"url": "http://localhost:3790/mcp"
}
}
}
```

### Stdio Mode

```json
{
"mcpServers": {
"tulayngmamamo": {
"command": "npx",
"args": ["tulayngmamamo", "--stdio", "--client-id", "claude"]
}
}
}
```

For Codex, use `"codex"` as the client id instead of `"claude"`.

## MCP Tools
| Tool | Description |
|------|-------------|
| `who_am_i` | Returns client identity (claude/codex) |
| `send_message` | Send a message to another AI (supports `agent` parameter) |
| `delegate_research` | Request research on a topic |
| `request_review` | Request code/architecture review |
| `create_conversation` | Start a new conversation |
| `get_history` | Retrieve conversation messages |
| `share_context` | Share files/snippets between AIs |
| `list_conversations` | List active/completed conversations |
| `close_conversation` | Close and optionally summarize a conversation |
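Over the HTTP transport, each tool is invoked as a standard MCP JSON-RPC `tools/call` request. A hedged sketch of a request body for `send_message` (the argument values are illustrative, not from this project's docs):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "send_message",
    "arguments": {
      "target": "codex",
      "content": "Review this code",
      "agent": "architect"
    }
  }
}
```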
## Agent Personas

When sending messages to Codex, you can specify an agent persona:
| Agent | Use When | Behavior |
|-------|----------|----------|
| `architect` | Architecture questions, code review | Critical analysis, challenges assumptions, examines codebase |
| `oracle` | Debugging, "why" questions | Root cause analysis, strategic reasoning, action steps |

The agent is auto-selected based on message content, or can be specified explicitly:
```
send_message(target="codex", content="Review this code", agent="architect")
send_message(target="codex", content="Why is this failing?", agent="oracle")
```

### Architect

The Architect acts as a critical technical partner who:
- Examines the codebase independently before responding
- Agrees when you're right (but explains why)
- Disagrees when they see problems (with specific alternatives)
- Never just rubber-stamps suggestions
### Oracle

The Oracle acts as a strategic advisor who:
- Focuses on root cause analysis
- Provides concrete action steps
- Helps understand "why" questions
- Uses strategic reasoning for complex problems
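The auto-selection mentioned above can be sketched as a simple keyword heuristic. This is an assumption for illustration; the server's actual selection logic may differ:

```typescript
// Hypothetical sketch of agent auto-selection. The keyword lists are
// illustrative assumptions, not the server's documented heuristic.
type Agent = "architect" | "oracle";

const ORACLE_HINTS = ["why", "debug", "failing", "error", "root cause"];
const ARCHITECT_HINTS = ["review", "architecture", "design", "refactor"];

function selectAgent(content: string): Agent {
  const text = content.toLowerCase();
  // Debugging-style "why" questions route to the Oracle...
  if (ORACLE_HINTS.some((hint) => text.includes(hint))) return "oracle";
  // ...review/design requests route to the Architect, which is also the fallback.
  if (ARCHITECT_HINTS.some((hint) => text.includes(hint))) return "architect";
  return "architect";
}
```

With this sketch, "Why is this failing?" would select `oracle` and "Review this code" would select `architect`, matching the explicit examples above.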
## Configuration
| Environment Variable | Default | Description |
|---------------------|---------|-------------|
| `TULAYNGMAMAMO_PORT` | `3790` | Server port |
| `TULAYNGMAMAMO_DB` | `~/.tulayngmamamo/tulayngmamamo.sqlite` | Database path |
| `MEMORANTADO_URL` | `http://127.0.0.1:3789` | memorantado integration URL |
| `TULAYNGMAMAMO_CLIENT_ID` | auto-detect (fallback: `claude`) | Client identity for stdio mode (`claude` or `codex`) |
| `TULAYNGMAMAMO_CODEX_MODEL` | `gpt-5.3-codex` | Default Codex model |
| `TULAYNGMAMAMO_CODEX_REASONING_EFFORT` | `xhigh` | Default Codex reasoning effort |
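For example, to override the port and database path before starting the server (the values here are illustrative):

```shell
# Override the default port and database location (values are examples)
export TULAYNGMAMAMO_PORT=4790
export TULAYNGMAMAMO_DB="$HOME/data/tulayngmamamo.sqlite"
```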
## Architecture

```
Claude Code CLI <----> tulayngmamamo <----> Codex CLI
| | |
| | |
MCP Client MCP Server MCP Server
(this project) (codex mcp-server)
```
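The queue-first delivery described in the features can be sketched as an in-memory model. This is an illustrative assumption: the real server persists the queue in SQLite (`message_queue`), and all names here are hypothetical:

```typescript
// Illustrative in-memory model of queue-first (fire-and-forget) delivery.
// The real server persists messages in SQLite; these names are assumptions.
type ClientId = "claude" | "codex";

interface QueuedMessage {
  target: ClientId;
  content: string;
}

class MessageQueue {
  private pending: QueuedMessage[] = [];
  private online = new Set<ClientId>();
  private delivered: QueuedMessage[] = [];

  // Fire-and-forget: enqueue always succeeds, even if the target is offline.
  send(target: ClientId, content: string): void {
    this.pending.push({ target, content });
    this.flush();
  }

  // When a client connects, drain any messages queued for it.
  connect(client: ClientId): void {
    this.online.add(client);
    this.flush();
  }

  // Deliver every pending message whose target is currently online.
  private flush(): void {
    const still: QueuedMessage[] = [];
    for (const msg of this.pending) {
      if (this.online.has(msg.target)) this.delivered.push(msg);
      else still.push(msg);
    }
    this.pending = still;
  }

  deliveredTo(client: ClientId): string[] {
    return this.delivered
      .filter((m) => m.target === client)
      .map((m) => m.content);
  }
}
```

Sending while the target is offline simply queues the message; delivery happens on the target's next connection, which is what lets both CLIs message each other without blocking.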
### Message Flow

1. Client sends message via MCP tool
2. Message is persisted and enqueued in SQLite (`message_queue`)
3. Queue processor delivers when target client is online
4. Target replies with a new message linked by `response_to_id`
5. Sender fetches replies via `get_response` or conversation history

## License

MIT