

> Connect your Limitless Pendant data to Claude and other LLMs via the Model Context Protocol (MCP).

Limitless MCP is a server implementation of the Model Context Protocol that provides seamless access to your Limitless API data for AI assistants like Claude.

- 🔍 Enhanced Search with relevance-based scoring and content snippets
- 🔮 Semantic Search using text embeddings for concept-based retrieval
- 📅 Natural Language Time parsing for intuitive date filtering (e.g., "last week")
- 📝 Smart Summarization at different detail levels and focus areas
- 📄 Transcript Generation in multiple formats for easy reading
- 📊 Time Analysis to understand your recording patterns
- 🔎 Content Filtering by speaker, type, or timeframe
- 🧠 Topic Extraction to identify key themes across lifelogs
- 😊 Sentiment Analysis for conversations with speaker breakdown
- 🔌 Plugin Architecture for extending functionality with custom features
- ⚡ Performance Optimization with configurable caching
- 🎛️ Customizable via environment variables
- 🔒 Secure Authentication using your Limitless API key
- 🔄 Seamless Integration with Claude Desktop, Cursor, and other MCP-compatible clients

To use limitless-mcp you will need:

- Node.js 18 or higher (required for native fetch API and ReadableStream support)
  - ⚠️ Important: Node.js 16 and below are NOT supported
  - To check your version: `node --version`
  - To upgrade: visit nodejs.org or use nvm
- A Limitless account with a paired Pendant
- A Limitless API key (available to Pendant owners)
Install the package globally:

```bash
npm install -g limitless-mcp
```
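
As an optional sanity check, you can launch the server by hand with the same command the client configurations below use. It speaks MCP over stdio, so it will simply start and wait for a client to connect:

```bash
# Illustrative: run the server directly with your API key, stop it with Ctrl+C
LIMITLESS_API_KEY="your-api-key-here" npx -y limitless-mcp
```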
1. Open your Claude Desktop configuration file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - Linux: `~/.config/Claude/claude_desktop_config.json`
2. Add the Limitless MCP server to your configuration:
   ```json
   {
     "mcpServers": {
       "limitless-mcp": {
         "command": "npx",
         "args": ["-y", "limitless-mcp"],
         "env": {
           "LIMITLESS_API_KEY": "your-api-key-here"
         }
       }
     }
   }
   ```
3. Restart Claude Desktop
1. Open your Cursor MCP configuration file:
   - macOS: `~/.cursor/mcp.json`
   - Windows: `%USERPROFILE%\.cursor\mcp.json`
   - Linux: `~/.cursor/mcp.json`
2. Add the Limitless MCP server to your configuration:
   ```json
   {
     "mcpServers": {
       "limitless-mcp": {
         "command": "npx",
         "args": ["-y", "limitless-mcp"],
         "env": {
           "LIMITLESS_API_KEY": "your-api-key-here"
         }
       }
     }
   }
   ```
3. Restart Cursor
Any application that supports the Model Context Protocol can use limitless-mcp with a similar configuration. The essential elements are:
```json
{
  "command": "npx",
  "args": ["-y", "limitless-mcp"],
  "env": {
    "LIMITLESS_API_KEY": "your-api-key-here"
  }
}
```
Once configured, you can interact with your Limitless data using natural language within Claude or other MCP-enabled AI assistants.
- List recent lifelogs:
  ```
  Show me my recent lifelogs.
  ```
- Search for specific content:
  ```
  Search my lifelogs for conversations about artificial intelligence.
  ```
- Get a daily summary:
  ```
  Give me a summary of my day on May 1, 2025.
  ```
- Retrieve a full conversation:
  ```
  Show me the full text of lifelog OFe86CdN11YCe22I9Jv4.
  ```
- Generate a transcript:
  ```
  Create a dialogue transcript from lifelog OFe86CdN11YCe22I9Jv4.
  ```
- Analyze recording time:
  ```
  Show me a time analysis of my recordings from last week.
  ```
- Filter by speaker:
  ```
  Filter lifelog OFe86CdN11YCe22I9Jv4 to only show what Jake said.
  ```
#### list_lifelogs
Lists your lifelogs with filtering options:
- `limit`: Maximum number of lifelogs to return (default: 10)
- `date`: Date in YYYY-MM-DD format
- `timezone`: IANA timezone specifier (e.g., "America/Los_Angeles")
- `start`: Start date/time
- `end`: End date/time
- `direction`: Sort direction ("asc" or "desc")
- `includeContent`: Whether to include markdown content
- `fields`: Specific fields to include (title, time, id, etc.)
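
For orientation, here is a sketch of the arguments an MCP client might pass when invoking `list_lifelogs`. The parameter names are the ones listed above; the values, and the array form of `fields`, are illustrative assumptions:

```json
{
  "limit": 5,
  "date": "2025-05-01",
  "timezone": "America/Los_Angeles",
  "direction": "desc",
  "includeContent": false,
  "fields": ["title", "time", "id"]
}
```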
#### get_paged_lifelogs
Navigates through paginated results:
- `cursor`: Pagination cursor from previous results
- `limit`: Maximum number of lifelogs to return
- `date`, `timezone`, `direction`: Same as above
- `includeContent`: Whether to include markdown content
- `fields`: Specific fields to include (title, time, id, etc.)
#### search_lifelogs
Searches your lifelogs with relevance-based scoring:
- `query`: Text to search for
- `limit`: Maximum number of results to return
- `date`, `timezone`, `start`, `end`: Same as above
- `searchMode`: Search mode ("basic" or "advanced" with scoring)
- `includeSnippets`: Whether to include matching content snippets
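
An illustrative `search_lifelogs` call with advanced scoring and snippets enabled (the values are examples, not required defaults):

```json
{
  "query": "artificial intelligence",
  "limit": 5,
  "searchMode": "advanced",
  "includeSnippets": true
}
```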
#### get_lifelog
Retrieves a specific lifelog with selective field retrieval:
- `id`: The ID of the lifelog to retrieve
- `includeContent`: Whether to include full content or just metadata
- `fields`: Specific fields to include (title, time, speakers, etc.)
#### get_lifelog_metadata
Retrieves only metadata about a lifelog (faster than full content):
- `id`: The ID of the lifelog to retrieve metadata for
#### filter_lifelog_contents
Filters lifelog content by various criteria:
- `id`: The ID of the lifelog to filter
- `speakerName`: Filter by speaker name
- `contentType`: Filter by content type (e.g., heading1, blockquote)
- `timeStart`: Filter content after this time (ISO-8601)
- `timeEnd`: Filter content before this time (ISO-8601)
#### generate_transcript
Creates a formatted transcript from a lifelog:
- `id`: The ID of the lifelog to generate a transcript from
- `format`: Transcript format style ("simple", "detailed", or "dialogue")
#### get_time_summary
Provides time-based analytics of your recordings:
- `date`: Date in YYYY-MM-DD format
- `timezone`: IANA timezone specifier
- `start`: Start date for range analysis
- `end`: End date for range analysis
- `groupBy`: How to group statistics ("hour", "day", or "week")
#### get_day_summary
Provides a formatted summary of a specific day's lifelogs:
- `date`: Date in YYYY-MM-DD format
- `timezone`: IANA timezone specifier
#### summarize_lifelog
Creates intelligent summaries at different levels of detail:
- `id`: The ID of the lifelog to summarize
- `level`: Level of summarization detail ("brief", "detailed", or "comprehensive")
- `focus`: Focus of the summary ("general", "key_points", "decisions", "questions", "action_items")
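
For example, a hypothetical `summarize_lifelog` call that pulls action items out of the lifelog used in the usage examples above:

```json
{
  "id": "OFe86CdN11YCe22I9Jv4",
  "level": "detailed",
  "focus": "action_items"
}
```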
#### summarize_lifelogs
Summarizes multiple lifelogs with an optional combined view:
- `ids`: Array of lifelog IDs to summarize
- `level`: Level of detail ("brief" or "detailed")
- `combinedView`: Whether to provide a combined summary
#### extract_topics
Identifies key topics and themes across lifelogs:
- `ids`: Array of lifelog IDs to analyze
- `maxTopics`: Maximum number of topics to extract
- `minOccurrences`: Minimum occurrences required for a topic
- `mode`: Extraction mode ("keywords" or "phrases")
- `excludeCommonWords`: Whether to exclude common English words
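
A sketch of an `extract_topics` call over a single lifelog, with illustrative values for the tuning parameters:

```json
{
  "ids": ["OFe86CdN11YCe22I9Jv4"],
  "maxTopics": 10,
  "minOccurrences": 2,
  "mode": "phrases",
  "excludeCommonWords": true
}
```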
#### analyze_sentiment
Analyzes sentiment in lifelog content:
- `id`: The ID of the lifelog to analyze sentiment for
- `bySpeaker`: Whether to analyze sentiment by speaker
- `includeSentences`: Whether to include individual sentences in the analysis
#### compare_sentiment
Compares sentiment across multiple lifelogs:
- `ids`: Array of lifelog IDs to compare
- `bySpeaker`: Whether to compare sentiment by speaker across lifelogs
#### manage_cache
Manages the caching system:
- `action`: Action to perform ("stats" or "clear")
#### manage_plugins
Manages the plugin system:
- `action`: Action to perform ("list", "enable", "disable", or "info")
- `name`: Plugin name for enable/disable/info actions
#### Content Processor Plugin
##### process_content
Processes and transforms lifelog content:
- `id`: The ID of the lifelog to process
- `operations`: List of operations to perform (filter, replace, extract, transform)
- `format`: Output format (markdown, text, or json)
##### batch_process
Processes multiple lifelogs with the same operations:
- `ids`: Array of lifelog IDs to process
- `operations`: List of operations to perform
- `mergeResults`: Whether to merge results into a single output
#### Decorator Plugin
##### apply_template
Applies templates to format lifelog content:
- `id`: The ID of the lifelog to format
- `template`: Name of the template to use, or a custom template string
- `variables`: Additional variables to use in the template
##### manage_templates
Manages content templates:
- `action`: Action to perform ("list", "get", "add", or "delete")
- `name`: Template name for get/add/delete actions
- `template`: Template content for the add action
#### Semantic Search Plugin
##### create_embeddings
Creates embeddings for a lifelog to enable semantic search:
- `id`: The ID of the lifelog to create embeddings for
- `chunkSize`: Size of text chunks for embeddings (in characters)
- `chunkOverlap`: Overlap between chunks (in characters)
- `forceRefresh`: Whether to force refresh embeddings
##### semantic_search
Searches for semantically similar content:
- `query`: The query to search for semantically similar content
- `ids`: Optional array of specific lifelog IDs to search within
- `topK`: Number of top results to return
- `threshold`: Similarity threshold (0-1)
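
An illustrative `semantic_search` call; the query text and threshold are arbitrary examples:

```json
{
  "query": "plans for the product launch",
  "topK": 5,
  "threshold": 0.7
}
```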
##### manage_embeddings
Manages semantic search embeddings:
- `action`: Action to perform ("list", "delete", "clear", or "info")
- `id`: Lifelog ID for delete/info actions
#### Time Parser Plugin
##### parse_time_reference
Parses natural language time references:
- `timeReference`: Natural language time reference (e.g., "yesterday", "last week")
- `timezone`: IANA timezone specifier
- `referenceDate`: Reference date (defaults to today)
##### search_with_time
Searches lifelogs with natural language time references:
- `query`: Search query text
- `timeReference`: Natural language time reference (e.g., "yesterday", "last week")
- `timezone`: IANA timezone specifier
- `limit`: Maximum number of results to return
- `includeContent`: Whether to include content in results
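
For example, a hypothetical `search_with_time` call combining a keyword query with a natural language time reference:

```json
{
  "query": "artificial intelligence",
  "timeReference": "last week",
  "timezone": "America/Los_Angeles",
  "limit": 5,
  "includeContent": false
}
```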
Limitless MCP can be configured using environment variables:
- `LIMITLESS_API_KEY`: Your Limitless API key (required)
- `LIMITLESS_API_BASE_URL`: Limitless API base URL (default: "https://api.limitless.ai/v1")
- `LIMITLESS_API_TIMEOUT_MS`: Timeout in milliseconds for API calls (default: 120000)
- `LIMITLESS_API_MAX_RETRIES`: Maximum retries for failed API calls (default: 3)
- `LIMITLESS_MAX_LIFELOG_LIMIT`: Maximum number of results per request (default: 100)
- `LIMITLESS_DEFAULT_PAGE_SIZE`: Default page size for listing results (default: 10)
- `LIMITLESS_SEARCH_MULTIPLIER`: Multiplier for search results retrieval (default: 3)
- `LIMITLESS_CACHE_TTL`: Cache time-to-live in seconds (default: 300)
- `LIMITLESS_CACHE_CHECK_PERIOD`: Cache cleanup interval in seconds (default: 600)
- `LIMITLESS_CACHE_MAX_KEYS`: Maximum number of items in cache (default: 500)
- `CACHE_TTL_METADATA`: TTL multiplier for metadata (default: 3)
- `CACHE_TTL_LISTINGS`: TTL multiplier for listings (default: 2)
- `CACHE_TTL_SEARCH`: TTL multiplier for search results (default: 1.5)
- `CACHE_TTL_SUMMARIES`: TTL multiplier for summaries (default: 4)
- `LIMITLESS_PLUGINS_ENABLED`: Set to "false" to disable all plugins
- `LIMITLESS_PLUGIN_CONTENT_PROCESSOR`: Set to "false" to disable the Content Processor plugin
- `LIMITLESS_PLUGIN_DECORATOR`: Set to "false" to disable the Decorator plugin
- `LIMITLESS_DECORATOR_TEMPLATES`: JSON string with custom templates
- `LIMITLESS_PLUGIN_SEMANTIC_SEARCH`: Set to "false" to disable the Semantic Search plugin
- `LIMITLESS_SEMANTIC_SEARCH_TTL`: TTL for embeddings cache in seconds (default: 3600)
- `LIMITLESS_SEMANTIC_SEARCH_MAX_KEYS`: Maximum number of embeddings to cache (default: 1000)
- `LIMITLESS_PLUGIN_TIME_PARSER`: Set to "false" to disable the Time Parser plugin
- `LIMITLESS_DEFAULT_TIMEZONE`: Default timezone for time parsing (default: "UTC")
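
As an illustrative example, an `env` block in your MCP client configuration that raises the page size, extends the cache TTL, sets a default timezone, and turns off the Semantic Search plugin might look like this (values are arbitrary; only `LIMITLESS_API_KEY` is required):

```json
"env": {
  "LIMITLESS_API_KEY": "your-api-key-here",
  "LIMITLESS_DEFAULT_PAGE_SIZE": "20",
  "LIMITLESS_CACHE_TTL": "600",
  "LIMITLESS_DEFAULT_TIMEZONE": "America/Los_Angeles",
  "LIMITLESS_PLUGIN_SEMANTIC_SEARCH": "false"
}
```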
For more details on plugin configuration, see plugins.md.
```bash
# Clone the repository
git clone https://github.com/jakerains/limitless-mcp.git
cd limitless-mcp
```
To test with a locally running instance:
```json
{
  "mcpServers": {
    "limitless-mcp": {
      "command": "node",
      "args": ["/path/to/limitless-mcp/dist/main.js"],
      "env": {
        "LIMITLESS_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)

This project is licensed under the MIT License - see the LICENSE file for details.
- Limitless AI for their incredible Pendant device and API
- Model Context Protocol team for creating the standard
- All contributors and users of this project
---
Made with ❤️ for enhancing AI interactions with your personal data