Welcome to Contextbase API

Contextbase is a memory compression engine designed to enhance your LLM applications with contextual awareness. Publish unstructured data to contexts and fetch compressed, data-rich content to prompt your LLMs with relevant information while preventing context rot.

API Specification

View the complete OpenAPI specification

Base URL

All API requests should be made to:
https://api.contextbase.dev/v1

Authentication

All API requests are authenticated with an API key passed in the x-api-key header.
curl -H "x-api-key: YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     https://api.contextbase.dev/v1/contexts/user-interactions/resolve
You can obtain your API key from the Contextbase dashboard.

Core Concepts

Contexts

A context is a data source that contains published information about a specific domain or topic. Contexts can be enriched with data over time and can be scoped to provide personalized content. Examples include:
  • User interaction history
  • Knowledge base articles
  • System logs and events
  • Document repositories

Prompts

A prompt is an ordered collection of contexts that resolves into a single, comprehensive prompt for your LLM. Prompts let you structure complex AI interactions by combining different types of contextual information.

Scopes

Scopes are parameters that personalize a context. They let you filter context content for specific users, projects, environments, or any other criteria. For example, a user interactions context might be scoped by:
  • user_id - to get interactions for a specific user
  • project - to get interactions for a specific project
  • session_id - to get interactions for a specific session
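As a sketch, a scoped resolve call could be assembled like this in Python. The URL and x-api-key header follow the documentation above, but the scope and query body fields are assumptions about the request schema — check the OpenAPI specification for the exact shape.

```python
import json

BASE_URL = "https://api.contextbase.dev/v1"

def build_resolve_request(context_name, scope, query=None, api_key="YOUR_API_KEY"):
    """Assemble the URL, headers, and JSON body for a scoped resolve call.

    NOTE: the `scope` and `query` body fields are hypothetical; consult the
    OpenAPI specification for the real request schema.
    """
    url = f"{BASE_URL}/contexts/{context_name}/resolve"
    headers = {"x-api-key": api_key, "Content-Type": "application/json"}
    body = {"scope": scope}
    if query is not None:
        body["query"] = query
    return url, headers, json.dumps(body)

url, headers, body = build_resolve_request(
    "user-interactions",
    scope={"user_id": "u_123", "session_id": "s_456"},
)
```

Pass the result to any HTTP client (urllib, requests, httpx) to make the actual call.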

Publishing Data

You can publish unstructured data to contexts to enrich them over time. This might include:
  • User messages and interactions
  • Profile updates and preferences
  • System events and audit logs
  • Document changes and updates
  • File uploads (PDFs, images, spreadsheets, etc.)
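Publishing a text payload might look like the following sketch. The endpoint path matches the Data Publishing section below; the content and scope body fields are illustrative guesses, and file uploads would presumably use multipart/form-data rather than JSON.

```python
import json

BASE_URL = "https://api.contextbase.dev/v1"

def build_publish_request(context_name, content, scope=None, api_key="YOUR_API_KEY"):
    """Assemble a POST /contexts/{context_name}/data request.

    NOTE: the `content` and `scope` body fields are assumptions about the
    schema; see the OpenAPI specification for the real one.
    """
    url = f"{BASE_URL}/contexts/{context_name}/data"
    headers = {"x-api-key": api_key, "Content-Type": "application/json"}
    body = {"content": content}
    if scope:
        body["scope"] = scope
    return url, headers, json.dumps(body)

url, headers, body = build_publish_request(
    "user-interactions",
    content="User asked how to export their billing history.",
    scope={"user_id": "u_123"},
)
```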

API Endpoints

Context Resolution

  • POST /contexts/{context_name}/resolve - Resolve content from a single context
  • POST /prompts/{prompt_name}/resolve - Resolve content from multiple contexts combined in order

Data Publishing

  • POST /contexts/{context_name}/data - Publish data or files to a context

Preventing Context Rot

Context rot occurs when prompts become too long or filled with irrelevant information, degrading LLM performance. Contextbase prevents this through:

RAG (Retrieval Augmented Generation)

Use RAG contexts to fetch only data relevant to your query. Instead of including all published data, RAG filters content based on semantic similarity to your query string.
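Contextbase's RAG filtering presumably uses learned embeddings server-side; the toy sketch below only illustrates the idea of ranking published content by similarity to a query, using bag-of-words vectors and cosine similarity in place of real embeddings.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rag_filter(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

docs = [
    "reset your password from the account settings page",
    "quarterly revenue grew eight percent",
    "invoices can be downloaded as PDF",
]
best = rag_filter("how do I reset my password", docs)
```

Only the password document shares terms with the query, so it is the one retrieved; irrelevant documents never reach the prompt.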

Agentic Context Compression

An LLM compresses your context by eliminating information irrelevant to your specific query, ensuring prompts stay focused and effective.

Token Limits

Set token limits at both context and prompt levels to prevent contexts from becoming too large:
  • Prompt-level limits: Maximum total tokens for the entire resolved prompt
  • Context-level limits: Maximum tokens per individual context
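The two-level cap can be pictured with the following sketch. Contextbase enforces its limits server-side with a real tokenizer; here whitespace-separated words stand in for tokens purely to show the effect of per-context and prompt-level budgets.

```python
def truncate_to_token_limit(text: str, max_tokens: int) -> str:
    """Crude token budget: whitespace words stand in for tokenizer tokens."""
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

def enforce_limits(contexts: dict[str, str],
                   per_context_limit: int,
                   prompt_limit: int) -> str:
    """Cap each context individually, then cap the combined prompt."""
    capped = [truncate_to_token_limit(body, per_context_limit)
              for body in contexts.values()]
    return truncate_to_token_limit("\n".join(capped), prompt_limit)

resolved = enforce_limits(
    {"interactions": "one two three four", "knowledge": "five six seven"},
    per_context_limit=2,
    prompt_limit=3,
)
```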

Example Workflow

  1. Create contexts for different data sources (user interactions, knowledge base, system events)
  2. Create a prompt that combines these contexts in a specific order
  3. Publish data to contexts: messages, preferences, interactions (scoped by user_id)
  4. Resolve the prompt with a user’s ID and an optional query to get compressed, relevant content
  5. Use the result to prompt your LLM with rich contextual awareness
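Steps 3 and 4 above can be sketched as the pair of API calls below. The prompt name support-agent and the body fields are hypothetical, and steps 1–2 (creating the contexts and the prompt) are not shown because those endpoints are not documented in this overview.

```python
import json

BASE_URL = "https://api.contextbase.dev/v1"
HEADERS = {"x-api-key": "YOUR_API_KEY", "Content-Type": "application/json"}

def describe_call(method, path, body):
    """Describe one API call; an HTTP client would actually send it."""
    return {"method": method, "url": BASE_URL + path,
            "headers": HEADERS, "body": json.dumps(body)}

# Step 3: publish a user-scoped interaction (body fields are illustrative).
publish = describe_call("POST", "/contexts/user-interactions/data",
                        {"content": "User upgraded to the Pro plan.",
                         "scope": {"user_id": "u_123"}})

# Step 4: resolve the combined prompt for that user with an optional query.
resolve = describe_call("POST", "/prompts/support-agent/resolve",
                        {"scope": {"user_id": "u_123"},
                         "query": "What plan is this user on?"})
```

The resolved result is then passed as context to your LLM (step 5).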

Rate Limits

The API is rate limited to ensure fair usage. Please refer to the response headers for current rate limit information:
  • X-RateLimit-Limit: Request limit per time window
  • X-RateLimit-Remaining: Remaining requests in current window
  • X-RateLimit-Reset: Time when the rate limit resets
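A client can use these headers to back off gracefully. This sketch assumes X-RateLimit-Reset is a Unix timestamp; if the API returns a delta in seconds instead, use that value directly.

```python
import time

def seconds_until_reset(headers: dict) -> float:
    """How long to wait before retrying once the window is exhausted.

    NOTE: assumes X-RateLimit-Reset is a Unix timestamp, which is an
    assumption about this API's format.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0.0  # Budget left in this window; no need to wait.
    reset = float(headers.get("X-RateLimit-Reset", "0"))
    return max(0.0, reset - time.time())

# A window that already expired requires no wait.
wait = seconds_until_reset({
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "0",
    "X-RateLimit-Reset": "0",
})
```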