POST /prompts/{prompt_name}/resolve

Overview

The prompt resolve endpoint returns compressed, data-rich content from all contexts within the specified prompt. It combines those contexts in their defined order into a single comprehensive prompt, giving your LLM contextual awareness across different data sources without context rot.
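
As a minimal sketch, the endpoint can be called from Python with only the standard library. `API_BASE` and `API_KEY` are placeholders, and the helper names are illustrative, not an official client:

```python
import json
import urllib.request

API_BASE = "https://api.contextbase.dev/v1"  # base URL from the example below
API_KEY = "YOUR_API_KEY"                     # placeholder; use your real key


def build_resolve_request(prompt_name, scopes=None, query=None):
    """Build the POST request for /prompts/{prompt_name}/resolve."""
    body = {}
    if scopes is not None:
        body["scopes"] = scopes
    if query is not None:
        body["query"] = query
    return urllib.request.Request(
        url=f"{API_BASE}/prompts/{prompt_name}/resolve",
        data=json.dumps(body).encode("utf-8"),
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )


def resolve_prompt(prompt_name, scopes=None, query=None):
    """Send the request and return the parsed JSON response."""
    req = build_resolve_request(prompt_name, scopes, query)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Splitting request construction from sending keeps the payload easy to inspect or log before it goes over the wire.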

Path Parameters

prompt_name
string
required
Name of the prompt to resolve (e.g., “chatbot-assistant”)

Request Body

scopes
object
Scoping parameters to personalize the context. These filter contexts to include only data relevant to specific users, projects, or environments.
query
string
Optional query string for RAG contexts. This helps filter and rank relevant content, preventing context rot by including only information semantically similar to your query.

Response

content
string
required
The compressed prompt content from all contexts combined in order, optimized for prompting your LLM with comprehensive contextual awareness.
metadata
object
Additional metadata about the resolved prompt.
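
The fields above can be consumed like this (a sketch: the sample payload is abbreviated and `MODEL_CONTEXT_BUDGET` is a hypothetical limit for your model, not part of the API):

```python
import json

# Abbreviated response in the shape documented above.
raw = '{"content": "SYSTEM: You are a helpful assistant...", "metadata": {"token_count": 1247}}'

resolved = json.loads(raw)
prompt_text = resolved["content"]                               # required field
token_count = resolved.get("metadata", {}).get("token_count")   # metadata is optional

# e.g. guard against exceeding a model's context budget before sending
MODEL_CONTEXT_BUDGET = 8000  # hypothetical limit for your model
assert token_count is None or token_count <= MODEL_CONTEXT_BUDGET
```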

Example Request

curl -X POST \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "scopes": {
      "user_id": 123,
      "project": "web-app"
    },
    "query": "user authentication issues"
  }' \
  https://api.contextbase.dev/v1/prompts/chatbot-assistant/resolve

Example Response

{
  "content": "SYSTEM: You are a helpful assistant for web-app support.\n\nUSER CONTEXT: User ID 123 has previously reported login issues on 2024-01-15. Recent interactions show confusion about password reset flow. User preferences: technical explanations, quick solutions.\n\nKNOWLEDGE BASE: Authentication troubleshooting guide for web-app project shows common issues with OAuth redirects...\n\nRECENT EVENTS: System maintenance completed on auth service...",
  "metadata": {
    "token_count": 1247
  }
}

Use Cases

  • Multi-Context AI Applications: Combine user data, knowledge bases, and system information in a single prompt
  • Comprehensive AI Agents: Give AI agents awareness across multiple data sources and contexts
  • Structured Prompt Building: Create prompts with ordered contexts for specific use cases
  • Enterprise AI Systems: Build complex prompts that include user context, company knowledge, and real-time data

How Prompt Resolution Works

  1. Context Retrieval: All contexts within the prompt are fetched in their defined order
  2. Individual Resolution: Each context is resolved with the provided scopes and query
  3. Content Combination: Context contents are combined in the order specified in the prompt
  4. Agentic Compression: If agentic mode is enabled, an LLM further compresses the combined content to be relevant to the query
  5. Token Limiting: Result is trimmed to respect the prompt’s token limit
  6. Structured Output: Final compressed prompt ready for your LLM with metadata
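
The six steps above can be sketched in Python. Everything here is an illustrative stand-in, not ContextBase internals: each context is any object with a `resolve` method, the whitespace split stands in for real token counting, and `compress` stands in for the agentic LLM pass:

```python
def resolve_prompt_contexts(contexts, scopes, query, token_limit, compress=None):
    # 1-2. Fetch each context in its defined order and resolve it
    #      with the provided scopes and query.
    parts = [ctx.resolve(scopes=scopes, query=query) for ctx in contexts]

    # 3. Combine contents in the order specified by the prompt.
    combined = "\n\n".join(parts)

    # 4. Optional agentic compression pass (an LLM call in practice).
    if compress is not None:
        combined = compress(combined, query)

    # 5. Trim to the prompt's token limit (naive whitespace tokens here).
    tokens = combined.split()
    if len(tokens) > token_limit:
        combined = " ".join(tokens[:token_limit])

    # 6. Return the final content plus metadata.
    return {"content": combined, "metadata": {"token_count": len(combined.split())}}
```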

Difference from Context Resolve

  • Context Resolve: Returns content from a single context
  • Prompt Resolve: Returns combined content from multiple contexts in a specific order
  • Use Prompts: When you need to combine different types of context (user data + knowledge base + system info)
  • Use Contexts: When you only need content from a single source or context type