RAGChat provides a powerful debugging feature that lets you inspect the inner workings of your RAG application. By enabling debug mode, you can trace the entire pipeline from user input to final response.

Enable Debugging

To activate the debugging feature, simply initialize RAGChat with the debug option set to true:

const ragChat = new RAGChat({ debug: true });
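For context, a fuller initialization might look like the sketch below. Treat it as a configuration sketch rather than a definitive setup: the `openai` model helper and the `"gpt-4o"` model name are assumptions that depend on your RAGChat version and provider, and the Upstash Redis and Vector credentials are expected in environment variables.

```typescript
import { RAGChat, openai } from "@upstash/rag-chat";

// debug: true turns on the step-by-step event logs described below
const ragChat = new RAGChat({
  model: openai("gpt-4o"), // assumed model helper; swap in your provider
  debug: true,
});

// Any chat call will now emit SEND_PROMPT, RETRIEVE_CONTEXT, and the
// other events in sequence as the request moves through the pipeline.
const response = await ragChat.chat("Where is the capital of Japan?");
```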

Debug Output

When debug mode is enabled, RAGChat will log detailed information about each step of the RAG process. Here’s a breakdown of the debug output:

  1. SEND_PROMPT: Logs the initial user query.

    {
      "timestamp": 1722950191207,
      "logLevel": "INFO",
      "eventType": "SEND_PROMPT",
      "details": {
        "prompt": "Where is the capital of Japan?"
      }
    }
    
  2. RETRIEVE_CONTEXT: Shows the relevant context retrieved from the vector store.

    {
      "timestamp": 1722950191480,
      "logLevel": "INFO",
      "eventType": "RETRIEVE_CONTEXT",
      "details": {
        "context": [
          {
            "data": "Tokyo is the Capital of Japan.",
            "id": "F5BWpryYkkcKLrp-GznwK"
          }
        ]
      },
      "latency": "171ms"
    }
    
  3. RETRIEVE_HISTORY: Displays the chat history retrieved for context.

    {
      "timestamp": 1722950191727,
      "logLevel": "INFO",
      "eventType": "RETRIEVE_HISTORY",
      "details": {
        "history": [
          {
            "content": "Where is the capital of Japan?",
            "role": "user",
            "id": "0"
          }
        ]
      },
      "latency": "145ms"
    }
    
  4. FORMAT_HISTORY: Shows how the chat history is formatted for the prompt.

    {
      "timestamp": 1722950191828,
      "logLevel": "INFO",
      "eventType": "FORMAT_HISTORY",
      "details": {
        "formattedHistory": "USER MESSAGE: Where is the capital of Japan?"
      }
    }
    
  5. FINAL_PROMPT: Displays the complete prompt sent to the language model.

    {
      "timestamp": 1722950191931,
      "logLevel": "INFO",
      "eventType": "FINAL_PROMPT",
      "details": {
        "prompt": "You are a friendly AI assistant augmented with an Upstash Vector Store.\n  To help you answer the questions, a context and/or chat history will be provided.\n  Answer the question at the end using only the information available in the context or chat history, either one is ok.\n\n  -------------\n  Chat history:\n  USER MESSAGE: Where is the capital of Japan?\n  -------------\n  Context:\n  - Tokyo is the Capital of Japan.\n  -------------\n\n  Question: Where is the capital of Japan?\n  Helpful answer:"
      }
    }
    
  6. LLM_RESPONSE: Shows the final response from the language model.

    {
      "timestamp": 1722950192593,
      "logLevel": "INFO",
      "eventType": "LLM_RESPONSE",
      "details": {
        "response": "According to the context, Tokyo is the capital of Japan!"
      },
      "latency": "558ms"
    }
    
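Each event above shares a common shape: a `timestamp`, a `logLevel`, an `eventType`, a `details` payload, and, for the retrieval and LLM steps, a `latency` string such as `"171ms"`. If you capture these logs, a small helper can total the time spent across steps. This is a minimal sketch; the `RagDebugEvent` type name and helper functions are ours for illustration, not part of the RAGChat API:

```typescript
// Shape of a single debug log event, matching the JSON shown above.
interface RagDebugEvent {
  timestamp: number;
  logLevel: string;
  eventType: string;
  details: Record<string, unknown>;
  latency?: string; // e.g. "171ms"; absent on SEND_PROMPT and FORMAT_HISTORY
}

// Parse a latency string like "171ms" into a number of milliseconds.
// Events without a latency field contribute 0.
function parseLatencyMs(latency?: string): number {
  return latency ? parseInt(latency, 10) : 0;
}

// Sum the reported latency across a run's debug events.
function totalLatencyMs(events: RagDebugEvent[]): number {
  return events.reduce((sum, e) => sum + parseLatencyMs(e.latency), 0);
}
```

Applied to the example run above, the retrieval and LLM steps account for 171 + 145 + 558 = 874 ms of the total round trip.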
