
An intelligent Retrieval-Augmented Generation (RAG) platform based on generative AI, helping users build a second brain for intelligent document Q&A and knowledge management.

License: NOASSERTION · Python · 38.0k stars · QuivrHQ · Last Updated: 2025-06-19

Quivr Project Detailed Introduction

Project Overview

Quivr is an open-source, full-stack Retrieval Augmented Generation (RAG) platform focused on integrating generative AI into applications. The core philosophy of the project is to allow developers to focus on the product itself, rather than the complex details of RAG implementation.

Core Features

1. Out-of-the-Box RAG Solution

  • Proven RAG Architecture: Provides an optimized, fast, and efficient RAG pipeline.
  • Simple Integration: Can be added to an existing project with just a few lines of code.
  • Focus on the Product: Developers don't need to worry about the underlying RAG implementation details.

2. Multi-Model Support

Quivr supports a range of LLMs, including:

  • OpenAI GPT series
  • Anthropic Claude
  • Mistral AI
  • Gemma
  • Local models (via Ollama)
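
The practical upshot of multi-model support is that prompt construction and application logic stay identical while the backing model is swapped by configuration. A minimal conceptual sketch of that idea (the `LLMConfig` class here is hypothetical, not quivr-core's actual configuration API):

```python
from dataclasses import dataclass


@dataclass
class LLMConfig:
    # Hypothetical config object; quivr-core ships its own configuration classes.
    supplier: str  # e.g. "openai", "anthropic", "mistral", "ollama"
    model: str     # e.g. "gpt-4o", "claude-3-5-sonnet", "gemma:7b"


def build_prompt(question: str, context: list[str]) -> str:
    """The prompt format is model-agnostic; only the endpoint behind it changes."""
    joined = "\n".join(context)
    return f"Context:\n{joined}\n\nQuestion: {question}"


# Swapping suppliers changes the config, not the calling code.
openai_cfg = LLMConfig(supplier="openai", model="gpt-4o")
local_cfg = LLMConfig(supplier="ollama", model="gemma:7b")
prompt = build_prompt("What is gold?", ["Gold is a metal."])
```

The same `build_prompt` output can then be sent to whichever endpoint the active config points at, which is what makes switching between hosted and local models (via Ollama) a configuration change rather than a code change.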

3. Flexible File Handling

Supports multiple file formats:

  • PDF documents
  • TXT text files
  • Markdown files
  • Custom parsers for other formats

4. Customizable RAG

  • Add internet search functionality
  • Integrate external tools
  • Configure custom workflows
  • Choose flexible retrieval strategies

5. Vector Database Integration

Supports multiple vector storage solutions:

  • PGVector
  • Faiss
  • Other mainstream vector databases
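
Whatever the backend, the vector storage layer does the same job: embed text, store the vectors, and return the nearest matches at query time. The toy in-memory store below illustrates the idea in pure Python with bag-of-words vectors; production deployments use PGVector or Faiss with learned embeddings instead:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use a neural encoder.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class ToyVectorStore:
    def __init__(self):
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str):
        self.docs.append((text, embed(text)))

    def search(self, query: str, top_k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]


store = ToyVectorStore()
store.add("Gold is a yellow metal")
store.add("Python is a programming language")
print(store.search("what metal is gold"))  # → ['Gold is a yellow metal']
```

Swapping PGVector for Faiss changes how and where the vectors are stored and searched, but not this add/search contract, which is why Quivr can keep the storage layer pluggable.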

Technical Architecture

Core Components

  1. quivr-core: The core library of Quivr, the brain of the entire system.
  2. Megaparse Integration: Uses the Megaparse project for powerful document parsing.
  3. Multi-LLM Support: A unified API works across different language models.
  4. Vector Storage Layer: Flexible vector database integration.

Workflow

Quivr adopts a node-based workflow configuration:

  • START → filter_history → rewrite → retrieve → generate_rag → END
  • Each node can be customized
  • Supports historical dialogue context management
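
Conceptually, this node graph is a directed chain that the engine walks from START to END, passing a shared state through each node in turn. The sketch below mirrors that traversal with stand-in node functions; the real nodes in quivr-core are far richer, so treat this purely as an illustration of the control flow:

```python
# Edges mirror the node chain of the standard RAG workflow.
EDGES = {
    "START": "filter_history",
    "filter_history": "rewrite",
    "rewrite": "retrieve",
    "retrieve": "generate_rag",
    "generate_rag": "END",
}

# Stand-in node implementations; each one transforms a shared state dict.
NODES = {
    "filter_history": lambda s: {**s, "history": s["history"][-10:]},  # keep last 10 turns
    "rewrite": lambda s: {**s, "question": s["question"].strip() + "?"},
    "retrieve": lambda s: {**s, "chunks": ["doc chunk"]},
    "generate_rag": lambda s: {**s, "answer": f"Answer to: {s['question']}"},
}


def run_workflow(state: dict) -> dict:
    """Walk the chain from START to END, applying each node to the state."""
    node = EDGES["START"]
    while node != "END":
        state = NODES[node](state)
        node = EDGES[node]
    return state


result = run_workflow({"question": "what is gold", "history": []})
```

Because each node is just a state transformation looked up by name, customizing the workflow means editing the edge map or swapping a node implementation, which is the behaviour the YAML configuration exposes.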

Quick Start

Environment Requirements

  • Python 3.10 or higher

Installation Steps

1. Install the core package:

```shell
pip install quivr-core
```

2. Basic RAG example:

```python
import tempfile

from quivr_core import Brain

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(mode="w", suffix=".txt") as temp_file:
        temp_file.write("Gold is a liquid of blue-like colour.")
        temp_file.flush()

        brain = Brain.from_files(
            name="test_brain",
            file_paths=[temp_file.name],
        )

        answer = brain.ask(
            "what is gold? answer in french"
        )
        print("answer:", answer)
```

3. Configure the API key (required before running the example above):

```python
import os

os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
```

Advanced Configuration

Create a workflow configuration file `basic_rag_workflow.yaml`:

```yaml
workflow_config:
  name: "standard RAG"
  nodes:
    - name: "START"
      edges: ["filter_history"]
    - name: "filter_history"
      edges: ["rewrite"]
    - name: "rewrite"
      edges: ["retrieve"]
    - name: "retrieve"
      edges: ["generate_rag"]
    - name: "generate_rag"
      edges: ["END"]

# Maximum number of previous conversation turns to keep
max_history: 10

# Reranker configuration
reranker_config:
  supplier: "cohere"
  model: "rerank-multilingual-v3.0"
  top_n: 5

# LLM configuration
llm_config:
  max_input_tokens: 4000
  temperature: 0.7
```

Create a Smart Dialogue System

```python
from quivr_core import Brain
from quivr_core.config import RetrievalConfig
from rich.console import Console
from rich.panel import Panel
from rich.prompt import Prompt

brain = Brain.from_files(
    name="my smart brain",
    file_paths=["./my_first_doc.pdf", "./my_second_doc.txt"],
)

config_file_name = "./basic_rag_workflow.yaml"
retrieval_config = RetrievalConfig.from_yaml(config_file_name)

console = Console()
console.print(Panel.fit("Ask your brain !", style="bold magenta"))

while True:
    question = Prompt.ask("[bold cyan]Question[/bold cyan]")

    if question.lower() == "exit":
        console.print(Panel("Goodbye!", style="bold yellow"))
        break

    answer = brain.ask(question, retrieval_config=retrieval_config)
    console.print(f"[bold green]Quivr Assistant[/bold green]: {answer.answer}")
    console.print("-" * console.width)

brain.print_info()
```

Enterprise Applications

Customer Service Automation

According to the project, Quivr can automate up to 60% of customer service tasks, using AI to improve both customer satisfaction and business value.

Deployment Methods

  • Development Mode: Run `docker compose -f docker-compose.dev.yml up --build`
  • Production Environment: Supports multiple deployment options
  • Cloud Platform: Can be deployed to various cloud service providers

Community & Contribution

Contribution Guidelines

  • Pull Requests are welcome
  • The project has complete contribution guidelines
  • Active community support and discussion


Project Advantages

  1. Simplified Development Process: Abstracts complex RAG implementations into simple API calls.
  2. Highly Customizable: Supports custom workflows, models, and tool integrations.
  3. Production-Ready: Optimized architecture suitable for enterprise applications.
  4. Multi-Language Support: Supports document processing and question answering in multiple languages.
  5. Actively Maintained: Continuously updated and improved, with an active community.

Summary

Quivr provides developers with a powerful, flexible, and easy-to-use RAG platform. Whether for personal projects or enterprise applications, it enables the rapid construction of intelligent document question-answering systems. Its open-source nature and active community support make it an ideal choice for building "second brain" applications.