PySpur - AI Agent Visual Development Platform
A visual workflow platform for AI agents, providing a graphical interface to build, debug, and evaluate LLM workflows, enabling AI engineers to iterate 10x faster.
Project Overview
PySpur is a visual workflow platform that enables AI engineers to develop and iterate on AI agents 10x faster. It is an open-source project backed by Y Combinator, designed to address the key pain points AI engineers face when building intelligent agents.
Core Problems Solved
AI engineers commonly face three major challenges when building AI agents:
- Prompt Engineering Hell: Spending excessive time on prompt tuning and iterative trial and error.
- Workflow Blind Spots: Lack of visualization of step interactions, leading to hidden failures and confusion.
- End-to-End Testing Nightmare: Evaluating agents means staring at raw output and manually parsing JSON.
Core Features
🔄 Workflow Management
- Visual Interface: Build AI workflows through drag-and-drop.
- Loop Support: Supports iterative tool calls with memory (see the sketch after this list).
- Human-in-the-Loop: Persists workflows while they wait for human approval.
- Breakpoint Debugging: Pause points in a workflow that require manual approval before execution continues.
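The loop and approval features above follow a common agent pattern: the model calls tools in a loop, results are carried forward as memory, and a human gate can pause execution. The plain-Python sketch below only illustrates that pattern; it is not PySpur's API, and every name in it (AgentMemory, call_llm, call_tool, human_approval) is invented for the example. In PySpur, these steps are built as nodes in the visual graph.
```python
# Conceptual sketch of iterative tool calls with memory and a human-approval
# gate. This is NOT PySpur's API; all names here are invented for the example.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    messages: list = field(default_factory=list)  # running conversation/tool history

def call_llm(memory: AgentMemory) -> dict:
    # Placeholder for a real LLM call: ask for a tool until an observation
    # exists in memory, then produce a final answer.
    if not any(m["role"] == "tool" for m in memory.messages):
        return {"type": "tool", "tool": "search", "args": {"query": memory.messages[0]["content"]}}
    return {"type": "final", "content": "answer based on: " + memory.messages[-1]["content"]}

def call_tool(name: str, args: dict) -> str:
    # Placeholder for a real tool invocation (web search, database lookup, ...).
    return f"result of {name}({args})"

def human_approval(step: dict) -> bool:
    # Breakpoint: execution pauses here until a human approves the step.
    return input(f"Approve step {step}? [y/N] ").strip().lower() == "y"

def run_agent(task: str, max_iterations: int = 5) -> str:
    memory = AgentMemory(messages=[{"role": "user", "content": task}])
    for _ in range(max_iterations):
        step = call_llm(memory)
        if step["type"] == "final":
            return step["content"]
        if not human_approval(step):              # human-in-the-loop gate
            return "stopped by reviewer"
        observation = call_tool(step["tool"], step.get("args", {}))
        memory.messages.append({"role": "tool", "content": observation})  # memory carries over
    return "iteration limit reached"
```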
📤 Multi-Modal Data Processing
- File Upload: Supports uploading files or pasting URLs to process documents.
- Multi-Modal Support: Handles various formats such as video, images, audio, text, and code.
- Structured Output: Provides a UI editor for JSON Schemas (an illustrative schema follows this list).
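For reference, here is an illustrative JSON Schema of the kind a structured-output step can enforce, written as a Python dict. The field names are example choices, not a schema that ships with PySpur.
```python
# Illustrative JSON Schema (written as a Python dict) of the kind a
# structured-output step can enforce; the fields are example choices.
invoice_schema = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string"},
        "total": {"type": "number"},
        "currency": {"type": "string", "enum": ["USD", "EUR", "GBP"]},
        "line_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "description": {"type": "string"},
                    "amount": {"type": "number"},
                },
                "required": ["description", "amount"],
            },
        },
    },
    "required": ["vendor", "total", "currency"],
}
```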
🗃️ RAG System
- Complete RAG Pipeline: Parses, chunks, embeds, and inserts data into vector databases (see the sketch after this list).
- Vector Database Integration: Supports various vector databases.
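The ingestion pipeline can be pictured as parse, chunk, embed, insert. The sketch below is a conceptual plain-Python illustration under that assumption: the chunking logic is real, while embed() and the vector store are stubs standing in for whichever embedding provider and vector database you configure.
```python
# Conceptual RAG ingestion sketch: parse -> chunk -> embed -> insert.
# embed() and InMemoryVectorStore are stubs standing in for whichever
# embedding provider and vector database you configure.
from typing import List

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> List[str]:
    """Split already-parsed text into overlapping character chunks."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

def embed(chunks: List[str]) -> List[List[float]]:
    # Stub: replace with a call to your embedding provider.
    return [[0.0] * 768 for _ in chunks]

class InMemoryVectorStore:
    # Stub standing in for a real vector database client.
    def __init__(self):
        self.records = []

    def insert(self, vectors: List[List[float]], payloads: List[str]) -> None:
        self.records.extend(zip(vectors, payloads))

def ingest(document_text: str, store: InMemoryVectorStore) -> int:
    chunks = chunk_text(document_text)   # chunk
    vectors = embed(chunks)              # embed
    store.insert(vectors, chunks)        # insert into the vector store
    return len(chunks)
```
Overlapping chunks are a common choice because they preserve context that would otherwise be cut at chunk boundaries.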
🧰 Tool Integration
- Rich Tool Support: Integrates with Slack, Firecrawl.dev, Google Sheets, GitHub, etc.
- Extensibility: Add new nodes by creating a single Python file (a hypothetical sketch follows this list).
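As a rough idea of what a single-file node might look like, here is a hypothetical sketch. The base class, field definitions, and registration details are not reproduced from PySpur; the names (WordCountNode, run, the pydantic models) are assumptions for illustration only, so consult the PySpur documentation for the real node interface.
```python
# Hypothetical single-file node. The base class, registration mechanism,
# and method names are NOT taken from PySpur; they are assumptions used
# purely to convey the "one Python file per node" idea.
from pydantic import BaseModel

class WordCountInput(BaseModel):
    text: str

class WordCountOutput(BaseModel):
    word_count: int

class WordCountNode:  # in PySpur this would subclass the framework's node base class
    name = "word_count"

    def run(self, inputs: WordCountInput) -> WordCountOutput:
        return WordCountOutput(word_count=len(inputs.text.split()))
```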
📊 Monitoring & Evaluation
- Automatic Tracking: Automatically captures the execution traces of deployed agents.
- Evaluation System: Evaluates agent performance on real-world datasets.
- One-Click Deployment: Publish as an API and integrate anywhere (see the example request below).
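Once a workflow is published as an API, it can be called over HTTP from any client. The example below assumes a local deployment; the endpoint path, workflow identifier, and payload shape are placeholders for illustration, not PySpur's documented API, so use the URL and schema shown in the UI when you deploy.
```python
# Illustrative call to a workflow deployed as an HTTP API. The endpoint
# path, workflow id, and payload shape are placeholders, not PySpur's
# documented API; use the URL and schema shown in the UI after deploying.
import requests

response = requests.post(
    "http://localhost:6080/api/run/my-workflow",   # hypothetical endpoint
    json={"input": {"question": "Summarize the attached report."}},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```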
🎛️ Multi-Vendor Support
- 100+ Provider Support: Supports over 100 LLM providers, embedders, and vector databases.
- Python-Driven: Built on Python, easy to extend and customize.
Quick Start
Installation Requirements
- Python 3.11 or higher
Basic Installation Steps
- Install PySpur:
```bash
pip install pyspur
```
- Initialize a New Project:
```bash
pyspur init my-project
cd my-project
```
- Start the Server:
```bash
pyspur serve --sqlite
```
By default, this will start the PySpur application at http://localhost:6080 using a SQLite database. It is recommended to configure a PostgreSQL instance URL in the .env file for a more stable experience.
- Configure Environment and API Keys (Optional)
- Application Interface Method: Navigate to the API Keys tab to add provider keys (OpenAI, Anthropic, etc.).
- Manual Method: Edit the .env file (configuring PostgreSQL is recommended) and restart using pyspur serve.
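For orientation, an illustrative .env snippet is shown below. The exact variable names PySpur expects may differ; treat these keys (including DATABASE_URL) as assumptions and check the .env file referenced above for the real ones.
```
# Illustrative .env entries; variable names are assumptions, so check the
# .env file referenced above for the keys PySpur actually reads.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# Hypothetical PostgreSQL connection setting:
DATABASE_URL=postgresql://user:password@localhost:5432/pyspur
```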
Development Environment Setup
Recommended Method: Using Development Containers
It is recommended to use Cursor/VS Code with a development container (.devcontainer/devcontainer.json) to get:
- A consistent development environment with pre-configured tools and extensions.
- Optimized settings for Python and TypeScript development.
- Automatic hot reloading and port forwarding.
Steps:
- Install Cursor/VS Code and the Dev Containers extension.
- Clone and open the repository.
- Click "Reopen in Container" when prompted.
Manual Setup Method
- Clone the Repository:
```bash
git clone https://github.com/PySpur-com/pyspur.git
cd pyspur
```
- Start with docker-compose:
```bash
docker compose -f docker-compose.dev.yml up --build -d
```
- Custom Settings: Edit .env to configure the environment (e.g., PostgreSQL settings).
Note: Manual setup requires additional configuration and may not include all development container features.
Use Cases
PySpur is particularly suitable for the following scenarios:
- Complex AI workflows requiring visual debugging.
- Quality assurance processes requiring human supervision.
- Multi-modal data processing applications.
- RAG system construction and optimization.
- Large-scale intelligent agent application deployment.
Technical Architecture
- Frontend: Graphical workflow editor.
- Backend: Python-driven execution engine.
- Database: Supports SQLite and PostgreSQL.
- Deployment: Supports containerized deployment and one-click API publishing.