
A fully localized AI intelligent assistant that requires no API calls and has autonomous web browsing, code writing, and task planning capabilities.

License: GPL-3.0 · Language: Python · Stars: 19.2k · Repository: Fosowl/agenticSeek · Last Updated: 2025-06-22

AgenticSeek Project Detailed Introduction

Project Overview

AgenticSeek is a fully localized AI intelligent assistant, serving as an open-source alternative to Manus AI. It eliminates the need for API calls and high monthly fees, allowing users to enjoy autonomous intelligent agent services with only electricity costs. This project is designed for local inference models, running entirely on user hardware to ensure complete privacy and zero cloud dependency.

Core Features

🔒 Fully Localized and Privacy-Protected

  • 100% Local Execution: All functions are executed on the user's device, with no cloud dependency.
  • Data Privacy: Files, conversations, and search history are completely retained on the local device.
  • Zero Data Sharing: No personal data is transmitted to external services.

🌐 Intelligent Web Browsing

AgenticSeek can browse the internet on its own, searching, reading, extracting information, and filling out web forms, all without manual intervention. It supports:

  • Automatic search and information extraction
  • Automatic web form filling
  • Intelligent content analysis and summarization

💻 Autonomous Programming Assistant

Capable of writing, debugging, and running programs in multiple languages such as Python, C, Go, Java—without supervision or external dependencies. Features include:

  • Multi-language code generation
  • Automatic debugging and error fixing
  • Code execution and testing

🧠 Intelligent Agent Selection

Automatically determines the best AI agent for each task, like having a team of professional experts ready to help. System features:

  • Automatic task routing
  • Specialized agent division of labor
  • Intelligent decision-making mechanisms

📋 Complex Task Planning and Execution

From travel planning to complex projects—breaks down large tasks into manageable steps and executes them using multiple AI agents. Capabilities include:

  • Automatic task decomposition
  • Multi-agent collaboration
  • Progress tracking and management

🎙️ Voice Interaction

Clear, fast text-to-speech and speech-to-text functionality lets you interact naturally with the AI assistant. Features:

  • Voice wake-up function
  • Real-time speech recognition
  • Natural language interaction

Technical Architecture

Supported Local LLM Providers

Provider  | Local                    | Description
ollama    | Yes                      | Easily run LLMs locally using Ollama
lm-studio | Yes                      | Run LLMs locally using LM Studio
server    | Yes                      | Host the model on another machine
openai    | Depends on configuration | Use the ChatGPT API or a compatible API
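If you go with the ollama provider, a quick sanity check confirms the server is listening on the address that config.ini expects. The port 11434 and the `/api/tags` endpoint (which lists locally pulled models) are part of Ollama's standard HTTP API, not something AgenticSeek-specific:

```shell
# Check whether a local Ollama server is reachable on its default port.
if curl -s --max-time 2 http://127.0.0.1:11434/api/tags >/dev/null; then
  echo "ollama is reachable"
else
  echo "ollama is not reachable -- run 'ollama serve' first"
fi
```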

Recommended Model Configuration

The project is primarily developed and optimized using the deepseek r1 14b model on an RTX 3060.

Model Size | GPU Requirement                 | Performance Evaluation
7B         | 8 GB VRAM                       | Basic functionality only
14B        | 12 GB VRAM (e.g., RTX 3060)     | ✅ Usable for simple tasks; web browsing and planning tasks may struggle
32B        | 24+ GB VRAM (e.g., RTX 4090)    | 🚀 Most tasks succeed; complex task planning may still be difficult
70B+       | 48+ GB VRAM (e.g., Mac Studio)  | 💪 Excellent; recommended for advanced use cases
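With Ollama installed, pulling the tier that matches your GPU is a single command. The `deepseek-r1:14b` tag below corresponds to the 12 GB row of the table and is only an illustrative choice:

```shell
# Pull the model tier matching your GPU (see the table above).
# Assumes the ollama CLI is installed; prints a hint otherwise.
if command -v ollama >/dev/null 2>&1; then
  ollama pull deepseek-r1:14b   # 12 GB VRAM tier
  ollama list                   # confirm the model is available locally
else
  echo "ollama is not installed -- see https://ollama.com"
fi
```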

Installation and Configuration

System Requirements

  • Python 3.10 or later
  • Chrome browser and ChromeDriver
  • Docker and Docker Compose
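Before installing, it is worth checking that each prerequisite is actually on your PATH. The sketch below only tests for presence, not versions, which the installer handles:

```shell
# Report which prerequisites are present; does not check versions.
for tool in python3 docker docker-compose chromedriver; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```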

Quick Installation

git clone https://github.com/Fosowl/agenticSeek.git
cd agenticSeek
mv .env.example .env
python3 -m venv agentic_seek_env
source agentic_seek_env/bin/activate

Automatic Installation (Recommended)

# Linux/macOS
./install.sh

# Windows
./install.bat

Manual Installation

pip3 install -r requirements.txt
# Or
python3 setup.py install

Configuration Example

[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = 127.0.0.1:11434
agent_name = Friday
recover_last_session = False
save_session = False
speak = False
listen = False
work_dir = /Users/mlg/Documents/ai_folder
jarvis_personality = False
languages = en zh

[BROWSER]
headless_browser = False
stealth_mode = False

Configuration Parameter Description

  • is_local: Local execution (True) or remote server (False)
  • provider_name: Provider name (ollama, server, lm-studio, etc.)
  • provider_model: Model used, such as deepseek-r1:32b
  • agent_name: Agent name, used as a voice trigger word
  • work_dir: Folder path accessible to AI
  • jarvis_personality: Enable JARVIS-style personality
  • languages: List of supported languages
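As a variant of the example above, switching to LM Studio only changes the provider fields. The model tag below is illustrative, and 1234 is LM Studio's default local server port:

```ini
[MAIN]
is_local = True
provider_name = lm-studio
provider_model = deepseek-r1-distill-qwen-14b
provider_server_address = 127.0.0.1:1234
```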

Running Method

Start Services

# Activate Python environment
source agentic_seek_env/bin/activate

# Start required services
sudo ./start_services.sh  # macOS/Linux
start ./start_services.cmd  # Windows

Running Options

Option 1: CLI Interface

python3 cli.py

Option 2: Web Interface

# Start backend
python3 api.py

# Access http://localhost:3000/
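Once the backend is running, you can confirm that the web interface answers before opening a browser (port 3000 is the frontend address given above):

```shell
# Prints "web interface up" when something is serving on port 3000.
if curl -s --max-time 2 -o /dev/null http://localhost:3000/; then
  echo "web interface up"
else
  echo "web interface not reachable -- run start_services and api.py first"
fi
```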

Usage Examples

Here are some typical usage scenarios:

  • Make a snake game in Python
  • Show me how to multiply matrices in C
  • Make a blackjack game in Golang
  • Do a web search to find cool tech startups in Japan working on cutting-edge AI research
  • Can you find on the internet who created AgenticSeek?
  • Can you use an online fuel calculator to estimate the cost of a Nice to Milan trip?

Voice Functionality

Speech-to-Text Configuration

Enable in config.ini:

listen = True

Usage Process

  1. Say the agent name to wake it up (e.g., "Friday")
  2. Clearly state your query
  3. End the request with a confirmation phrase, such as: "do it", "go ahead", "execute", etc.
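Putting the voice options together, a hands-free setup flips both speech flags in config.ini; the agent name doubles as the wake word:

```ini
[MAIN]
speak = True
listen = True
agent_name = Friday
```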

Remote Deployment

The project supports running the LLM on a remote server:

Server Side

git clone --depth 1 https://github.com/Fosowl/agenticSeek.git
cd agenticSeek/server/
pip3 install -r requirements.txt
python3 app.py --provider ollama --port 3333
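From the client machine, you can check that the server started above is reachable before editing config.ini. Substitute your server's real address for the x.x.x.x placeholder:

```shell
# Reachability check for the remote LLM server (port 3333 as above).
SERVER="x.x.x.x:3333"   # substitute your server's real address
if curl -s --max-time 3 -o /dev/null "http://$SERVER"; then
  echo "server reachable at $SERVER"
else
  echo "cannot reach $SERVER -- check that app.py is running and the port is open"
fi
```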

Client Configuration

[MAIN]
is_local = False
provider_name = server
provider_model = deepseek-r1:70b
provider_server_address = x.x.x.x:3333

Common Questions

Q: What hardware configuration is required?

Refer to the model configuration table above; at least 12 GB of VRAM (the 14B tier) is recommended for a usable experience.

Q: Why choose Deepseek R1?

Deepseek R1 performs excellently in reasoning and tool usage, making it an ideal choice for project needs.

Q: Can it truly run 100% locally?

Yes, when using Ollama, LM Studio, or the server provider, all speech-to-text, LLM, and text-to-speech models run locally.

Q: What are the advantages compared to Manus?

AgenticSeek prioritizes independence from external systems, providing users with more control, privacy protection, and avoiding API costs.

Project Links

  • GitHub repository: https://github.com/Fosowl/agenticSeek