AgenticSeek is a fully local AI assistant and an open-source alternative to Manus AI. It eliminates API calls and high monthly fees: the only running cost of an autonomous agent is electricity. The project is built around local inference models and runs entirely on your own hardware, ensuring complete privacy and zero cloud dependency.
Autonomous web browsing: AgenticSeek can search the web, read pages, extract information, and fill out web forms, completely without manual intervention (a minimal illustrative browsing step is sketched below).
Autonomous coding: it can write, debug, and run programs in multiple languages such as Python, C, Go, and Java, without supervision or external dependencies.
Smart agent selection: it automatically picks the best AI agent for each task, like having a team of specialists ready to help.
Complex task planning: from travel planning to large projects, it breaks big tasks into manageable steps and executes them using multiple AI agents.
Voice interaction: clear, fast speech-to-text and text-to-speech let you interact with the assistant naturally.
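To make the browsing capability concrete, here is a minimal, illustrative sketch of the kind of single "search and read" step an agent automates, written with Selenium. This is not AgenticSeek's internal implementation; the query and page structure are assumptions.

```python
# Illustrative only: one automated browsing step, not AgenticSeek's own code.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()  # assumes a local chromedriver is installed
driver.implicitly_wait(5)
try:
    driver.get("https://duckduckgo.com/")          # open a search engine
    box = driver.find_element(By.NAME, "q")        # locate the search box
    box.send_keys("cool tech startups in Japan")   # type the agent's query
    box.send_keys(Keys.RETURN)                     # submit the form
    # Extract the visible text of the result page for the LLM to read
    page_text = driver.find_element(By.TAG_NAME, "body").text
    print(page_text[:500])
finally:
    driver.quit()
```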
| Provider | Local | Description |
|---|---|---|
| ollama | Yes | Easily run LLMs locally using Ollama |
| lm-studio | Yes | Run LLMs locally using LM Studio |
| server | Yes | Host the model on another machine |
| openai | Depends on configuration | Use the ChatGPT API or any OpenAI-compatible API |
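For illustration, switching providers is a matter of editing config.ini (shown in full later). A hypothetical LM Studio setup might look like the following; the model name is a placeholder, and 1234 is LM Studio's default local server port:

```ini
[MAIN]
is_local = True
provider_name = lm-studio
provider_model = deepseek-r1-distill-qwen-14b
provider_server_address = 127.0.0.1:1234
```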
The project is primarily developed and optimized using the deepseek r1 14b model on an RTX 3060.
| Model Size | GPU Requirement | Performance Evaluation |
|---|---|---|
| 7B | 8 GB VRAM | Basic functionality only |
| 14B | 12 GB VRAM (e.g., RTX 3060) | ✅ Usable for simple tasks; web browsing and planning may struggle |
| 32B | 24+ GB VRAM (e.g., RTX 4090) | 🚀 Most tasks succeed; task planning may still be difficult |
| 70B+ | 48+ GB VRAM (e.g., Mac Studio) | 💪 Excellent, recommended for advanced use cases |
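As a rough sanity check on these numbers, weight memory scales with parameter count times bytes per weight. A back-of-the-envelope sketch, assuming 4-bit quantization and a flat 20% overhead (the table's figures add further headroom for context length and the OS):

```python
# Rough VRAM estimate for a quantized model. Assumptions: 4-bit weights
# (0.5 bytes/param) plus ~20% overhead for KV cache and activations;
# real usage varies with context length and runtime.
def estimate_vram_gb(params_billions: float, bits_per_weight: int = 4,
                     overhead: float = 0.2) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1024**3

for size in (7, 14, 32, 70):
    print(f"{size}B ≈ {estimate_vram_gb(size):.1f} GB VRAM")
```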
# Clone the repository and enter it
git clone https://github.com/Fosowl/agenticSeek.git
cd agenticSeek
# Create the environment file from the provided template
mv .env.example .env
# Create and activate a Python virtual environment
python3 -m venv agentic_seek_env
source agentic_seek_env/bin/activate
# Linux/macOS
./install.sh
# Windows
./install.bat
pip3 install -r requirements.txt
# Or
python3 setup.py install
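Before launching, it can help to confirm your model backend is reachable. A small optional check, assuming you plan to use Ollama on its default port (this is not part of the install scripts):

```python
# Optional sanity check: verify a local Ollama instance is reachable
# before starting AgenticSeek. Uses Ollama's /api/tags endpoint.
import urllib.request, json

try:
    with urllib.request.urlopen("http://127.0.0.1:11434/api/tags", timeout=5) as resp:
        models = [m["name"] for m in json.load(resp).get("models", [])]
        print("Ollama is up; installed models:", models or "none yet")
except OSError as exc:
    print("Ollama not reachable on 127.0.0.1:11434:", exc)
```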
[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = 127.0.0.1:11434
agent_name = Friday
recover_last_session = False
save_session = False
speak = False
listen = False
work_dir = /Users/mlg/Documents/ai_folder
jarvis_personality = False
languages = en zh
[BROWSER]
headless_browser = False
stealth_mode = False
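These settings are plain INI, so they are easy to inspect programmatically. A minimal sketch (not project code) that reads and prints the key fields with Python's standard configparser:

```python
# Minimal sketch: read config.ini with the standard library.
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

main = config["MAIN"]
print("provider:", main.get("provider_name"))
print("model:", main.get("provider_model"))
print("local:", main.getboolean("is_local"))
print("languages:", main.get("languages", "").split())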
# Activate Python environment
source agentic_seek_env/bin/activate
# Start required services
sudo ./start_services.sh # macOS/Linux
start ./start_services.cmd # Windows
# Run AgenticSeek in CLI mode
python3 cli.py
# Or, for the web interface, start the backend
python3 api.py
# Then open http://localhost:3000/ in your browser
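If the interface does not load, a quick port check can tell you which service is down. A small sketch, using the default ports from the steps above (adjust if you changed them):

```python
# Check whether the default service ports are answering.
import socket

for host, port, name in [("127.0.0.1", 3000, "web frontend"),
                         ("127.0.0.1", 11434, "ollama")]:
    with socket.socket() as s:
        s.settimeout(3)
        ok = s.connect_ex((host, port)) == 0
        print(f"{name} on port {port}: {'up' if ok else 'down'}")
```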
Here are some typical usage scenarios:
Make a snake game in python
Show me how to multiply matrices in C
Make a blackjack game in golang
Do a web search to find cool tech startups in Japan working on cutting-edge AI research
Can you find on the internet who created AgenticSeek?
Can you use an online fuel calculator to estimate the cost of a Nice - Milan trip?
To enable voice mode, set the following in config.ini:

listen = True
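For a sense of what a voice pipeline involves, here is a minimal offline speech-to-text sketch using the third-party SpeechRecognition package. This is illustrative only and may differ from AgenticSeek's actual voice stack:

```python
# Illustrative offline speech-to-text; not AgenticSeek's own voice pipeline.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Listening...")
    audio = recognizer.listen(source)

# recognize_sphinx runs fully offline (requires the pocketsphinx package)
try:
    print("You said:", recognizer.recognize_sphinx(audio))
except sr.UnknownValueError:
    print("Could not understand audio")
```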
The project supports running the LLM on a remote server:
# On the machine that will host the model
git clone --depth 1 https://github.com/Fosowl/agenticSeek.git
cd agenticSeek/server/
pip3 install -r requirements.txt
# Start the server backed by a local ollama instance, listening on port 3333
python3 app.py --provider ollama --port 3333
Then, on your personal machine, edit config.ini:

[MAIN]
is_local = False
provider_name = server
provider_model = deepseek-r1:70b
provider_server_address = x.x.x.x:3333
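You will need the server machine's address to fill in provider_server_address. One way to discover its LAN address from Python (an illustrative trick, not project tooling; verify the result on your network):

```python
# Discover the address this machine uses on the LAN, to plug into
# provider_server_address. The UDP connect sends no actual packets.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print("Use this in config.ini:", f"{s.getsockname()[0]}:3333")
s.close()
```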
Refer to the model configuration table above; at least 12 GB of VRAM is recommended for basic functionality.
Deepseek R1 performs well at reasoning and tool use for its size, making it a good fit for the project's needs.
Yes, when using Ollama, LM Studio, or the server provider, all speech-to-text, LLM, and text-to-speech models run locally.
AgenticSeek prioritizes independence from external systems, giving users greater control, stronger privacy, and freedom from API costs.