Qwen-Agent is an agent development framework built on the Qwen large language models, designed for building LLM applications with instruction following, tool use, planning, and memory capabilities. The project is developed and maintained by the Alibaba Qwen team and currently serves as the backend of the Qwen Chat service.
Recent releases add support for the reasoning_content field (used by reasoning models to expose their intermediate thinking) and adjust the default function calling template accordingly.
Install the stable release from PyPI, including all optional extras:
pip install -U "qwen-agent[gui,rag,code_interpreter,mcp]"
# Or minimal installation
pip install -U qwen-agent
# Or install the development version from source
git clone https://github.com/QwenLM/Qwen-Agent.git
cd Qwen-Agent
pip install -e ./"[gui,rag,code_interpreter,mcp]"
The optional extras are:
[gui]: Gradio graphical interface support
[rag]: RAG retrieval augmentation functionality
[code_interpreter]: code interpreter functionality
[mcp]: MCP protocol support
To use the model service provided by Alibaba Cloud DashScope, configure the LLM as follows:
llm_cfg = {
    'model': 'qwen-max-latest',
    'model_server': 'dashscope',
    # Either set the API key here or via the DASHSCOPE_API_KEY environment variable:
    # 'api_key': 'YOUR_DASHSCOPE_API_KEY',
    'generate_cfg': {
        'top_p': 0.8
    }
}
Alternatively, to use a self-hosted, OpenAI-compatible model service (for example, a locally deployed Qwen model), configure:
llm_cfg = {
    'model': 'Qwen2.5-7B-Instruct',
    # Base URL of an OpenAI-compatible API endpoint:
    'model_server': 'http://localhost:8000/v1',
    'api_key': 'EMPTY',
}
The following end-to-end example registers a custom image-generation tool and builds an Assistant agent that combines it with the built-in code interpreter and a reference PDF:
import pprint
import urllib.parse
import json5
from qwen_agent.agents import Assistant
from qwen_agent.tools.base import BaseTool, register_tool
from qwen_agent.utils.output_beautify import typewriter_print
# Step 1: Add a custom tool
@register_tool('my_image_gen')
class MyImageGen(BaseTool):
    # The `description` tells the agent what this tool can do.
    description = 'AI painting (image generation) service, input text description, and return the image URL drawn based on text information.'
    # The `parameters` tell the agent the input format of the tool.
    parameters = [{
        'name': 'prompt',
        'type': 'string',
        'description': 'Detailed description of the desired image content, in English',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        # `params` is the JSON-string argument generated by the LLM.
        prompt = json5.loads(params)['prompt']
        prompt = urllib.parse.quote(prompt)
        return json5.dumps(
            {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
            ensure_ascii=False)
# Step 2: Configure LLM
llm_cfg = {
    'model': 'qwen-max-latest',
    'model_server': 'dashscope',
    'generate_cfg': {
        'top_p': 0.8
    }
}
# Step 3: Create an agent
system_instruction = '''After receiving the user's request, you should:
- first draw an image and obtain the image url,
- then run code `requests.get(image_url)` to download the image,
- and finally select an image operation from the given document to process the image.
Please show the image using `plt.show()`.'''
tools = ['my_image_gen', 'code_interpreter']
files = ['./examples/resource/doc.pdf']
bot = Assistant(llm=llm_cfg,
                system_message=system_instruction,
                function_list=tools,
                files=files)
# Step 4: Run the agent chat
messages = []
while True:
    # For example, enter the query "draw a dog and rotate it 90 degrees".
    query = input('\nuser query: ')
    # Append the user query to the chat history.
    messages.append({'role': 'user', 'content': query})
    response = []
    response_plain_text = ''
    print('bot response:')
    for response in bot.run(messages=messages):
        # Streaming output.
        response_plain_text = typewriter_print(response, response_plain_text)
    # Append the bot responses to the chat history.
    messages.extend(response)
To serve the same agent through the Gradio-based web interface instead of the command line, wrap it in WebUI:
from qwen_agent.gui import WebUI
WebUI(bot).run()
MCP servers are declared with a standard mcpServers configuration, for example:
{
    "mcpServers": {
        "memory": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-memory"]
        },
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
        },
        "sqlite": {
            "command": "uvx",
            "args": [
                "mcp-server-sqlite",
                "--db-path",
                "test.db"
            ]
        }
    }
}
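In the project's MCP examples, an agent is given access to such servers by passing the mcpServers block as an entry in its tool list. The sketch below follows that pattern; the DashScope config, the query, and the test.db path are illustrative placeholders, and running the npx/uvx-based servers additionally requires Node.js and uv to be installed.
from qwen_agent.agents import Assistant
from qwen_agent.utils.output_beautify import typewriter_print

# Assumed LLM config; reuse whichever backend configuration was defined earlier.
llm_cfg = {'model': 'qwen-max-latest', 'model_server': 'dashscope'}

# The mcpServers config is passed as one entry of the tool list and can be
# mixed with built-in tools such as the code interpreter.
tools = [
    {
        'mcpServers': {
            'sqlite': {
                'command': 'uvx',
                'args': ['mcp-server-sqlite', '--db-path', 'test.db']
            }
        }
    },
    'code_interpreter',
]

bot = Assistant(llm=llm_cfg, function_list=tools)

# Illustrative query that exercises the SQLite MCP server.
messages = [{'role': 'user', 'content': 'Create a table named products in test.db, then list all tables.'}]
response_plain_text = ''
for response in bot.run(messages=messages):
    # Stream the agent's output, including tool calls and results.
    response_plain_text = typewriter_print(response, response_plain_text)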
The project also provides a fast RAG solution, as well as a competitive agent for ultra-long documents: it outperforms native long-context models on two challenging benchmarks and scores perfectly in a single-needle "needle in a haystack" stress test over 1-million-token contexts.
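As a minimal sketch of this document-QA usage (assuming the DashScope configuration from above and a placeholder PDF path), retrieval over documents is enabled simply by passing files to an Assistant:
from qwen_agent.agents import Assistant
from qwen_agent.utils.output_beautify import typewriter_print

llm_cfg = {'model': 'qwen-max-latest', 'model_server': 'dashscope'}

# Passing `files` lets the agent retrieve relevant chunks from the documents
# when answering; the path below is a placeholder.
doc_bot = Assistant(llm=llm_cfg, files=['./examples/resource/doc.pdf'])

messages = [{'role': 'user', 'content': 'Summarize the key points of this document.'}]
response_plain_text = ''
for response in doc_bot.run(messages=messages):
    response_plain_text = typewriter_print(response, response_plain_text)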
BrowserQwen is a browser assistant built on Qwen-Agent, providing web browsing, operation, and information extraction capabilities.
Qwen-Agent is a powerful, easy-to-use agent development framework that gives developers a complete toolchain for building complex LLM applications. Whether you are building a simple chatbot or a complex, multifunctional assistant, the framework lets you implement and deploy it quickly.