Qwen-Agent is an agent development framework built on the Qwen large language models, designed for developing LLM applications with instruction-following, tool-use, planning, and memory capabilities. The project is developed and maintained by Alibaba's Qwen team and currently serves as the backend of the Qwen Chat service.
A recent update introduced the `reasoning_content` field and adjusted the default function-calling template.

Install the stable version from PyPI:

```shell
pip install -U "qwen-agent[gui,rag,code_interpreter,mcp]"
# or a minimal install:
pip install -U qwen-agent
```

Or install the latest development version from source:

```shell
git clone https://github.com/QwenLM/Qwen-Agent.git
cd Qwen-Agent
pip install -e ./"[gui,rag,code_interpreter,mcp]"
```
The optional extras are:

- `[gui]`: Gradio-based GUI support
- `[rag]`: RAG (retrieval-augmented generation) support
- `[code_interpreter]`: code interpreter support
- `[mcp]`: MCP protocol support

To use the model service offered by Alibaba Cloud DashScope:

```python
llm_cfg = {
    'model': 'qwen-max-latest',
    'model_server': 'dashscope',
    # 'api_key': 'YOUR_DASHSCOPE_API_KEY',
    'generate_cfg': {
        'top_p': 0.8
    }
}
```
Alternatively, point the agent at a self-hosted OpenAI-compatible API service (for example, one deployed with vLLM or Ollama):

```python
llm_cfg = {
    'model': 'Qwen2.5-7B-Instruct',
    'model_server': 'http://localhost:8000/v1',
    'api_key': 'EMPTY',
}
```
The following example builds an agent that can generate images and run code. First, register a custom image-generation tool:

```python
import pprint
import urllib.parse

import json5

from qwen_agent.agents import Assistant
from qwen_agent.tools.base import BaseTool, register_tool
from qwen_agent.utils.output_beautify import typewriter_print


# Step 1: add a custom tool.
@register_tool('my_image_gen')
class MyImageGen(BaseTool):
    description = 'AI painting (image generation) service, input text description, and return the image URL drawn based on text information.'
    parameters = [{
        'name': 'prompt',
        'type': 'string',
        'description': 'Detailed description of the desired image content, in English',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        prompt = json5.loads(params)['prompt']
        prompt = urllib.parse.quote(prompt)
        return json5.dumps(
            {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
            ensure_ascii=False)
```
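The `call` method above receives its arguments as a JSON string. Its parsing and URL-encoding steps can be tried in isolation with the standard library (`json` stands in for `json5` here, since the sample input is plain JSON):

```python
import json
import urllib.parse

# Simulate the arguments string the framework would pass to MyImageGen.call.
params = '{"prompt": "a cat sitting on a red sofa"}'

prompt = json.loads(params)['prompt']      # extract the text description
encoded = urllib.parse.quote(prompt)       # make it safe to embed in a URL
image_url = f'https://image.pollinations.ai/prompt/{encoded}'
result = json.dumps({'image_url': image_url}, ensure_ascii=False)
print(result)
```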
```python
# Step 2: configure the LLM.
llm_cfg = {
    'model': 'qwen-max-latest',
    'model_server': 'dashscope',
    'generate_cfg': {
        'top_p': 0.8
    }
}

# Step 3: create the agent.
system_instruction = '''After receiving the user's request, you should:
- first draw an image and obtain the image url,
- then run code `request.get(image_url)` to download the image,
- and finally select an image operation from the given document to process the image.
Please show the image using `plt.show()`.'''
tools = ['my_image_gen', 'code_interpreter']
files = ['./examples/resource/doc.pdf']
bot = Assistant(llm=llm_cfg,
                system_message=system_instruction,
                function_list=tools,
                files=files)

# Step 4: run the agent as a chat loop.
messages = []
while True:
    query = input('\nuser query: ')
    messages.append({'role': 'user', 'content': query})
    response = []
    response_plain_text = ''
    print('bot response:')
    for response in bot.run(messages=messages):
        response_plain_text = typewriter_print(response, response_plain_text)
    messages.extend(response)
```
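Each iteration of `bot.run` yields the cumulative response so far, and `typewriter_print` prints only the newly appended text. The incremental-printing idea can be sketched in plain Python (a simplified stand-in, not the library's actual implementation):

```python
def print_delta(current_text: str, already_printed: str) -> str:
    """Print only the part of current_text not yet printed; return the new total."""
    delta = current_text[len(already_printed):]
    print(delta, end='', flush=True)
    return current_text

# Cumulative snapshots, as a streaming generator would yield them.
printed = ''
for snapshot in ['Hel', 'Hello', 'Hello, world']:
    printed = print_delta(snapshot, printed)
print()
```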
Alternatively, launch a Gradio-based web UI for the agent instead of the terminal loop:

```python
from qwen_agent.gui import WebUI

WebUI(bot).run()
```
MCP servers are declared with the standard MCP configuration format, for example:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
    },
    "sqlite": {
      "command": "uvx",
      "args": [
        "mcp-server-sqlite",
        "--db-path",
        "test.db"
      ]
    }
  }
}
```
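In Qwen-Agent, such a server configuration is passed as an entry in the agent's tool list. A minimal sketch of the wiring (building the tool list only; constructing the agent itself would additionally require a model configuration):

```python
# The 'mcpServers' entry mirrors the JSON configuration above; built-in tools
# such as 'code_interpreter' can be listed alongside it.
tools = [
    {
        'mcpServers': {
            'memory': {
                'command': 'npx',
                'args': ['-y', '@modelcontextprotocol/server-memory'],
            },
        },
    },
    'code_interpreter',
]
print(len(tools))
```

This list would then be passed to `Assistant(..., function_list=tools)` as in the earlier example.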
The project provides a fast RAG solution, as well as a competitive agent for very long documents: it outperforms native long-context models on two challenging benchmarks and achieves a perfect score on the single-needle "needle-in-a-haystack" stress test over a 1M-token context.
BrowserQwen is a browser assistant built on Qwen-Agent, providing web browsing, page operation, and information extraction capabilities.
In summary, Qwen-Agent is a powerful, easy-to-use agent development framework that gives developers a complete toolchain for building complex LLM applications. From a simple chatbot to a sophisticated multi-capability assistant, applications can be built and deployed quickly on top of it.