Qwen-Agent is an agent development framework built on the Qwen large language models, designed for developing LLM applications with instruction-following, tool-use, planning, and memory capabilities. The project is developed and maintained by Alibaba's Qwen team and currently serves as the backend of the Qwen Chat service.
Recent updates add a `reasoning_content` field and adjust the default function-calling template.

Install the stable release from PyPI:

```shell
pip install -U "qwen-agent[gui,rag,code_interpreter,mcp]"
# or, for a minimal installation
pip install -U qwen-agent
```
Alternatively, install the latest development version from source:

```shell
git clone https://github.com/QwenLM/Qwen-Agent.git
cd Qwen-Agent
pip install -e ./"[gui,rag,code_interpreter,mcp]"
```
The optional extras are:

- `[gui]`: Gradio-based GUI support
- `[rag]`: RAG (retrieval-augmented generation) support
- `[code_interpreter]`: code interpreter support
- `[mcp]`: MCP protocol support

To use the model service provided by Alibaba Cloud DashScope, configure the LLM as follows:

```python
llm_cfg = {
    'model': 'qwen-max-latest',
    'model_server': 'dashscope',
    # 'api_key': 'YOUR_DASHSCOPE_API_KEY',
    'generate_cfg': {
        'top_p': 0.8
    }
}
```
To use a self-hosted OpenAI-compatible endpoint instead:

```python
llm_cfg = {
    'model': 'Qwen2.5-7B-Instruct',
    'model_server': 'http://localhost:8000/v1',  # base URL of the OpenAI-compatible API
    'api_key': 'EMPTY',
}
```
Step 1: register a custom tool (here, an AI image-generation tool):

```python
import urllib.parse

import json5

from qwen_agent.agents import Assistant
from qwen_agent.tools.base import BaseTool, register_tool
from qwen_agent.utils.output_beautify import typewriter_print


# Step 1: add a custom tool
@register_tool('my_image_gen')
class MyImageGen(BaseTool):
    # The `description` tells the LLM when to use this tool.
    description = 'AI painting (image generation) service, input text description, and return the image URL drawn based on text information.'
    parameters = [{
        'name': 'prompt',
        'type': 'string',
        'description': 'Detailed description of the desired image content, in English',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        # `params` is a JSON string of arguments produced by the LLM.
        prompt = json5.loads(params)['prompt']
        prompt = urllib.parse.quote(prompt)
        return json5.dumps(
            {'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
            ensure_ascii=False)
```
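Stripped of the framework, `call` just parses the JSON arguments, URL-encodes the prompt, and returns a JSON string containing the resulting URL. The same logic with only the standard library (using `json` in place of `json5`):

```python
import json
import urllib.parse


def image_gen_call(params: str) -> str:
    # Parse the JSON arguments the LLM would produce for the tool.
    prompt = json.loads(params)['prompt']
    # URL-encode the prompt so it can be embedded in the image URL.
    prompt = urllib.parse.quote(prompt)
    return json.dumps({'image_url': f'https://image.pollinations.ai/prompt/{prompt}'},
                      ensure_ascii=False)


result = image_gen_call('{"prompt": "a cat in space"}')
print(result)  # {"image_url": "https://image.pollinations.ai/prompt/a%20cat%20in%20space"}
```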
Steps 2-4: configure the LLM, create the agent, and run a chat loop:

```python
# Step 2: configure the LLM
llm_cfg = {
    'model': 'qwen-max-latest',
    'model_server': 'dashscope',
    'generate_cfg': {
        'top_p': 0.8
    }
}

# Step 3: create the agent
system_instruction = '''After receiving the user's request, you should:
- first draw an image and obtain the image url,
- then run code `request.get(image_url)` to download the image,
- and finally select an image operation from the given document to process the image.
Please show the image using `plt.show()`.'''
tools = ['my_image_gen', 'code_interpreter']  # `code_interpreter` is a built-in tool
files = ['./examples/resource/doc.pdf']  # give the agent a PDF file to read
bot = Assistant(llm=llm_cfg,
                system_message=system_instruction,
                function_list=tools,
                files=files)

# Step 4: run the agent as a chatbot
messages = []  # the chat history
while True:
    query = input('\nuser query: ')
    messages.append({'role': 'user', 'content': query})
    response = []
    response_plain_text = ''
    print('bot response:')
    for response in bot.run(messages=messages):
        # Streaming output: each iteration yields the full response so far.
        response_plain_text = typewriter_print(response, response_plain_text)
    messages.extend(response)  # append the bot's responses to the history
```
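`bot.run` streams: each iteration yields the full response generated so far, and `typewriter_print` prints only the newly added suffix so the output appears to be typed out incrementally. The core idea can be sketched with the standard library alone (an illustration of the incremental-printing pattern, not Qwen-Agent's actual implementation):

```python
def print_delta(full_text: str, previously_printed: str) -> str:
    """Print only the part of `full_text` that has not been printed yet."""
    new_part = full_text[len(previously_printed):]
    print(new_part, end='', flush=True)
    return full_text


printed = ''
# Simulated streaming snapshots, each a superset of the previous one.
for snapshot in ['Hel', 'Hello, ', 'Hello, world!']:
    printed = print_delta(snapshot, printed)
print()  # final newline; in total, "Hello, world!" is printed exactly once
```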
To serve the agent through a Gradio web UI instead of the terminal loop, replace Step 4 with:

```python
from qwen_agent.gui import WebUI
WebUI(bot).run()
```
Qwen-Agent also supports tools exposed via MCP (Model Context Protocol) servers, configured in the standard `mcpServers` format:

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
    },
    "sqlite": {
      "command": "uvx",
      "args": [
        "mcp-server-sqlite",
        "--db-path",
        "test.db"
      ]
    }
  }
}
```
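An MCP configuration is passed to an agent as an entry in its tool list: a dict under the `mcpServers` key, mixed freely with the names of built-in tools. A sketch of building that list in Python (this only constructs the configuration; actually launching the servers requires `npx`/`uvx` to be available on the machine):

```python
mcp_config = {
    'mcpServers': {
        'memory': {
            'command': 'npx',
            'args': ['-y', '@modelcontextprotocol/server-memory'],
        },
        'sqlite': {
            'command': 'uvx',
            'args': ['mcp-server-sqlite', '--db-path', 'test.db'],
        },
    }
}

# MCP servers and built-in tools can share one tool list; the resulting
# `tools` would then be passed to Assistant(function_list=tools).
tools = [mcp_config, 'code_interpreter']
print([t if isinstance(t, str) else sorted(t['mcpServers']) for t in tools])
```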
The project also provides a fast RAG solution, as well as a competitive agent for very long documents: it outperforms native long-context models on two challenging benchmarks, and achieves a perfect score on a single-needle "needle-in-a-haystack" stress test over a 1M-token context.
BrowserQwen is a browser assistant built on Qwen-Agent that provides web browsing, page operation, and information-extraction capabilities.
In summary, Qwen-Agent is a powerful, easy-to-use agent development framework that gives developers a complete toolchain for building complex LLM applications. From simple chatbots to sophisticated multi-capability assistants, applications can be implemented and deployed quickly on top of it.