feat: workflow memory and built-in tools; 知你客服 customer-service scripts; Agent management skill display and capability configuration; docs and Windows startup scripts; ignore the redis_temp binary directory

Made-with: Cursor
renjianbo
2026-04-08 11:44:24 +08:00
parent 599b8f2851
commit bd3f8be781
66 changed files with 10104 additions and 469 deletions


@@ -0,0 +1,269 @@
#!/usr/bin/env python3
"""
Duplicate 「知你客服9号」 as 「知你客服10号」 and improve its memory and wiring strategy:
1. Insert code-identity-merge after json-parse: write the user-assigned assistant
   nickname into memory.context.assistant_display_name (distinct from user_profile.name).
2. code-build-context: expose assistant_display_name to the LLM.
3. llm-unified prompt: prefer assistant_display_name for self-introductions;
   user_profile.name refers to the user only.
4. condition-need-summary: take the summary branch once history_count >= 2
   (previously >= 4, which often left the summary empty).
Requires a Celery worker that has loaded the workflow_engine with re/hashlib
injected; do not write `import re` inside code nodes.
"""
from __future__ import annotations
import json
import os
import sys
import requests
BASE = os.getenv("PLATFORM_BASE_URL", "http://127.0.0.1:8037").rstrip("/")
SOURCE_AGENT_ID = os.getenv("ZHINI_9_AGENT_ID", "de5932d6-3c05-4b27-ab08-f6cb403ce4b9")
USER = os.getenv("PLATFORM_USERNAME", "admin")
PWD = os.getenv("PLATFORM_PASSWORD", "123456")
NEW_NAME = "知你客服10号"
NEW_DESC = (
"在知你客服9号基础上:① memory.context.assistant_display_name 存助手对外称呼,与 user_profile.name(用户)分离;"
"② 摘要分支 history_count>=2,更易生成 conversation_summary;"
"③ 工作流在 json-parse 后增加 code-identity-merge,再进入抽取/写记忆。"
)
CODE_IDENTITY_MERGE = r"""mem = dict(input_data.get('memory') or {})
ctx = dict(mem.get('context') or {})
q = str(input_data.get('query') or input_data.get('user_input') or '').strip()
for pat in (
r'你的\s*名字\s*叫\s*([^\s,.!?]{1,32})',
r'\s*叫\s*(?!什么)([^\s,.!?]{1,32})',
r'(?:客服|助手)\s*叫\s*([^\s,.!?]{1,32})',
):
m = re.search(pat, q)
if not m:
continue
name = m.group(1).strip().strip(',。!?,.!?')
if not name:
continue
if any(b in name for b in ('什么', '哪位')):  # reject interrogatives mis-captured as names
continue
ctx['assistant_display_name'] = name
break
mem['context'] = ctx
out = dict(input_data)
out['memory'] = mem
result = out
"""
CODE_BUILD_CONTEXT_V10 = r"""left = input_data.get('left') or {}
right = input_data.get('right') or []
if not isinstance(right, list):
right = []
mem = left.get('memory') or {}
hist = mem.get('conversation_history') or []
if not isinstance(hist, list):
hist = []
summary = mem.get('conversation_summary') or ''
ctx = mem.get('context') or {}
if not isinstance(ctx, dict):
ctx = {}
assistant_name = str(ctx.get('assistant_display_name') or '').strip()
recent_n = 16
recent = hist[-recent_n:] if len(hist) > recent_n else hist
recent_str = '\n'.join(f"{x.get('role', '')}: {x.get('content', '')}" for x in recent)
vec_str = '\n'.join((rec.get('text') or rec.get('content') or '') for rec in right)
query = (left.get('user_input') or left.get('query') or '').strip()
older = hist[:-recent_n] if len(hist) > recent_n else []
def _tok(s):
s = str(s)
ch = {c for c in s if '\u4e00' <= c <= '\u9fff'}
wd = set(s.lower().replace('\n', ' ').split())
return ch | wd
qt = _tok(query) if query else set()
scored = []
for m in older:
c = str(m.get('content', ''))
if not c:
continue
sc = len(qt & _tok(c)) if qt else 0
if sc > 0:
scored.append((sc, str(m.get('role', '')), c[:240]))
scored.sort(key=lambda x: -x[0])
kw_lines = [f"{role}: {text}" for _, role, text in scored[:6]]
kw_str = '\n'.join(kw_lines)
relevant_str = vec_str.strip()
if kw_str:
if relevant_str:
relevant_str = relevant_str + '\n---\n关键词相关历史:\n' + kw_str
else:
relevant_str = '关键词相关历史:\n' + kw_str
result = {
'user_input': left.get('user_input') or left.get('query') or '',
'memory': {
'user_profile': mem.get('user_profile') or {},
'conversation_summary': summary,
'relevant_from_retrieval': relevant_str,
'recent_turns': recent_str,
'assistant_display_name': assistant_name,
},
'query': left.get('query') or '',
'user_id': left.get('user_id'),
}
"""
LLM_PROMPT_V10 = """你是客服助手。根据用户输入、用户画像、助手称呼、远期摘要、检索片段与最近对话生成回复。
【称呼规则】
- user_profile.name(及同类字段)仅表示「用户」的昵称/姓名。
- memory.assistant_display_name 表示用户为你指定的「对外称呼」。若非空,用户问「你叫什么名字」「你是谁」时,须用该称呼自称(可带「客服助手」类前缀,但核心名须一致);禁止忽略已保存的 assistant_display_name 改回默认虚构名。
- 若 assistant_display_name 为空,可自称「客服助手」等通用名。
【任务】
1)判断意图;2)生成自然、有帮助的 reply(JSON 内一条字符串);
3)用户自我介绍姓名时写入 user_profile(如 name),勿把用户姓名写入 assistant_display_name;
4)用户问「我叫什么」时依据 user_profile 与历史/摘要回答。
只输出一行合法 JSON,不要 markdown。示例:
{"intent":"chat","reply":"你好!","user_profile":{"name":"小明"}}
用户输入:{{user_input}}
用户画像:{{memory.user_profile}}
助手对外称呼(用户指定,可能为空):{{memory.assistant_display_name}}
远期摘要:{{memory.conversation_summary}}
相关历史(检索):{{memory.relevant_from_retrieval}}
最近几轮:{{memory.recent_turns}}
要求:reply 200 字以内;user_profile 为对象。"""
def _insert_identity_node_and_edges(wf: dict) -> None:
nodes = wf.setdefault("nodes", [])
edges = wf.setdefault("edges", [])
if any(n.get("id") == "code-identity-merge" for n in nodes):
return
# Anchor on the json-parse node's position: insert the new node to its right
jx, jy = 2200, 400
for n in nodes:
if n.get("id") == "json-parse":
pos = n.get("position") or {}
jx = pos.get("x", jx) + 80
jy = pos.get("y", jy)
break
nodes.append(
{
"id": "code-identity-merge",
"type": "code",
"position": {"x": jx, "y": jy},
"data": {
"label": "合并助手称呼到 context",
"language": "python",
"code": CODE_IDENTITY_MERGE,
},
}
)
new_edges = []
removed = False
for e in edges:
if e.get("source") == "json-parse" and e.get("target") == "transform-extract-reply-and-profile":
removed = True
continue
new_edges.append(e)
if not removed:
print("警告: 未找到 json-parse -> transform-extract-reply-and-profile 的边,仍追加新边", file=sys.stderr)
new_edges.append(
{
"id": "e11a-identity",
"source": "json-parse",
"target": "code-identity-merge",
"sourceHandle": "right",
"targetHandle": "left",
}
)
new_edges.append(
{
"id": "e11b-identity",
"source": "code-identity-merge",
"target": "transform-extract-reply-and-profile",
"sourceHandle": "right",
"targetHandle": "left",
}
)
wf["edges"] = new_edges
def _patch_nodes(wf: dict) -> None:
nodes = wf.get("nodes") or []
for n in nodes:
nid = n.get("id")
if nid == "llm-unified":
n.setdefault("data", {})["prompt"] = LLM_PROMPT_V10
elif nid == "code-build-context":
n.setdefault("data", {})["code"] = CODE_BUILD_CONTEXT_V10
elif nid == "condition-need-summary":
# Lower the summary threshold regardless of the existing expression.
n.setdefault("data", {})["condition"] = "{history_count} >= 2"
elif nid == "code-identity-merge":
n.setdefault("data", {})["code"] = CODE_IDENTITY_MERGE
def main() -> int:
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": USER, "password": PWD},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
if r.status_code != 200:
print("登录失败:", r.status_code, r.text[:500], file=sys.stderr)
return 1
token = r.json().get("access_token")
if not token:
print("无 access_token", file=sys.stderr)
return 1
h = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
dup = requests.post(
f"{BASE}/api/v1/agents/{SOURCE_AGENT_ID}/duplicate",
headers=h,
json={"name": NEW_NAME},
timeout=30,
)
if dup.status_code != 201:
print("复制失败:", dup.status_code, dup.text[:800], file=sys.stderr)
return 1
new_id = dup.json()["id"]
print("已创建副本:", new_id, NEW_NAME)
g = requests.get(f"{BASE}/api/v1/agents/{new_id}", headers=h, timeout=30)
if g.status_code != 200:
print("读取 Agent 失败:", g.text, file=sys.stderr)
return 1
agent = g.json()
wf = agent["workflow_config"]
_insert_identity_node_and_edges(wf)
_patch_nodes(wf)
up = requests.put(
f"{BASE}/api/v1/agents/{new_id}",
headers=h,
json={"description": NEW_DESC, "workflow_config": wf},
timeout=120,
)
if up.status_code != 200:
print("更新失败:", up.status_code, up.text[:800], file=sys.stderr)
return 1
print("已更新:identity 节点与边、摘要阈值>=2、上下文与 LLM 提示")
print("Agent ID:", new_id)
print(json.dumps({"id": new_id, "name": NEW_NAME}, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())
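The name-extraction patterns in CODE_IDENTITY_MERGE can be exercised outside the workflow engine; a minimal standalone sketch (importing `re` explicitly here, which the injected code-node environment forbids but a plain script needs):

```python
import re

def extract_assistant_name(query):
    """Mirror the code-identity-merge node: pull a user-assigned assistant
    nickname out of a Chinese query, or return None."""
    patterns = (
        r'你的\s*名字\s*叫\s*([^\s,.!?]{1,32})',
        r'\s*叫\s*(?!什么)([^\s,.!?]{1,32})',
        r'(?:客服|助手)\s*叫\s*([^\s,.!?]{1,32})',
    )
    for pat in patterns:
        m = re.search(pat, query)
        if not m:
            continue
        name = m.group(1).strip().strip(',。!?,.!?')
        # Skip interrogatives that the pattern mis-captured as a name.
        if name and not any(b in name for b in ('什么', '哪位')):
            return name
    return None

print(extract_assistant_name('你的名字叫小七'))  # 小七
print(extract_assistant_name('你叫什么名字'))    # None
```

The `(?!什么)` lookahead is what lets "你叫什么名字" (a question) fall through while "你的名字叫小七" (an assignment) is captured.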


@@ -0,0 +1,122 @@
#!/usr/bin/env python3
"""
Duplicate 「知你客服10号」 as 「知你客服11号」:
- Enable tool calling on llm-unified, with http_request only (fetch a web page/API, then summarize).
- Prompt: when a URL must be visited, call the tool first; the final user-facing message is
  still a single line of valid JSON (for the downstream json-parse node).
Environment variables: PLATFORM_BASE_URL, ZHINI_10_AGENT_ID (defaults to agent 10's ID),
and the login username/password.
"""
from __future__ import annotations
import json
import os
import sys
import requests
BASE = os.getenv("PLATFORM_BASE_URL", "http://127.0.0.1:8037").rstrip("/")
SOURCE_AGENT_ID = os.getenv("ZHINI_10_AGENT_ID", "c853482b-d298-44e4-9862-c84318f71abb")
USER = os.getenv("PLATFORM_USERNAME", "admin")
PWD = os.getenv("PLATFORM_PASSWORD", "123456")
NEW_NAME = "知你客服11号"
NEW_DESC = (
"在知你客服10号基础上:主 LLM 开启工具调用,仅注册 http_request;"
"可对用户给出的 http(s) 链接先抓取再摘要;"
"提示词要求最终仍输出单行 JSON,兼容原记忆/解析链路。"
)
LLM_PROMPT_V11 = """你是客服助手。根据用户输入、用户画像、助手称呼、远期摘要、检索片段与最近对话生成回复。
【工具 http_request】(重要)
- 当用户给出明确的 http:// 或 https:// 链接,并希望你「看看网页讲了什么」「总结这个页面」「打开链接」等时,你必须先调用工具 http_request:参数 url 为完整链接,method 一般填 GET(调用时 method 为必填)。
- 根据工具返回的 JSON 中的 body 字段作答(body 可能是 HTML 或 JSON 文本);请提炼要点写入最终 reply,不要谎称「无法直接访问」。
- 若 body 过长,reply 中用简明中文摘要即可(200 字内为主)。
- 非 URL 类普通问答不要无故调用 http_request。
【称呼规则】(与 10 号一致)
- user_profile.name 仅表示「用户」昵称/姓名。
- memory.assistant_display_name 非空时,用户问「你叫什么名字」等须用该称呼自称;勿把用户姓名写入 assistant_display_name。
- 用户自我介绍姓名时写入 user_profile勿写入 assistant_display_name。
【最终输出格式(强制)】
- 无论你调用了几次工具,给用户的**最后一条消息内容**必须是**一行合法 JSON**,且**不要** markdown、**不要**代码围栏。
- JSON 必须可被解析且至少包含intent字符串、reply字符串、user_profile对象无新信息可为 {})。
- 示例:{"intent":"summarize_url","reply":"该网页主要介绍了……","user_profile":{}}
上下文:
用户输入:{{user_input}}
用户画像:{{memory.user_profile}}
助手对外称呼:{{memory.assistant_display_name}}
远期摘要:{{memory.conversation_summary}}
相关历史(检索):{{memory.relevant_from_retrieval}}
最近几轮:{{memory.recent_turns}}
"""
def _patch_llm_unified(wf: dict) -> None:
for n in wf.get("nodes") or []:
if n.get("id") != "llm-unified":
continue
d = n.setdefault("data", {})
d["prompt"] = LLM_PROMPT_V11
d["enable_tools"] = True
d["tools"] = ["http_request"]
d["selected_tools"] = ["http_request"]
return
print("警告: 未找到节点 llm-unified", file=sys.stderr)
def main() -> int:
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": USER, "password": PWD},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
if r.status_code != 200:
print("登录失败:", r.status_code, r.text[:500], file=sys.stderr)
return 1
token = r.json().get("access_token")
if not token:
print("无 access_token", file=sys.stderr)
return 1
h = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
dup = requests.post(
f"{BASE}/api/v1/agents/{SOURCE_AGENT_ID}/duplicate",
headers=h,
json={"name": NEW_NAME},
timeout=30,
)
if dup.status_code != 201:
print("复制失败:", dup.status_code, dup.text[:800], file=sys.stderr)
return 1
new_id = dup.json()["id"]
print("已创建副本:", new_id, NEW_NAME)
g = requests.get(f"{BASE}/api/v1/agents/{new_id}", headers=h, timeout=30)
if g.status_code != 200:
print("读取 Agent 失败:", g.text, file=sys.stderr)
return 1
agent = g.json()
wf = agent["workflow_config"]
_patch_llm_unified(wf)
up = requests.put(
f"{BASE}/api/v1/agents/{new_id}",
headers=h,
json={"description": NEW_DESC, "workflow_config": wf},
timeout=120,
)
if up.status_code != 200:
print("更新失败:", up.status_code, up.text[:800], file=sys.stderr)
return 1
print("已开启 enable_tools + http_request,并更新提示词")
print("Agent ID:", new_id)
print(json.dumps({"id": new_id, "name": NEW_NAME}, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())
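The prompt's single-line-JSON contract means the downstream json-parse node can rely on `json.loads` for the final message. The node's real implementation is not shown in this diff; the sketch below is a hypothetical parser with a chat fallback:

```python
import json

def parse_final_message(message):
    """Parse the model's single-line JSON reply; fall back to treating the
    raw text as a chat reply (hypothetical - the platform node may differ)."""
    try:
        obj = json.loads(message.strip())
        if isinstance(obj, dict) and "reply" in obj:
            return obj
    except json.JSONDecodeError:
        pass
    # Fallback: wrap non-JSON output so downstream nodes still get the shape they expect.
    return {"intent": "chat", "reply": message.strip(), "user_profile": {}}

ok = parse_final_message('{"intent":"summarize_url","reply":"该网页主要介绍……","user_profile":{}}')
print(ok["intent"])  # summarize_url
bad = parse_final_message('抱歉,我直接回答:……')
print(bad["intent"])  # chat
```

Valid JSON that is not a dict with a `reply` key (e.g. `[1,2]`) also takes the fallback path, so the contract stays enforced.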


@@ -0,0 +1,132 @@
#!/usr/bin/env python3
"""
Duplicate 「知你客服11号」 as 「知你客服12号」:
- llm-unified enables tools: http_request + file_read + file_write (constrained by the
  workspace root directory and size limits; see LOCAL_FILE_TOOLS_ROOT).
- Prompt: use http_request for URLs and file_read / file_write for local files; the final
  output is still a single line of JSON.
Environment variables: PLATFORM_BASE_URL, ZHINI_11_AGENT_ID (defaults to agent 11's ID),
and the login username/password.
"""
from __future__ import annotations
import json
import os
import sys
import requests
BASE = os.getenv("PLATFORM_BASE_URL", "http://127.0.0.1:8037").rstrip("/")
SOURCE_AGENT_ID = os.getenv("ZHINI_11_AGENT_ID", "d39748ad-277f-48ac-9eb5-168ad2f1b470")
USER = os.getenv("PLATFORM_USERNAME", "admin")
PWD = os.getenv("PLATFORM_PASSWORD", "123456")
NEW_NAME = "知你客服12号"
NEW_DESC = (
"在知你客服11号基础上:增加本地文件读写工具 file_read、file_write(路径限制在平台配置的工作区内,"
"默认可视为仓库根目录),并保留 http_request;"
"输出仍为单行 JSON,兼容记忆与 json-parse 链路。"
)
TOOLS_V12 = ["http_request", "file_read", "file_write", "system_info"]
LLM_PROMPT_V12 = """你是客服助手。根据用户输入、用户画像、助手称呼、远期摘要、检索片段与最近对话生成回复。
【工具 http_request】
- 用户给出 http(s) 链接且需要抓网页/API 时,先调用 http_request:参数 url 为完整链接,method 必填(一般为 GET)。
- 根据返回 JSON 中的 body 字段提炼要点;非 URL 问答不要无故调用。
【工具 system_info】(工作区路径)
- 用户问「工作区路径」「能访问哪个目录」「file 根目录在哪」时,**必须调用 system_info**,用返回 JSON 里的 **local_file_workspace_root** 原样告知用户(不要用「临时目录」「无法显示」等推脱)。
【工具 file_read / file_write】(本地文件)
- 仅当用户明确要「读文件」「写入某路径」「保存到本地文件」等时使用。
- file_read参数 file_path 可为**相对工作区根的相对路径**,或**落在工作区根之下的绝对路径**Windows 如 `D:\\...`Linux 如 `/home/...`),二者等价,由后端校验。
- file_write:参数 file_path、content;mode 用 w(覆盖)或 a(追加)。写入前确认路径有意义、避免覆盖重要文件;不要写入密钥、令牌。
- **禁止**以「不能访问 D: 盘」「只能相对路径」「工具看不到绝对路径」等理由拒绝用户:只要用户给的绝对路径以 `system_info` 返回的 `local_file_workspace_root` 为前缀(同一盘符、规范化后在其子路径下),就应**直接调用 file_write**,例如根为 `D:\\aaa\\aiagent` 时,`D:\\aaa\\aiagent\\user_data\\xxx.md` **合法**,可优先用用户原文路径或简写为相对路径 `user_data/xxx.md`。
- 路径必须落在平台允许的工作区内,否则会报错;不要尝试访问工作区外的路径。
- **禁止**假设工作区是 `/workspace` 或未经验证的目录;工作区根**只信** `local_file_workspace_root`。
- **每次调用 file_write / file_read 后,必须在最终 reply 中说明工具返回结果**:成功则写明路径与要点;失败则引用返回 JSON 中的 error 字段,不得假装已成功。
- **严禁编造工具返回**:reply 中若引用 file_write/file_read/system_info 的 JSON,必须与工具实际返回字符串一致(可原样粘贴)。禁止臆造路径(例如 /tmp/...、/workspace/...)或与当前系统不符的路径;若未调用工具,禁止在 reply 里写伪造的 JSON。
【称呼规则】(与 10/11 一致)
- user_profile.name 表示用户昵称assistant_display_name 表示用户为你起的称呼。
- 用户问「你叫什么」时用 assistant_display_name若有勿把用户姓名写入 assistant_display_name。
【最终输出格式(强制)】
- 最后一条回复必须是**一行合法 JSON**,无 markdown、无代码围栏;含 intent、reply、user_profile(对象)。
上下文:
用户输入:{{user_input}}
用户画像:{{memory.user_profile}}
助手对外称呼:{{memory.assistant_display_name}}
远期摘要:{{memory.conversation_summary}}
相关历史(检索):{{memory.relevant_from_retrieval}}
最近几轮:{{memory.recent_turns}}
"""
def _patch_llm_unified(wf: dict) -> None:
for n in wf.get("nodes") or []:
if n.get("id") != "llm-unified":
continue
d = n.setdefault("data", {})
d["prompt"] = LLM_PROMPT_V12
d["enable_tools"] = True
d["tools"] = list(TOOLS_V12)
d["selected_tools"] = list(TOOLS_V12)
return
print("警告: 未找到节点 llm-unified", file=sys.stderr)
def main() -> int:
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": USER, "password": PWD},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
if r.status_code != 200:
print("登录失败:", r.status_code, r.text[:500], file=sys.stderr)
return 1
token = r.json().get("access_token")
if not token:
print("无 access_token", file=sys.stderr)
return 1
h = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
dup = requests.post(
f"{BASE}/api/v1/agents/{SOURCE_AGENT_ID}/duplicate",
headers=h,
json={"name": NEW_NAME},
timeout=30,
)
if dup.status_code != 201:
print("复制失败:", dup.status_code, dup.text[:800], file=sys.stderr)
return 1
new_id = dup.json()["id"]
print("已创建副本:", new_id, NEW_NAME)
g = requests.get(f"{BASE}/api/v1/agents/{new_id}", headers=h, timeout=30)
if g.status_code != 200:
print("读取 Agent 失败:", g.text, file=sys.stderr)
return 1
agent = g.json()
wf = agent["workflow_config"]
_patch_llm_unified(wf)
up = requests.put(
f"{BASE}/api/v1/agents/{new_id}",
headers=h,
json={"description": NEW_DESC, "workflow_config": wf},
timeout=120,
)
if up.status_code != 200:
print("更新失败:", up.status_code, up.text[:800], file=sys.stderr)
return 1
print("已注册工具:", ", ".join(TOOLS_V12))
print("Agent ID:", new_id)
print(json.dumps({"id": new_id, "name": NEW_NAME}, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())
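The prompt's path rule (an absolute path is legal iff, after normalization, it falls under `local_file_workspace_root`) can be sketched with `os.path.commonpath`. This is an assumed mirror of the backend check, not the platform's actual code, and `/srv/aiagent` plus the file names are made-up examples:

```python
import os

def is_under_workspace(path, workspace_root):
    """True if `path` (relative or absolute) resolves inside workspace_root.
    Assumed behavior; the backend's real validation may differ."""
    root = os.path.abspath(workspace_root)
    # Relative paths are resolved against the workspace root itself.
    candidate = os.path.abspath(path if os.path.isabs(path)
                                else os.path.join(root, path))
    try:
        return os.path.commonpath([root, candidate]) == root
    except ValueError:  # e.g. different drives on Windows
        return False

print(is_under_workspace("user_data/notes.md", "/srv/aiagent"))  # True
print(is_under_workspace("../escape.txt", "/srv/aiagent"))       # False
```

Normalizing before comparing is what catches `..` traversal, which a plain string-prefix check would miss.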


@@ -0,0 +1,280 @@
#!/usr/bin/env python3
"""
Duplicate 「知你客服12号」 as 「知你客服13号」:
- **Canvas**: drop self-loop edges, merge duplicate source→target edges, normalize anchors
  to sourceHandle=right / targetHandle=left, and relayout node coordinates in layers starting
  from the start node to reduce crossings and the "loopy" look (node ids and business config
  are unchanged).
- **Prompt**: on top of agent 12's capabilities (http_request, file_read, file_write,
  system_info), add tool-calling discipline (avoid repeated file_write in the same turn;
  do not spam DSML tags).
If an agent named 「知你客服13号」 already exists, only its workflow and description are
updated (no new agent is created).
Usage:
    cd backend && .\\venv\\Scripts\\python.exe scripts/create_zhini_kefu_13.py
Environment variables: PLATFORM_BASE_URL, PLATFORM_USERNAME, PLATFORM_PASSWORD,
    SOURCE_AGENT_NAME (default 知你客服12号), TARGET_NAME (default 知你客服13号)
"""
from __future__ import annotations
import copy
import json
import os
import sys
from collections import defaultdict
from typing import Any, Dict, List, Optional, Tuple
import requests
BASE = os.getenv("PLATFORM_BASE_URL", "http://127.0.0.1:8037").rstrip("/")
USER = os.getenv("PLATFORM_USERNAME", "admin")
PWD = os.getenv("PLATFORM_PASSWORD", "123456")
SOURCE_NAME = os.getenv("SOURCE_AGENT_NAME", "知你客服12号")
TARGET_NAME = os.getenv("TARGET_NAME", "知你客服13号")
TOOLS_V13 = ["http_request", "file_read", "file_write", "system_info"]
# Appended to agent 12's prompt (the base prompt is read from the source Agent and concatenated, since the create_zhini_kefu_12 text is long)
PROMPT_V13_EXTRA = """
【画布/执行说明13 号)】
- 工作流连线已整理为从左到右主线,减少自环与重复边带来的误解;逻辑仍以引擎与节点配置为准。
【工具调用纪律13 号)】
- 同一轮用户请求中,对 **file_write** 无特殊说明时不要重复调用多次;每个明确文件需求通常 **一次写入** 即可。
- 不要在回复正文中 **重复刷屏** DSML、`<DSML`、`invoke name=` 等标签行;工具返回后应用自然语言说明,并仍以 **单行 JSON** 收尾。
- 若上一轮已写入成功,除非用户要求修改或另存,不要再次写入相同路径。
【单行 JSON 与用户画像(与 12 号一致,勿留空)】
- 最后一行 JSON 的 user_profile 须与事实一致:用户已告知昵称时须包含 "name"(如「小七」);**禁止**用空的 user_profile 覆盖会话记忆。
- 仅靠 file_write 写入本地文件**不能**替代上述 JSON 中的 user_profile多轮称呼以 JSON + 会话记忆为准。
"""
def _sanitize_edges(edges: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Drop self-loops, dedupe by (source, target), and normalize left/right anchors."""
seen: set = set()
out: List[Dict[str, Any]] = []
for e in edges or []:
s, t = e.get("source"), e.get("target")
if not s or not t:
continue
if s == t:
continue
key = (s, t)
if key in seen:
continue
seen.add(key)
ne = dict(e)
ne["sourceHandle"] = "right"
ne["targetHandle"] = "left"
if not ne.get("id"):
ne["id"] = f"edge_{s}_{t}"
out.append(ne)
return out
def _find_start_node_ids(nodes: List[Dict[str, Any]]) -> List[str]:
ids: List[str] = []
for n in nodes or []:
nid = n.get("id") or ""
nt = (n.get("type") or (n.get("data") or {}).get("type") or "").lower()
if nt == "start" or nid in ("start", "start-1") or str(nid).startswith("start-"):
ids.append(nid)
return ids
def _compute_ranks(
nodes: List[Dict[str, Any]], edges: List[Dict[str, Any]]
) -> Dict[str, int]:
node_ids = [n["id"] for n in nodes if n.get("id")]
start_ids = _find_start_node_ids(nodes)
incoming: Dict[str, int] = {nid: 0 for nid in node_ids}
for e in edges:
s, t = e.get("source"), e.get("target")
if not s or not t or s == t:
continue
if t in incoming:
incoming[t] += 1
if not start_ids:
start_ids = [nid for nid in node_ids if incoming.get(nid, 0) == 0] or ([node_ids[0]] if node_ids else [])
rank: Dict[str, int] = {s: 0 for s in start_ids}
nmax = max(len(nodes), 8)
for _ in range(nmax + 5):
updated = False
for e in edges:
s, t = e.get("source"), e.get("target")
if not s or not t or s == t:
continue
if s not in rank:
continue
nv = rank[s] + 1
if t not in rank or rank[t] < nv:
rank[t] = nv
updated = True
if not updated:
break
max_r = max(rank.values(), default=0)
for nid in node_ids:
if nid not in rank:
rank[nid] = max_r + 1
max_r += 1
return rank
def _apply_layered_positions(nodes: List[Dict[str, Any]], ranks: Dict[str, int]) -> None:
layers: Dict[int, List[str]] = defaultdict(list)
for nid, r in ranks.items():
layers[r].append(nid)
for r in layers:
layers[r].sort()
x0, y0 = 80.0, 140.0
x_step = 300.0
y_step = 110.0
for r in sorted(layers.keys()):
ids = layers[r]
nlen = len(ids)
y_base = y0 - (nlen - 1) * y_step / 2.0
for j, nid in enumerate(ids):
for node in nodes:
if node.get("id") != nid:
continue
pos = node.setdefault("position", {})
pos["x"] = x0 + r * x_step
pos["y"] = y_base + j * y_step
break
def improve_workflow_layout_and_edges(wf: Dict[str, Any]) -> Tuple[int, int]:
"""Return (number of self-loops removed, number of duplicate edges removed)."""
nodes = wf.get("nodes") or []
raw_edges = wf.get("edges") or []
loops = sum(
1
for e in raw_edges
if e.get("source") and e.get("target") and e.get("source") == e.get("target")
)
clean = _sanitize_edges(raw_edges)
removed_dup = len(raw_edges) - len(clean) - loops
wf["edges"] = clean
ranks = _compute_ranks(nodes, clean)
_apply_layered_positions(nodes, ranks)
return loops, max(0, removed_dup)
def _patch_llm_unified(wf: dict, base_prompt: Optional[str] = None) -> None:
for n in wf.get("nodes") or []:
if n.get("id") != "llm-unified":
continue
d = n.setdefault("data", {})
prompt = base_prompt if base_prompt else d.get("prompt") or ""
if PROMPT_V13_EXTRA.strip() not in prompt:
prompt = (prompt.rstrip() + "\n" + PROMPT_V13_EXTRA).strip()
d["prompt"] = prompt
d["enable_tools"] = True
d["tools"] = list(TOOLS_V13)
d["selected_tools"] = list(TOOLS_V13)
return
print("警告: 未找到节点 llm-unified", file=sys.stderr)
def _find_agent_id_by_name(h: Dict[str, str], name: str) -> Optional[str]:
r = requests.get(f"{BASE}/api/v1/agents", params={"search": name, "limit": 50}, headers=h, timeout=30)
if r.status_code != 200:
return None
for a in r.json() or []:
if a.get("name") == name:
return a.get("id")
return None
def main() -> int:
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": USER, "password": PWD},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
if r.status_code != 200:
print("登录失败:", r.status_code, r.text[:500], file=sys.stderr)
return 1
token = r.json().get("access_token")
if not token:
print("无 access_token", file=sys.stderr)
return 1
h = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
src_id = _find_agent_id_by_name(h, SOURCE_NAME)
if not src_id:
print(f"未找到源 Agent: {SOURCE_NAME}", file=sys.stderr)
return 1
existing_13 = _find_agent_id_by_name(h, TARGET_NAME)
if existing_13:
print("已存在", TARGET_NAME, "-> 仅更新工作流", existing_13)
new_id = existing_13
g = requests.get(f"{BASE}/api/v1/agents/{new_id}", headers=h, timeout=30)
if g.status_code != 200:
print("读取失败:", g.text, file=sys.stderr)
return 1
agent = g.json()
else:
dup = requests.post(
f"{BASE}/api/v1/agents/{src_id}/duplicate",
headers=h,
json={"name": TARGET_NAME},
timeout=60,
)
if dup.status_code != 201:
print("复制失败:", dup.status_code, dup.text[:800], file=sys.stderr)
return 1
new_id = dup.json()["id"]
agent = dup.json()
print("已创建副本:", new_id, TARGET_NAME)
wf = copy.deepcopy(agent["workflow_config"])
loops, dup_edges = improve_workflow_layout_and_edges(wf)
print(f"连线整理: 去掉自环 {loops} 条, 合并重复边 {dup_edges}")
g2 = requests.get(f"{BASE}/api/v1/agents/{src_id}", headers=h, timeout=30)
base_prompt = None
if g2.status_code == 200:
try:
for n in g2.json().get("workflow_config", {}).get("nodes") or []:
if n.get("id") == "llm-unified":
base_prompt = (n.get("data") or {}).get("prompt")
break
except Exception:
pass
_patch_llm_unified(wf, base_prompt=base_prompt)
desc = (
"在知你客服12号基础上:整理工作流连线(去自环/重复边、分层布局、统一左右锚点),"
"并强化工具调用纪律(避免同轮重复 file_write、勿刷屏 DSML);"
"工具仍为 http_request、file_read、file_write、system_info;输出单行 JSON。"
)
up = requests.put(
f"{BASE}/api/v1/agents/{new_id}",
headers=h,
json={"description": desc, "workflow_config": wf},
timeout=120,
)
if up.status_code != 200:
print("更新失败:", up.status_code, up.text[:1200], file=sys.stderr)
return 1
print("已写入工具:", ", ".join(TOOLS_V13))
print("Agent ID:", new_id)
print(json.dumps({"id": new_id, "name": TARGET_NAME}, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())
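The edge sanitation above can be checked on a toy edge list; a condensed standalone version of the same rules (drop self-loops, first-wins dedupe per (source, target), forced right→left anchors):

```python
def sanitize_edges(edges):
    """Drop self-loops, keep the first edge per (source, target) pair,
    and force sourceHandle=right / targetHandle=left."""
    seen, out = set(), []
    for e in edges or []:
        s, t = e.get("source"), e.get("target")
        if not s or not t or s == t or (s, t) in seen:
            continue
        seen.add((s, t))
        out.append({**e, "sourceHandle": "right", "targetHandle": "left",
                    "id": e.get("id") or f"edge_{s}_{t}"})
    return out

edges = [
    {"source": "a", "target": "a"},              # self-loop: dropped
    {"source": "a", "target": "b", "id": "e1"},
    {"source": "a", "target": "b", "id": "e2"},  # duplicate: dropped
    {"source": "b", "target": "c"},              # no id: one is generated
]
print([e["id"] for e in sanitize_edges(edges)])  # ['e1', 'edge_b_c']
```

Keeping the first edge of each pair preserves the original edge ids where they exist, so only synthesized edges get the `edge_{s}_{t}` form.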


@@ -0,0 +1,297 @@
#!/usr/bin/env python3
"""
Duplicate 「知你客服13号」 as 「知你客服14号」:
- **Canvas**: same treatment as the agent-13 script (drop self-loops/duplicate edges,
  layered layout, normalized left/right anchors).
- **Tools**: on top of agent 13's set (http_request, file_read, file_write, system_info),
  add the platform's registered built-in tools: text_analyze, datetime, math_calculate,
  json_process, database_query, adb_log (aligned with `tools_bootstrap`).
- **Prompt**: append the agent-14 extended-tool notes and discipline after agent 13's prompt.
If an agent named 「知你客服14号」 already exists, only its workflow and description are
updated (no new agent is created).
Usage:
    cd backend && .\\venv\\Scripts\\python.exe scripts/create_zhini_kefu_14.py
Environment variables: PLATFORM_BASE_URL, PLATFORM_USERNAME, PLATFORM_PASSWORD,
    SOURCE_AGENT_NAME (default 知你客服13号), TARGET_NAME (default 知你客服14号)
"""
from __future__ import annotations
import copy
import json
import os
import sys
from collections import defaultdict
from typing import Any, Dict, List, Optional, Tuple
import requests
BASE = os.getenv("PLATFORM_BASE_URL", "http://127.0.0.1:8037").rstrip("/")
USER = os.getenv("PLATFORM_USERNAME", "admin")
PWD = os.getenv("PLATFORM_PASSWORD", "123456")
SOURCE_NAME = os.getenv("SOURCE_AGENT_NAME", "知你客服13号")
TARGET_NAME = os.getenv("TARGET_NAME", "知你客服14号")
# Matches the registration list in app.core.tools_bootstrap.ensure_builtin_tools_registered (all built-in tools)
TOOLS_V14: List[str] = [
"http_request",
"file_read",
"file_write",
"text_analyze",
"datetime",
"math_calculate",
"system_info",
"json_process",
"database_query",
"adb_log",
]
PROMPT_V14_MARKER = "【知你客服 14 号 · 扩展工具】"
PROMPT_V14_EXTRA = f"""
{PROMPT_V14_MARKER}
在 13 号既有能力与纪律之上,可使用下列额外工具(按需调用,避免无关刷屏;仍以 **单行 JSON** 收尾):
【text_analyze】文本分析:`text` 为正文,`operation` 为 `count`(字数/行数等统计)、`keywords`(简单词频)、`summary`(取前几句摘要)。
【datetime】日期时间:`operation` 常用 `now`;`format` 为 strftime 格式串(可选)。
【math_calculate】数学计算:`expression` 为安全算术表达式(如 `2+2*3`、`sqrt(16)`),勿编造结果,以工具返回为准。
【json_process】JSON 处理:`json_string` + `operation` 为 `parse` | `stringify` | `validate`。
【database_query】只读 SQL:**仅允许 SELECT**。未指定数据源时使用平台默认库;若需指定外部数据源可传 `data_source_id`。不得编造查询结果;大表注意 `timeout`(秒)。
【adb_log】Android 日志:依赖运行环境已安装 **adb** 且设备可用;`command` 等参数按工具 schema。仅在用户明确需要拉取/分析设备日志时使用,避免滥用。
【纪律】
- 继承 13 号:同轮避免无故重复 `file_write`;勿在正文中刷屏 DSML。
- `database_query` 禁止非 SELECT;`adb_log` 需环境与权限,失败时如实说明工具返回。
"""
def _sanitize_edges(edges: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Drop self-loops, dedupe by (source, target), and normalize left/right anchors."""
seen: set = set()
out: List[Dict[str, Any]] = []
for e in edges or []:
s, t = e.get("source"), e.get("target")
if not s or not t:
continue
if s == t:
continue
key = (s, t)
if key in seen:
continue
seen.add(key)
ne = dict(e)
ne["sourceHandle"] = "right"
ne["targetHandle"] = "left"
if not ne.get("id"):
ne["id"] = f"edge_{s}_{t}"
out.append(ne)
return out
def _find_start_node_ids(nodes: List[Dict[str, Any]]) -> List[str]:
ids: List[str] = []
for n in nodes or []:
nid = n.get("id") or ""
nt = (n.get("type") or (n.get("data") or {}).get("type") or "").lower()
if nt == "start" or nid in ("start", "start-1") or str(nid).startswith("start-"):
ids.append(nid)
return ids
def _compute_ranks(
nodes: List[Dict[str, Any]], edges: List[Dict[str, Any]]
) -> Dict[str, int]:
node_ids = [n["id"] for n in nodes if n.get("id")]
start_ids = _find_start_node_ids(nodes)
incoming: Dict[str, int] = {nid: 0 for nid in node_ids}
for e in edges:
s, t = e.get("source"), e.get("target")
if not s or not t or s == t:
continue
if t in incoming:
incoming[t] += 1
if not start_ids:
start_ids = [nid for nid in node_ids if incoming.get(nid, 0) == 0] or ([node_ids[0]] if node_ids else [])
rank: Dict[str, int] = {s: 0 for s in start_ids}
nmax = max(len(nodes), 8)
for _ in range(nmax + 5):
updated = False
for e in edges:
s, t = e.get("source"), e.get("target")
if not s or not t or s == t:
continue
if s not in rank:
continue
nv = rank[s] + 1
if t not in rank or rank[t] < nv:
rank[t] = nv
updated = True
if not updated:
break
max_r = max(rank.values(), default=0)
for nid in node_ids:
if nid not in rank:
rank[nid] = max_r + 1
max_r += 1
return rank
def _apply_layered_positions(nodes: List[Dict[str, Any]], ranks: Dict[str, int]) -> None:
layers: Dict[int, List[str]] = defaultdict(list)
for nid, r in ranks.items():
layers[r].append(nid)
for r in layers:
layers[r].sort()
x0, y0 = 80.0, 140.0
x_step = 300.0
y_step = 110.0
for r in sorted(layers.keys()):
ids = layers[r]
nlen = len(ids)
y_base = y0 - (nlen - 1) * y_step / 2.0
for j, nid in enumerate(ids):
for node in nodes:
if node.get("id") != nid:
continue
pos = node.setdefault("position", {})
pos["x"] = x0 + r * x_step
pos["y"] = y_base + j * y_step
break
def improve_workflow_layout_and_edges(wf: Dict[str, Any]) -> Tuple[int, int]:
"""Return (number of self-loops removed, number of duplicate edges removed)."""
nodes = wf.get("nodes") or []
raw_edges = wf.get("edges") or []
loops = sum(
1
for e in raw_edges
if e.get("source") and e.get("target") and e.get("source") == e.get("target")
)
clean = _sanitize_edges(raw_edges)
removed_dup = len(raw_edges) - len(clean) - loops
wf["edges"] = clean
ranks = _compute_ranks(nodes, clean)
_apply_layered_positions(nodes, ranks)
return loops, max(0, removed_dup)
def _patch_llm_unified(wf: dict, base_prompt: Optional[str] = None) -> None:
for n in wf.get("nodes") or []:
if n.get("id") != "llm-unified":
continue
d = n.setdefault("data", {})
prompt = base_prompt if base_prompt else d.get("prompt") or ""
if PROMPT_V14_MARKER not in prompt:
prompt = (prompt.rstrip() + "\n" + PROMPT_V14_EXTRA).strip()
d["prompt"] = prompt
d["enable_tools"] = True
d["tools"] = list(TOOLS_V14)
d["selected_tools"] = list(TOOLS_V14)
return
print("警告: 未找到节点 llm-unified", file=sys.stderr)
def _find_agent_id_by_name(h: Dict[str, str], name: str) -> Optional[str]:
r = requests.get(f"{BASE}/api/v1/agents", params={"search": name, "limit": 50}, headers=h, timeout=30)
if r.status_code != 200:
return None
for a in r.json() or []:
if a.get("name") == name:
return a.get("id")
return None
def main() -> int:
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": USER, "password": PWD},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
if r.status_code != 200:
print("登录失败:", r.status_code, r.text[:500], file=sys.stderr)
return 1
token = r.json().get("access_token")
if not token:
print("无 access_token", file=sys.stderr)
return 1
h = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
src_id = _find_agent_id_by_name(h, SOURCE_NAME)
if not src_id:
print(f"未找到源 Agent: {SOURCE_NAME}", file=sys.stderr)
return 1
existing = _find_agent_id_by_name(h, TARGET_NAME)
if existing:
print("已存在", TARGET_NAME, "-> 仅更新工作流", existing)
new_id = existing
g = requests.get(f"{BASE}/api/v1/agents/{new_id}", headers=h, timeout=30)
if g.status_code != 200:
print("读取失败:", g.text, file=sys.stderr)
return 1
agent = g.json()
else:
dup = requests.post(
f"{BASE}/api/v1/agents/{src_id}/duplicate",
headers=h,
json={"name": TARGET_NAME},
timeout=60,
)
if dup.status_code != 201:
print("复制失败:", dup.status_code, dup.text[:800], file=sys.stderr)
return 1
new_id = dup.json()["id"]
agent = dup.json()
print("已创建副本:", new_id, TARGET_NAME)
wf = copy.deepcopy(agent["workflow_config"])
loops, dup_edges = improve_workflow_layout_and_edges(wf)
print(f"连线整理: 去掉自环 {loops} 条, 合并重复边 {dup_edges}")
g2 = requests.get(f"{BASE}/api/v1/agents/{src_id}", headers=h, timeout=30)
base_prompt = None
if g2.status_code == 200:
try:
for n in g2.json().get("workflow_config", {}).get("nodes") or []:
if n.get("id") == "llm-unified":
base_prompt = (n.get("data") or {}).get("prompt")
break
except Exception:
pass
_patch_llm_unified(wf, base_prompt=base_prompt)
desc = (
"在知你客服13号基础上:扩展内置工具为全量(含 text_analyze、datetime、math_calculate、"
"json_process、database_query、adb_log 等);画布与 13 号一致整理;输出仍为单行 JSON。"
)
up = requests.put(
f"{BASE}/api/v1/agents/{new_id}",
headers=h,
json={"description": desc, "workflow_config": wf},
timeout=120,
)
if up.status_code != 200:
print("更新失败:", up.status_code, up.text[:1200], file=sys.stderr)
return 1
print("已写入工具:", ", ".join(TOOLS_V14))
print("Agent ID:", new_id)
print(json.dumps({"id": new_id, "name": TARGET_NAME}, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())
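The `_compute_ranks` loop above is an iterative longest-path relaxation from the start nodes; a minimal standalone sketch on a small diamond graph:

```python
def compute_ranks(start_ids, edges, n_nodes):
    """Assign each node its longest-path distance (layer) from a start node,
    relaxing edges repeatedly until a fixed point, as in the script."""
    rank = {s: 0 for s in start_ids}
    for _ in range(n_nodes + 5):
        updated = False
        for s, t in edges:
            if s == t or s not in rank:
                continue
            nv = rank[s] + 1
            # Longest path wins: a node joins the deepest layer any edge puts it in.
            if t not in rank or rank[t] < nv:
                rank[t] = nv
                updated = True
        if not updated:
            break
    return rank

edges = [("start", "a"), ("start", "b"), ("a", "c"), ("b", "c"), ("c", "end")]
print(compute_ranks(["start"], edges, 5))
# {'start': 0, 'a': 1, 'b': 1, 'c': 2, 'end': 3}
```

Taking the maximum rank over all incoming edges is what keeps a join node (like `c`) to the right of every branch feeding it, which is the property the layered layout relies on.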


@@ -0,0 +1,106 @@
#!/usr/bin/env python3
"""
Duplicate 「知你客服6号」 as 「知你客服7号」 and update the LLM prompt (stronger guidance
on remembering the user's name and maintaining user_profile).
Requires the local platform to be running (default http://127.0.0.1:8037, account admin/123456).
Usage:
    cd backend && ..\\venv\\Scripts\\python.exe scripts/create_zhini_kefu_7.py
    or: python scripts/create_zhini_kefu_7.py
"""
from __future__ import annotations
import json
import os
import sys
import requests
BASE = os.getenv("PLATFORM_BASE_URL", "http://127.0.0.1:8037").rstrip("/")
SOURCE_AGENT_ID = os.getenv("ZHINI_6_AGENT_ID", "2acc84d5-814b-4d61-9703-94a4b117375f")
USER = os.getenv("PLATFORM_USERNAME", "admin")
PWD = os.getenv("PLATFORM_PASSWORD", "123456")
NEW_NAME = "知你客服7号"
NEW_DESC = (
"在知你客服6号工作流基础上,配合引擎修复多轮记忆:"
"对话历史写入真实助手回复、合并 user_profile(含姓名);"
"LLM 提示词强调用户姓名与 user_profile 的维护。"
)
LLM_PROMPT = """你是客服助手。根据「用户当前输入」「已知用户信息」「相关历史(检索)」和「最近几轮」完成:
1)判断意图;
2)生成一句自然、有帮助的回复;
3)【强制】只要用户说出或暗示自己的姓名、昵称,必须在 user_profile 里用字段 name 保存,例如用户说「我叫王小明」则 JSON 必须包含 "user_profile":{"name":"王小明"}(若已有其它字段则合并,不要丢字段);
4)若用户问「我叫什么」「你还记得我名字吗」等,必须根据「已知用户信息」里的 user_profile.name 与对话历史回答;若已有 name 则禁止说「还不知道」。
只输出一行合法 JSON,不要 markdown。格式示例:
{"intent":"greeting","reply":"你好王小明!","user_profile":{"name":"王小明"}}
用户输入:{{user_input}}
已知用户信息:{{memory.user_profile}}
相关历史(检索到的):{{memory.relevant_from_retrieval}}
最近几轮:{{memory.recent_turns}}
要求reply 简洁自然200 字以内user_profile 为对象,至少包含 name当用户自我介绍时"""
def main() -> int:
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": USER, "password": PWD},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
if r.status_code != 200:
print("登录失败:", r.status_code, r.text[:500], file=sys.stderr)
return 1
token = r.json().get("access_token")
if not token:
print("无 access_token", file=sys.stderr)
return 1
h = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
dup = requests.post(
f"{BASE}/api/v1/agents/{SOURCE_AGENT_ID}/duplicate",
headers=h,
json={"name": NEW_NAME},
timeout=30,
)
if dup.status_code != 201:
print("复制失败:", dup.status_code, dup.text[:800], file=sys.stderr)
return 1
new_id = dup.json()["id"]
print("已创建副本:", new_id, NEW_NAME)
g = requests.get(f"{BASE}/api/v1/agents/{new_id}", headers=h, timeout=30)
if g.status_code != 200:
print("读取 Agent 失败:", g.text, file=sys.stderr)
return 1
agent = g.json()
wf = agent["workflow_config"]
nodes = wf.get("nodes") or []
for n in nodes:
if n.get("id") == "llm-unified":
n.setdefault("data", {})["prompt"] = LLM_PROMPT
break
up = requests.put(
f"{BASE}/api/v1/agents/{new_id}",
headers=h,
json={
"description": NEW_DESC,
"workflow_config": wf,
},
timeout=60,
)
if up.status_code != 200:
print("更新失败:", up.status_code, up.text[:800], file=sys.stderr)
return 1
print("已更新描述与 llm-unified 提示词")
print("Agent ID:", new_id)
print(json.dumps({"id": new_id, "name": NEW_NAME}, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())

@@ -0,0 +1,125 @@
#!/usr/bin/env python3
"""
Duplicate "知你客服7号" as "知你客服8号". On top of No. 7's multi-turn memory, this one documents
the platform's "persistent memory": Cache keys user_memory_* are mirrored into the MySQL table
persistent_user_memories (requires MEMORY_PERSIST_DB_ENABLED=true).
Requires the platform to be running locally (default http://127.0.0.1:8037); the account is
configurable via environment variables.
Usage:
    cd backend && .\\venv\\Scripts\\python.exe scripts/create_zhini_kefu_8.py
"""
from __future__ import annotations
import json
import os
import sys
import requests
BASE = os.getenv("PLATFORM_BASE_URL", "http://127.0.0.1:8037").rstrip("/")
# Duplicates from No. 7 by default; override via this environment variable
SOURCE_AGENT_ID = os.getenv("ZHINI_7_AGENT_ID", "688c2c41-dcd1-4285-b193-6bed00c485c2")
USER = os.getenv("PLATFORM_USERNAME", "admin")
PWD = os.getenv("PLATFORM_PASSWORD", "123456")
NEW_NAME = "知你客服8号"
NEW_DESC = (
"在知你客服7号基础上面向「永久记忆」工作流仍为 user_memory_{user_id} 读写;"
"引擎将记忆同步至 MySQL跨 Redis TTL、服务重启仍保留"
"调用时请固定传入 user_id部署需开启 MEMORY_PERSIST_DB_ENABLED。"
)
LLM_PROMPT = """你是客服助手。根据「用户当前输入」「已知用户信息」「相关历史(检索)」和「最近几轮」完成:
1判断意图
2生成一句自然、有帮助的回复
3【强制】只要用户说出或暗示自己的姓名、昵称必须在 user_profile 里用字段 name 保存,例如用户说「我叫王小明」则 JSON 必须包含 "user_profile":{"name":"王小明"}(若已有其它字段则合并,不要丢字段);
4若用户问「我叫什么」「你还记得我名字吗」等必须根据「已知用户信息」里的 user_profile.name 与对话历史回答;若已有 name 则禁止说「还不知道」。
5系统会在后台持久化用户画像与近期对话请始终基于「已知用户信息」与「最近几轮」作答避免与用户已提供信息矛盾。
只输出一行合法 JSON不要 markdown。格式示例
{"intent":"greeting","reply":"你好王小明!","user_profile":{"name":"王小明"}}
用户输入:{{user_input}}
已知用户信息:{{memory.user_profile}}
相关历史(检索到的):{{memory.relevant_from_retrieval}}
最近几轮:{{memory.recent_turns}}
要求reply 简洁自然200 字以内user_profile 为对象,至少包含 name当用户自我介绍时"""
def _patch_cache_nodes_for_memory(wf: dict) -> None:
"""为 Cache 节点设置更长对话窗口与较长 Redis TTL真正永久存储由引擎写 MySQL。"""
nodes = wf.get("nodes") or []
for n in nodes:
if n.get("type") != "cache":
continue
data = n.setdefault("data", {})
op = data.get("operation", "get")
if op == "set":
data["max_history_length"] = 40
data["ttl"] = 604800 # 7 天热缓存;冷数据仍可从 DB 拉回
elif op == "get":
data["ttl"] = 604800
def main() -> int:
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": USER, "password": PWD},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
if r.status_code != 200:
print("登录失败:", r.status_code, r.text[:500], file=sys.stderr)
return 1
token = r.json().get("access_token")
if not token:
print("无 access_token", file=sys.stderr)
return 1
h = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
dup = requests.post(
f"{BASE}/api/v1/agents/{SOURCE_AGENT_ID}/duplicate",
headers=h,
json={"name": NEW_NAME},
timeout=30,
)
if dup.status_code != 201:
print("复制失败:", dup.status_code, dup.text[:800], file=sys.stderr)
return 1
new_id = dup.json()["id"]
print("已创建副本:", new_id, NEW_NAME)
g = requests.get(f"{BASE}/api/v1/agents/{new_id}", headers=h, timeout=30)
if g.status_code != 200:
print("读取 Agent 失败:", g.text, file=sys.stderr)
return 1
agent = g.json()
wf = agent["workflow_config"]
nodes = wf.get("nodes") or []
for n in nodes:
if n.get("id") == "llm-unified":
n.setdefault("data", {})["prompt"] = LLM_PROMPT
break
_patch_cache_nodes_for_memory(wf)
up = requests.put(
f"{BASE}/api/v1/agents/{new_id}",
headers=h,
json={
"description": NEW_DESC,
"workflow_config": wf,
},
timeout=60,
)
if up.status_code != 200:
print("更新失败:", up.status_code, up.text[:800], file=sys.stderr)
return 1
print("已更新描述、llm-unified 提示词,并为 Cache 节点设置 max_history_length/ttl可选")
print("Agent ID:", new_id)
print(json.dumps({"id": new_id, "name": NEW_NAME}, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())

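Alongside the longer Redis TTL, the memory value keeps only a rolling window of recent messages (`max_history_length`, trimmed as `new_hist[-max_len:]` in the workflow code). The trimming step in isolation:

```python
def trim_history(history: list, max_len: int = 40) -> list:
    """Keep only the most recent max_len messages (the Cache node's rolling window)."""
    return history[-max_len:] if len(history) > max_len else history

# 100 messages shrink to the newest 40; short histories pass through untouched
trimmed = trim_history(list(range(100)))
```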
@@ -0,0 +1,243 @@
#!/usr/bin/env python3
"""
Duplicate "知你客服8号" as "知你客服9号", hardening the "summary + retrieval" path:
1. Context building (code-build-context): longer recent-turn window, inject conversation_summary, and combine vector hits with keyword matches pulled from older history.
2. Summary path (code-build-memory-value): merge the summary branch into the full conversation_history (append, instead of keeping only 2 entries) and write conversation_summary back.
3. cache-update-summary: set value explicitly to the "memory" expression so the whole input_data is not dumped into Redis.
4. Vector writes: attach metadata.user_id to every turn; engine-side retrieval already filters by user_id.
Requires a platform login; the default source ID for No. 8 is the previous run's result and can be overridden via ZHINI_8_AGENT_ID.
After deploying, restart Celery/API so the engine's vector filtering logic is loaded.
"""
from __future__ import annotations
import json
import os
import sys
import requests
BASE = os.getenv("PLATFORM_BASE_URL", "http://127.0.0.1:8037").rstrip("/")
SOURCE_AGENT_ID = os.getenv("ZHINI_8_AGENT_ID", "d7b64bf6-c8e3-4dc7-befc-03a98d5ff741")
USER = os.getenv("PLATFORM_USERNAME", "admin")
PWD = os.getenv("PLATFORM_PASSWORD", "123456")
NEW_NAME = "知你客服9号"
NEW_DESC = (
"在知你客服8号基础上强化摘要与检索"
"远期要点写入 conversation_summary"
"当轮上下文含「近期对话 + 摘要 + 向量片段 + 关键词相关历史」;"
"向量库写入带 user_id 元数据,引擎检索按用户隔离。"
"仍依赖 MEMORY_PERSIST_DB_ENABLED 与固定 user_id。"
)
LLM_PROMPT = """你是客服助手。根据「用户当前输入」「已知用户信息」「远期摘要」「相关历史(检索)」和「最近几轮」完成:
1判断意图
2生成一句自然、有帮助的回复
3【强制】用户说出或暗示姓名、昵称时必须在 user_profile.name 保存;合并已有字段勿丢失;
4用户问「我叫什么」等时必须依据 user_profile.name 与对话/摘要回答;已有 name 时禁止说「还不知道」;
5「远期摘要」概括更早话题「相关历史」可能含向量命中或关键词命中的旧轮次请结合使用。
只输出一行合法 JSON不要 markdown。格式示例
{"intent":"greeting","reply":"你好!","user_profile":{"name":"小明"}}
用户输入:{{user_input}}
已知用户信息:{{memory.user_profile}}
远期摘要:{{memory.conversation_summary}}
相关历史(检索到的):{{memory.relevant_from_retrieval}}
最近几轮:{{memory.recent_turns}}
要求reply 简洁自然200 字以内user_profile 为对象。"""
CODE_BUILD_CONTEXT = r"""left = input_data.get('left') or {}
right = input_data.get('right') or []
if not isinstance(right, list):
right = []
mem = left.get('memory') or {}
hist = mem.get('conversation_history') or []
if not isinstance(hist, list):
hist = []
summary = mem.get('conversation_summary') or ''
recent_n = 16
recent = hist[-recent_n:] if len(hist) > recent_n else hist
recent_str = '\n'.join(f"{x.get('role', '')}: {x.get('content', '')}" for x in recent)
vec_str = '\n'.join((rec.get('text') or rec.get('content') or '') for rec in right)
query = (left.get('user_input') or left.get('query') or '').strip()
older = hist[:-recent_n] if len(hist) > recent_n else []
def _tok(s):
s = str(s)
ch = {c for c in s if '\u4e00' <= c <= '\u9fff'}
wd = set(s.lower().replace('\n', ' ').split())
return ch | wd
qt = _tok(query) if query else set()
scored = []
for m in older:
c = str(m.get('content', ''))
if not c:
continue
sc = len(qt & _tok(c)) if qt else 0
if sc > 0:
scored.append((sc, str(m.get('role', '')), c[:240]))
scored.sort(key=lambda x: -x[0])
kw_lines = [f"{role}: {text}" for _, role, text in scored[:6]]
kw_str = '\n'.join(kw_lines)
relevant_str = vec_str.strip()
if kw_str:
if relevant_str:
relevant_str = relevant_str + '\n---\n关键词相关历史\n' + kw_str
else:
relevant_str = '关键词相关历史:\n' + kw_str
result = {
'user_input': left.get('user_input') or left.get('query') or '',
'memory': {
'user_profile': mem.get('user_profile') or {},
'conversation_summary': summary,
'relevant_from_retrieval': relevant_str,
'recent_turns': recent_str,
},
'query': left.get('query') or '',
'user_id': left.get('user_id'),
}
"""
CODE_BUILD_MEMORY_VALUE = r"""left = input_data.get('left') or {}
right_out = input_data.get('right') or {}
summary = ''
if isinstance(right_out, dict):
summary = right_out.get('output') or right_out.get('result') or ''
if not isinstance(summary, str):
summary = str(summary or '')
summary = summary.strip()
mem = left.get('memory') or {}
user_input = left.get('user_input') or left.get('query') or ''
reply = left.get('right') or ''
if isinstance(reply, dict):
reply = reply.get('right') or reply.get('content') or str(reply)
profile_update = left.get('user_profile_update') or {}
if not isinstance(profile_update, dict):
profile_update = {}
user_profile = dict(mem.get('user_profile') or {}, **profile_update)
ts = datetime.now().isoformat()
old_hist = mem.get('conversation_history') or []
if not isinstance(old_hist, list):
old_hist = []
new_hist = old_hist + [
{'role': 'user', 'content': user_input, 'timestamp': ts},
{'role': 'assistant', 'content': str(reply or ''), 'timestamp': ts},
]
max_len = 40
if len(new_hist) > max_len:
new_hist = new_hist[-max_len:]
prev_sum = (mem.get('conversation_summary') or '').strip()
conversation_summary = summary if summary else prev_sum
memory_value = {
'conversation_summary': conversation_summary,
'conversation_history': new_hist,
'user_profile': user_profile,
'context': mem.get('context') or {},
}
result = {
'memory': memory_value,
'user_id': left.get('user_id'),
'query': left.get('query'),
'user_input': user_input,
'right': reply,
'user_profile_update': profile_update,
}
"""
CODE_BUILD_TURN_FOR_VECTOR = r"""reply = input_data.get('right') or ''
if isinstance(reply, dict):
reply = reply.get('right') or reply.get('content') or str(reply)
query = input_data.get('query') or ''
user_id = str(input_data.get('user_id') or 'default')
raw = (user_id + '\n' + str(query) + '\n' + str(reply)).encode('utf-8', errors='ignore')
doc_id = 'turn_' + hashlib.sha256(raw).hexdigest()[:24]
text = '用户:' + str(query) + '\n助手' + str(reply)
result = {
'text': text,
'user_id': user_id,
'id': doc_id,
'metadata': {'user_id': user_id},
}
"""
def _patch_nodes(wf: dict) -> None:
nodes = wf.get("nodes") or []
for n in nodes:
nid = n.get("id")
if nid == "llm-unified":
n.setdefault("data", {})["prompt"] = LLM_PROMPT
elif nid == "code-build-context":
n.setdefault("data", {})["code"] = CODE_BUILD_CONTEXT
elif nid == "code-build-memory-value":
n.setdefault("data", {})["code"] = CODE_BUILD_MEMORY_VALUE
elif nid == "code-build-turn-for-vector":
n.setdefault("data", {})["code"] = CODE_BUILD_TURN_FOR_VECTOR
elif nid == "cache-update-summary":
d = n.setdefault("data", {})
d["value"] = "memory"
elif nid == "transform-for-vector-upsert":
m = n.setdefault("data", {}).setdefault("mapping", {})
m["metadata"] = "{{left.metadata}}"
def main() -> int:
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": USER, "password": PWD},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
if r.status_code != 200:
print("登录失败:", r.status_code, r.text[:500], file=sys.stderr)
return 1
token = r.json().get("access_token")
if not token:
print("无 access_token", file=sys.stderr)
return 1
h = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
dup = requests.post(
f"{BASE}/api/v1/agents/{SOURCE_AGENT_ID}/duplicate",
headers=h,
json={"name": NEW_NAME},
timeout=30,
)
if dup.status_code != 201:
print("复制失败:", dup.status_code, dup.text[:800], file=sys.stderr)
return 1
new_id = dup.json()["id"]
print("已创建副本:", new_id, NEW_NAME)
g = requests.get(f"{BASE}/api/v1/agents/{new_id}", headers=h, timeout=30)
if g.status_code != 200:
print("读取 Agent 失败:", g.text, file=sys.stderr)
return 1
agent = g.json()
wf = agent["workflow_config"]
_patch_nodes(wf)
up = requests.put(
f"{BASE}/api/v1/agents/{new_id}",
headers=h,
json={"description": NEW_DESC, "workflow_config": wf},
timeout=60,
)
if up.status_code != 200:
print("更新失败:", up.status_code, up.text[:800], file=sys.stderr)
return 1
print("已更新LLM 提示、code-build-context / memory-value / vector-turn、cache-update-summary.value、upsert.metadata")
print("Agent ID:", new_id)
print(json.dumps({"id": new_id, "name": NEW_NAME}, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())

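The keyword fallback in CODE_BUILD_CONTEXT above scores older turns by token overlap, where a "token" is any CJK character plus any whitespace-split lowercase word, so it works for mixed Chinese/English queries without a segmenter. The scoring logic in isolation:

```python
def _tok(s):
    """Tokens = individual CJK characters plus whitespace-split lowercase words."""
    s = str(s)
    ch = {c for c in s if '\u4e00' <= c <= '\u9fff'}
    wd = set(s.lower().replace('\n', ' ').split())
    return ch | wd

def score(query: str, content: str) -> int:
    """Overlap count used to rank older history turns against the query."""
    return len(_tok(query) & _tok(content))
```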
@@ -0,0 +1,37 @@
"""单次执行后打印 cache-query 与 llm-unified 输出,用于排查记忆。"""
import json
import requests
B = "http://127.0.0.1:8037"
AID = "688c2c41-dcd1-4285-b193-6bed00c485c2"
UID = "debug_uid_fresh_99"
MSG = "我叫李小红"
r = requests.post(
B + "/api/v1/auth/login",
data={"username": "admin", "password": "123456"},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
h = {"Authorization": f"Bearer {r.json()['access_token']}", "Content-Type": "application/json"}
eid = requests.post(
B + "/api/v1/executions",
headers=h,
json={"agent_id": AID, "input_data": {"query": MSG, "USER_INPUT": MSG, "user_id": UID}},
timeout=30,
).json()["id"]
import time
for _ in range(90):
d = requests.get(f"{B}/api/v1/executions/{eid}", headers=h, timeout=60).json()
if d["status"] not in ("pending", "running"):
break
time.sleep(0.6)
od = d.get("output_data") or {}
nr = od.get("node_results") or {}
cq = nr.get("cache-query") or {}
llm = nr.get("llm-unified") or {}
print("cache-query memory user_profile:", (cq.get("memory") or {}).get("user_profile"))
print("llm output:", (llm.get("output") or "")[:600])

@@ -0,0 +1,153 @@
"""
E2E for 知你客服11号: a normal chat turn plus a request to fetch a URL (triggers http_request).
Requires the API, Celery, the LLM, and outbound network access to the test URL.
By default this first restarts the local Celery worker (same as e2e_zhini7) to load the engine
with re/hashlib injected into code nodes.
Skip the restart by setting the environment variable E2E_RESTART_CELERY=0.
Usage: cd backend && .\\venv\\Scripts\\python.exe scripts/e2e_zhini11_test.py
"""
from __future__ import annotations
import json
import os
import subprocess
import sys
import time
import uuid
from pathlib import Path
BACKEND_DIR = Path(__file__).resolve().parents[1]
VENV_PY = BACKEND_DIR / "venv" / "Scripts" / "python.exe"
API_BASE = os.environ.get("API_BASE", "http://127.0.0.1:8037")
AGENT_NAME = os.environ.get("E2E_AGENT_NAME", "知你客服11号")
# Small JSON payload, well suited to testing GET
TEST_URL = os.environ.get(
"E2E_TEST_URL",
"https://jsonplaceholder.typicode.com/posts/1",
)
def _restart_celery() -> None:
ps = (
"Get-CimInstance Win32_Process | "
"Where-Object { $_.CommandLine -match 'celery_app' } | "
"ForEach-Object { Stop-Process -Id $_.ProcessId -Force -ErrorAction SilentlyContinue }"
)
subprocess.run(
["powershell", "-NoProfile", "-Command", ps],
cwd=str(BACKEND_DIR),
capture_output=True,
text=True,
)
time.sleep(2)
if not VENV_PY.is_file():
print("未找到 venv Python跳过启动 Celery", file=sys.stderr)
return
popen_kw: dict = {
"cwd": str(BACKEND_DIR),
"stdout": subprocess.DEVNULL,
"stderr": subprocess.STDOUT,
}
if sys.platform == "win32":
popen_kw["creationflags"] = subprocess.CREATE_NEW_PROCESS_GROUP # type: ignore[attr-defined]
subprocess.Popen(
[
str(VENV_PY),
"-m",
"celery",
"-A",
"app.core.celery_app",
"worker",
"--loglevel=info",
"--pool=threads",
"--concurrency=8",
],
**popen_kw,
)
print("已启动新 Celery Worker等待就绪…")
time.sleep(4)
def main() -> int:
os.chdir(BACKEND_DIR)
sys.path.insert(0, str(BACKEND_DIR))
if os.environ.get("E2E_RESTART_CELERY", "1").strip().lower() not in ("0", "false", "no"):
_restart_celery()
import httpx
from app.core.database import SessionLocal
from app.core.security import create_access_token
from app.models.agent import Agent
from app.models.user import User
db = SessionLocal()
try:
agent = db.query(Agent).filter(Agent.name == AGENT_NAME).first()
if not agent:
print(f"未找到「{AGENT_NAME}", file=sys.stderr)
return 1
owner = db.query(User).filter(User.id == agent.user_id).first()
user = owner or db.query(User).first()
if not user:
print("无用户", file=sys.stderr)
return 1
token = create_access_token(data={"sub": user.id, "username": user.username})
headers = {"Authorization": f"Bearer {token}"}
uid = f"e2e_z11_{uuid.uuid4().hex[:10]}"
print(f"agent={agent.id} user_id={uid} test_url={TEST_URL}\n")
def poll(client: httpx.Client, eid: str, timeout: float = 420.0) -> dict:
t0 = time.time()
while time.time() - t0 < timeout:
r = client.get(f"/api/v1/executions/{eid}", headers=headers)
r.raise_for_status()
d = r.json()
st = d.get("status")
if st == "completed":
return d
if st == "failed":
print("failed:", d.get("error_message"), file=sys.stderr)
raise RuntimeError("执行失败")
time.sleep(1.5)
raise TimeoutError("超时")
def reply_text(out: dict) -> str:
od = out.get("output_data") or {}
if isinstance(od, dict):
r = od.get("result")
if isinstance(r, str):
return r[:800]
return json.dumps(od, ensure_ascii=False)[:800]
rounds = [
"我的名字叫测试员",
f"请用工具访问这个网址并简要说明返回里 title 或主要内容是什么(只回答要点):{TEST_URL}",
"我叫什么名字?",
]
with httpx.Client(base_url=API_BASE, timeout=420.0) as client:
for i, q in enumerate(rounds, 1):
r = client.post(
"/api/v1/executions",
json={"agent_id": str(agent.id), "input_data": {"query": q, "user_id": uid}},
headers=headers,
)
if r.status_code >= 400:
print(r.text, file=sys.stderr)
r.raise_for_status()
eid = r.json()["id"]
print(f"--- 第{i}轮 execution={eid} ---")
out = poll(client, eid)
print(f"Q: {q[:120]}...")
print(f"A: {reply_text(out)}\n")
print("完成")
finally:
db.close()
return 0
if __name__ == "__main__":
raise SystemExit(main())

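The `poll()` helper above recurs in every E2E script. The same retry/terminal-state logic with the HTTP call injected, so it can be exercised without a live API (`poll_status` and `fetch` are illustrative names):

```python
import time

def poll_status(fetch, timeout: float = 10.0, interval: float = 0.0):
    """Call fetch() until the status leaves pending/running; raise on failure or timeout."""
    t0 = time.time()
    while time.time() - t0 < timeout:
        d = fetch()
        st = d.get("status")
        if st == "completed":
            return d
        if st == "failed":
            raise RuntimeError(d.get("error_message") or "execution failed")
        time.sleep(interval)
    raise TimeoutError("polling timed out")

# Simulate one running poll followed by completion
states = iter([{"status": "running"}, {"status": "completed", "id": "e1"}])
done = poll_status(lambda: next(states))
```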
@@ -0,0 +1,99 @@
"""一次调用「知你客服12号」创建仓库根下 123.md相对路径 123.md。默认不重启 Celery。"""
from __future__ import annotations
import json
import os
import sys
import time
import uuid
from pathlib import Path
BACKEND_DIR = Path(__file__).resolve().parents[1]
REPO_ROOT = BACKEND_DIR.parent
API_BASE = os.environ.get("API_BASE", "http://127.0.0.1:8037")
AGENT_NAME = os.environ.get("E2E_AGENT_NAME", "知你客服12号")
REL_PATH = "123.md"
FILE_CONTENT = "# 123\ne2e zhini12 123.md marker\n"
def main() -> int:
os.chdir(BACKEND_DIR)
sys.path.insert(0, str(BACKEND_DIR))
import httpx
from app.core.database import SessionLocal
from app.core.security import create_access_token
from app.models.agent import Agent
from app.models.user import User
db = SessionLocal()
try:
agent = db.query(Agent).filter(Agent.name == AGENT_NAME).first()
if not agent:
print(f"未找到「{AGENT_NAME}", file=sys.stderr)
return 1
owner = db.query(User).filter(User.id == agent.user_id).first()
user = owner or db.query(User).first()
if not user:
print("无用户", file=sys.stderr)
return 1
token = create_access_token(data={"sub": user.id, "username": user.username})
headers = {"Authorization": f"Bearer {token}"}
uid = f"e2e123_{uuid.uuid4().hex[:10]}"
q = (
f"创建 123.md。请用 file_write相对路径 {REL_PATH}(工作区根下),"
f"content 为 {json.dumps(FILE_CONTENT, ensure_ascii=False)}mode 为 w。"
"reply 中写出 file_write 返回的真实 JSON。最后一行单行 JSONintent、reply、user_profile。"
)
print(f"agent={agent.id} ({AGENT_NAME}) user_id={uid}")
print(f"目标文件: {(REPO_ROOT / REL_PATH).resolve()}")
def poll(client: httpx.Client, eid: str, timeout: float = 300.0) -> dict:
t0 = time.time()
while time.time() - t0 < timeout:
r = client.get(f"/api/v1/executions/{eid}", headers=headers)
r.raise_for_status()
d = r.json()
st = d.get("status")
if st == "completed":
return d
if st == "failed":
print("failed:", d.get("error_message"), file=sys.stderr)
raise RuntimeError("执行失败")
time.sleep(1.5)
raise TimeoutError("超时")
with httpx.Client(base_url=API_BASE, timeout=300.0) as client:
r = client.post(
"/api/v1/executions",
json={"agent_id": str(agent.id), "input_data": {"query": q, "user_id": uid}},
headers=headers,
)
if r.status_code >= 400:
print(r.text, file=sys.stderr)
r.raise_for_status()
eid = r.json()["id"]
print(f"execution={eid}")
out = poll(client, eid)
od = out.get("output_data") or {}
result = od.get("result", od)
print("\n--- API result (截断 2000 字符) ---\n")
print(str(result)[:2000])
abs_file = (REPO_ROOT / REL_PATH).resolve()
if not abs_file.is_file():
print(f"\n[FAIL] 磁盘未找到: {abs_file}", file=sys.stderr)
return 2
body = abs_file.read_text(encoding="utf-8", errors="replace")
print(f"\n[OK] 文件: {abs_file}")
print("--- 内容 ---\n", body[:800])
if "e2e zhini12 123.md marker" not in body:
print("\n[WARN] 未找到预期标记字符串", file=sys.stderr)
print("\n完成")
return 0
finally:
db.close()
if __name__ == "__main__":
raise SystemExit(main())

@@ -0,0 +1,184 @@
"""
Live test via the 知你客服12号 agent: create bbb.md under user_data (i.e. D:\\aaa\\aiagent\\user_data\\bbb.md).
Dependencies: API (default 8037), Redis, Celery, LLM (the engine must already prefer the End node
over vector-upsert; otherwise the UI result may still be wrong even though file_write already ran).
Usage:
    cd backend
    .\\venv\\Scripts\\python.exe scripts\\e2e_zhini12_bbb_md.py
Environment variables:
    API_BASE             default http://127.0.0.1:8037
    E2E_AGENT_NAME       default 知你客服12号
    E2E_REL_FILE         default user_data/bbb.md
    E2E_FILE_CONTENT     content to write; the default includes a marker line for verification
    E2E_RESTART_CELERY   default 1; set 0 to skip restarting the worker
    E2E_USE_ABSPATH_MSG  if 1, the user message additionally asks for the absolute path D:\\aaa\\aiagent\\user_data\\bbb.md
"""
from __future__ import annotations
import json
import os
import subprocess
import sys
import time
import uuid
from pathlib import Path
BACKEND_DIR = Path(__file__).resolve().parents[1]
VENV_PY = BACKEND_DIR / "venv" / "Scripts" / "python.exe"
API_BASE = os.environ.get("API_BASE", "http://127.0.0.1:8037")
AGENT_NAME = os.environ.get("E2E_AGENT_NAME", "知你客服12号")
REL_PATH = os.environ.get("E2E_REL_FILE", "user_data/bbb.md")
FILE_CONTENT = os.environ.get(
"E2E_FILE_CONTENT",
"# bbb\n\nzhini12 e2e bbb marker\n",
)
def _restart_celery() -> None:
ps = (
"Get-CimInstance Win32_Process | "
"Where-Object { $_.CommandLine -match 'celery_app' } | "
"ForEach-Object { Stop-Process -Id $_.ProcessId -Force -ErrorAction SilentlyContinue }"
)
subprocess.run(
["powershell", "-NoProfile", "-Command", ps],
cwd=str(BACKEND_DIR),
capture_output=True,
text=True,
)
time.sleep(2)
if not VENV_PY.is_file():
print("未找到 venv Python跳过启动 Celery", file=sys.stderr)
return
kw: dict = {"cwd": str(BACKEND_DIR), "stdout": subprocess.DEVNULL, "stderr": subprocess.STDOUT}
if sys.platform == "win32":
kw["creationflags"] = subprocess.CREATE_NEW_PROCESS_GROUP # type: ignore[attr-defined]
popen_env = os.environ.copy()
if os.environ.get("E2E_LLM_TOOL_CHOICE", "required").strip().lower() not in (
"0",
"false",
"no",
"auto",
):
popen_env["LLM_TOOL_CHOICE"] = "required"
subprocess.Popen(
[
str(VENV_PY),
"-m",
"celery",
"-A",
"app.core.celery_app",
"worker",
"--loglevel=info",
"--pool=threads",
"--concurrency=8",
],
env=popen_env,
**kw,
)
print("已启动 Celery等待就绪…")
time.sleep(4)
def main() -> int:
os.chdir(BACKEND_DIR)
sys.path.insert(0, str(BACKEND_DIR))
    # Live file_write test: with tool_choice=auto some models never issue tool_calls and only emit fabricated JSON text, so the E2E requires at least one function call by default
if os.environ.get("E2E_LLM_TOOL_CHOICE", "required").strip().lower() not in ("0", "false", "no", "auto"):
os.environ["LLM_TOOL_CHOICE"] = "required"
if os.environ.get("E2E_RESTART_CELERY", "1").strip().lower() not in ("0", "false", "no"):
_restart_celery()
import httpx
from app.core.database import SessionLocal
from app.core.security import create_access_token
from app.models.agent import Agent
from app.models.user import User
db = SessionLocal()
try:
agent = db.query(Agent).filter(Agent.name == AGENT_NAME).first()
if not agent:
print(f"未找到「{AGENT_NAME}", file=sys.stderr)
return 1
owner = db.query(User).filter(User.id == agent.user_id).first()
user = owner or db.query(User).first()
if not user:
print("无用户", file=sys.stderr)
return 1
token = create_access_token(data={"sub": user.id, "username": user.username})
headers = {"Authorization": f"Bearer {token}"}
uid = f"bbb12_{uuid.uuid4().hex[:10]}"
abs_win = str((BACKEND_DIR.parent / REL_PATH.replace("/", os.sep)).resolve())
use_abs = os.environ.get("E2E_USE_ABSPATH_MSG", "").strip().lower() in ("1", "true", "yes")
path_hint = (
f"绝对路径 {json.dumps(abs_win, ensure_ascii=False)}"
if use_abs
else f"相对路径 {REL_PATH}(相对工作区根)"
)
_basename = os.path.basename(REL_PATH.replace("\\", "/"))
q = (
f"请在 D:\\\\aaa\\\\aiagent\\\\user_data 目录下创建 {_basename}"
f"必须通过工具 file_write 写入:优先使用 {path_hint}"
f"content 为 {json.dumps(FILE_CONTENT, ensure_ascii=False)}mode 为 w。"
f"file_write 返回的 JSON 必须原样体现在你最终 reply 的可读说明里(含 success 与 file_path"
f"最后一行仍输出单行 JSONintent、reply、user_profile。"
)
print(f"agent={agent.id} ({AGENT_NAME}) user_id={uid}\n目标文件(解析后): {abs_win}\n")
def poll(client: httpx.Client, eid: str, timeout: float = 300.0) -> dict:
t0 = time.time()
while time.time() - t0 < timeout:
r = client.get(f"/api/v1/executions/{eid}", headers=headers)
r.raise_for_status()
d = r.json()
st = d.get("status")
if st == "completed":
return d
if st == "failed":
print("failed:", d.get("error_message"), file=sys.stderr)
raise RuntimeError("执行失败")
time.sleep(1.5)
raise TimeoutError("超时")
with httpx.Client(base_url=API_BASE, timeout=300.0) as client:
r = client.post(
"/api/v1/executions",
json={"agent_id": str(agent.id), "input_data": {"query": q, "user_id": uid}},
headers=headers,
)
if r.status_code >= 400:
print(r.text, file=sys.stderr)
r.raise_for_status()
eid = r.json()["id"]
print(f"execution={eid}")
out = poll(client, eid)
od = out.get("output_data") or {}
result = od.get("result", od)
print("\n--- API output_data.result (截断 1500 字符) ---\n")
print(str(result)[:1500])
abs_file = Path(abs_win).resolve()
if not abs_file.is_file():
print(f"\n[FAIL] 磁盘未找到: {abs_file}", file=sys.stderr)
print("若 API 已 completed可能是模型未触发 file_write或路径/权限问题;可看 Celery 日志「执行工具 file_write」。", file=sys.stderr)
return 2
body = abs_file.read_text(encoding="utf-8", errors="replace")
print(f"\n[OK] 文件存在: {abs_file}\n--- 内容 ---\n{body}\n---")
marker = os.environ.get("E2E_MARKER", "zhini12 e2e bbb marker")
if marker and marker not in body and FILE_CONTENT.strip() and marker in FILE_CONTENT:
print(f"[WARN] 未在文件中发现预期标记「{marker}」,可能内容与 E2E_FILE_CONTENT 不一致", file=sys.stderr)
finally:
db.close()
print("\n完成")
return 0
if __name__ == "__main__":
raise SystemExit(main())

@@ -0,0 +1,239 @@
"""
E2E for 知你客服12号: user phrasing "create bbbb.md under D:\\aaa\\aiagent\\user_data",
verifying: /health built-in tools, that the written file lands inside the workspace (a user_data
subpath), and optionally file_write traces in node_results.
Usage (from the backend directory):
    .\\venv\\Scripts\\python.exe scripts\\e2e_zhini12_bbbb_md.py
Environment variables: API_BASE, E2E_AGENT_NAME, E2E_RESTART_CELERY, E2E_LLM_TOOL_CHOICE.
Note: user_data/bbbb.md and the content/marker constants are fixed; E2E_REL_FILE,
E2E_FILE_CONTENT and E2E_MARKER are deliberately ignored so stale shell values cannot skew the test.
Default: E2E_LLM_TOOL_CHOICE=auto.
"""
from __future__ import annotations
import json
import os
import subprocess
import sys
import time
import uuid
from pathlib import Path
BACKEND_DIR = Path(__file__).resolve().parents[1]
REPO_ROOT = BACKEND_DIR.parent
USER_DATA_ROOT = (REPO_ROOT / "user_data").resolve()
VENV_PY = BACKEND_DIR / "venv" / "Scripts" / "python.exe"
API_BASE = os.environ.get("API_BASE", "http://127.0.0.1:8037")
AGENT_NAME = os.environ.get("E2E_AGENT_NAME", "知你客服12号")
# Always test bbbb.md; do not read E2E_REL_FILE (PowerShell sessions often carry stale ccc/bbb values)
REL_PATH = "user_data/bbbb.md"
# Fixed content and marker (the E2E_FILE_CONTENT / E2E_MARKER env vars are ignored to avoid stale PowerShell values)
FILE_CONTENT = "# bbbb\n\ne2e bbbb permission marker\n"
E2E_MARKER = "e2e bbbb permission marker"
def _restart_celery() -> None:
ps = (
"Get-CimInstance Win32_Process | "
"Where-Object { $_.CommandLine -match 'celery_app' } | "
"ForEach-Object { Stop-Process -Id $_.ProcessId -Force -ErrorAction SilentlyContinue }"
)
subprocess.run(
["powershell", "-NoProfile", "-Command", ps],
cwd=str(BACKEND_DIR),
capture_output=True,
text=True,
)
time.sleep(2)
if not VENV_PY.is_file():
print("未找到 venv Python跳过启动 Celery", file=sys.stderr)
return
kw: dict = {"cwd": str(BACKEND_DIR), "stdout": subprocess.DEVNULL, "stderr": subprocess.STDOUT}
if sys.platform == "win32":
kw["creationflags"] = subprocess.CREATE_NEW_PROCESS_GROUP # type: ignore[attr-defined]
popen_env = os.environ.copy()
if os.environ.get("E2E_LLM_TOOL_CHOICE", "auto").strip().lower() not in (
"0",
"false",
"no",
"auto",
):
popen_env["LLM_TOOL_CHOICE"] = "required"
subprocess.Popen(
[
str(VENV_PY),
"-m",
"celery",
"-A",
"app.core.celery_app",
"worker",
"--loglevel=info",
"--pool=threads",
"--concurrency=8",
],
env=popen_env,
**kw,
)
print("已启动 Celery等待就绪…")
time.sleep(4)
def _health_check(client) -> bool:
r = client.get(f"{API_BASE.rstrip('/')}/health", timeout=15.0)
r.raise_for_status()
data = r.json()
print("\n--- GET /health ---")
print(json.dumps(data, ensure_ascii=False, indent=2)[:2500])
checks = data.get("checks")
if not checks:
print(
"\n[INFO] /health 无 checks 字段(可能 API 未更新);请以 Worker 日志「内置工具就绪」为准。",
file=sys.stderr,
)
return True
ok = checks.get("builtin_tools_ready") and checks.get("file_agent_core_ready")
if not ok:
print(
"\n[WARN] API 进程 builtin_tools 未完全就绪。请同步重启 API 与 Celery。",
file=sys.stderr,
)
return bool(ok)
def _path_allowed(abs_file: Path) -> tuple[bool, str]:
"""file_write 合法路径:须在仓库根下,且本次要求落在 user_data 下。"""
try:
f = abs_file.resolve()
repo = REPO_ROOT.resolve()
except OSError as e:
return False, str(e)
try:
f.relative_to(repo)
except ValueError:
return False, f"文件不在仓库根内: {f} vs root {repo}"
try:
f.relative_to(USER_DATA_ROOT)
except ValueError:
return False, f"文件不在 user_data 下: {f} vs {USER_DATA_ROOT}"
return True, ""
def _scan_file_write_in_node_results(od: dict) -> None:
nr = od.get("node_results") or {}
hits = []
for nid, payload in nr.items():
if not isinstance(payload, dict):
continue
out = payload.get("output")
s = json.dumps(out, ensure_ascii=False) if not isinstance(out, str) else out
if "file_write" in s and ("success" in s.lower() or "file_path" in s):
hits.append((nid, s[:400]))
print("\n--- node_results 中含 file_write 线索的节点 ---")
if not hits:
print("(未扫到明显 JSON可能结果只在 LLM 正文或日志中)")
for nid, frag in hits:
print(f" {nid}: {frag}...")
def main() -> int:
os.chdir(BACKEND_DIR)
sys.path.insert(0, str(BACKEND_DIR))
if os.environ.get("E2E_LLM_TOOL_CHOICE", "auto").strip().lower() not in ("0", "false", "no", "auto"):
os.environ["LLM_TOOL_CHOICE"] = "required"
if os.environ.get("E2E_RESTART_CELERY", "1").strip().lower() not in ("0", "false", "no"):
_restart_celery()
import httpx
from app.core.database import SessionLocal
from app.core.security import create_access_token
from app.models.agent import Agent
from app.models.user import User
db = SessionLocal()
try:
with httpx.Client(timeout=30.0) as hclient:
_health_check(hclient)
agent = db.query(Agent).filter(Agent.name == AGENT_NAME).first()
if not agent:
print(f"未找到「{AGENT_NAME}", file=sys.stderr)
return 1
owner = db.query(User).filter(User.id == agent.user_id).first()
user = owner or db.query(User).first()
if not user:
print("无用户", file=sys.stderr)
return 1
token = create_access_token(data={"sub": user.id, "username": user.username})
headers = {"Authorization": f"Bearer {token}"}
uid = f"bbbb12_{uuid.uuid4().hex[:10]}"
abs_win = str((REPO_ROOT / REL_PATH.replace("/", os.sep)).resolve())
_basename = os.path.basename(REL_PATH.replace("\\", "/"))
q = (
f"请在 D:\\\\aaa\\\\aiagent\\\\user_data 目录下创建 {_basename}"
f"必须通过工具 file_write 写入:优先使用 相对路径 {REL_PATH}(相对工作区根);"
f"content 为 {json.dumps(FILE_CONTENT, ensure_ascii=False)}mode 为 w。"
f"file_write 返回的 JSON 必须原样体现在你最终 reply 的可读说明里(含 success 与 file_path"
f"最后一行仍输出单行 JSONintent、reply、user_profile。"
)
print(f"\nagent={agent.id} ({AGENT_NAME}) user_id={uid}")
print(f"目标文件(解析后): {abs_win}")
print(f"权限校验: 须位于 {USER_DATA_ROOT}")
def poll(client: httpx.Client, eid: str, timeout: float = 300.0) -> dict:
t0 = time.time()
while time.time() - t0 < timeout:
r = client.get(f"/api/v1/executions/{eid}", headers=headers)
r.raise_for_status()
d = r.json()
st = d.get("status")
if st == "completed":
return d
if st == "failed":
print("failed:", d.get("error_message"), file=sys.stderr)
raise RuntimeError("执行失败")
time.sleep(1.5)
raise TimeoutError("超时")
with httpx.Client(base_url=API_BASE, timeout=300.0) as client:
r = client.post(
"/api/v1/executions",
json={"agent_id": str(agent.id), "input_data": {"query": q, "user_id": uid}},
headers=headers,
)
if r.status_code >= 400:
print(r.text, file=sys.stderr)
r.raise_for_status()
eid = r.json()["id"]
print(f"\nexecution={eid}")
out = poll(client, eid)
od = out.get("output_data") or {}
result = od.get("result", od)
print("\n--- API output_data.result (截断 1500 字符) ---\n")
print(str(result)[:1500])
_scan_file_write_in_node_results(od)
abs_file = Path(abs_win).resolve()
allowed, reason = _path_allowed(abs_file)
if not allowed:
print(f"\n[FAIL] 路径权限校验: {reason}", file=sys.stderr)
return 3
if not abs_file.is_file():
print(f"\n[FAIL] 磁盘未找到: {abs_file}", file=sys.stderr)
print("若 completed可能未触发 file_write查 Celery「执行工具 file_write」。", file=sys.stderr)
return 2
body = abs_file.read_text(encoding="utf-8", errors="replace")
print(f"\n[OK] 文件存在且路径合法: {abs_file}\n--- 内容 ---\n{body}\n---")
if E2E_MARKER and E2E_MARKER not in body and FILE_CONTENT.strip() and E2E_MARKER in FILE_CONTENT:
print(f"[WARN] 未在文件中发现标记「{E2E_MARKER}", file=sys.stderr)
finally:
db.close()
print("\n[OK] E2E bbbb.md + 路径权限 通过\n完成")
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,142 @@
"""
知你客服12号一轮对话触发 file_write相对路径 user_data/e2e_12.md
可选重启 CeleryE2E_RESTART_CELERY=1默认 1
用法: cd backend && .\\venv\\Scripts\\python.exe scripts/e2e_zhini12_file_test.py
"""
from __future__ import annotations
import json
import os
import subprocess
import sys
import time
import uuid
from pathlib import Path
BACKEND_DIR = Path(__file__).resolve().parents[1]
VENV_PY = BACKEND_DIR / "venv" / "Scripts" / "python.exe"
API_BASE = os.environ.get("API_BASE", "http://127.0.0.1:8037")
AGENT_NAME = os.environ.get("E2E_AGENT_NAME", "知你客服12号")
REL_PATH = os.environ.get("E2E_REL_FILE", "user_data/e2e_12.md")
FILE_CONTENT = os.environ.get("E2E_FILE_CONTENT", "e2e zhini12 ok\n")
def _restart_celery() -> None:
ps = (
"Get-CimInstance Win32_Process | "
"Where-Object { $_.CommandLine -match 'celery_app' } | "
"ForEach-Object { Stop-Process -Id $_.ProcessId -Force -ErrorAction SilentlyContinue }"
)
subprocess.run(
["powershell", "-NoProfile", "-Command", ps],
cwd=str(BACKEND_DIR),
capture_output=True,
text=True,
)
time.sleep(2)
if not VENV_PY.is_file():
print("未找到 venv Python跳过启动 Celery", file=sys.stderr)
return
kw: dict = {"cwd": str(BACKEND_DIR), "stdout": subprocess.DEVNULL, "stderr": subprocess.STDOUT}
if sys.platform == "win32":
kw["creationflags"] = subprocess.CREATE_NEW_PROCESS_GROUP # type: ignore[attr-defined]
subprocess.Popen(
[
str(VENV_PY),
"-m",
"celery",
"-A",
"app.core.celery_app",
"worker",
"--loglevel=info",
"--pool=threads",
"--concurrency=8",
],
**kw,
)
print("已启动 Celery等待就绪…")
time.sleep(4)
def main() -> int:
os.chdir(BACKEND_DIR)
sys.path.insert(0, str(BACKEND_DIR))
if os.environ.get("E2E_RESTART_CELERY", "1").strip().lower() not in ("0", "false", "no"):
_restart_celery()
import httpx
from app.core.database import SessionLocal
from app.core.security import create_access_token
from app.models.agent import Agent
from app.models.user import User
db = SessionLocal()
try:
agent = db.query(Agent).filter(Agent.name == AGENT_NAME).first()
if not agent:
print(f"未找到「{AGENT_NAME}", file=sys.stderr)
return 1
owner = db.query(User).filter(User.id == agent.user_id).first()
user = owner or db.query(User).first()
if not user:
print("无用户", file=sys.stderr)
return 1
token = create_access_token(data={"sub": user.id, "username": user.username})
headers = {"Authorization": f"Bearer {token}"}
uid = f"e2e12_{uuid.uuid4().hex[:10]}"
q = (
f"请调用 file_writefile_path 用相对路径 {REL_PATH}content 用 {json.dumps(FILE_CONTENT, ensure_ascii=False)}"
"mode 用 w。完成后在 reply 里写出 file_write 返回的原始 JSON 字符串(不要编造)。"
"最终只输出一行 JSONintent、reply、user_profile。"
)
print(f"agent={agent.id} user_id={uid}\nQ: {q[:200]}...")
def poll(client: httpx.Client, eid: str, timeout: float = 300.0) -> dict:
t0 = time.time()
while time.time() - t0 < timeout:
r = client.get(f"/api/v1/executions/{eid}", headers=headers)
r.raise_for_status()
d = r.json()
st = d.get("status")
if st == "completed":
return d
if st == "failed":
print("failed:", d.get("error_message"), file=sys.stderr)
raise RuntimeError("执行失败")
time.sleep(1.5)
raise TimeoutError("超时")
with httpx.Client(base_url=API_BASE, timeout=300.0) as client:
r = client.post(
"/api/v1/executions",
json={"agent_id": str(agent.id), "input_data": {"query": q, "user_id": uid}},
headers=headers,
)
if r.status_code >= 400:
print(r.text, file=sys.stderr)
r.raise_for_status()
eid = r.json()["id"]
print(f"execution={eid}")
out = poll(client, eid)
od = out.get("output_data") or {}
result = od.get("result", od)
print("--- API result (截断) ---")
print(str(result)[:1200])
root = BACKEND_DIR.parent
abs_file = (root / REL_PATH.replace("/", os.sep)).resolve()
if abs_file.is_file():
body = abs_file.read_text(encoding="utf-8", errors="replace")
print(f"\n磁盘文件存在: {abs_file}\n内容:\n{body!r}")
else:
print(f"\n磁盘未找到: {abs_file}", file=sys.stderr)
return 2
finally:
db.close()
print("\n完成")
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,171 @@
"""
重启 Celery WorkerWindows并对「知你客服7号」做两轮 API 测试:
1我的名字叫小七 2我叫什么名字
需:本机 API 已监听(默认 8037、Redis、LLM 配置可用。
"""
from __future__ import annotations
import json
import os
import subprocess
import sys
import time
import uuid
from pathlib import Path
BACKEND_DIR = Path(__file__).resolve().parents[1]
VENV_PY = BACKEND_DIR / "venv" / "Scripts" / "python.exe"
API_BASE = os.environ.get("API_BASE", "http://127.0.0.1:8037")
def _restart_celery() -> None:
ps = (
"Get-CimInstance Win32_Process | "
"Where-Object { $_.CommandLine -match 'celery_app' } | "
"ForEach-Object { Stop-Process -Id $_.ProcessId -Force -ErrorAction SilentlyContinue }"
)
subprocess.run(
["powershell", "-NoProfile", "-Command", ps],
cwd=str(BACKEND_DIR),
capture_output=True,
text=True,
)
time.sleep(2)
if not VENV_PY.is_file():
print("未找到 venv Python跳过启动 Celery", file=sys.stderr)
return
popen_kw: dict = {
"cwd": str(BACKEND_DIR),
"stdout": subprocess.DEVNULL,
"stderr": subprocess.STDOUT,
}
if sys.platform == "win32":
popen_kw["creationflags"] = subprocess.CREATE_NEW_PROCESS_GROUP # type: ignore[attr-defined]
subprocess.Popen(
[
str(VENV_PY),
"-m",
"celery",
"-A",
"app.core.celery_app",
"worker",
"--loglevel=info",
"--pool=threads",
"--concurrency=8",
],
**popen_kw,
)
print("已启动新 Celery Worker线程池等待就绪…")
time.sleep(4)
def _touch_api_reload() -> None:
"""若 uvicorn 带 --reload触发重载。"""
main_py = BACKEND_DIR / "app" / "main.py"
if main_py.is_file():
main_py.touch()
print("已 touch app/main.py 以触发 API 热重载(若启用 --reload")
def main() -> int:
os.chdir(BACKEND_DIR)
sys.path.insert(0, str(BACKEND_DIR))
_restart_celery()
_touch_api_reload()
import httpx
from app.core.database import SessionLocal
from app.core.security import create_access_token
from app.models.agent import Agent
from app.models.user import User
db = SessionLocal()
try:
agent = db.query(Agent).filter(Agent.name == "知你客服7号").first()
if not agent:
print("数据库中未找到名为「知你客服7号」的 Agent", file=sys.stderr)
return 1
owner = db.query(User).filter(User.id == agent.user_id).first()
user = owner or db.query(User).first()
if not user:
print("无可用用户,无法签发 JWT", file=sys.stderr)
return 1
token = create_access_token(data={"sub": user.id, "username": user.username})
headers = {"Authorization": f"Bearer {token}"}
uid = f"e2e_xiaoqi_{uuid.uuid4().hex[:10]}"
print(f"agent_id={agent.id} owner={user.username} user_id={uid}")
print(f"请确认工作流 Cache 键为 user_memory_{{{{user_id}}}},请求中已带 user_id={uid}\n")
def poll(client: httpx.Client, execution_id: str, timeout: float = 300.0) -> dict:
t0 = time.time()
while time.time() - t0 < timeout:
r = client.get(f"/api/v1/executions/{execution_id}", headers=headers)
r.raise_for_status()
data = r.json()
st = data.get("status")
if st == "completed":
return data
if st == "failed":
print("error:", data.get("error_message"), file=sys.stderr)
raise RuntimeError("执行失败")
time.sleep(1)
raise TimeoutError("等待执行完成超时")
with httpx.Client(base_url=API_BASE, timeout=300.0) as client:
r = client.post(
"/api/v1/executions",
json={
"agent_id": str(agent.id),
"input_data": {"query": "我的名字叫小七", "user_id": uid},
},
headers=headers,
)
if r.status_code >= 400:
print(r.text, file=sys.stderr)
r.raise_for_status()
eid1 = r.json()["id"]
print("第一轮 execution_id:", eid1)
out1 = poll(client, eid1)
print("第一轮 output_data:", json.dumps(out1.get("output_data"), ensure_ascii=False)[:1200])
r = client.post(
"/api/v1/executions",
json={
"agent_id": str(agent.id),
"input_data": {"query": "我叫什么名字?", "user_id": uid},
},
headers=headers,
)
r.raise_for_status()
eid2 = r.json()["id"]
print("\n第二轮 execution_id:", eid2)
out2 = poll(client, eid2)
print("第二轮 output_data:", json.dumps(out2.get("output_data"), ensure_ascii=False)[:1200])
            # Check the Redis memory key
try:
from app.core.config import settings
import redis as redis_lib
url = getattr(settings, "REDIS_URL", None) or "redis://localhost:6379/0"
rc = redis_lib.from_url(url, decode_responses=True)
key = f"user_memory_{uid}"
raw = rc.get(key)
print(f"\nRedis 键 {key}:", "存在" if raw else "不存在")
if raw:
try:
mem = json.loads(raw)
print("memory.user_profile:", mem.get("user_profile"))
except Exception as ex:
print("解析 Redis 值失败:", ex)
except Exception as ex:
print("Redis 检查跳过:", ex)
finally:
db.close()
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,116 @@
"""
对「知你客服9号」做多轮 API 测试(不默认重启 Celery避免打断本机 Worker
轮次:①自我介绍姓名 ②陈述偏好(供摘要/关键词召回)③闲聊 ④问姓名 + 问偏好
用法:
cd backend && .\\venv\\Scripts\\python.exe scripts/e2e_zhini9_test.py
环境变量: API_BASE, E2E_AGENT_NAME默认 知你客服9号
"""
from __future__ import annotations
import json
import os
import sys
import time
import uuid
BACKEND_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))  # backend/, robust even if "scripts" appears elsewhere in the path
API_BASE = os.environ.get("API_BASE", "http://127.0.0.1:8037")
AGENT_NAME = os.environ.get("E2E_AGENT_NAME", "知你客服9号")
def main() -> int:
os.chdir(BACKEND_DIR)
sys.path.insert(0, BACKEND_DIR)
import httpx
from app.core.database import SessionLocal
from app.core.security import create_access_token
from app.models.agent import Agent
from app.models.user import User
db = SessionLocal()
try:
agent = db.query(Agent).filter(Agent.name == AGENT_NAME).first()
if not agent:
print(f"数据库中未找到「{AGENT_NAME}", file=sys.stderr)
return 1
owner = db.query(User).filter(User.id == agent.user_id).first()
user = owner or db.query(User).first()
if not user:
print("无可用用户", file=sys.stderr)
return 1
token = create_access_token(data={"sub": user.id, "username": user.username})
headers = {"Authorization": f"Bearer {token}"}
uid = f"e2e_z9_{uuid.uuid4().hex[:10]}"
print(f"agent_id={agent.id} name={agent.name} user_id={uid}\n")
def poll(client: httpx.Client, execution_id: str, timeout: float = 300.0) -> dict:
t0 = time.time()
while time.time() - t0 < timeout:
r = client.get(f"/api/v1/executions/{execution_id}", headers=headers)
r.raise_for_status()
data = r.json()
st = data.get("status")
if st == "completed":
return data
if st == "failed":
print("error:", data.get("error_message"), file=sys.stderr)
raise RuntimeError("执行失败")
time.sleep(1)
raise TimeoutError("超时")
def extract_reply(out: dict) -> str:
od = out.get("output_data") or {}
if isinstance(od, dict):
r = od.get("result")
if isinstance(r, str):
return r[:500]
return json.dumps(od, ensure_ascii=False)[:500]
rounds = [
"我的名字叫阿九",
"记住:我最爱吃火锅,不喜欢甜食。",
"今天天气不错吧?",
"我叫什么名字?你还记得我喜欢吃什么吗?",
]
with httpx.Client(base_url=API_BASE, timeout=300.0) as client:
for i, q in enumerate(rounds, 1):
r = client.post(
"/api/v1/executions",
json={"agent_id": str(agent.id), "input_data": {"query": q, "user_id": uid}},
headers=headers,
)
if r.status_code >= 400:
print(r.text, file=sys.stderr)
r.raise_for_status()
eid = r.json()["id"]
out = poll(client, eid)
print(f"--- 第{i}轮 ---\nQ: {q}\nA: {extract_reply(out)}\n")
try:
from app.core.config import settings
import redis as redis_lib
url = getattr(settings, "REDIS_URL", None) or "redis://localhost:6379/0"
rc = redis_lib.from_url(url, decode_responses=True)
key = f"user_memory_{uid}"
raw = rc.get(key)
print(f"Redis {key}:", "" if raw else "")
if raw:
mem = json.loads(raw)
print("conversation_summary 前120字:", str(mem.get("conversation_summary", ""))[:120])
print("user_profile:", mem.get("user_profile"))
print("history 条数:", len(mem.get("conversation_history") or []))
except Exception as ex:
print("Redis 检查:", ex)
finally:
db.close()
print("完成")
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,57 @@
"""
代码节点 code-build-memory-value 首行「from datetime import datetime」在受限 exec 下会触发 __import__ 不可用。
引擎已在 __builtins__ 中注入 datetime 类,去掉该行即可。
默认修补名为「知你客服12号」的 Agent也可用环境变量 PATCH_AGENT_NAME=ALL 修补所有含该节点的 Agent。
"""
from __future__ import annotations
import os
import sys
BACKEND = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, BACKEND)
from sqlalchemy.orm.attributes import flag_modified
from app.core.database import SessionLocal
from app.models.agent import Agent
OLD_PREFIX = "from datetime import datetime\n"
def main() -> int:
name = os.environ.get("PATCH_AGENT_NAME", "知你客服12号")
db = SessionLocal()
try:
q = db.query(Agent)
if name.upper() != "ALL":
q = q.filter(Agent.name == name)
agents = q.all()
n_patched = 0
for a in agents:
wf = a.workflow_config
if not wf or "nodes" not in wf:
continue
touched = False
for n in wf.get("nodes") or []:
if n.get("id") != "code-build-memory-value":
continue
c = (n.get("data") or {}).get("code") or ""
if c.startswith(OLD_PREFIX):
n.setdefault("data", {})["code"] = c[len(OLD_PREFIX) :]
touched = True
if touched:
a.workflow_config = wf
flag_modified(a, "workflow_config")
n_patched += 1
print("已修补:", a.name, a.id)
db.commit()
print("合计修补 Agent 数:", n_patched)
return 0
finally:
db.close()
if __name__ == "__main__":
raise SystemExit(main())
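The behavior this patch works around can be reproduced in plain CPython: the `import` statement resolves through `__builtins__['__import__']`, so omitting that key blocks imports, while a class injected directly into `__builtins__` is reachable by plain name lookup. A minimal sketch (bare `exec`, not the platform's engine):

```python
from datetime import datetime as _dt

# datetime injected as a builtin; __import__ deliberately omitted.
env = {"__builtins__": {"datetime": _dt}}

# Name lookup falls through globals -> builtins, so no import is needed.
exec("result = datetime(2024, 1, 1).year", env)
print(env["result"])  # 2024

# The import statement needs __builtins__['__import__'] and fails without it.
try:
    exec("from datetime import datetime", {"__builtins__": {}})
except ImportError as e:
    print("blocked:", e)
```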


@@ -0,0 +1,51 @@
"""修复 code-build-context 中列表推导式 r 与前面 genexp 的 r 作用域冲突(就地更新数据库 Agent"""
from __future__ import annotations
import os
import sys
BACKEND = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, BACKEND)
from sqlalchemy.orm.attributes import flag_modified
from app.core.database import SessionLocal
from app.models.agent import Agent
OLD_VEC = "vec_str = '\\n'.join((r.get('text') or r.get('content') or '') for r in right)"
NEW_VEC = "vec_str = '\\n'.join((rec.get('text') or rec.get('content') or '') for rec in right)"
OLD_KW = 'kw_lines = [f"{r}: {t}" for _, r, t in scored[:6]]'
NEW_KW = 'kw_lines = [f"{role}: {text}" for _, role, text in scored[:6]]'
def main() -> int:
name = os.environ.get("PATCH_AGENT_NAME", "知你客服11号")
db = SessionLocal()
try:
a = db.query(Agent).filter(Agent.name == name).first()
if not a:
print("未找到", name, file=sys.stderr)
return 1
wf = a.workflow_config
for n in wf.get("nodes", []):
if n.get("id") != "code-build-context":
continue
c = n.get("data", {}).get("code", "")
c2 = c.replace(OLD_VEC, NEW_VEC).replace(OLD_KW, NEW_KW)
if c2 == c:
print("无需替换(可能已修复或内容不同)")
return 0
n.setdefault("data", {})["code"] = c2
a.workflow_config = wf
flag_modified(a, "workflow_config")
db.commit()
print("已更新", name, "code-build-context")
return 0
print("未找到 code-build-context 节点", file=sys.stderr)
return 1
finally:
db.close()
if __name__ == "__main__":
raise SystemExit(main())
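The renamed expressions (NEW_VEC / NEW_KW) can be sanity-checked in plain Python; the sample `right` (retrieval records) and `scored` (ranked history turns) below are made up to match the shapes the node receives:

```python
# Made-up sample data in the shape code-build-context receives.
right = [{"text": "likes hotpot"}, {"content": "dislikes sweets"}, {}]
scored = [(0.9, "user", "我最爱吃火锅"), (0.4, "assistant", "好的,记住了")]

# Exactly the replacement expressions written into the node.
vec_str = '\n'.join((rec.get('text') or rec.get('content') or '') for rec in right)
kw_lines = [f"{role}: {text}" for _, role, text in scored[:6]]

print(vec_str)   # "likes hotpot\ndislikes sweets\n"
print(kw_lines)  # ["user: 我最爱吃火锅", "assistant: 好的,记住了"]
```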


@@ -0,0 +1,91 @@
"""就地更新「知你客服12号」启用 system_info + 修订 LLM 提示词(工作区路径、工具反馈)。"""
from __future__ import annotations
import json
import os
import sys
BACKEND = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, BACKEND)
from sqlalchemy.orm.attributes import flag_modified
from app.core.database import SessionLocal
from app.models.agent import Agent
# TOOLS and prompt kept in sync with create_zhini_kefu_12.py
TOOLS_V12 = ["http_request", "file_read", "file_write", "system_info"]
LLM_PROMPT_V12 = """你是客服助手。根据用户输入、用户画像、助手称呼、远期摘要、检索片段与最近对话生成回复。
【工具 http_request】
- 用户给出 http(s) 链接且需要抓网页/API 时,先调用 http_request参数 url 为完整链接method 必填(一般为 GET
- 根据返回 JSON 中的 body 字段提炼要点;非 URL 问答不要无故调用。
【工具 system_info工作区路径
- 用户问「工作区路径」「能访问哪个目录」「file 根目录在哪」时,**必须调用 system_info**,用返回 JSON 里的 **local_file_workspace_root** 原样告知用户(不要用「临时目录」「无法显示」等推脱)。
【工具 file_read / file_write本地文件
- 仅当用户明确要「读文件」「写入某路径」「保存到本地文件」等时使用。
- file_read参数 file_path 可为**相对工作区根的相对路径**,或**落在工作区根之下的绝对路径**Windows 如 `D:\\...`Linux 如 `/home/...`),二者等价,由后端校验。
- file_write参数 file_path、contentmode 用 w 覆盖或 a 追加。写入前确认路径有意、避免覆盖重要文件;不要写入密钥、令牌。
- **禁止**以「不能访问 D: 盘」「只能相对路径」「工具看不到绝对路径」等理由拒绝用户:只要用户给的绝对路径以 `system_info` 返回的 `local_file_workspace_root` 为前缀(同一盘符、规范化后在其子路径下),就应**直接调用 file_write**,例如根为 `D:\\aaa\\aiagent` 时,`D:\\aaa\\aiagent\\user_data\\xxx.md` **合法**,可优先用用户原文路径或简写为相对路径 `user_data/xxx.md`。
- 路径必须落在平台允许的工作区内,否则会报错;不要尝试访问工作区外的路径。
- **禁止**假设工作区是 `/workspace` 或未经验证的目录;工作区根**只信** `local_file_workspace_root`。
- **每次调用 file_write / file_read 后,必须在最终 reply 中说明工具返回结果**:成功则写明路径与要点;失败则引用返回 JSON 中的 error 字段,不得假装已成功。
- **严禁编造工具返回**reply 中若引用 file_write/file_read/system_info 的 JSON必须与工具实际返回字符串一致可原样粘贴。禁止臆造路径例如 /tmp/...、/workspace/...)或与当前系统不符的路径;若未调用工具,禁止在 reply 里写伪造的 JSON。
【称呼规则】(与 10/11 一致)
- user_profile.name 表示用户昵称assistant_display_name 表示用户为你起的称呼。
- 用户问「你叫什么」时用 assistant_display_name若有勿把用户姓名写入 assistant_display_name。
【最终输出格式(强制)】
- 最后一条回复必须是**一行合法 JSON**,无 markdown、无代码围栏含 intent、reply、user_profile对象
上下文:
用户输入:{{user_input}}
用户画像:{{memory.user_profile}}
助手对外称呼:{{memory.assistant_display_name}}
远期摘要:{{memory.conversation_summary}}
相关历史(检索):{{memory.relevant_from_retrieval}}
最近几轮:{{memory.recent_turns}}
"""
def main() -> int:
name = os.environ.get("PATCH_AGENT_NAME", "知你客服12号")
db = SessionLocal()
try:
a = db.query(Agent).filter(Agent.name == name).first()
if not a:
print("未找到", name, file=sys.stderr)
return 1
wf = dict(a.workflow_config) if a.workflow_config else {}
nodes = list(wf.get("nodes") or [])
done = False
for i, n in enumerate(nodes):
if n.get("id") != "llm-unified":
continue
d = dict(n.get("data") or {})
d["prompt"] = LLM_PROMPT_V12
d["enable_tools"] = True
d["tools"] = list(TOOLS_V12)
d["selected_tools"] = list(TOOLS_V12)
nodes[i] = {**n, "data": d}
done = True
break
if not done:
print("未找到 llm-unified", file=sys.stderr)
return 1
wf["nodes"] = nodes
a.workflow_config = wf
flag_modified(a, "workflow_config")
db.commit()
print("已更新", name, "llm-unified: tools=", TOOLS_V12)
print(json.dumps({"name": name, "id": str(a.id)}, ensure_ascii=False))
return 0
finally:
db.close()
if __name__ == "__main__":
raise SystemExit(main())


@@ -0,0 +1,41 @@
$ErrorActionPreference = "SilentlyContinue"
$backend = "D:\aaa\aiagent\backend"
Get-CimInstance Win32_Process | Where-Object {
$_.CommandLine -and $_.CommandLine -match "celery" -and $_.CommandLine -match "celery_app"
} | ForEach-Object {
Write-Host "Stop Celery PID $($_.ProcessId)"
Stop-Process -Id $_.ProcessId -Force
}
Get-CimInstance Win32_Process | Where-Object {
$_.CommandLine -and $_.CommandLine -match "uvicorn" -and $_.CommandLine -match "app.main:app"
} | ForEach-Object {
Write-Host "Stop Uvicorn PID $($_.ProcessId)"
Stop-Process -Id $_.ProcessId -Force
}
Start-Sleep -Seconds 2
$py = Join-Path $backend "venv\Scripts\python.exe"
Write-Host "Start Uvicorn :8037 ..."
Start-Process -FilePath $py -ArgumentList @(
"-m", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8037", "--reload"
) -WorkingDirectory $backend -WindowStyle Minimized
Start-Sleep -Seconds 2
Write-Host "Start Celery worker ..."
Start-Process -FilePath $py -ArgumentList @(
"-m", "celery", "-A", "app.core.celery_app", "worker",
"--loglevel=info", "--pool=threads", "--concurrency=8"
) -WorkingDirectory $backend -WindowStyle Minimized
Start-Sleep -Seconds 3
try {
$r = Invoke-WebRequest -Uri "http://127.0.0.1:8037/health" -UseBasicParsing -TimeoutSec 15
Write-Host "health: $($r.Content)"
} catch {
Write-Host "health check failed: $($_.Exception.Message)"
}
Write-Host "Done."


@@ -0,0 +1,61 @@
#!/usr/bin/env python3
"""
测试在 D:\\aaa\\aiagent\\user_data 下创建 aaa.md与线上一致的 file_write_tool
用法(在 backend 目录):
.\\venv\\Scripts\\python.exe scripts\\test_write_user_data_aaa_md.py
可选环境变量:
TEST_MD_CONTENT 写入内容,默认一行时间戳
TEST_USE_ABSPATH 设为 1 时使用绝对路径 D:\\aaa\\aiagent\\user_data\\aaa.md否则用相对路径 user_data/aaa.md
"""
from __future__ import annotations
import asyncio
import os
import sys
from datetime import datetime, timezone
from pathlib import Path
BACKEND_DIR = Path(__file__).resolve().parents[1]
def main() -> int:
os.chdir(BACKEND_DIR)
sys.path.insert(0, str(BACKEND_DIR))
from app.services.builtin_tools import _local_file_workspace_root, file_write_tool
root = _local_file_workspace_root()
content = os.environ.get(
"TEST_MD_CONTENT",
f"# aaa\\n\\nwritten by test_write_user_data_aaa_md.py at {datetime.now(timezone.utc).isoformat()}\\n",
).replace("\\n", "\n")
if os.environ.get("TEST_USE_ABSPATH", "").strip() in ("1", "true", "yes"):
target = root / "user_data" / "aaa.md"
file_path_arg = str(target)
else:
file_path_arg = "user_data/aaa.md"
print("workspace root:", root)
print("file_path:", file_path_arg)
print("content bytes (utf-8):", len(content.encode("utf-8")))
raw = asyncio.run(file_write_tool(file_path_arg, content, "w"))
print("tool return:", raw)
resolved = (root / "user_data" / "aaa.md").resolve()
if not resolved.is_file():
print("FAIL: file missing:", resolved, file=sys.stderr)
return 2
on_disk = resolved.read_text(encoding="utf-8", errors="replace")
print("OK: on disk", resolved)
print("--- file head ---")
print(on_disk[:500])
print("---")
return 0
if __name__ == "__main__":
raise SystemExit(main())
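The dual relative/absolute path handling this script exercises boils down to a containment check against the workspace root. A hedged sketch of that check (the helper name and logic are illustrative, not the actual `builtin_tools` implementation):

```python
from pathlib import Path

def resolve_in_workspace(root: Path, file_path: str) -> Path:
    """Resolve file_path (relative to root, or absolute) and reject escapes."""
    root = root.resolve()
    p = Path(file_path)
    candidate = (p if p.is_absolute() else root / p).resolve()
    if not candidate.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"{candidate} is outside workspace {root}")
    return candidate

root = Path.cwd() / "workspace_root"
print(resolve_in_workspace(root, "user_data/aaa.md"))            # relative: inside, ok
print(resolve_in_workspace(root, str(root / "user_data/b.md")))  # absolute, still inside
```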


@@ -0,0 +1,25 @@
"""Sync the llm-unified prompt of 知你客服7号 from scripts/create_zhini_kefu_7.py into the
live workflow via the HTTP API (run from the backend directory)."""
import re

import requests
BASE = "http://127.0.0.1:8037"
AID = "688c2c41-dcd1-4285-b193-6bed00c485c2"
text = open("scripts/create_zhini_kefu_7.py", encoding="utf-8").read()
m = re.search(r'LLM_PROMPT = """(.*?)"""', text, re.S)
assert m, "LLM_PROMPT block not found in create_zhini_kefu_7.py"
prompt = m.group(1).strip()
r = requests.post(
f"{BASE}/api/v1/auth/login",
data={"username": "admin", "password": "123456"},
headers={"Content-Type": "application/x-www-form-urlencoded"},
timeout=15,
)
h = {"Authorization": f"Bearer {r.json()['access_token']}", "Content-Type": "application/json"}
g = requests.get(f"{BASE}/api/v1/agents/{AID}", headers=h, timeout=30).json()
wf = g["workflow_config"]
for n in wf["nodes"]:
if n.get("id") == "llm-unified":
n["data"]["prompt"] = prompt
break
up = requests.put(f"{BASE}/api/v1/agents/{AID}", headers=h, json={"workflow_config": wf}, timeout=60)
print(up.status_code, up.text[:300])