# Complete Testing Guide

## 📋 Pre-Test Preparation

### 1. Configure API keys

#### Option 1: Environment variable file

Create or edit `backend/.env`:

```env
# OpenAI configuration
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_BASE_URL=https://api.openai.com/v1

# DeepSeek configuration
DEEPSEEK_API_KEY=sk-your-deepseek-api-key-here
DEEPSEEK_BASE_URL=https://api.deepseek.com
```
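Before restarting, it can help to sanity-check that both keys are actually set. A minimal sketch, assuming the backend simply reads these variables from the process environment:

```python
import os

def missing_api_keys(required=("OPENAI_API_KEY", "DEEPSEEK_API_KEY")):
    """Return the names of required API-key variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Report anything still missing before restarting the backend
print(missing_api_keys())
```

An empty list means both keys are visible to the process; any name printed here will cause the corresponding provider's calls to fail.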

#### Option 2: Docker environment variables

Add to the `backend` service in `docker-compose.dev.yml`:

```yaml
environment:
  - OPENAI_API_KEY=sk-your-openai-api-key-here
  - DEEPSEEK_API_KEY=sk-your-deepseek-api-key-here
```

### 2. Restart the service

```bash
docker-compose -f docker-compose.dev.yml restart backend
```

### 3. Verify the service is running

```bash
# Check the backend service
docker-compose -f docker-compose.dev.yml ps backend

# View backend logs
docker-compose -f docker-compose.dev.yml logs --tail=20 backend
```

## 🧪 Test Steps

### Test 1: Basic functionality

#### 1.1 Log in

1. Open a browser and visit `http://101.43.95.130:8038` or `http://localhost:8038`
2. Register an account if you have not already done so
3. Log in with your username and password
4. Verify that you are redirected to the home page after login

**Expected results**:
- ✅ Login succeeds
- ✅ The workflow list page is displayed
- ✅ No errors in the browser console

#### 1.2 Create a workflow

1. Click the "Create Workflow" button
2. Enter the workflow designer
3. Verify that the canvas renders correctly

**Expected results**:
- ✅ The workflow designer loads correctly
- ✅ The node toolbox is displayed
- ✅ Nodes can be dragged onto the canvas

### Test 2: LLM nodes

#### 2.1 Create a simple workflow (OpenAI)

1. **Add nodes**:
   - Drag a "Start" node from the toolbox onto the canvas
   - Drag an "LLM" node onto the canvas
   - Drag an "End" node onto the canvas

2. **Connect the nodes**:
   - Drag from the bottom connector of "Start" to the top of "LLM"
   - Drag from the bottom of "LLM" to the top of "End"

3. **Configure the LLM node**:
   - Click the "LLM" node to select it
   - In the configuration panel on the right:
     - Provider: select "OpenAI"
     - Prompt: enter `Translate the following text into English: {input}`
     - Model: select "GPT-3.5 Turbo"
     - Temperature: 0.7
   - Click "Save Configuration"

4. **Save the workflow**:
   - Click the "Save" button at the top
   - Verify that the success notification appears

**Expected results**:
- ✅ Nodes connect correctly
- ✅ The LLM node configuration saves successfully
- ✅ The workflow saves successfully
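
The `{input}` placeholder in the prompt is filled from the workflow's input data. Assuming the node performs plain brace substitution (an assumption; the actual template engine may differ), the behavior can be sketched as:

```python
def render_prompt(template: str, inputs: dict) -> str:
    """Naive {placeholder} substitution, one pass per input key."""
    for key, value in inputs.items():
        template = template.replace("{" + key + "}", str(value))
    return template

prompt = render_prompt("Translate the following text into English: {input}",
                       {"input": "你好,世界"})
print(prompt)  # → Translate the following text into English: 你好,世界
```

If the rendered prompt still contains literal braces after execution, the input key name probably does not match the placeholder.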

#### 2.2 Execute the workflow (OpenAI)

1. **Run the workflow**:
   - Click the "Run" button
   - Enter test data (JSON):
   ```json
   {
     "input": "你好,世界"
   }
   ```
   - Click "Execute"

2. **Check the result**:
   - Wait for execution to finish
   - Review the execution result

**Expected results**:
- ✅ Execution succeeds
- ✅ The English translation is returned: "Hello, world"
- ✅ No error messages
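
Waiting for completion can also be scripted. Below is a sketch of a status-polling loop; `fetch_status` is a hypothetical stand-in for whatever call returns the execution status (this guide only documents the WebSocket endpoint, so a polled status source is an assumption):

```python
import time

def wait_for_completion(fetch_status, timeout=60.0, interval=1.0):
    """Poll fetch_status() until it reports a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("execution did not finish in time")
```

For real-time updates rather than polling, use the WebSocket approach in Test 4.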

#### 2.3 Test DeepSeek

1. **Create a new workflow or modify an existing one**:
   - Add an LLM node
   - Configure it:
     - Provider: select "DeepSeek"
     - Prompt: `Summarize the following in one sentence: {input}`
     - Model: select "DeepSeek Chat"
     - Temperature: 0.7

2. **Execute the workflow**:
   - Input data:
   ```json
   {
     "input": "Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in ways similar to human intelligence."
   }
   ```
   - Execute and review the result

**Expected results**:
- ✅ The DeepSeek call succeeds
- ✅ A one-sentence summary is returned
- ✅ No error messages

#### 2.4 Test DeepSeek Coder

1. **Configure a code-generation node**:
   - Provider: DeepSeek
   - Model: DeepSeek Coder
   - Prompt: `Write a Python function that: {input}`

2. **Execute the workflow**:
   - Input data:
   ```json
   {
     "input": "computes the nth term of the Fibonacci sequence"
   }
   ```
   - Execute and review the result

**Expected results**:
- ✅ Python code is returned
- ✅ The code is well formatted
- ✅ The function matches the request
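
For reference when judging the model's output, a correct answer might resemble the following (one of several valid implementations):

```python
def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number (0-indexed: fib(0)=0, fib(1)=1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # → 55
```

The model may instead return a recursive version; either is acceptable as long as the sequence values are correct.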

### Test 3: Complex workflows

#### 3.1 Multi-node workflow

Create the following workflow:

```
Start → LLM node (translate) → LLM node (summarize) → End
```

1. **Configure the first LLM node** (translate):
   - Provider: OpenAI
   - Prompt: `Translate the following Chinese into English: {input}`
   - Model: GPT-3.5 Turbo

2. **Configure the second LLM node** (summarize):
   - Provider: DeepSeek
   - Prompt: `Summarize the following English text in one sentence: {input}`
   - Model: DeepSeek Chat

3. **Execute the workflow**:
   - Input data (kept in Chinese, since the first node's task is Chinese-to-English translation):
   ```json
   {
     "input": "人工智能技术正在快速发展,它将在未来改变我们的生活方式。"
   }
   ```

**Expected results**:
- ✅ The first node returns an English translation
- ✅ The second node returns a summary
- ✅ Data is passed correctly between nodes

#### 3.2 Conditional-branch workflow

Create the following workflow:

```
Start → LLM node (classify) → Condition node → [True branch] → Output node
                                      ↓
                              [False branch] → Output node
```

1. **Configure the LLM node**:
   - Prompt: `Classify the sentiment of the following text (return positive or negative): {input}`

2. **Configure the condition node**:
   - Condition expression: `{input} == "positive"`

3. **Execute the workflow**:
   - Test data 1 (positive):
   ```json
   {
     "input": "The weather is lovely today"
   }
   ```
   - Test data 2 (negative):
   ```json
   {
     "input": "I'm feeling terrible today"
   }
   ```

**Expected results**:
- ✅ Positive text takes the True branch
- ✅ Negative text takes the False branch
- ✅ The condition evaluates correctly
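
A sketch of how such a condition expression could be evaluated, assuming the node substitutes `{placeholders}` and then compares against a quoted string literal (the real evaluator may support richer expressions):

```python
def substitute(template: str, variables: dict) -> str:
    """Replace each {name} placeholder with its value."""
    for key, value in variables.items():
        template = template.replace("{" + key + "}", str(value))
    return template

def evaluate_condition(expr: str, variables: dict) -> bool:
    """Evaluate a simple 'lhs == "literal"' condition; only == is supported here."""
    lhs, rhs = expr.split("==")
    lhs = substitute(lhs.strip(), variables)
    rhs = rhs.strip().strip('"')
    return lhs == rhs
```

With `{"input": "positive"}` the expression above yields `True` and the workflow takes the True branch; any other value takes the False branch.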

### Test 4: WebSocket real-time updates

#### 4.1 Test from the browser console

1. **Open the browser console** (F12)

2. **Establish a WebSocket connection**:
   ```javascript
   // First execute a workflow and note its execution_id
   // Assume the execution_id is 'your-execution-id'

   const executionId = 'your-execution-id';
   const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
   const hostname = window.location.hostname;
   const ws = new WebSocket(`${protocol}//${hostname}:8037/api/v1/ws/executions/${executionId}`);

   ws.onopen = () => {
     console.log('✅ WebSocket connection established');
   };

   ws.onmessage = (event) => {
     const message = JSON.parse(event.data);
     console.log('📨 Message received:', message);

     if (message.type === 'status') {
       console.log('Status:', message.status);
       console.log('Progress:', message.progress);
     }
   };

   ws.onerror = (error) => {
     console.error('❌ WebSocket error:', error);
   };

   ws.onclose = () => {
     console.log('🔌 WebSocket connection closed');
   };

   // Heartbeat
   setInterval(() => {
     if (ws.readyState === WebSocket.OPEN) {
       ws.send(JSON.stringify({ type: 'ping' }));
     }
   }, 30000);
   ```

3. **Execute a workflow**:
   - Run the workflow in another tab
   - Watch the console for status updates

**Expected results**:
- ✅ The WebSocket connection succeeds
- ✅ Status update messages are received
- ✅ Status moves from pending → running → completed
- ✅ The final result is received

### Test 5: Error handling

#### 5.1 Invalid API key

1. **Temporarily remove the API key**:
   ```bash
   # Comment out the API key in backend/.env
   # OPENAI_API_KEY=sk-xxx
   ```

2. **Restart the backend**:
   ```bash
   docker-compose -f docker-compose.dev.yml restart backend
   ```

3. **Execute a workflow**:
   - Use an OpenAI node
   - Execute the workflow

**Expected results**:
- ✅ An error message is returned
- ✅ The message is clear: `OpenAI API Key未配置...` ("OpenAI API Key not configured")
- ✅ The workflow execution fails without crashing the system

#### 5.2 Network errors

1. **Disconnect the network** (or point the node at an invalid API base URL)

2. **Execute a workflow**

**Expected results**:
- ✅ A network error message is returned
- ✅ The error is handled gracefully
- ✅ The system does not crash

#### 5.3 Invalid model name

1. **Configure the LLM node**:
   - Model name: `invalid-model-name`

2. **Execute the workflow**

**Expected results**:
- ✅ A "model not found" error is returned
- ✅ The error message is clear

### Test 6: Performance

#### 6.1 Concurrent execution

1. **Execute several workflows at once**:
   - Create 3-5 different workflows
   - Execute them simultaneously

**Expected results**:
- ✅ All workflows execute correctly
- ✅ They do not interfere with one another
- ✅ Execution times are reasonable

#### 6.2 Long-running workflows

1. **Create a workflow containing several LLM nodes**

2. **Execute and monitor**:
   - Observe the execution time
   - Check resource usage

**Expected results**:
- ✅ Stable over long runs
- ✅ Memory usage stays normal
- ✅ No memory leaks

## 📊 Test Checklist

### Basic functionality
- [ ] User registration and login
- [ ] Workflow creation and saving
- [ ] Node drag-and-drop and connections
- [ ] Node configuration saving

### LLM features
- [ ] OpenAI calls succeed
- [ ] DeepSeek calls succeed
- [ ] DeepSeek Coder calls succeed
- [ ] Prompt template variable substitution works
- [ ] Switching between models works
- [ ] Temperature parameter takes effect
- [ ] Max-token limit is enforced

### Workflow execution
- [ ] Simple workflows execute
- [ ] Multi-node workflows execute
- [ ] Conditional-branch workflows execute
- [ ] Data passes correctly between nodes
- [ ] Execution results are correct

### WebSocket
- [ ] Connection establishes
- [ ] Status updates arrive in real time
- [ ] Heartbeat works
- [ ] Connection closes automatically when done

### Error handling
- [ ] API key errors are handled
- [ ] Network errors are handled
- [ ] Model errors are handled
- [ ] Error messages are clear

### Performance
- [ ] Concurrent execution works
- [ ] Long runs are stable
- [ ] Resource usage is reasonable

## 🐛 Troubleshooting

### Problem 1: LLM calls fail

**Check**:
1. Is the API key configured correctly?
2. Is the network connection working?
3. Does the API account have sufficient balance?
4. Is the model name correct?

**Diagnosis**:
```bash
# Check environment variables
docker-compose -f docker-compose.dev.yml exec backend env | grep API_KEY

# View backend logs
docker-compose -f docker-compose.dev.yml logs --tail=50 backend
```

### Problem 2: WebSocket connection fails

**Check**:
1. Is the backend service running?
2. Is port 8037 open?
3. Is the firewall configured correctly?

**Diagnosis**:
```bash
# Check the backend service
docker-compose -f docker-compose.dev.yml ps backend

# Test the WebSocket handshake
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" \
  http://localhost:8037/api/v1/ws/executions/test-id
```

### Problem 3: Workflow execution hangs

**Check**:
1. Is the Celery worker running?
2. Is the Redis connection healthy?
3. Is the database connection healthy?

**Diagnosis**:
```bash
# Check the Celery worker
docker-compose -f docker-compose.dev.yml ps celery

# View Celery logs
docker-compose -f docker-compose.dev.yml logs --tail=50 celery

# Check Redis
docker-compose -f docker-compose.dev.yml exec redis redis-cli ping
```

## 📝 Test Report Template

```
Test date: 2024-XX-XX
Tester: XXX

Results:
- Basic functionality: ✅ Pass
- LLM features: ✅ Pass
- Workflow execution: ✅ Pass
- WebSocket: ✅ Pass
- Error handling: ✅ Pass
- Performance: ✅ Pass

Issues found:
1. [description]
2. [description]

Recommendations:
1. [recommendation]
2. [recommendation]
```

---

**Status**: ✅ Testing guide created
**Date**: 2024