Compare commits (2 commits): 2ee12bae8e ... c7e1952f72

Commits: 2ee12bae8e, 037a620798
TOOLS.md · new file · 105 lines · @@ -0,0 +1,105 @@
# TOOLS.md - Local Notes

Skills define _how_ tools work. This file is for _your_ specifics — the stuff that's unique to your setup.

## What Goes Here

Things like:

- Camera names and locations
- SSH hosts and aliases
- Preferred voices for TTS
- Speaker/room names
- Device nicknames
- Database connections and access methods
- Anything environment-specific

## Examples

```markdown
### Cameras

- living-room → Main area, 180° wide angle
- front-door → Entrance, motion-triggered

### SSH

- home-server → 192.168.1.100, user: admin

### TTS

- Preferred voice: "Nova" (warm, slightly British)
- Default speaker: Kitchen HomePod
```

## Why Separate?

Skills are shared. Your setup is yours. Keeping them apart means you can update skills without losing your notes, and share skills without leaking your infrastructure.
## Database Connections

### MySQL Databases

#### Online MySQL (production)

- **Description:** Configuration data for the different release versions, plus production data such as user orders and user profiles
- **Host:** bj-cdb-dh2fkqa0.sql.tencentcdb.com
- **Port:** 27751
- **Username:** read_only
- **Password:** fsdo45ijfmfmuu77$%^&
- **Access:** read-only
- **Note:** only ever read; never write or delete

#### Test MySQL (test environment)

- **Description:** Latest-version configuration data, plus internal development user data for the test environment
- **Host:** bj-cdb-8frbdwju.sql.tencentcdb.com
- **Port:** 25413
- **Username:** read_only
- **Password:** fdsfiidier^$*hjfdijjd232
- **Access:** read-only
- **Note:** only ever read; never write or delete

### PostgreSQL Databases

#### Online PostgreSQL (production user-behavior data)

- **Description:** Stores user-behavior and related data for the production environment
- **Host:** bj-postgres-16pob4sg.sql.tencentcdb.com
- **Port:** 28591
- **Username:** ai_member
- **Password:** LdfjdjL83h3h3^$&**YGG*
- **Access:** read-only
- **Note:** only ever read; never write or delete

#### Test PostgreSQL (test-environment behavior data)

- **Description:** Stores test-behavior and related data for the test environment
- **Host:** bj-postgres-642mcico.sql.tencentcdb.com
- **Port:** 21531
- **Username:** ai_member
- **Password:** dsjsLGU&%$%FG*((yy9y8
- **Access:** read-only
- **Note:** only ever read; never write or delete

### Elasticsearch (ES)

#### Test ES (test-environment service logs)

- **Description:** Stores service-log data for the test environment
- **Host:** es-o79jsx9i.public.tencentelasticsearch.com
- **Port:** 9200
- **Protocol:** https
- **Username:** elastic
- **Password:** lPLYr2!ap%^4UQb#
- **Access:** read-only
- **Note:** only ever read; never write or delete

#### Online ES (production service logs)

- **Description:** Stores service-log data for the production environment
- **Host:** es-7vd7jcu9.public.tencentelasticsearch.com
- **Port:** 9200
- **Protocol:** https
- **Username:** elastic
- **Password:** F%?QDcWes7N2WTuiYD11
- **Access:** read-only
- **Note:** only ever read; never write or delete
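For scripts that use these databases, the credentials above are best loaded from the environment rather than hard-coded. A minimal sketch, assuming `pymysql` is installed and the values are exported as `MYSQL_HOST`, `MYSQL_PORT`, `MYSQL_USER`, and `MYSQL_PASSWORD` (these variable names are illustrative, not part of this setup):

```python
import os

def mysql_conn_kwargs(prefix="MYSQL"):
    """Assemble connection kwargs from environment variables (names are illustrative)."""
    return {
        "host": os.environ[f"{prefix}_HOST"],
        "port": int(os.environ[f"{prefix}_PORT"]),
        "user": os.environ.get(f"{prefix}_USER", "read_only"),
        "password": os.environ[f"{prefix}_PASSWORD"],
    }

# conn = pymysql.connect(**mysql_conn_kwargs())  # read-only account; never write or delete
```

The same pattern works for the PostgreSQL and ES hosts with a different prefix.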
---

Add whatever helps you do your job. This is your cheat sheet.
business_knowledge/README.md · new file · 30 lines · @@ -0,0 +1,30 @@
# Business Knowledge Base

As a data analyst, keep accumulating your understanding of the company's business and data tables here.

## Directory Structure

- `sql_queries/` - Frequently used SQL queries and business-analysis templates
- `tables/` - Data-table structures and field descriptions
- `business_terms/` - Business terminology and metric definitions

## Sources

1. Feishu Wiki - Growth team's common SQL queries: https://makee-interactive.feishu.cn/wiki/XJuCwNol1iL3sYkXkXWc2QnJnMd
2. Git repo - data-extraction scripts: https://git.valavala.com/vala/llm_offline_production/src/branch/master/config_user_data_extract_and_analyze

## Collected SQL Query Documents

- [ ] 全字段大表 (all-fields wide table)
- [ ] 平均通关时长 (average clear time)
- [ ] 新增注册用户数by渠道 (new registered users by channel)
- [ ] 课程进入完成率 (course entry/completion rate)
- [ ] 账号角色年龄地址 (account/character age and address)
- [ ] 退费率 (refund rate)
- [ ] 销转学习进度 (sales-conversion learning progress)
- [ ] 班主任关注数据 (head-teacher follow-up data)
- [ ] 端内GMV (in-app GMV)
- [ ] 端内用户课程进入完成率 (in-app course entry/completion rate)
- [ ] 端内购课用户学习行为 (in-app purchasers' learning behavior)
- [ ] 转化率 (conversion rate)
- [ ] 课程ID映射 (course ID mapping)
business_knowledge/business_terms.md · new file · 49 lines · @@ -0,0 +1,49 @@
# Business Glossary

## Core Business Metrics

### Users

- **Registered user**: a user in `bi_vala_app_account` with `status = 1` and `deleted_at is NULL`
- **Test users**: specific user IDs to exclude, e.g. `id not in (51,2121)`
- **Download channel (download_channel)**: the channel through which the user downloaded the app
- **key_from**: source identifier for registration or course purchase

### Course Purchases

- **Purchase channel (sale_channel)**: the channel through which the user bought the course; numeric codes map to channel names
- **Valid order**: an order with `order_status = 3` and `pay_amount_int > 49800` (amount above 498 yuan)
- **Purchase label**: one of "no purchase", "off-platform purchase" (站外购课), "in-app purchase" (站内购课)
- **In-app purchase**: any purchase whose channel is not "off-platform" (站外)

### Characters

- **Character pay status (characer_pay_status)**: 0 = unpaid, 1 = paid (field spelled `characer` in the source data)
- **Gender (gender)**: 0 = girl, 1 = boy, anything else = 'unknow' (the literal value the SQL emits)
- **Season package (purchase_season_package)**: `'[1]'` means the season package has not been purchased

### Courses

- **Completion identifier (chapter_unique_id)**: uniquely identifies one course-completion record
- **Completion time (finish_time)**: time spent finishing the course, formatted as mm:ss
- **Course ID (course_id)**: composed of course_level-course_season-course_unit-course_lesson
- **play_status = 1**: playback completed

## Purchase Channel Mapping

| Code | Channel |
|------|---------|
| 11 | 苹果 (Apple) |
| 12 | 华为 (Huawei) |
| 13 | 小米 (Xiaomi) |
| 14 | 荣耀 (Honor) |
| 15 | 应用宝 (Yingyongbao) |
| 17 | 魅族 (Meizu) |
| 18 | VIVO |
| 19 | OPPO |
| 21 | 学而思 (Xueersi) |
| 22 | 讯飞 (iFlytek) |
| 23 | 步步高 (BBK) |
| 24 | 作业帮 (Zuoyebang) |
| 25 | 小度 (Xiaodu) |
| 26 | 希沃 (Seewo) |
| 27 | 京东方 (BOE) |
| 41 | 官网 (official website) |
| 71 | 小程序 (mini-program) |
| other | 站外 (off-platform) |
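In analysis code, the same mapping can live in a lookup table with the off-platform fallback built in. A sketch (entries transcribed from the table above; the helper name is illustrative):

```python
# Transcribed from the purchase-channel mapping table above.
SALE_CHANNEL_NAMES = {
    11: "苹果 (Apple)", 12: "华为 (Huawei)", 13: "小米 (Xiaomi)", 14: "荣耀 (Honor)",
    15: "应用宝 (Yingyongbao)", 17: "魅族 (Meizu)", 18: "VIVO", 19: "OPPO",
    21: "学而思 (Xueersi)", 22: "讯飞 (iFlytek)", 23: "步步高 (BBK)", 24: "作业帮 (Zuoyebang)",
    25: "小度 (Xiaodu)", 26: "希沃 (Seewo)", 27: "京东方 (BOE)",
    41: "官网 (official website)", 71: "小程序 (mini-program)",
}

def sale_channel_name(code):
    # Codes outside the table count as off-platform purchases (站外).
    return SALE_CHANNEL_NAMES.get(code, "站外")
```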
business_knowledge/data_tables.md · new file · 168 lines · @@ -0,0 +1,168 @@
# Data Table Reference

## Core Business Tables

### User Account Table

**Table**: `bi_vala_app_account`

**Key fields**:

- `id`: user ID
- `key_from`: registration source
- `created_at`: registration time
- `download_channel`: download channel
- `status`: account status (1 = active)
- `deleted_at`: deletion time (NULL = not deleted)

**Common filters**:

```sql
where status = 1
and id not in (51,2121) -- exclude test users
and deleted_at is NULL
```

---

### Account Detail Table

**Table**: `account_detail_info`

**Key fields**:

- `account_id`: account ID (joins to bi_vala_app_account.id)
- `login_address`: login address (formatted like "province-city")
- `phone_login_times`: number of phone logins

**Business logic**:

```sql
-- extract the city
split_part(login_address,'-',2) as login_address

-- whether the user has logged in by phone
case when phone_login_times = 0 then 0 else 1 end as phone_login
```
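For spot checks outside SQL, the same two derivations are easy to mirror in Python (a sketch; note that PostgreSQL's `split_part` is 1-indexed while `str.split` is 0-indexed):

```python
def login_city(login_address):
    """Extract the city from a 'province-city' login_address; '' if absent."""
    parts = (login_address or "").split("-")
    return parts[1] if len(parts) > 1 else ""

def phone_login_flag(phone_login_times):
    # Mirrors: case when phone_login_times = 0 then 0 else 1 end
    return 0 if phone_login_times == 0 else 1
```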
---

### Order Table

**Table**: `bi_vala_order`

**Key fields**:

- `account_id`: account ID
- `sale_channel`: purchase channel (numeric code)
- `key_from`: purchase source
- `pay_success_date`: payment success time
- `pay_amount`: payment amount
- `pay_amount_int`: payment amount (integer, in cents)
- `order_status`: order status (3 = valid order)

**Common filters**:

```sql
where order_status = 3
and pay_amount_int > 49800 -- amount above 498 yuan
```

---

### Character Table

**Table**: `bi_vala_app_character`

**Key fields**:

- `id`: character ID
- `account_id`: account ID
- `gender`: gender (0 = girl, 1 = boy)
- `birthday`: birthday (formatted like "YYYY-MM-DD")
- `purchase_season_package`: season-package purchase status
- `deleted_at`: deletion time

**Business logic**:

```sql
-- character pay status
case when purchase_season_package = '[1]' then 0 else 1 end as characer_pay_status

-- gender mapping
case when gender = 0 then 'girl'
     when gender = 1 then 'boy'
     else 'unknow'
end as gender

-- extract the birth year
case when split_part(birthday,'-',1) = '' then '0000'
     else split_part(birthday,'-',1)
end as birthday
```

---

## Course Play Record Tables (sharded)

### User Chapter Play Records

**Table**: `bi_user_chapter_play_record_0` ~ `bi_user_chapter_play_record_7`

**Notes**: stored across 8 shard tables; merge them with UNION ALL

**Key fields**:

- `user_id`: user ID
- `chapter_id`: chapter ID
- `chapter_unique_id`: unique completion identifier
- `updated_at`: update time
- `play_status`: play status (1 = completed)

**Common filters**:

```sql
where chapter_id in (55,56,57,58,59) -- specific chapters
and play_status = 1 -- playback completed
```
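Rather than writing the eight-way UNION ALL by hand, the query text can be generated. A sketch (the column list is an assumption based on the key fields above):

```python
def union_all_shards(base="bi_user_chapter_play_record", n=8,
                     cols="user_id, chapter_id, chapter_unique_id, updated_at, play_status"):
    """Build a UNION ALL query over the n shard tables of a sharded table."""
    selects = [f"select {cols} from {base}_{i}" for i in range(n)]
    return "\nunion all\n".join(selects)
```

The same helper works for `bi_user_component_play_record` by changing `base` and `cols`.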
---

### User Component Play Records

**Table**: `bi_user_component_play_record_0` ~ `bi_user_component_play_record_7`

**Notes**: stored across 8 shard tables; merge them with UNION ALL

**Key fields**:

- `chapter_unique_id`: unique completion identifier
- `interval_time`: play duration (milliseconds)

**Business logic**:

```sql
-- compute completion time (mm:ss format)
format('%s:%s',
    floor(sum(interval_time)/1000/60),
    mod((sum(interval_time)/1000),60)
) as finish_time
```
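The same mm:ss computation, mirrored in Python for spot-checking (a sketch; like the SQL above it does not zero-pad the seconds, so 125 seconds renders as 2:5):

```python
def finish_time(interval_times_ms):
    """Total play time for one chapter_unique_id, formatted as mm:ss."""
    secs = sum(interval_times_ms) // 1000
    return f"{secs // 60}:{secs % 60}"
```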
---

## Course Information Tables

### Course Unit Table

**Table**: `bi_level_unit_lesson`

**Key fields**:

- `id`: ID (joins to chapter_id)
- `course_level`: course level
- `course_season`: course season
- `course_unit`: course unit
- `course_lesson`: course lesson

**Business logic**:

```sql
-- build the course ID
format('%s-%s-%s-%s',
    course_level,
    course_season,
    course_unit,
    course_lesson
) as course_id
```

---

## Other Tables

### Account Login Table

**Table**: `account_login`

**Key fields**:

- `account_id`: account ID
- `login_date`: login date
business_knowledge/fetch_wiki_docs.py · new file · 83 lines · @@ -0,0 +1,83 @@
#!/usr/bin/env python3
"""
Fetch Feishu Wiki documents in bulk and save them into the local knowledge base.
"""

import os
from datetime import datetime

# List of wiki sub-pages
wiki_pages = [
    {"node_token": "O7QvwdY8piO8aUkhxYecA1qZnBe", "title": "全字段大表", "obj_token": "VVyWd5491o6tuqxceCVci6dVnFd"},
    {"node_token": "Y6Iywqf75iepbUkvJzLcfiUYnkg", "title": "平均通关时长", "obj_token": "EpP7d6h2SoaTyJx1lZRcXXdLnVe"},
    {"node_token": "KQihwMjO9i1zjFkqTgBcq67Snzc", "title": "新增注册用户数by渠道", "obj_token": "AzRPddp97o7To8x8VkxcFGr8nBh"},
    {"node_token": "Zt7RwfGLWiacslkO2glcheWjnwf", "title": "课程进入完成率", "obj_token": "PwIydfZcHo5eZgxi8XLcOtjOnSb"},
    {"node_token": "LTaiw3OmUi2pcckDWuNcyBIVnAd", "title": "账号角色年龄地址", "obj_token": "CUa2du2sSoNFSRxl3vFc8ucInEm"},
    {"node_token": "ZAPJwIODRiNYE5kTuNtcpSlvnIX", "title": "退费率", "obj_token": "DC1Qdhpitowt9lxxo1acEzOwnFc"},
    {"node_token": "Cb3KwPWLriG7GgkN73pcM0Idnch", "title": "销转学习进度", "obj_token": "G1p9dhK63oLWMzxyGQ8csZGMnDh"},
    {"node_token": "EBEiwQsw2iOtgekDldHcQxgwnOh", "title": "班主任关注数据", "obj_token": "NcVqdRKtrowglNxs9CocDekunje"},
    {"node_token": "BZPkwARxiixUZRk4BW9cij50nDe", "title": "端内GMV", "obj_token": "FkVCd1AruoD9xWxxVpzc16hinVh"},
    {"node_token": "AQpnwpsfOixYGtk4jf0c6t9XncG", "title": "端内用户课程进入完成率", "obj_token": "Ueu7dtgSHoNYfsxCDHmcY6E4nid"},
    {"node_token": "PyqEwXXqsiQybPkpGbscUjUFnOg", "title": "端内购课用户学习行为", "obj_token": "ZTxod4IUWo5yMexf8AHcBbpFnMg"},
    {"node_token": "OyXlwY2vyisvV1kc3HhcMyMVnTd", "title": "转化率", "obj_token": "ATJ0dfajQo5CSexQd8hc9i3pnWe"},
    {"node_token": "MWpZwV01fitaKjkCRSxckMUunRb", "title": "课程ID映射", "obj_token": "GenUdsXCloUdYhxMvxqcWBMdnhb"}
]


def safe_filename(title):
    """Build a filesystem-safe file name from a document title."""
    return "".join(c for c in title if c.isalnum() or c in (' ', '-', '_')).rstrip().replace(' ', '_')


def main():
    print("="*60)
    print("Feishu Wiki bulk document fetch")
    print("="*60)

    output_dir = "sql_queries"
    os.makedirs(output_dir, exist_ok=True)

    print(f"\n{len(wiki_pages)} documents to fetch")
    print(f"Output directory: {output_dir}")

    # Build the index file
    index_content = "# SQL Query Document Index\n\n"
    index_content += f"Created: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
    index_content += "## Documents\n\n"

    for i, page in enumerate(wiki_pages, 1):
        filename = safe_filename(page['title']) + ".md"
        filepath = os.path.join(output_dir, filename)

        print(f"\n[{i}/{len(wiki_pages)}] Processing: {page['title']}")
        print(f"  File: {filepath}")

        # Create a placeholder file
        with open(filepath, 'w', encoding='utf-8') as f:
            f.write(f"# {page['title']}\n\n")
            f.write(f"**Fetched:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
            f.write(f"**Feishu document token:** {page['obj_token']}\n\n")
            f.write("**Note:** read this document's full content with the feishu_doc tool\n\n")
            f.write("---\n\n")
            f.write("## Usage\n\n")
            f.write("Read the full document content with:\n\n")
            f.write("```bash\n")
            f.write(f"feishu_doc read {page['obj_token']}\n")
            f.write("```\n")

        # Update the index (link each entry to its placeholder file)
        index_content += f"- [{page['title']}]({filename})\n"

        print("  ✅ Placeholder file created")

    # Write the index file
    with open(os.path.join(output_dir, "README.md"), 'w', encoding='utf-8') as f:
        f.write(index_content)

    print("\n" + "="*60)
    print("✅ Initialization complete")
    print("="*60)
    print("\nNext step: read each document's content with the feishu_doc tool,")
    print("or have me fetch the full content of these documents for you.")


if __name__ == "__main__":
    main()
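One detail of `safe_filename` worth noting: Python's `str.isalnum()` is true for CJK characters, so the Chinese titles above pass through unchanged, while path separators and other punctuation are dropped. A standalone copy of the same logic for illustration:

```python
def safe_filename(title):
    # Keep alphanumerics (including CJK), spaces, '-', '_'; spaces become '_'.
    return "".join(c for c in title if c.isalnum() or c in (' ', '-', '_')).rstrip().replace(' ', '_')
```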
business_knowledge/git_scripts/CLAUDE.md · new file · 70 lines · @@ -0,0 +1,70 @@
# Project Notes

## Overview

A toolset for extracting and analyzing user data from various sources (ES, databases, etc.).

## Scripts

### export_realtime_asr.py

**Purpose**: export streaming speech ASR data

**Version**: v1.0

**Data source**:

- Elasticsearch index: `llm_realtime_asr_log`

**Configuration**:

- Set the start and end dates at the top of the script (8-digit format, e.g. 20260101)
- ES connection details come from environment variables (create a .env file)

**Dependencies**:

```
elasticsearch
pandas
openpyxl
python-dotenv
```

**How to run**:

```bash
python export_realtime_asr.py
```

**Output**:

- Output directory: `output/`
- File naming: `realtime_asr_export_{start_date}_{end_date}.xlsx`
- Excel columns: voice_id, asr_prompt, result_str, timestamp, audio_url, source

**Processing logic**:

- Reads from ES in batches via the scroll API (1000 records per batch)
- Aggregates by voice_id, keeping only voice_ids with exactly 2 records
- Takes the later of the two records' timestamps
- Builds the audio_url automatically
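The aggregation step described above (keep only voice_ids with exactly two records, use the later timestamp) can be sketched as follows; the record shape is an assumption based on the Excel columns listed, not the script's actual internals:

```python
from collections import defaultdict

def aggregate_by_voice_id(records):
    """Keep voice_ids that have exactly 2 records; use the later timestamp."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["voice_id"]].append(rec)

    out = []
    for voice_id, recs in groups.items():
        if len(recs) != 2:          # filter out anomalous voice_ids
            continue
        merged = dict(recs[0])
        merged["timestamp"] = max(r["timestamp"] for r in recs)
        out.append(merged)
    return out
```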
**Highlights**:

- Handles large volumes (hundreds of thousands of records)
- Live progress display
- Automatically filters anomalous data (voice_ids without exactly 2 records)

---

### Other Scripts

- `export_user_id_data.py`: export user-ID data
- `batch_add_shengtong_result.py`: batch-add Shengtong evaluation results
- `shengtong_eval.py`: Shengtong evaluation
- `calc_score_diff_stats.py`: score-difference statistics
- `export_unit_summary.py`: export unit summary statistics

## Environment Setup

Create a `.env` file with the following settings:

```
ES_HOST=xxx
ES_PORT=9200
ES_SCHEME=https
ES_USER=elastic
ES_PASSWORD=xxx
```

## Recent Updates

- 2026-01-27: added export_realtime_asr.py for exporting streaming speech ASR data
business_knowledge/git_scripts/batch_add_shengtong_result.py · new file · 853 lines · @@ -0,0 +1,853 @@
"""
|
||||
声通语音评测批量处理工具
|
||||
|
||||
功能说明:
|
||||
- 读取 Excel 文件,其中包含音频链接(userAudio 字段)和参考文本(refText 字段)
|
||||
- 调用声通 API 对音频进行评测,获取总分、明细和recordId
|
||||
- 在原 Excel 中添加"测试总分"、"测试明细"和"测试recordId"三个字段
|
||||
- 输出文件命名为: {原文件名}_add_shengtong_result.xlsx
|
||||
- 支持串行和并发两种处理模式
|
||||
|
||||
环境变量配置:
|
||||
- ST_APP_KEY: 声通应用 Key
|
||||
- ST_SECRET_KEY: 声通 Secret Key
|
||||
|
||||
声通API文档: http://api.stkouyu.com
|
||||
"""
|
||||
|
||||
import pandas as pd
|
||||
import os
|
||||
import requests
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
import json
|
||||
import time
|
||||
import hashlib
|
||||
import uuid
|
||||
from concurrent.futures import ThreadPoolExecutor, as_completed
|
||||
import threading
|
||||
from queue import Queue
|
||||
import logging
|
||||
|
||||
# 配置日志
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(asctime)s - %(levelname)s - %(message)s',
|
||||
handlers=[
|
||||
logging.FileHandler('shengtong_batch_processing.log'),
|
||||
logging.StreamHandler()
|
||||
]
|
||||
)
|
||||
|
||||
# 从 .env 文件加载环境变量
|
||||
from dotenv import load_dotenv
|
||||
load_dotenv()
|
||||
|
||||
# ==================== 全局配置 ====================
|
||||
# DEBUG 模式开关(控制详细日志输出)
|
||||
DEBUG_MODE = False
|
||||
|
||||
|
||||
def debug_print(message):
|
||||
"""
|
||||
DEBUG 信息输出函数
|
||||
|
||||
Args:
|
||||
message (str): 要输出的调试信息
|
||||
"""
|
||||
if DEBUG_MODE:
|
||||
print(f"[DEBUG] {message}")
|
||||
|
||||
|
||||
# ==================== Shengtong API ====================

class ShengtongEvaluator:
    """Wrapper for the Shengtong spoken-language evaluation API."""

    def __init__(self):
        """Read the API configuration from environment variables."""
        self.app_key = os.environ.get('ST_APP_KEY', '')
        self.secret_key = os.environ.get('ST_SECRET_KEY', '')
        self.api_url = "http://api.stkouyu.com:8080/sent.eval"

        # Make sure the environment variables are configured
        if not all([self.app_key, self.secret_key]):
            raise ValueError(
                "Please configure the Shengtong API environment variables: ST_APP_KEY, ST_SECRET_KEY"
            )

    def _generate_signature(self, data: str) -> str:
        """Generate a SHA1 signature."""
        return hashlib.sha1(data.encode('utf-8')).hexdigest()

    def _build_request_params(self, ref_text: str, audio_ext: str) -> dict:
        """Build the request parameters."""
        timestamp = str(int(time.time()))
        user_id = str(uuid.uuid4())

        # Generate the signatures
        connect_data = self.app_key + timestamp + self.secret_key
        start_data = self.app_key + timestamp + user_id + self.secret_key
        connect_sig = self._generate_signature(connect_data)
        start_sig = self._generate_signature(start_data)

        # Build the request parameters
        params = {
            "connect": {
                "cmd": "connect",
                "param": {
                    "sdk": {
                        "version": 16777472,
                        "source": 9,
                        "protocol": 2
                    },
                    "app": {
                        "applicationId": self.app_key,
                        "sig": connect_sig,
                        "timestamp": timestamp
                    }
                }
            },
            "start": {
                "cmd": "start",
                "param": {
                    "app": {
                        "applicationId": self.app_key,
                        "sig": start_sig,
                        "timestamp": timestamp,
                        "userId": user_id
                    },
                    "audio": {
                        "audioType": audio_ext,
                        "channel": 1,
                        "sampleBytes": 2,
                        "sampleRate": 16000
                    },
                    "request": {
                        "coreType": "sent.eval",
                        "refText": ref_text,
                        "tokenId": "makee",
                    }
                }
            }
        }

        return params

    def evaluate(self, audio_file_path: str, ref_text: str) -> dict:
        """
        Run a spoken-language evaluation through the Shengtong API.

        Args:
            audio_file_path (str): path to the audio file
            ref_text (str): reference text

        Returns:
            dict: the evaluation result
        """
        debug_print(f"Evaluating audio file: {audio_file_path}")
        debug_print(f"Reference text: {ref_text}")

        # Make sure the audio file exists
        if not os.path.exists(audio_file_path):
            error_msg = f"Audio file does not exist: {audio_file_path}"
            logging.error(error_msg)
            return {"error": error_msg}

        # Get the audio file extension
        audio_ext = os.path.splitext(audio_file_path)[1][1:]  # strip the dot
        if not audio_ext:
            audio_ext = "wav"  # default to wav

        # Build the request parameters
        params = self._build_request_params(ref_text, audio_ext)

        # Read the audio file
        try:
            with open(audio_file_path, 'rb') as f:
                audio_data = f.read()

            # Build the multipart/form-data request
            files = {
                'text': (None, json.dumps(params)),
                'audio': (f"{int(time.time() * 1000000)}.{audio_ext}", audio_data)
            }

            headers = {
                'Request-Index': '0'
            }

            debug_print("Sending request to the Shengtong API...")
            response = requests.post(
                self.api_url,
                files=files,
                headers=headers,
                timeout=30
            )

            if response.status_code == 200:
                result = response.json()
                debug_print("Shengtong API call succeeded")
                return result
            else:
                error_msg = f"Request failed with status code: {response.status_code}"
                logging.error(f"{error_msg}, response: {response.text}")
                return {
                    "error": error_msg,
                    "response": response.text
                }

        except requests.exceptions.RequestException as e:
            error_msg = f"Request exception: {str(e)}"
            logging.error(error_msg)
            return {"error": error_msg}
        except Exception as e:
            error_msg = f"Evaluation error: {str(e)}"
            logging.error(error_msg)
            return {"error": error_msg}


def evaluate_audio_file(audio_file_path, text="nice to meet you."):
    """
    Simplified audio-evaluation helper.

    Args:
        audio_file_path (str): path to the audio file
        text (str): text to evaluate against

    Returns:
        dict: evaluation result JSON
    """
    api = ShengtongEvaluator()
    return api.evaluate(audio_file_path, text)
# ==================== Batch processing ====================

def download_audio_file(audio_url, temp_dir, max_retries=3, timeout=30):
    """
    Download an audio file into a temporary directory (hardened version).

    Args:
        audio_url (str): audio file URL
        temp_dir (str): temporary directory path
        max_retries (int): maximum number of retries
        timeout (int): request timeout in seconds

    Returns:
        str: path of the downloaded file, or None on failure
    """
    if not audio_url or pd.isna(audio_url):
        logging.warning("Audio URL is empty or invalid")
        return None

    # Derive a file name from the URL
    try:
        file_name = os.path.basename(audio_url.split('?')[0])  # strip URL parameters
        if not file_name or '.' not in file_name:
            file_name = f"audio_{hash(audio_url) % 100000}.wav"  # fall back to a generated name

        file_path = os.path.join(temp_dir, file_name)

        # Retry loop
        for attempt in range(max_retries):
            try:
                logging.info(f"Downloading audio file (attempt {attempt + 1}/{max_retries}): {audio_url}")

                # Browser-like request headers
                headers = {
                    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
                }

                response = requests.get(audio_url, timeout=timeout, headers=headers, stream=True)
                response.raise_for_status()

                # Check the content type
                content_type = response.headers.get('content-type', '')
                if not any(audio_type in content_type.lower() for audio_type in ['audio', 'wav', 'mp3', 'ogg', 'flac']):
                    logging.warning(f"Possibly not an audio file, Content-Type: {content_type}")

                # Write the file
                with open(file_path, 'wb') as f:
                    for chunk in response.iter_content(chunk_size=8192):
                        if chunk:
                            f.write(chunk)

                # Verify the file size
                file_size = os.path.getsize(file_path)
                if file_size == 0:
                    raise ValueError("Downloaded file is empty")

                logging.info(f"Audio file downloaded: {file_path} (size: {file_size} bytes)")
                return file_path

            except requests.exceptions.Timeout:
                logging.warning(f"Download timed out (attempt {attempt + 1}/{max_retries}): {audio_url}")
                if attempt < max_retries - 1:
                    time.sleep(2 ** attempt)  # exponential backoff
                    continue
            except requests.exceptions.RequestException as e:
                logging.warning(f"Download request failed (attempt {attempt + 1}/{max_retries}): {str(e)}")
                if attempt < max_retries - 1:
                    time.sleep(2 ** attempt)
                    continue
            except Exception as e:
                logging.error(f"Unexpected download error (attempt {attempt + 1}/{max_retries}): {str(e)}")
                if attempt < max_retries - 1:
                    time.sleep(2 ** attempt)
                    continue

        logging.error(f"Audio download failed after the maximum number of retries: {audio_url}")
        return None

    except Exception as e:
        logging.error(f"Exception while downloading audio file: {str(e)}")
        return None


def format_shengtong_details(shengtong_result):
    """
    Format a Shengtong result into a per-word detail string.

    Args:
        shengtong_result (dict): result returned by the Shengtong API

    Returns:
        str: formatted detail string
    """
    if not shengtong_result or 'error' in shengtong_result:
        return ""

    try:
        # Get the words array from the result field
        result = shengtong_result.get('result', {})
        words = result.get('words', [])

        if not words:
            return ""

        details = []
        for word in words:
            # Word text and score
            word_text = word.get('word', '')
            scores = word.get('scores', {})
            overall_score = scores.get('overall', 0)

            # Format as "word score"
            details.append(f"{word_text} {int(overall_score)}")

        return "\n".join(details)

    except Exception as e:
        logging.error(f"Failed to format Shengtong details: {str(e)}")
        return ""


def get_shengtong_total_score(shengtong_result):
    """
    Get the overall Shengtong score.

    Args:
        shengtong_result (dict): result returned by the Shengtong API

    Returns:
        int: overall score, or 0 on failure
    """
    if not shengtong_result or 'error' in shengtong_result:
        return 0

    try:
        result = shengtong_result.get('result', {})
        overall_score = result.get('overall', 0)
        return int(overall_score)
    except Exception as e:
        logging.error(f"Failed to get the Shengtong total score: {str(e)}")
        return 0


def get_shengtong_record_id(shengtong_result):
    """
    Get the Shengtong recordId.

    Args:
        shengtong_result (dict): result returned by the Shengtong API

    Returns:
        str: recordId, or an empty string on failure
    """
    if not shengtong_result or 'error' in shengtong_result:
        return ""

    try:
        record_id = shengtong_result.get('recordId', '')
        return str(record_id) if record_id else ""
    except Exception as e:
        logging.error(f"Failed to get the Shengtong recordId: {str(e)}")
        return ""
def process_single_row(row_data, temp_dir, results_dict, lock, rate_limiter=None):
    """
    Process one row (concurrent version, with extra error handling and timing).

    Args:
        row_data (tuple): (index, row)
        temp_dir (str): temporary directory path
        results_dict (dict): shared results dictionary
        lock (threading.Lock): thread lock
        rate_limiter (Queue): rate limiter

    Returns:
        None
    """
    index, row = row_data
    start_time = time.time()
    timing_info = {}

    try:
        # 1. Rate-limit wait time
        rate_limit_start = time.time()
        if rate_limiter:
            rate_limiter.get()  # acquire a token
        timing_info['rate_limit_wait'] = time.time() - rate_limit_start

        logging.info(f"Processing row {index + 1}")

        # 2. Preprocessing time
        preprocess_start = time.time()
        ref_text = str(row['refText']) if pd.notna(row['refText']) else ""
        audio_url = str(row['userAudio']) if pd.notna(row['userAudio']) else ""

        # Validate the input
        if not ref_text:
            raise ValueError("refText is empty or invalid")

        if not audio_url:
            raise ValueError("userAudio is empty or invalid")
        timing_info['preprocess'] = time.time() - preprocess_start

        # 3. Audio download time
        download_start = time.time()
        audio_file_path = download_audio_file(audio_url, temp_dir)
        timing_info['audio_download'] = time.time() - download_start

        if not audio_file_path:
            raise ValueError("Audio file download failed")

        try:
            # 4. Shengtong API call time
            api_start = time.time()
            logging.info(f"Calling the Shengtong API: {ref_text}")
            shengtong_result = evaluate_audio_file(audio_file_path, ref_text)
            timing_info['api_call'] = time.time() - api_start

            if not shengtong_result:
                raise ValueError("Shengtong API returned an empty result")

            # 5. Result-processing time
            result_process_start = time.time()
            shengtong_details = format_shengtong_details(shengtong_result)
            shengtong_total_score = get_shengtong_total_score(shengtong_result)
            shengtong_record_id = get_shengtong_record_id(shengtong_result)
            timing_info['result_process'] = time.time() - result_process_start

            # 6. Data-update time
            update_start = time.time()
            with lock:
                results_dict[index] = {
                    '测试总分': shengtong_total_score,
                    '测试明细': shengtong_details,
                    '测试recordId': shengtong_record_id
                }
            timing_info['data_update'] = time.time() - update_start

            # Total elapsed time
            total_time = time.time() - start_time
            timing_info['total'] = total_time

            # Detailed timing log
            logging.info(f"Row {index + 1} processed - score: {shengtong_total_score} | "
                         f"total: {total_time:.2f}s | "
                         f"rate wait: {timing_info['rate_limit_wait']:.2f}s | "
                         f"preprocess: {timing_info['preprocess']:.3f}s | "
                         f"download: {timing_info['audio_download']:.2f}s | "
                         f"API call: {timing_info['api_call']:.2f}s | "
                         f"result processing: {timing_info['result_process']:.3f}s | "
                         f"data update: {timing_info['data_update']:.3f}s")

        except Exception as api_error:
            total_time = time.time() - start_time
            logging.error(f"Row {index + 1} Shengtong API call failed: {str(api_error)} | "
                          f"total: {total_time:.2f}s | "
                          f"download: {timing_info.get('audio_download', 0):.2f}s | "
                          f"API call: {timing_info.get('api_call', 0):.2f}s")
            with lock:
                results_dict[index] = {
                    '测试总分': 0,
                    '测试明细': "",
                    '测试recordId': "",
                    'error': f'API call failed: {str(api_error)}'
                }

        finally:
            # 7. Cleanup time
            cleanup_start = time.time()
            try:
                if audio_file_path and os.path.exists(audio_file_path):
                    os.remove(audio_file_path)
                    logging.debug(f"Removed temporary file: {audio_file_path}")
            except Exception as cleanup_error:
                logging.warning(f"Failed to clean up temporary file: {str(cleanup_error)}")
            timing_info['cleanup'] = time.time() - cleanup_start

            # Return the rate-limit token
            if rate_limiter:
                try:
                    rate_limiter.put(None, timeout=1)  # return the token
                except Exception:
                    pass  # the queue may be full; ignore

    except Exception as e:
        total_time = time.time() - start_time
        logging.error(f"Row {index + 1} processing failed: {str(e)} | total: {total_time:.2f}s")
        with lock:
            results_dict[index] = {
                '测试总分': 0,
                '测试明细': "",
                '测试recordId': "",
                'error': f'Processing error: {str(e)}'
            }

        # Return the rate-limit token
        if rate_limiter:
            try:
                rate_limiter.put(None, timeout=1)
            except Exception:
                pass
def process_excel_with_shengtong_concurrent(input_file_path, output_dir="output/audio", max_workers=3, rate_limit_per_second=3):
    """
    Process an Excel file and add Shengtong evaluation results (concurrent version, with extra controls).

    Args:
        input_file_path (str): input Excel file path
        output_dir (str): output directory, defaults to output/audio
        max_workers (int): maximum number of worker threads, default 3
        rate_limit_per_second (int): maximum requests per second, default 3

    Returns:
        bool: whether processing succeeded
    """
    start_time = time.time()

    try:
        # Read the Excel file
        logging.info(f"Reading Excel file: {input_file_path}")
        df = pd.read_excel(input_file_path)

        # Check that the required columns exist
        required_columns = ['refText', 'userAudio']
        missing_columns = [col for col in required_columns if col not in df.columns]
        if missing_columns:
            logging.error(f"Excel file is missing required columns: {missing_columns}")
            return False

        # Validate the data
        total_rows = len(df)
        valid_rows = 0
        for index, row in df.iterrows():
            if pd.notna(row.get('refText')) and pd.notna(row.get('userAudio')):
                valid_rows += 1

        logging.info(f"Total rows: {total_rows}, valid rows: {valid_rows}")

        if valid_rows == 0:
            logging.warning("No valid data rows found")
            return False

        # Add the new columns
        df['测试总分'] = 0
        df['测试明细'] = ""
        df['测试recordId'] = ""

        # Create the rate limiter
        effective_rate_limit = max(rate_limit_per_second, max_workers)
        rate_limiter = Queue(maxsize=effective_rate_limit * 2)

        # Pre-fill the tokens
        for _ in range(effective_rate_limit):
            rate_limiter.put(None)

        # Start the token-refill thread
        def rate_limiter_refill():
            interval = 1.0 / effective_rate_limit
            while True:
                time.sleep(interval)
                try:
                    rate_limiter.put(None, block=False)
                except Exception:
                    pass

        rate_thread = threading.Thread(target=rate_limiter_refill, daemon=True)
        rate_thread.start()

        logging.info(f"Rate limit: {effective_rate_limit} req/s (requested: {rate_limit_per_second}, queue size: {effective_rate_limit * 2})")

        # Create a temporary directory for downloaded audio files
        with tempfile.TemporaryDirectory() as temp_dir:
            logging.info(f"Created temporary directory: {temp_dir}")
            logging.info(f"Starting concurrent processing, max workers: {max_workers}, effective rate limit: {effective_rate_limit} req/s")

            # Prepare the data
            row_data_list = [(index, row) for index, row in df.iterrows()]

            # Shared results dictionary and thread lock
            results_dict = {}
            lock = threading.Lock()

            # Process concurrently with a thread pool
            with ThreadPoolExecutor(max_workers=max_workers) as executor:
                # Submit all tasks
                future_to_index = {
                    executor.submit(process_single_row, row_data, temp_dir, results_dict, lock, rate_limiter): row_data[0]
                    for row_data in row_data_list
                }

                # Wait for completion and report progress
                completed_count = 0
                success_count = 0
                error_count = 0

                for future in as_completed(future_to_index):
                    completed_count += 1
                    index = future_to_index[future]

                    try:
                        future.result()  # re-raises any exception from the task

                        # Check the outcome
                        with lock:
                            result = results_dict.get(index, {})
                            if result.get('error') is None:
                                success_count += 1
                            else:
                                error_count += 1

                        # Report progress
                        if completed_count % 10 == 0 or completed_count == total_rows:
                            elapsed_time = time.time() - start_time
                            avg_time_per_item = elapsed_time / completed_count
                            remaining_time = avg_time_per_item * (total_rows - completed_count)

                            logging.info(f"Progress: {completed_count}/{total_rows} ({completed_count/total_rows*100:.1f}%) "
                                         f"succeeded: {success_count}, failed: {error_count}, "
                                         f"est. remaining: {remaining_time:.1f}s")

                    except Exception as e:
                        error_count += 1
                        logging.error(f"Task {index + 1} raised an exception: {str(e)}")
                        with lock:
                            if index not in results_dict:
                                results_dict[index] = {
'测试总分': 0,
|
||||
'测试明细': "",
|
||||
'测试recordId': "",
|
||||
'error': f'任务执行异常: {str(e)}'
|
||||
}
|
||||
|
||||
# 将结果更新到DataFrame
|
||||
logging.info("正在更新结果到DataFrame...")
|
||||
for index in results_dict:
|
||||
result = results_dict[index]
|
||||
df.at[index, '测试总分'] = result.get('测试总分', 0)
|
||||
df.at[index, '测试明细'] = result.get('测试明细', "")
|
||||
df.at[index, '测试recordId'] = result.get('测试recordId', "")
|
||||
|
||||
# 如果有错误,可以选择记录到备注列(如果存在)
|
||||
if result.get('error') and '备注' in df.columns:
|
||||
existing_note = str(df.at[index, '备注']) if pd.notna(df.at[index, '备注']) else ""
|
||||
error_note = f"声通API错误: {result['error']}"
|
||||
df.at[index, '备注'] = f"{existing_note}\n{error_note}".strip()
|
||||
|
||||
# 创建输出目录
|
||||
output_path = Path(output_dir)
|
||||
output_path.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# 生成输出文件路径
|
||||
input_path = Path(input_file_path)
|
||||
output_file_path = output_path / f"{input_path.stem}_add_shengtong_result.xlsx"
|
||||
|
||||
# 保存结果
|
||||
logging.info(f"正在保存结果到: {output_file_path}")
|
||||
df.to_excel(output_file_path, index=False)
|
||||
|
||||
# 计算总耗时
|
||||
total_time = time.time() - start_time
|
||||
|
||||
# 统计处理结果
|
||||
final_success_count = sum(1 for result in results_dict.values() if result.get('error') is None)
|
||||
final_error_count = len(results_dict) - final_success_count
|
||||
|
||||
logging.info("=" * 50)
|
||||
logging.info("并发处理完成!")
|
||||
logging.info(f"处理统计: 成功 {final_success_count} 条,失败 {final_error_count} 条,总计 {len(results_dict)} 条")
|
||||
logging.info(f"总耗时: {total_time:.2f} 秒")
|
||||
logging.info(f"平均处理时间: {total_time/len(results_dict):.2f} 秒/条")
|
||||
logging.info(f"输出文件: {output_file_path}")
|
||||
logging.info("=" * 50)
|
||||
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"处理Excel文件时出错: {str(e)}")
|
||||
return False
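The Queue-based throttle used above can be sketched in isolation: a daemon thread refills one token per interval and workers block on `get()` before each request. The names below are illustrative, not part of the script.

```python
import threading
import time
from queue import Queue, Full


def make_rate_limiter(rate_per_second, burst=None):
    """Return a Queue whose tokens are replenished `rate_per_second` times a second."""
    burst = burst or rate_per_second
    bucket = Queue(maxsize=burst)
    for _ in range(burst):  # pre-fill so the first burst is not throttled
        bucket.put(None)

    def refill():
        while True:
            time.sleep(1.0 / rate_per_second)
            try:
                bucket.put(None, block=False)  # drop the token if the bucket is full
            except Full:
                pass

    threading.Thread(target=refill, daemon=True).start()
    return bucket


# Each worker takes a token before issuing a request:
limiter = make_rate_limiter(rate_per_second=5)
t0 = time.time()
for _ in range(5):
    limiter.get()  # blocks until a token is available
elapsed = time.time() - t0
# The pre-filled burst lets these 5 acquisitions pass almost immediately.
```

Capping the queue at twice the rate (as the function above does) bounds how large a burst can accumulate while workers are idle.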


def process_excel_with_shengtong(input_file_path, output_dir="output/audio"):
    """
    Process an Excel file and append Shengtong evaluation results (serial version).

    Args:
        input_file_path (str): Path to the input Excel file.
        output_dir (str): Output directory, defaults to "output/audio".

    Returns:
        bool: True if processing succeeded, False otherwise.
    """
    try:
        # Read the Excel file
        print(f"Reading Excel file: {input_file_path}")
        df = pd.read_excel(input_file_path)

        # Check that the required columns exist
        required_columns = ['refText', 'userAudio']
        missing_columns = [col for col in required_columns if col not in df.columns]
        if missing_columns:
            print(f"Error: Excel file is missing required columns: {missing_columns}")
            return False

        # Add result columns
        df['测试总分'] = 0
        df['测试明细'] = ""
        df['测试recordId'] = ""

        # Create a temporary directory for downloaded audio files
        with tempfile.TemporaryDirectory() as temp_dir:
            print(f"Created temporary directory: {temp_dir}")

            # Process each row
            total_rows = len(df)
            for index, row in df.iterrows():
                print(f"\nProgress: {index + 1}/{total_rows}")

                ref_text = str(row['refText']) if pd.notna(row['refText']) else ""
                audio_url = str(row['userAudio']) if pd.notna(row['userAudio']) else ""

                if not ref_text or not audio_url:
                    print(f"Row {index + 1} is incomplete, skipping")
                    continue

                print(f"Reference text: {ref_text}")
                print(f"Audio URL: {audio_url}")

                # Download the audio file
                audio_file_path = download_audio_file(audio_url, temp_dir)
                if not audio_file_path:
                    print(f"Row {index + 1} audio download failed, skipping")
                    continue

                # Call the Shengtong API for evaluation
                print("Calling the Shengtong API for evaluation...")
                try:
                    shengtong_result = evaluate_audio_file(audio_file_path, ref_text)
                    print(f"Shengtong API response: {json.dumps(shengtong_result, indent=2, ensure_ascii=False)}")

                    # Extract the total score, details, and recordId
                    total_score = get_shengtong_total_score(shengtong_result)
                    details = format_shengtong_details(shengtong_result)
                    record_id = get_shengtong_record_id(shengtong_result)

                    # Update the DataFrame
                    df.at[index, '测试总分'] = total_score
                    df.at[index, '测试明细'] = details
                    df.at[index, '测试recordId'] = record_id

                    print(f"测试总分: {total_score}")
                    print(f"测试明细: {details}")
                    print(f"测试recordId: {record_id}")

                except Exception as e:
                    print(f"Row {index + 1} Shengtong API call failed: {str(e)}")
                    continue

                # Remove the temporary audio file
                try:
                    os.remove(audio_file_path)
                except OSError:
                    pass

                # Small delay to avoid hammering the API
                time.sleep(1)

            # Create the output directory
            output_path = Path(output_dir)
            output_path.mkdir(parents=True, exist_ok=True)

            # Build the output file path
            input_path = Path(input_file_path)
            output_file_path = output_path / f"{input_path.stem}_add_shengtong_result.xlsx"

            # Save the result
            print(f"\nSaving results to: {output_file_path}")
            df.to_excel(output_file_path, index=False)
            print("Done!")

            return True

    except Exception as e:
        print(f"Error while processing Excel file: {str(e)}")
        return False


if __name__ == "__main__":
    # ==================== Configuration ====================
    input_file = "人工筛选测试集v2_denoise.xlsx"
    output_directory = "output/audio"  # output directory, adjust as needed
    use_concurrent = True  # True: concurrent version, False: serial version

    # DEBUG switch (True: verbose debug output, False: key messages only)
    enable_debug = False  # set to True to see detailed DEBUG logs

    # Set the global DEBUG_MODE
    globals()['DEBUG_MODE'] = enable_debug

    # Check environment variables
    required_env_vars = ['ST_APP_KEY', 'ST_SECRET_KEY']
    missing_vars = [var for var in required_env_vars if not os.environ.get(var)]

    if missing_vars:
        print(f"Error: missing required environment variables: {missing_vars}")
        print("Please configure them in .env or the system environment:")
        print("  ST_APP_KEY=your app key")
        print("  ST_SECRET_KEY=your secret key")
    elif not os.path.exists(input_file):
        print(f"File not found: {input_file}")
        print("Make sure the Excel file exists and contains 'refText' and 'userAudio' columns")
    else:
        if use_concurrent:
            print("Using the concurrent version (3 workers, 3 req/s)...")
            success = process_excel_with_shengtong_concurrent(
                input_file,
                output_dir=output_directory,
                max_workers=3,
                rate_limit_per_second=3
            )
        else:
            print("Using the serial version...")
            success = process_excel_with_shengtong(input_file, output_dir=output_directory)

        if success:
            print("Processing succeeded!")
        else:
            print("Processing failed!")
1090	business_knowledge/git_scripts/batch_add_xunfei_result.py	Normal file
File diff suppressed because it is too large

492	business_knowledge/git_scripts/export_component_record.py	Normal file
@@ -0,0 +1,492 @@
"""
Interactive component data export

Requirement 20251123:
---------
Filter data in the PostgreSQL database.
Database configuration is read from .env:
PG_DB_HOST = xxx
PG_DB_PORT = xxx
PG_DB_USER = xxx
PG_DB_PASSWORD = xxx
PG_DB_DATABASE = xxx

Read the following tables:
user_component_play_record_0 ~ user_component_play_record_7

Support an input time range.
Start time and end time are configured in the format: "20250110"

The time field in the tables is updated_at; sample format: "2025-11-05 19:35:46.698246+08:00"

Within that time range, filter the records and export them to an Excel file:

c_type and c_id are both non-null

Output the following fields:
user_id,
session_id,
c_type,
c_id,
play_result,
user_behavior_info,
updated_at

Write a simple, clear data-export script; all input parameters are defined and
modified directly at the top of the script. Do not change the requirement
description at the top of the file; append the code directly after it.
-------

Requirement 2:
Read the Excel file output above and aggregate statistics per component.

Statistics:
Only count records where c_type and c_id are non-null.

Use the concatenation of c_type + c_id as the aggregation dimension and compute:
总数量: total count
Perfect数量: count of records with play_result == "Perfect"
Good数量: count of records with play_result == "Good"
Pass数量: count of records with play_result == "Pass"
Oops数量: count of records with play_result == "Oops"
Failed数量: count of records with play_result == "Failed"
Perfect+Good数量: count of records with play_result == "Perfect" or play_result == "Good"
Perfect比例: Perfect数量 / 总数量
Good比例: Good数量 / 总数量
Pass比例: Pass数量 / 总数量
Oops比例: Oops数量 / 总数量
Failed比例: Failed数量 / 总数量
Perfect+Good比例: Perfect+Good数量 / 总数量

Export to Excel, named after the step-1 file with _stats.xlsx appended.

Requirement 3:
In requirement 2, append component-configuration fields joined from two other MySQL tables:
MYSQL_HOST=xxx
MYSQL_USERNAME=xxx
MYSQL_PASSWORD=xxx
MYSQL_DATABASE=xxx
MYSQL_PORT=xxx

The environment variables above are already configured in .env.

1. If c_type starts with "mid":

Read the table middle_interaction_component and add the fields:
title
component_config
组件类型

where 组件类型 maps c_type to its Chinese name ("xx互动") using:
{
    "词汇类": {
        "物品互动": "mid_vocab_item",
        "图片互动": "mid_vocab_image",
        "填词互动": "mid_vocab_fillBlank",
        "指令互动": "mid_vocab_instruction"
    },
    "句子类": {
        "对话互动": "mid_sentence_dialogue",
        "语音互动": "mid_sentence_voice",
        "材料互动": "mid_sentence_material",
        "造句互动": "mid_sentence_makeSentence"
    },
    "语法类": {
        "挖空互动": "mid_grammar_cloze",
        "组句互动": "mid_grammar_sentence"
    },
    "发音类": {
        "发音互动": "mid_pron_pron"
    }
}

2. If c_type starts with "core":
Read the table core_interaction_component and add the fields:
title
component_config
组件类型

where 组件类型 maps c_type to its Chinese name ("xx互动") using:
{
    "口语类": {
        "口语快答": "core_speaking_reply",
        "口语妙问": "core_speaking_inquiry",
        "口语探讨": "core_speaking_explore",
        "口语独白": "core_speaking_monologue"
    },
    "阅读类": {
        "合作阅读": "core_reading_order"
    },
    "听力类": {
        "合作听力": "core_listening_order"
    },
    "写作类": {
        "看图组句": "core_writing_imgMakeSentence",
        "看图撰写": "core_writing_imgWrite",
        "问题组句": "core_writing_questionMakeSentence",
        "问题撰写": "core_writing_questionWrite"
    }
}

The appended fields above are added to the table output by step 2.
"""

import os
from datetime import datetime
from dotenv import load_dotenv
import psycopg2
import pandas as pd
import pymysql

# ==================== Configuration ====================
# Time range (format: "20250110")
START_DATE = "20250915"  # start date
END_DATE = "20251122"    # end date

# Output directory
OUTPUT_DIR = "output"

# Step control
RUN_STEP1 = False  # whether to run step 1: data export
RUN_STEP2 = True   # whether to run step 2: statistics
# ==================================================

# c_type -> Chinese component-type name
C_TYPE_MAPPING = {
    # middle_interaction_component mappings
    "mid_vocab_item": "物品互动",
    "mid_vocab_image": "图片互动",
    "mid_vocab_fillBlank": "填词互动",
    "mid_vocab_instruction": "指令互动",
    "mid_sentence_dialogue": "对话互动",
    "mid_sentence_voice": "语音互动",
    "mid_sentence_material": "材料互动",
    "mid_sentence_makeSentence": "造句互动",
    "mid_grammar_cloze": "挖空互动",
    "mid_grammar_sentence": "组句互动",
    "mid_pron_pron": "发音互动",

    # core_interaction_component mappings
    "core_speaking_reply": "口语快答",
    "core_speaking_inquiry": "口语妙问",
    "core_speaking_explore": "口语探讨",
    "core_speaking_monologue": "口语独白",
    "core_reading_order": "合作阅读",
    "core_listening_order": "合作听力",
    "core_writing_imgMakeSentence": "看图组句",
    "core_writing_imgWrite": "看图撰写",
    "core_writing_questionMakeSentence": "问题组句",
    "core_writing_questionWrite": "问题撰写",
}


def step1_export_data():
    """Step 1: export data from the database."""
    print("=" * 60)
    print("Step 1: data export")
    print("=" * 60)

    # Load environment variables
    load_dotenv()

    # Database configuration
    db_config = {
        'host': os.getenv('PG_DB_HOST'),
        'port': os.getenv('PG_DB_PORT'),
        'user': os.getenv('PG_DB_USER'),
        'password': os.getenv('PG_DB_PASSWORD'),
        'database': os.getenv('PG_DB_DATABASE')
    }

    # Convert the date strings into datetime bounds
    start_datetime = datetime.strptime(START_DATE, "%Y%m%d").strftime("%Y-%m-%d 00:00:00")
    end_datetime = datetime.strptime(END_DATE, "%Y%m%d").strftime("%Y-%m-%d 23:59:59")

    print(f"Time range: {start_datetime} ~ {end_datetime}")

    # Connect to the database
    conn = psycopg2.connect(**db_config)

    # Collect data from every shard
    all_data = []

    # Iterate over the 8 sharded tables
    for i in range(8):
        table_name = f"user_component_play_record_{i}"
        print(f"Reading table: {table_name}")

        # SQL query
        query = f"""
            SELECT
                user_id,
                session_id,
                c_type,
                c_id,
                play_result,
                user_behavior_info,
                updated_at
            FROM {table_name}
            WHERE updated_at >= %s
              AND updated_at <= %s
              AND c_type IS NOT NULL
              AND c_id IS NOT NULL
        """

        # Execute the query
        df = pd.read_sql_query(query, conn, params=(start_datetime, end_datetime))
        all_data.append(df)
        print(f"  - fetched {len(df)} records")

    # Close the database connection
    conn.close()

    # Merge all shards
    result_df = pd.concat(all_data, ignore_index=True)
    print(f"\nFetched {len(result_df)} records in total")

    # Strip timezone info from updated_at (Excel does not support tz-aware datetimes)
    if 'updated_at' in result_df.columns and not result_df.empty:
        result_df['updated_at'] = result_df['updated_at'].dt.tz_localize(None)

    # Make sure the output directory exists
    os.makedirs(OUTPUT_DIR, exist_ok=True)

    # Build the output filename
    output_filename = f"component_record_{START_DATE}_{END_DATE}.xlsx"
    output_path = os.path.join(OUTPUT_DIR, output_filename)

    # Export to Excel
    result_df.to_excel(output_path, index=False, engine='openpyxl')
    print(f"Data exported to: {output_path}")
    print()

    return output_path
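The tz-stripping step matters because openpyxl rejects timezone-aware datetimes; `dt.tz_localize(None)` keeps the local wall-clock value while dropping the offset. The conversion in isolation (assuming pandas, with made-up sample timestamps):

```python
import pandas as pd

df = pd.DataFrame({
    "updated_at": pd.to_datetime(["2025-11-05 19:35:46+08:00",
                                  "2025-11-06 08:00:00+08:00"])
})
assert df["updated_at"].dt.tz is not None  # tz-aware: Excel export would fail

# Drop the timezone but keep the local wall-clock value, as step1_export_data does
df["updated_at"] = df["updated_at"].dt.tz_localize(None)
naive = df["updated_at"].dt.tz is None
```

Note the alternative, `dt.tz_convert("UTC").dt.tz_localize(None)`, would instead shift the values to UTC before dropping the offset; the script's choice preserves the +08:00 local times as displayed.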


def get_component_info_from_mysql(stats_df):
    """Fetch component configuration info from MySQL."""
    # Load environment variables
    load_dotenv()

    # MySQL configuration
    mysql_config = {
        'host': os.getenv('MYSQL_HOST'),
        'user': os.getenv('MYSQL_USERNAME'),
        'password': os.getenv('MYSQL_PASSWORD'),
        'database': os.getenv('MYSQL_DATABASE'),
        'port': int(os.getenv('MYSQL_PORT', 3306)),
        'charset': 'utf8mb4'
    }

    print("Connecting to the MySQL database...")
    conn = pymysql.connect(**mysql_config)

    try:
        # Split components into mid and core types
        mid_records = stats_df[stats_df['c_type'].str.startswith('mid', na=False)][['c_type', 'c_id']]
        core_records = stats_df[stats_df['c_type'].str.startswith('core', na=False)][['c_type', 'c_id']]

        # Component info keyed by "c_type-c_id"
        component_info = {}

        # Query the middle_interaction_component table
        if not mid_records.empty:
            print(f"Querying middle_interaction_component for {len(mid_records)} components...")

            # Unique (c_type, c_id) pairs
            mid_unique = mid_records.drop_duplicates()

            for _, row in mid_unique.iterrows():
                c_type = row['c_type']
                c_id = row['c_id']

                query = """
                    SELECT title, component_config
                    FROM middle_interaction_component
                    WHERE c_type = %s AND c_id = %s
                """
                result = pd.read_sql_query(query, conn, params=(c_type, c_id))

                if not result.empty:
                    key = f"{c_type}-{c_id}"
                    component_info[key] = {
                        'title': result['title'].iloc[0],
                        'component_config': result['component_config'].iloc[0]
                    }

            print(f"  - found info for {len([k for k in component_info.keys() if k.startswith('mid')])} components")

        # Query the core_interaction_component table
        if not core_records.empty:
            print(f"Querying core_interaction_component for {len(core_records)} components...")

            # Unique (c_type, c_id) pairs
            core_unique = core_records.drop_duplicates()

            for _, row in core_unique.iterrows():
                c_type = row['c_type']
                c_id = row['c_id']

                query = """
                    SELECT title, component_config
                    FROM core_interaction_component
                    WHERE c_type = %s AND c_id = %s
                """
                result = pd.read_sql_query(query, conn, params=(c_type, c_id))

                if not result.empty:
                    key = f"{c_type}-{c_id}"
                    component_info[key] = {
                        'title': result['title'].iloc[0],
                        'component_config': result['component_config'].iloc[0]
                    }

            print(f"  - found info for {len([k for k in component_info.keys() if k.startswith('core')])} components")

    finally:
        conn.close()

    return component_info


def step2_statistics(input_file):
    """Step 2: compute statistics."""
    print("=" * 60)
    print("Step 2: statistics")
    print("=" * 60)

    # Read the Excel file exported in step 1; read c_id as string to keep leading zeros
    print(f"Reading file: {input_file}")
    df = pd.read_excel(input_file, engine='openpyxl', dtype={'c_id': str})
    print(f"Read {len(df)} records")

    # Keep records where c_type and c_id are non-null
    df_filtered = df[(df['c_type'].notna()) & (df['c_id'].notna())].copy()
    print(f"{len(df_filtered)} valid records after filtering")

    # Force c_type and c_id to strings (preserving leading zeros in c_id)
    df_filtered['c_type'] = df_filtered['c_type'].astype(str)
    df_filtered['c_id'] = df_filtered['c_id'].astype(str)

    # Build the component id (c_type-c_id)
    df_filtered['component_id'] = df_filtered['c_type'] + '-' + df_filtered['c_id']

    # Aggregate per component id
    stats_list = []

    for component_id, group in df_filtered.groupby('component_id'):
        # Original c_type and c_id
        c_type = group['c_type'].iloc[0]
        c_id = group['c_id'].iloc[0]

        # Total count
        total_count = len(group)

        # Counts per play_result value
        perfect_count = len(group[group['play_result'] == 'Perfect'])
        good_count = len(group[group['play_result'] == 'Good'])
        pass_count = len(group[group['play_result'] == 'Pass'])
        oops_count = len(group[group['play_result'] == 'Oops'])
        failed_count = len(group[group['play_result'] == 'Failed'])
        perfect_good_count = len(group[group['play_result'].isin(['Perfect', 'Good'])])

        # Ratios (two decimal places)
        perfect_ratio = round(perfect_count / total_count, 2) if total_count > 0 else 0
        good_ratio = round(good_count / total_count, 2) if total_count > 0 else 0
        pass_ratio = round(pass_count / total_count, 2) if total_count > 0 else 0
        oops_ratio = round(oops_count / total_count, 2) if total_count > 0 else 0
        failed_ratio = round(failed_count / total_count, 2) if total_count > 0 else 0
        perfect_good_ratio = round(perfect_good_count / total_count, 2) if total_count > 0 else 0

        stats_list.append({
            'component_id': component_id,
            'c_type': c_type,
            'c_id': c_id,
            '总数量': total_count,
            'Perfect数量': perfect_count,
            'Good数量': good_count,
            'Pass数量': pass_count,
            'Oops数量': oops_count,
            'Failed数量': failed_count,
            'Perfect+Good数量': perfect_good_count,
            'Perfect比例': perfect_ratio,
            'Good比例': good_ratio,
            'Pass比例': pass_ratio,
            'Oops比例': oops_ratio,
            'Failed比例': failed_ratio,
            'Perfect+Good比例': perfect_good_ratio
        })

    # Build the statistics DataFrame
    stats_df = pd.DataFrame(stats_list)

    print(f"Computed statistics for {len(stats_df)} distinct components")

    # Fetch component configuration from MySQL
    print("\n" + "=" * 60)
    print("Fetching component configuration from MySQL...")
    print("=" * 60)
    component_info = get_component_info_from_mysql(stats_df)

    # Add the new fields: title, component_config, 组件类型
    # Match on component_id (c_type-c_id)
    stats_df['title'] = stats_df['component_id'].apply(lambda x: component_info.get(x, {}).get('title', ''))
    stats_df['component_config'] = stats_df['component_id'].apply(lambda x: component_info.get(x, {}).get('component_config', ''))
    stats_df['组件类型'] = stats_df['c_type'].apply(lambda x: C_TYPE_MAPPING.get(x, ''))

    # Reorder columns: put the new fields right after c_type and c_id
    columns_order = [
        'component_id', 'c_type', 'c_id',
        'title', 'component_config', '组件类型',  # new fields
        '总数量',
        'Perfect数量', 'Good数量', 'Pass数量', 'Oops数量', 'Failed数量', 'Perfect+Good数量',
        'Perfect比例', 'Good比例', 'Pass比例', 'Oops比例', 'Failed比例', 'Perfect+Good比例'
    ]
    stats_df = stats_df[columns_order]

    # Build the output filename (append _stats to the input filename)
    output_filename = os.path.basename(input_file).replace('.xlsx', '_stats.xlsx')
    output_path = os.path.join(OUTPUT_DIR, output_filename)

    # Export to Excel
    stats_df.to_excel(output_path, index=False, engine='openpyxl')
    print(f"\nStatistics exported to: {output_path}")
    print()

    return output_path
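The per-component counting loop above can also be expressed as a pandas `groupby` + `apply`; a minimal sketch over toy data, with column names mirroring the script's (the data itself is made up):

```python
import pandas as pd

df = pd.DataFrame({
    "component_id": ["a-1", "a-1", "a-1", "b-2"],
    "play_result":  ["Perfect", "Good", "Oops", "Perfect"],
})


def summarize(group):
    """Compute the total, Perfect+Good count, and their ratio for one component."""
    total = len(group)
    pg = group["play_result"].isin(["Perfect", "Good"]).sum()
    return pd.Series({"总数量": total,
                      "Perfect+Good数量": pg,
                      "Perfect+Good比例": round(pg / total, 2)})


stats = df.groupby("component_id").apply(summarize).reset_index()
```

The explicit loop in the script trades this compactness for easy column-by-column control; both give one row per `component_id`.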


def main():
    export_file = None

    # Step 1: data export
    if RUN_STEP1:
        export_file = step1_export_data()

    # Step 2: statistics
    if RUN_STEP2:
        # If step 1 was skipped, the file path must be derived manually
        if export_file is None:
            export_file = os.path.join(OUTPUT_DIR, f"component_record_{START_DATE}_{END_DATE}.xlsx")
            if not os.path.exists(export_file):
                print(f"Error: file not found: {export_file}")
                print("Run step 1 first or make sure the file exists")
                return

        step2_statistics(export_file)

    print("=" * 60)
    print("Done!")
    print("=" * 60)


if __name__ == "__main__":
    main()

572	business_knowledge/git_scripts/export_lesson_review.py	Normal file
@@ -0,0 +1,572 @@
"""
** Do not change my requirement description; write the code right after it. **

Lesson review data export and analysis

-----------
Requirement 1:
Filter data in the PostgreSQL database.
Database configuration is read from .env:
PG_DB_HOST = xxx
PG_DB_PORT = xxx
PG_DB_USER = xxx
PG_DB_PASSWORD = xxx
PG_DB_DATABASE = xxx

Read the table: user_unit_review_question_result

Support an input time range.
Start time and end time are configured in the format: "20250110"

The time field in the table is updated_at; sample format: "2025-11-05 19:35:46.698246+08:00"

Within that time range, filter the data (the deleted_at field must be null).

Export the following fields:

user_id
unit_id (read each record's story_id and map it to unit_id via the mapping returned by get_id_2_unit_index)
lesson_id (read chapter_id, look up the MySQL table vala_game_chapter where id == chapter_id, and return that record's index field)
question_list
题目总数
正确数量
正确率
play_time_seconds (read play_time, convert ms to seconds, keep the integer part)
updated_at

题目总数, 正确数量, and 正确率 are all computed from question_list,
which is a list of JSON objects:
[
    {
        "question": {
            "type": "vocab_meaning_meaning",
            "id": "20-0",
            "title": "“clean” 的意思是什么?",
            "npcId": -1
        },
        "answers": [
            "2"
        ],
        "optionList": [
            {
                "option": "爬行"
            },
            {
                "option": "清晰的"
            },
            {
                "option": "清洁"
            }
        ],
        "isRight": true
    },
    ...
]

Each element is one question; "isRight": true means the user answered it correctly.

Export as an Excel file.
----
Requirement 2: use the output file of requirement 1 as the input and aggregate the data.

The aggregation dimension is each question.

For every question in question_list, use question -> id as the unique key.

For each question, compute:
总记录数量 (total records)
正确数量 (correct count)
正确率 (accuracy)

Also query a MySQL table to enrich each question with additional information:
In step 1, each question id has the format num1-num2 (question -> id).
Query the vala_kp_question table:
num1 is used to look up vala_kp_question.id; each id may contain several questions — the vala_kp_question question field is a list, and num2 is the index into that list.

Add the fields:
kp_id (vala_kp_question field)
category (vala_kp_question field)
skill (vala_kp_question field)
type (vala_kp_question field)
题目配置 (the entry of the question field at index num2)

Finally, for every question, output the fields:
出现位置 (list; join all occurrence positions as unit_id + "_" + lesson_id, e.g. "unit10-lesson1")
question_id (question -> id)
kp_id (vala_kp_question field)
category (vala_kp_question field)
skill (vala_kp_question field)
type (vala_kp_question field)
题目配置 (the entry of the question field at index num2)
总记录数量
正确数量
正确率

Export as Excel, named <step-1 file>_stat.xlsx

All configurable parameters are placed at the top of the script.
"""

import os
import pymysql
import psycopg2
from psycopg2.extras import RealDictCursor
from datetime import datetime
import pandas as pd
from dotenv import load_dotenv
import json
from collections import defaultdict

# Load environment variables
load_dotenv()

# ============ Configuration ============
START_DATE = "20250915"  # start date
END_DATE = "20251122"    # end date
OUTPUT_NAME = "lesson_review_data_{}_{}.xlsx".format(START_DATE, END_DATE)  # output filename
OUTPUT_FILENAME = os.path.join("./output", OUTPUT_NAME)
# =================================


def get_mysql_connection():
    """Get a MySQL connection."""
    db_host = os.getenv('MYSQL_HOST')
    db_user = os.getenv('MYSQL_USERNAME')
    db_password = os.getenv('MYSQL_PASSWORD')
    db_name = os.getenv('MYSQL_DATABASE')
    db_port = os.getenv('MYSQL_PORT')

    if not all([db_host, db_user, db_password, db_name]):
        raise Exception("Error: Missing MySQL configuration in .env file.")

    connection = pymysql.connect(
        host=db_host,
        user=db_user,
        password=db_password,
        database=db_name,
        port=int(db_port) if db_port else 3306,
        cursorclass=pymysql.cursors.DictCursor
    )
    return connection


def get_pgsql_connection():
    """Get a PostgreSQL connection."""
    pg_host = os.getenv('PG_DB_HOST')
    pg_port = os.getenv('PG_DB_PORT')
    pg_user = os.getenv('PG_DB_USER')
    pg_password = os.getenv('PG_DB_PASSWORD')
    pg_database = os.getenv('PG_DB_DATABASE')

    if not all([pg_host, pg_port, pg_user, pg_password, pg_database]):
        raise Exception("Error: Missing PGsql configuration in .env file.")

    connection = psycopg2.connect(
        host=pg_host,
        port=int(pg_port),
        user=pg_user,
        password=pg_password,
        database=pg_database,
        cursor_factory=RealDictCursor
    )
    return connection


def get_id_2_unit_index():
    """Get the story_id -> unit_id mapping."""
    print("Fetching the story_id -> unit_id mapping...")
    connection = get_mysql_connection()

    try:
        with connection.cursor() as cursor:
            sql = """
                SELECT *
                FROM `vala_game_info`
                WHERE id > 0
                  AND `vala_game_info`.`deleted_at` IS NULL
                ORDER BY season_package_id asc, `index` asc
            """
            cursor.execute(sql)
            results = cursor.fetchall()

            id_2_unit_index = {}
            for index, row in enumerate(results):
                id_2_unit_index[row['id']] = index

            print(f"Fetched {len(id_2_unit_index)} unit mappings")
            return id_2_unit_index
    finally:
        connection.close()


def get_chapter_id_to_lesson_id():
    """Get the chapter_id -> lesson_id mapping."""
    print("Fetching the chapter_id -> lesson_id mapping...")
    connection = get_mysql_connection()

    try:
        with connection.cursor() as cursor:
            sql = """
                SELECT id, `index`
                FROM `vala_game_chapter`
                WHERE deleted_at IS NULL
            """
            cursor.execute(sql)
            results = cursor.fetchall()

            chapter_id_to_lesson_id = {}
            for row in results:
                chapter_id_to_lesson_id[row['id']] = row['index']

            print(f"Fetched {len(chapter_id_to_lesson_id)} lesson mappings")
            return chapter_id_to_lesson_id
    finally:
        connection.close()


def analyze_question_list(question_list_json):
    """Analyze a question list; return (total, correct, accuracy)."""
    try:
        if isinstance(question_list_json, str):
            question_list = json.loads(question_list_json)
        else:
            question_list = question_list_json

        if not isinstance(question_list, list):
            return 0, 0, 0

        total = len(question_list)
        correct = sum(1 for q in question_list if q.get('isRight') is True)
        accuracy = round(correct / total * 100, 2) if total > 0 else 0

        return total, correct, accuracy
    except Exception as e:
        print(f"Failed to parse question list: {e}")
        return 0, 0, 0
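A quick standalone check of the accuracy computation above, on a two-question list shaped like the docstring sample (only `isRight` drives the count):

```python
import json

# Minimal question_list shaped like the docstring sample
question_list_json = json.dumps([
    {"question": {"id": "20-0"}, "isRight": True},
    {"question": {"id": "20-1"}, "isRight": False},
])

questions = json.loads(question_list_json)
total = len(questions)
correct = sum(1 for q in questions if q.get("isRight") is True)
accuracy = round(correct / total * 100, 2) if total else 0
# total == 2, correct == 1, accuracy == 50.0
```

Using `is True` (rather than truthiness) means a missing or malformed `isRight` counts the question as wrong, which matches the conservative behavior of `analyze_question_list`.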


def export_step1():
    """Requirement 1: export the raw data."""
    print("=" * 50)
    print("Running requirement 1: exporting raw data")
    print("=" * 50)

    # Build the mappings
    id_2_unit_index = get_id_2_unit_index()
    chapter_id_to_lesson_id = get_chapter_id_to_lesson_id()

    # Connect to PostgreSQL
    print("Connecting to the PGsql database...")
    pg_conn = get_pgsql_connection()

    try:
        with pg_conn.cursor() as cursor:
            # Build the time range
            start_datetime = datetime.strptime(START_DATE, "%Y%m%d")
            end_datetime = datetime.strptime(END_DATE, "%Y%m%d")
            end_datetime = end_datetime.replace(hour=23, minute=59, second=59)

            sql = """
                SELECT user_id, story_id, chapter_id, question_list, play_time, updated_at
                FROM user_unit_review_question_result
                WHERE updated_at >= %s
                  AND updated_at <= %s
                  AND deleted_at IS NULL
                ORDER BY updated_at
            """

            print(f"Query time range: {start_datetime} to {end_datetime}")
            cursor.execute(sql, (start_datetime, end_datetime))
            results = cursor.fetchall()

            print(f"Fetched {len(results)} records")

            # Transform the rows
            export_data = []
            for row in results:
                user_id = row['user_id']
                story_id = row['story_id']
                chapter_id = row['chapter_id']
                question_list_raw = row['question_list']
                play_time = row['play_time']
                updated_at = row['updated_at']

                # Make sure question_list is a Python object (PGsql jsonb is converted automatically);
                # parse it if it is a string, otherwise use it as-is
                if isinstance(question_list_raw, str):
                    try:
                        question_list = json.loads(question_list_raw)
                    except (json.JSONDecodeError, TypeError):
                        question_list = []
                else:
                    question_list = question_list_raw if question_list_raw else []

                # Map unit_id
                unit_id = id_2_unit_index.get(story_id, -1)

                # Map lesson_id
                lesson_id = chapter_id_to_lesson_id.get(chapter_id, -1)

                # Analyze the question list
                total, correct, accuracy = analyze_question_list(question_list)

                # Convert play time (ms -> s)
                play_time_seconds = int(play_time / 1000) if play_time else 0

                # Serialize question_list back to a JSON string for the export
                question_list_str = json.dumps(question_list, ensure_ascii=False) if question_list else ""

                # Strip timezone info (Excel does not support tz-aware datetimes)
                updated_at_no_tz = updated_at.replace(tzinfo=None) if updated_at else None

                export_data.append({
                    'user_id': user_id,
                    'unit_id': unit_id,
                    'lesson_id': lesson_id,
                    'question_list': question_list_str,
                    '题目总数': total,
                    '正确数量': correct,
                    '正确率': accuracy,
                    'play_time_seconds': play_time_seconds,
                    'updated_at': updated_at_no_tz
                })

            # Export to Excel
            df = pd.DataFrame(export_data)

            # Make sure the output directory exists
            os.makedirs(os.path.dirname(OUTPUT_FILENAME), exist_ok=True)

            df.to_excel(OUTPUT_FILENAME, index=False, engine='openpyxl')
            print(f"Exported {len(export_data)} records to: {OUTPUT_FILENAME}")

            return OUTPUT_FILENAME

    finally:
        pg_conn.close()


def get_all_kp_questions(question_ids):
    """Fetch all question info in bulk to avoid the N+1 query problem."""
    print(f"Bulk-querying info for {len(question_ids)} questions...")

    # Parse every question_id into the kp_question ids that need querying
    kp_ids = set()
    for qid in question_ids:
        try:
            parts = qid.split('-')
            if len(parts) == 2:
                kp_ids.add(int(parts[0]))
        except ValueError:
            continue

    print(f"Need to query {len(kp_ids)} vala_kp_question records")

    # Bulk query MySQL
    connection = get_mysql_connection()
    kp_data_map = {}

    try:
        with connection.cursor() as cursor:
            # Batch fetch with an IN query
            if kp_ids:
                placeholders = ','.join(['%s'] * len(kp_ids))
                sql = f"""
                    SELECT id, kp_id, category, skill, type, question
|
||||
FROM vala_kp_question
|
||||
WHERE id IN ({placeholders}) AND deleted_at IS NULL
|
||||
"""
|
||||
cursor.execute(sql, tuple(kp_ids))
|
||||
results = cursor.fetchall()
|
||||
|
||||
print(f"成功查询到 {len(results)} 条记录")
|
||||
|
||||
# 构建映射表
|
||||
for row in results:
|
||||
kp_data_map[row['id']] = row
|
||||
finally:
|
||||
connection.close()
|
||||
|
||||
# 为每个question_id构建结果
|
||||
question_info_map = {}
|
||||
for question_id in question_ids:
|
||||
try:
|
||||
parts = question_id.split('-')
|
||||
if len(parts) != 2:
|
||||
question_info_map[question_id] = (None, None, None, None, None)
|
||||
continue
|
||||
|
||||
kp_id = int(parts[0])
|
||||
question_index = int(parts[1])
|
||||
|
||||
kp_data = kp_data_map.get(kp_id)
|
||||
if not kp_data:
|
||||
question_info_map[question_id] = (None, None, None, None, None)
|
||||
continue
|
||||
|
||||
# 解析question字段
|
||||
question_list = kp_data['question']
|
||||
if isinstance(question_list, str):
|
||||
question_list = json.loads(question_list)
|
||||
|
||||
# 获取指定索引的题目配置
|
||||
question_config = None
|
||||
if isinstance(question_list, list) and 0 <= question_index < len(question_list):
|
||||
question_config = json.dumps(question_list[question_index], ensure_ascii=False)
|
||||
|
||||
question_info_map[question_id] = (
|
||||
kp_data['kp_id'],
|
||||
kp_data['category'],
|
||||
kp_data['skill'],
|
||||
kp_data['type'],
|
||||
question_config
|
||||
)
|
||||
except Exception as e:
|
||||
print(f"处理题目信息出错 ({question_id}): {e}")
|
||||
question_info_map[question_id] = (None, None, None, None, None)
|
||||
|
||||
return question_info_map
|
||||
|
||||
def export_step2(input_filename):
|
||||
"""需求二:数据聚合统计"""
|
||||
print("=" * 50)
|
||||
print("开始执行需求二:数据聚合统计")
|
||||
print("=" * 50)
|
||||
|
||||
# 读取步骤一的输出文件
|
||||
print(f"正在读取文件: {input_filename}")
|
||||
df = pd.read_excel(input_filename, engine='openpyxl')
|
||||
|
||||
print(f"读取到 {len(df)} 条记录")
|
||||
|
||||
# 按题目聚合统计
|
||||
question_stats = defaultdict(lambda: {
|
||||
'locations': set(),
|
||||
'total_count': 0,
|
||||
'correct_count': 0
|
||||
})
|
||||
|
||||
parse_success_count = 0
|
||||
parse_fail_count = 0
|
||||
empty_question_list_count = 0
|
||||
processed_question_count = 0
|
||||
|
||||
for idx, row in df.iterrows():
|
||||
unit_id = row['unit_id']
|
||||
lesson_id = row['lesson_id']
|
||||
question_list_str = row['question_list']
|
||||
|
||||
# 解析question_list
|
||||
try:
|
||||
if pd.isna(question_list_str) or not question_list_str:
|
||||
question_list = []
|
||||
empty_question_list_count += 1
|
||||
else:
|
||||
question_list = json.loads(question_list_str)
|
||||
parse_success_count += 1
|
||||
except Exception as e:
|
||||
question_list = []
|
||||
parse_fail_count += 1
|
||||
if parse_fail_count <= 3:
|
||||
print(f"[警告] 第 {idx+1} 条记录解析失败: {e}")
|
||||
|
||||
# 统计每道题目
|
||||
for question_item in question_list:
|
||||
if not isinstance(question_item, dict):
|
||||
continue
|
||||
|
||||
question = question_item.get('question', {})
|
||||
question_id = question.get('id')
|
||||
is_right = question_item.get('isRight', False)
|
||||
|
||||
if not question_id:
|
||||
continue
|
||||
|
||||
# 添加出现位置
|
||||
location = f"unit{unit_id}-lesson{lesson_id}"
|
||||
question_stats[question_id]['locations'].add(location)
|
||||
|
||||
# 统计数量
|
||||
question_stats[question_id]['total_count'] += 1
|
||||
if is_right:
|
||||
question_stats[question_id]['correct_count'] += 1
|
||||
|
||||
processed_question_count += 1
|
||||
|
||||
print(f"\n解析统计:")
|
||||
print(f" - 解析成功: {parse_success_count} 条")
|
||||
print(f" - 解析失败: {parse_fail_count} 条")
|
||||
print(f" - question_list 为空: {empty_question_list_count} 条")
|
||||
print(f" - 处理的题目总数: {processed_question_count} 道")
|
||||
print(f" - 聚合得到不同题目: {len(question_stats)} 道")
|
||||
|
||||
# 批量获取所有题目信息(优化性能)
|
||||
all_question_ids = list(question_stats.keys())
|
||||
question_info_map = get_all_kp_questions(all_question_ids)
|
||||
|
||||
# 构建导出数据
|
||||
print(f"\n正在构建导出数据...")
|
||||
export_data = []
|
||||
for idx, (question_id, stats) in enumerate(question_stats.items()):
|
||||
if (idx + 1) % 100 == 0:
|
||||
print(f" 已处理 {idx + 1}/{len(question_stats)} 道题目")
|
||||
|
||||
# 从批量查询结果中获取题目信息
|
||||
kp_id, category, skill, type_field, question_config = question_info_map.get(
|
||||
question_id, (None, None, None, None, None)
|
||||
)
|
||||
|
||||
# 计算正确率
|
||||
total = stats['total_count']
|
||||
correct = stats['correct_count']
|
||||
accuracy = round(correct / total * 100, 2) if total > 0 else 0
|
||||
|
||||
# 出现位置列表
|
||||
locations_list = sorted(list(stats['locations']))
|
||||
locations_str = ', '.join(locations_list)
|
||||
|
||||
export_data.append({
|
||||
'出现位置': locations_str,
|
||||
'question_id': question_id,
|
||||
'kp_id': kp_id,
|
||||
'category': category,
|
||||
'skill': skill,
|
||||
'type': type_field,
|
||||
'题目配置': question_config,
|
||||
'总记录数量': total,
|
||||
'正确数量': correct,
|
||||
'正确率': accuracy
|
||||
})
|
||||
|
||||
# 导出到Excel
|
||||
output_stat_filename = input_filename.replace('.xlsx', '_stat.xlsx')
|
||||
df_stat = pd.DataFrame(export_data)
|
||||
|
||||
print(f"\n正在导出到 Excel...")
|
||||
df_stat.to_excel(output_stat_filename, index=False, engine='openpyxl')
|
||||
|
||||
print(f"成功导出 {len(export_data)} 道题目的统计数据到: {output_stat_filename}")
|
||||
|
||||
return output_stat_filename
|
||||
|
||||
def main():
|
||||
"""主函数"""
|
||||
try:
|
||||
# 执行需求一
|
||||
step1_output = export_step1()
|
||||
|
||||
print("\n")
|
||||
|
||||
# 执行需求二
|
||||
step2_output = export_step2(step1_output)
|
||||
|
||||
print("\n" + "=" * 50)
|
||||
print("所有任务完成!")
|
||||
print(f"需求一输出文件: {step1_output}")
|
||||
print(f"需求二输出文件: {step2_output}")
|
||||
print("=" * 50)
|
||||
|
||||
except Exception as e:
|
||||
print(f"执行出错: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
||||
|
||||
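The per-question roll-up in `export_step2` above comes down to one pattern: a `defaultdict` keyed by question id that accumulates locations, attempts, and correct answers. A standalone sketch with made-up records (the field names here are illustrative, not the production schema):

```python
from collections import defaultdict

# stats[qid] -> where the question appeared, how often, how often correct
stats = defaultdict(lambda: {"locations": set(), "total": 0, "correct": 0})

records = [
    {"unit": 3, "lesson": 12, "qid": "101-0", "is_right": True},
    {"unit": 3, "lesson": 12, "qid": "101-0", "is_right": False},
    {"unit": 4, "lesson": 1,  "qid": "101-0", "is_right": True},
]
for r in records:
    s = stats[r["qid"]]
    s["locations"].add(f"unit{r['unit']}-lesson{r['lesson']}")
    s["total"] += 1
    s["correct"] += int(r["is_right"])

# accuracy as a percentage, rounded to two decimals, as in the script
acc = round(stats["101-0"]["correct"] / stats["101-0"]["total"] * 100, 2)
print(acc)  # 66.67
```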
business_knowledge/git_scripts/export_mid_config.py (new file, 181 lines)
@ -0,0 +1,181 @@
"""
MYSQL_HOST=xxx
MYSQL_USERNAME=xxx
MYSQL_PASSWORD=xxx
MYSQL_DATABASE=xxx
MYSQL_PORT=xxx

The environment variables above are configured in .env.

Export selected records from a table and add some derived columns.

Table: middle_interaction_component

Filter by c_id:
c_id is a 7-character digit string laid out as {2-digit season}{2-digit unit}{3-digit component}.
Keep records whose unit part is 00-20 or 26, i.e. c_id in xx00xxx ~ xx20xxx plus xx26xxx.

Export these columns:
id
c_type
c_id
title
component_config
related_path
kp_relation_info
created_at
updated_at

Add these columns:
1. "组件类型": map c_type to its Chinese name ("xx互动") using this mapping:
{
    "词汇类": {
        "物品互动": "mid_vocab_item",
        "图片互动": "mid_vocab_image",
        "填词互动": "mid_vocab_fillBlank",
        "指令互动": "mid_vocab_instruction"
    },
    "句子类": {
        "对话互动": "mid_sentence_dialogue",
        "语音互动": "mid_sentence_voice",
        "材料互动": "mid_sentence_material",
        "造句互动": "mid_sentence_makeSentence"
    },
    "语法类": {
        "挖空互动": "mid_grammar_cloze",
        "组句互动": "mid_grammar_sentence"
    },
    "发音类": {
        "发音互动": "mid_pron_pron"
    }
}

2. "是否关联了知识点": "是" if kp_relation_info is non-empty and contains at least one concrete
   knowledge-point id, otherwise "否".
   Sample of a valid relation: [{"kpId":"0326011","kpType":"sentence","kpTitle":"What does... look like?","kpSkill":"sentence_meaning","kpSkillName":"语义"}]

3. "是否已组课": "是" if related_path is non-empty, otherwise "否".
   Sample of a valid related_path: {"packageId":13,"unitId":40,"lessonId":213,"packageIndex":3,"unitIndex":2,"lessonIndex":2}

4. "前置对话":
   the preDialog field of component_config; "空" if absent.
   {"asrPrompt":"","cId":"0326022","cType":"mid_sentence_dialogue","meaning":"语义;语音","mode":"read","postDialog":[{"content":"Leave it to me.","npcId":540,"npcName":"Victoria","type":"npc"}],"preDialog":[{"content":"But do we still have time?","npcId":30,"type":"user"}],"question":{"content":"What if we miss the spaceship?","mode":"read","npcId":30,"type":"user"},"resourceMapping":{"Medic":503},"title":"询问万一错过飞船怎么办"}

5. "后置对话":
   the postDialog field of component_config; "空" if absent.

6. Number of non-user roles in the pre/post dialogs:
   across the preDialog and postDialog fields of component_config, count all entries with
   type "npc", deduplicated by npcId.
   Examples:
   ---
   preDialog:
   [{"content":"But do we still have time?","npcId":30,"type":"user"}]
   postDialog:
   [{"content":"Leave it to me.","npcId":540,"npcName":"Victoria","type":"npc"}]
   non-user role count: 1
   ---

   ---
   preDialog:
   [{"content":"But do we still have time?","npcId":31,"type":"npc","npcName":"Ben"}]
   postDialog:
   [{"content":"Leave it to me.","npcId":540,"npcName":"Victoria","type":"npc"}]
   non-user role count: 2
   ---

Output an Excel file.

"""
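As a sanity check on the filtering rule above, the unit slice of a 7-digit c_id can be validated with a small helper (a sketch only; `unit_in_scope` is a hypothetical name, and the script itself filters with SQL LIKE patterns instead):

```python
def unit_in_scope(c_id: str) -> bool:
    """c_id layout: {2-digit season}{2-digit unit}{3-digit component}.
    Keep records whose unit part is 00..20 or 26."""
    if len(c_id) != 7 or not c_id.isdigit():
        return False
    unit = int(c_id[2:4])  # the middle two digits are the unit number
    return 0 <= unit <= 20 or unit == 26

print(unit_in_scope("0326022"))  # unit 26 -> True
print(unit_in_scope("0125001"))  # unit 25 -> False
```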

import os
import json
from datetime import datetime
import pymysql
import pandas as pd
from dotenv import load_dotenv

load_dotenv()

# Component type mapping (c_type -> Chinese display name)
TYPE_MAP = {
    "mid_vocab_item": "物品互动", "mid_vocab_image": "图片互动",
    "mid_vocab_fillBlank": "填词互动", "mid_vocab_instruction": "指令互动",
    "mid_sentence_dialogue": "对话互动", "mid_sentence_voice": "语音互动",
    "mid_sentence_material": "材料互动", "mid_sentence_makeSentence": "造句互动",
    "mid_grammar_cloze": "挖空互动", "mid_grammar_sentence": "组句互动",
    "mid_pron_pron": "发音互动"
}

def get_data():
    conn = pymysql.connect(
        host=os.getenv('MYSQL_HOST'), port=int(os.getenv('MYSQL_PORT', 3306)),
        user=os.getenv('MYSQL_USERNAME'), password=os.getenv('MYSQL_PASSWORD'),
        database=os.getenv('MYSQL_DATABASE'), charset='utf8mb4'
    )

    # Build the c_id filter: unit part 00-20 plus 26
    conditions = [f"c_id LIKE '__{i:02d}___'" for i in range(21)] + ["c_id LIKE '__26___'"]
    where_clause = " OR ".join(conditions)

    sql = f"""SELECT id, c_type, c_id, title, component_config, related_path,
              kp_relation_info, created_at, updated_at
              FROM middle_interaction_component WHERE {where_clause}"""

    df = pd.read_sql(sql, conn)
    conn.close()
    return df

def process_data(df):
    # Component type
    df['组件类型'] = df['c_type'].map(TYPE_MAP).fillna(df['c_type'])

    # Linked to a knowledge point?
    def check_kp(kp_info):
        if not kp_info:
            return "否"
        try:
            data = json.loads(kp_info)
            return "是" if isinstance(data, list) and any(item.get('kpId') for item in data) else "否"
        except (json.JSONDecodeError, TypeError, AttributeError):
            return "否"

    df['是否关联了知识点'] = df['kp_relation_info'].apply(check_kp)

    # Already placed in a lesson?
    def check_lesson(path):
        if not path:
            return "否"
        try:
            return "是" if json.loads(path) else "否"
        except (json.JSONDecodeError, TypeError):
            return "否"

    df['是否已组课'] = df['related_path'].apply(check_lesson)

    # Pre/post dialogs and the NPC count
    def extract_dialog(config, dialog_type):
        if not config:
            return "空"
        try:
            data = json.loads(config)
            dialog = data.get(dialog_type, [])
            return json.dumps(dialog, ensure_ascii=False) if dialog else "空"
        except (json.JSONDecodeError, TypeError, AttributeError):
            return "空"

    def count_npc(config):
        if not config:
            return 0
        try:
            data = json.loads(config)
            npc_ids = set()
            for dialog in ['preDialog', 'postDialog']:
                for item in data.get(dialog, []):
                    if item.get('type') == 'npc' and 'npcId' in item:
                        npc_ids.add(item['npcId'])
            return len(npc_ids)
        except (json.JSONDecodeError, TypeError, AttributeError):
            return 0

    df['前置对话'] = df['component_config'].apply(lambda x: extract_dialog(x, 'preDialog'))
    df['后置对话'] = df['component_config'].apply(lambda x: extract_dialog(x, 'postDialog'))
    df['前置/后置对话中非user角色数量'] = df['component_config'].apply(count_npc)

    return df

if __name__ == "__main__":
    df = get_data()
    df = process_data(df)

    filename = f"middle_interaction_component_export_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
    df.to_excel(filename, index=False)
    print(f"导出完成: {filename}")
business_knowledge/git_scripts/export_realtime_asr.py (new file, 385 lines)
@ -0,0 +1,385 @@
"""
Export script for streaming-ASR audio records.

v1.0
---
The raw data lives in an Elasticsearch index: llm_realtime_asr_log

ES settings come from these environment variables:
ES_HOST=xxx
ES_PORT=9200
ES_SCHEME=https
ES_USER=elastic
ES_PASSWORD=xxx (note: may contain special characters)

Configurable values sit at the very top of the script:
start date (8-digit YYYYMMDD)
end date (8-digit YYYYMMDD)

Only records inside this time range are selected.
Filtering can use the timestamp_int field; sample value: 1,769,496,892

Normally each voice_id has exactly two records.
Aggregate per voice_id and produce the following fields:

asr_prompt (present on one of the two records)
result_str (present on one of the two records)
timestamp (present on both; keep the newer one) -- sample format: 2023-12-12 12:12:12
voice_id
audio_url, built as: https://static.valavala.com/vala_llm/realtime_asr_audio_backup/online/{YYYYMMDD}/{voice_id}.wav
    where the 8-digit date is derived from timestamp, e.g. 20260121
source (present on one of the two records)

Finally export one Excel file.
---

"""
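A minimal sketch of the per-voice_id merge described above: each field is taken from whichever of the two records carries it, and the newer timestamp_int wins (`merge_pair` is a hypothetical helper; the script's own version also builds the audio_url):

```python
def merge_pair(records):
    """Merge the two ES records of one voice_id: take each field from
    whichever record has it, keep the newest timestamp_int."""
    merged = {"asr_prompt": None, "result_str": None, "source": None}
    for rec in records:
        for key in merged:
            if rec.get(key):
                merged[key] = rec[key]
    merged["timestamp_int"] = max(r.get("timestamp_int", 0) for r in records)
    return merged

pair = [
    {"asr_prompt": "say hi", "timestamp_int": 1769496800},
    {"result_str": "hi", "source": "app", "timestamp_int": 1769496892},
]
print(merge_pair(pair))
```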

import os
from datetime import datetime
import requests
import pandas as pd
from dotenv import load_dotenv
from collections import defaultdict
import urllib3

# ==================== Configuration ====================
START_DATE = "20251201"  # start date (8-digit YYYYMMDD)
END_DATE = "20260131"    # end date (8-digit YYYYMMDD)
# =======================================================

# Load environment variables
load_dotenv()

# ES settings
ES_HOST = os.getenv("ES_HOST")
ES_PORT = int(os.getenv("ES_PORT", "9200"))
ES_SCHEME = os.getenv("ES_SCHEME", "https")
ES_USER = os.getenv("ES_USER", "elastic")
ES_PASSWORD = os.getenv("ES_PASSWORD")
ES_INDEX = "llm_realtime_asr_log"

# Batch size per scroll request
SCROLL_SIZE = 1000
SCROLL_TIMEOUT = "5m"


def timestamp_int_from_date(date_str):
    """Convert an 8-digit date string to a second-level timestamp_int."""
    dt = datetime.strptime(date_str, "%Y%m%d")
    return int(dt.timestamp())


def format_timestamp(ts):
    """Convert a timestamp to a formatted string."""
    if isinstance(ts, (int, float)):
        return datetime.fromtimestamp(ts).strftime("%Y-%m-%d %H:%M:%S")
    return ts


def generate_audio_url(voice_id, timestamp):
    """Build the audio_url for a voice_id."""
    date_str = datetime.fromtimestamp(timestamp).strftime("%Y%m%d")
    return f"https://static.valavala.com/vala_llm/realtime_asr_audio_backup/online/{date_str}/{voice_id}.wav"


def connect_es():
    """Test the ES connection."""
    print("正在测试 Elasticsearch 连接...")

    # Silence SSL warnings (self-signed certificates)
    if ES_SCHEME == "https":
        try:
            urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
        except Exception:
            pass

    base_url = f"{ES_SCHEME}://{ES_HOST}:{ES_PORT}"
    auth = (ES_USER, ES_PASSWORD) if ES_USER and ES_PASSWORD else None

    try:
        # Probe the cluster root endpoint
        resp = requests.get(
            base_url,
            auth=auth,
            timeout=10,
            verify=(ES_SCHEME != "https")
        )
        resp.raise_for_status()

        print(f"✓ 成功连接到 Elasticsearch: {ES_HOST}:{ES_PORT}")
        return True
    except Exception as e:
        print(f"✗ 连接失败: {e}")
        return False


def query_data(start_date, end_date):
    """Query ES for the raw records."""
    start_ts = timestamp_int_from_date(start_date)
    end_ts = timestamp_int_from_date(end_date) + 86400  # add one day so the end date is inclusive

    print(f"\n开始查询数据...")
    print(f"时间范围: {start_date} 至 {end_date}")
    print(f"时间戳范围: {start_ts} 至 {end_ts}")

    # Silence SSL warnings
    if ES_SCHEME == "https":
        try:
            urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
        except Exception:
            pass

    base_url = f"{ES_SCHEME}://{ES_HOST}:{ES_PORT}"
    search_url = f"{base_url}/{ES_INDEX}/_search"
    headers = {"Content-Type": "application/json"}
    auth = (ES_USER, ES_PASSWORD) if ES_USER and ES_PASSWORD else None

    query = {
        "query": {
            "range": {
                "timestamp_int": {
                    "gte": start_ts,
                    "lt": end_ts
                }
            }
        },
        "sort": [{"timestamp_int": {"order": "asc"}}],
        "size": SCROLL_SIZE
    }

    try:
        # Initial search (opens a scroll context)
        params = {"scroll": SCROLL_TIMEOUT}
        response = requests.post(
            search_url,
            headers=headers,
            json=query,
            auth=auth,
            params=params,
            timeout=30,
            verify=(ES_SCHEME != "https")
        )
        response.raise_for_status()
        data = response.json()

        scroll_id = data.get("_scroll_id")
        total_hits = data["hits"]["total"]["value"]

        print(f"✓ 查询完成,共找到 {total_hits} 条记录")

        return data, scroll_id, total_hits

    except Exception as e:
        raise RuntimeError(f"ES查询失败: {e}")


def aggregate_by_voice_id(response, scroll_id, total_hits):
    """Group hits by voice_id, paging through the scroll."""
    voice_data = defaultdict(list)
    processed_count = 0

    print("\n开始处理数据...")

    # Silence SSL warnings
    if ES_SCHEME == "https":
        try:
            urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
        except Exception:
            pass

    base_url = f"{ES_SCHEME}://{ES_HOST}:{ES_PORT}"
    scroll_url = f"{base_url}/_search/scroll"
    headers = {"Content-Type": "application/json"}
    auth = (ES_USER, ES_PASSWORD) if ES_USER and ES_PASSWORD else None

    while True:
        hits = response["hits"]["hits"]

        if not hits:
            break

        for hit in hits:
            source = hit["_source"]
            voice_id = source.get("voice_id")

            if voice_id:
                voice_data[voice_id].append(source)

            processed_count += 1

            # Print progress
            progress = (processed_count / total_hits) * 100
            print(f"\r处理进度: {processed_count}/{total_hits} ({progress:.1f}%)", end="")

        # Fetch the next page
        try:
            scroll_response = requests.post(
                scroll_url,
                headers=headers,
                json={
                    "scroll": SCROLL_TIMEOUT,
                    "scroll_id": scroll_id
                },
                auth=auth,
                timeout=30,
                verify=(ES_SCHEME != "https")
            )
            scroll_response.raise_for_status()
            response = scroll_response.json()

            # Refresh the scroll_id (it can change between pages)
            scroll_id = response.get("_scroll_id", scroll_id)

        except Exception as e:
            print(f"\n✗ 获取下一批数据失败: {e}")
            break

    print(f"\n✓ 数据处理完成,共处理 {processed_count} 条记录")
    print(f"✓ 找到 {len(voice_data)} 个唯一的 voice_id")

    # Clear the scroll context
    try:
        clear_scroll_url = f"{base_url}/_search/scroll"
        requests.delete(
            clear_scroll_url,
            headers=headers,
            json={"scroll_id": [scroll_id]},
            auth=auth,
            timeout=10,
            verify=(ES_SCHEME != "https")
        )
    except Exception:
        pass  # a failed cleanup does not affect the result

    return voice_data


def merge_voice_records(voice_data):
    """Merge each voice_id's records, keeping only those with exactly 2."""
    print("\n开始聚合 voice_id 数据...")

    merged_data = []
    valid_count = 0
    invalid_count = 0

    for voice_id, records in voice_data.items():
        # Only voice_ids with exactly two records are valid
        if len(records) != 2:
            invalid_count += 1
            continue

        valid_count += 1

        # Initialise the merged record
        merged_record = {
            "voice_id": voice_id,
            "asr_prompt": None,
            "result_str": None,
            "timestamp": None,
            "source": None,
            "audio_url": None
        }

        # Newest timestamp of the pair
        max_timestamp = max(
            records[0].get("timestamp_int", 0),
            records[1].get("timestamp_int", 0)
        )

        # Merge fields
        for record in records:
            if record.get("asr_prompt"):
                merged_record["asr_prompt"] = record["asr_prompt"]
            if record.get("result_str"):
                merged_record["result_str"] = record["result_str"]
            if record.get("source"):
                merged_record["source"] = record["source"]

        # Set timestamp and audio_url
        merged_record["timestamp"] = format_timestamp(max_timestamp)
        merged_record["audio_url"] = generate_audio_url(voice_id, max_timestamp)

        merged_data.append(merged_record)

    print(f"✓ 聚合完成")
    print(f"  - 有效记录(2条/voice_id): {valid_count}")
    print(f"  - 无效记录(非2条/voice_id): {invalid_count}")

    return merged_data


def export_to_excel(data, start_date, end_date):
    """Export the merged records to Excel."""
    if not data:
        print("\n警告: 没有数据可导出")
        return

    print(f"\n开始导出数据到 Excel...")

    # Build the DataFrame
    df = pd.DataFrame(data)

    # Column order
    columns = ["voice_id", "asr_prompt", "result_str", "timestamp", "audio_url", "source"]
    df = df[columns]

    # Output path
    output_dir = "output"
    os.makedirs(output_dir, exist_ok=True)
    filename = f"realtime_asr_export_{start_date}_{end_date}.xlsx"
    filepath = os.path.join(output_dir, filename)

    # Write the Excel file
    df.to_excel(filepath, index=False, engine="openpyxl")

    print(f"✓ 数据已导出到: {filepath}")
    print(f"✓ 共导出 {len(df)} 条记录")


def main():
    """Entry point."""
    print("=" * 60)
    print("流式语音 ASR 数据导出工具 v1.0")
    print("=" * 60)

    start_time = datetime.now()

    try:
        # Check the ES connection first
        if not connect_es():
            raise Exception("无法连接到 Elasticsearch,请检查配置")

        # Query the raw data
        response, scroll_id, total_hits = query_data(START_DATE, END_DATE)

        if total_hits == 0:
            print("\n没有找到符合条件的数据")
            return

        # Aggregate by voice_id
        voice_data = aggregate_by_voice_id(response, scroll_id, total_hits)

        # Merge the record pairs
        merged_data = merge_voice_records(voice_data)

        # Export to Excel
        export_to_excel(merged_data, START_DATE, END_DATE)

        # Timing
        end_time = datetime.now()
        duration = (end_time - start_time).total_seconds()

        print(f"\n{'=' * 60}")
        print(f"✓ 任务完成! 总耗时: {duration:.2f} 秒")
        print(f"{'=' * 60}")

    except Exception as e:
        print(f"\n✗ 错误: {str(e)}")
        import traceback
        traceback.print_exc()


if __name__ == "__main__":
    main()
business_knowledge/git_scripts/export_resource_name.py (new file, 121 lines)
@ -0,0 +1,121 @@
"""
MYSQL_HOST=xxx
MYSQL_USERNAME=xxx
MYSQL_PASSWORD=xxx
MYSQL_DATABASE=xxx
MYSQL_PORT=xxx

The environment variables above are configured in .env.

Export selected records from a table.

Table: vala_resource_base

Filter all records where type == "角色".

Export these columns:
id
cn_name
en_name

Output an Excel file named "角色资源导出_251031.xlsx".

"""

import os
import pandas as pd
import pymysql
from dotenv import load_dotenv
from datetime import datetime

def load_config():
    """Load configuration from environment variables."""
    load_dotenv()

    config = {
        'host': os.getenv('MYSQL_HOST'),
        'user': os.getenv('MYSQL_USERNAME'),
        'password': os.getenv('MYSQL_PASSWORD'),
        'database': os.getenv('MYSQL_DATABASE'),
        'port': int(os.getenv('MYSQL_PORT', 3306)),
        'charset': 'utf8mb4'
    }

    # Validate the configuration
    for key, value in config.items():
        if value is None and key != 'charset':
            raise ValueError(f"环境变量 {key} 未配置")

    return config

def connect_mysql(config):
    """Connect to MySQL."""
    try:
        connection = pymysql.connect(**config)
        print("MySQL数据库连接成功")
        return connection
    except Exception as e:
        print(f"MySQL数据库连接失败: {e}")
        raise

def export_role_resources():
    """Export the role resource records."""
    try:
        # Load configuration
        config = load_config()

        # Connect to the database
        connection = connect_mysql(config)

        # SQL query
        sql = """
            SELECT
                id,
                cn_name,
                en_name
            FROM vala_resource_base
            WHERE type = '角色'
            ORDER BY id
        """

        print("开始查询数据...")

        # Run the query into a DataFrame
        df = pd.read_sql(sql, connection)

        print(f"查询到 {len(df)} 条记录")

        # Close the connection
        connection.close()

        # Export to Excel
        output_filename = "角色资源导出_251031.xlsx"
        df.to_excel(output_filename, index=False, engine='openpyxl')

        print(f"数据已成功导出到: {output_filename}")
        print(f"导出字段: {list(df.columns)}")
        print(f"导出记录数: {len(df)}")

        # Preview the first rows
        if len(df) > 0:
            print("\n数据预览:")
            print(df.head())

        return output_filename

    except Exception as e:
        print(f"导出过程中发生错误: {e}")
        raise

if __name__ == "__main__":
    try:
        print("开始导出角色资源数据...")
        print(f"执行时间: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")

        output_file = export_role_resources()

        print(f"\n✅ 导出完成! 文件保存为: {output_file}")

    except Exception as e:
        print(f"\n❌ 导出失败: {e}")
business_knowledge/git_scripts/export_unit_challenge_data.py (new file, 343 lines)
@ -0,0 +1,343 @@
"""
** Do not modify this requirement description; write the code after it. **

Requirement 1:
First write the simplest possible script implementing this SQL:

SELECT * FROM `vala_game_info` WHERE id > 0 AND `vala_game_info`.`deleted_at` IS NULL ORDER BY season_package_id asc,`index` asc

Read from environment variables:
MYSQL_HOST=xxx
MYSQL_USERNAME=xxx
MYSQL_PASSWORD=xxx
MYSQL_DATABASE=xxx
MYSQL_PORT=xxx
-----------
Requirement 2:
Filter data in a PostgreSQL database.
Database settings are read from .env:
PG_DB_HOST = xxx
PG_DB_PORT = xxx
PG_DB_USER = xxx
PG_DB_PASSWORD = xxx
PG_DB_DATABASE = xxx

Read this table: user_unit_challenge_question_result

Support a time range given as a start time and an end time in the format "20250110".

The table's time column is updated_at; sample value: "2025-11-05 19:35:46.698246+08:00"

Within this range, select records whose deleted_at is null.

Export these columns:

user_id
unit_id (take each record's story_id and map it via the table returned by get_id_2_unit_index)
score_text
question_list
updated_at
category
play_time_seconds (convert play_time from ms to seconds, keeping the integer part)

Export to an Excel file.

Configuration parameters go at the top of the script.

Requirement 3:
Requirement 2 above is step one.
This requirement is step two: aggregate the data from step one's file.

Group by unit_id + category.

For each group compute:
总记录数量
Perfect数量 (score_text == "Perfect")
Good数量 (score_text == "Good")
Oops数量 (score_text == "Oops")
Perfect率 (Perfect数量 / 总记录数量)
Good率 (Good数量 / 总记录数量)
Oops率 (Oops数量 / 总记录数量)

Export to an Excel file named <step-one name>_stats.xlsx.

"""
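The step-two aggregation above can be sketched with a pandas groupby on a tiny made-up frame (column names follow the spec; this is an illustrative sketch, not the script's final implementation):

```python
import pandas as pd

df = pd.DataFrame({
    "unit_id":    [1, 1, 1, 2],
    "category":   ["a", "a", "a", "b"],
    "score_text": ["Perfect", "Good", "Oops", "Perfect"],
})

# Count each score_text value per (unit_id, category) group
counts = (df.groupby(["unit_id", "category"])["score_text"]
            .value_counts().unstack(fill_value=0)
            .reindex(columns=["Perfect", "Good", "Oops"], fill_value=0))
counts["总记录数量"] = counts.sum(axis=1)
for name in ["Perfect", "Good", "Oops"]:
    counts[f"{name}率"] = counts[name] / counts["总记录数量"]
stats = counts.rename(columns={n: f"{n}数量" for n in ["Perfect", "Good", "Oops"]}).reset_index()
print(stats)
```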
|
||||
|
||||
import os
|
||||
import pymysql
|
||||
import psycopg2
|
||||
from psycopg2.extras import RealDictCursor
|
||||
from datetime import datetime
|
||||
import pandas as pd
|
||||
from dotenv import load_dotenv
|
||||
|
||||
# 加载环境变量
|
||||
load_dotenv()
|
||||
|
||||
# ============ 配置参数 ============
|
||||
START_DATE = "20250915" # 起始时间
|
||||
END_DATE = "20251128" # 截止时间
|
||||
OUTPUT_NAME = "unit_challenge_data_{}_{}.xlsx".format(START_DATE, END_DATE) # 输出文件名
|
||||
OUTPUT_FILENAME = os.path.join("./output", OUTPUT_NAME)
|
||||
# =================================
|
||||
|
||||
def get_id_2_unit_index():
|
||||
# 读取数据库配置
|
||||
db_host = os.getenv('MYSQL_HOST')
|
||||
db_user = os.getenv('MYSQL_USERNAME')
|
||||
db_password = os.getenv('MYSQL_PASSWORD')
|
||||
db_name = os.getenv('MYSQL_DATABASE')
|
||||
db_port = os.getenv('MYSQL_PORT')
|
||||
|
||||
# 简单的参数检查
|
||||
if not all([db_host, db_user, db_password, db_name]):
|
||||
print("Error: Missing database configuration in .env file.")
|
||||
print("Ensure MYSQL_HOST, MYSQL_USERNAME, MYSQL_PASSWORD, MYSQL_DATABASE are set.")
|
||||
return
|
||||
|
||||
try:
|
||||
# 连接数据库
|
||||
connection = pymysql.connect(
|
||||
host=db_host,
|
||||
user=db_user,
|
||||
password=db_password,
|
||||
database=db_name,
|
||||
port=int(db_port) if db_port else 3306,
|
||||
cursorclass=pymysql.cursors.DictCursor
|
||||
)
|
||||
|
||||
print(f"Connected to database: {db_host}")
|
||||
|
||||
try:
|
||||
with connection.cursor() as cursor:
|
||||
# 定义 SQL 语句
|
||||
sql = """
|
||||
SELECT *
|
||||
FROM `vala_game_info`
|
||||
WHERE id > 0
|
||||
AND `vala_game_info`.`deleted_at` IS NULL
|
||||
ORDER BY season_package_id asc, `index` asc
|
||||
"""
|
||||
|
||||
print(f"Executing SQL: {sql}")
|
||||
|
||||
# 执行查询
|
||||
cursor.execute(sql)
|
||||
|
||||
# 获取所有结果
|
||||
results = cursor.fetchall()
|
||||
|
||||
print(f"Total records found: {len(results)}")
|
||||
print("-" * 30)
|
||||
|
||||
# 打印结果
|
||||
print(results)
|
||||
id_2_unit_index = {}
|
||||
for index, row in enumerate(results):
|
||||
id_2_unit_index[row['id']] = index
|
||||
|
||||
print("映射结果:")
|
||||
print(id_2_unit_index)
|
||||
|
||||
|
||||
|
||||
print("-" * 30)
|
||||
print("Done.")
|
||||
return id_2_unit_index
|
||||
|
||||
finally:
|
||||
connection.close()
|
||||
|
||||
except Exception as e:
|
||||
print(f"An error occurred: {e}")
|
||||
|
||||
|
||||
def export_unit_challenge_data(start_date, end_date, output_filename):
    """
    Export unit-challenge data from the PostgreSQL database.
    """
    # Read PostgreSQL database configuration
    pg_host = os.getenv('PG_DB_HOST')
    pg_port = os.getenv('PG_DB_PORT')
    pg_user = os.getenv('PG_DB_USER')
    pg_password = os.getenv('PG_DB_PASSWORD')
    pg_database = os.getenv('PG_DB_DATABASE')

    # Check configuration
    if not all([pg_host, pg_port, pg_user, pg_password, pg_database]):
        print("Error: Missing PostgreSQL database configuration in .env file.")
        print("Ensure PG_DB_HOST, PG_DB_PORT, PG_DB_USER, PG_DB_PASSWORD, PG_DB_DATABASE are set.")
        return

    # Get the id -> unit_index mapping
    print("Fetching the unit_id mapping table...")
    id_2_unit_index = get_id_2_unit_index()
    if not id_2_unit_index:
        print("Error: Failed to get id_2_unit_index mapping.")
        return

    # Convert the date format: "20250110" -> "2025-01-10 00:00:00"
    start_datetime = datetime.strptime(start_date, "%Y%m%d").strftime("%Y-%m-%d 00:00:00")
    end_datetime = datetime.strptime(end_date, "%Y%m%d").strftime("%Y-%m-%d 00:00:00")

    print(f"Date range: {start_datetime} to {end_datetime}")

    try:
        # Connect to PostgreSQL
        connection = psycopg2.connect(
            host=pg_host,
            port=int(pg_port),
            user=pg_user,
            password=pg_password,
            database=pg_database,
            cursor_factory=RealDictCursor
        )

        print(f"Connected to PostgreSQL database: {pg_host}")

        try:
            with connection.cursor() as cursor:
                # Define the SQL query
                sql = """
                    SELECT
                        user_id,
                        story_id,
                        score_text,
                        question_list,
                        updated_at,
                        category,
                        play_time
                    FROM user_unit_challenge_question_result
                    WHERE deleted_at IS NULL
                      AND updated_at >= %s
                      AND updated_at < %s
                    ORDER BY updated_at ASC
                """

                print("Executing query...")

                # Execute the query
                cursor.execute(sql, (start_datetime, end_datetime))

                # Fetch all results
                results = cursor.fetchall()

                print(f"Fetched {len(results)} records")

                # Process the data
                export_data = []
                for row in results:
                    # Map story_id to unit_id
                    story_id = row['story_id']
                    unit_id = id_2_unit_index.get(story_id, None)

                    # Convert play_time (milliseconds) to whole seconds
                    play_time_seconds = row['play_time'] // 1000 if row['play_time'] else 0

                    # Strip timezone info from updated_at (Excel does not support tz-aware datetimes)
                    updated_at = row['updated_at']
                    if updated_at and hasattr(updated_at, 'replace'):
                        updated_at = updated_at.replace(tzinfo=None)

                    export_data.append({
                        'user_id': row['user_id'],
                        'unit_id': unit_id,
                        'score_text': row['score_text'],
                        'question_list': row['question_list'],
                        'updated_at': updated_at,
                        'category': row['category'],
                        'play_time_seconds': play_time_seconds
                    })

                # Export to Excel
                if export_data:
                    df = pd.DataFrame(export_data)
                    df.to_excel(output_filename, index=False, engine='openpyxl')
                    print(f"Data exported to: {output_filename}")
                    print(f"Exported {len(export_data)} records in total")
                else:
                    print("No data to export")

        finally:
            connection.close()
            print("Database connection closed")

    except Exception as e:
        print(f"An error occurred: {e}")

def aggregate_stats(input_filename):
    """
    Aggregate the Excel file produced in step one.
    Group by unit_id + category and compute the per-group metrics.
    """
    try:
        # Read the Excel file exported in step one
        print(f"Reading file: {input_filename}")
        df = pd.read_excel(input_filename, engine='openpyxl')

        print(f"Read {len(df)} records")

        # Group by unit_id + category
        grouped = df.groupby(['unit_id', 'category'], dropna=False)

        stats_data = []
        for (unit_id, category), group in grouped:
            total_count = len(group)
            perfect_count = (group['score_text'] == 'Perfect').sum()
            good_count = (group['score_text'] == 'Good').sum()
            oops_count = (group['score_text'] == 'Oops').sum()

            # Compute the ratios
            perfect_rate = round(perfect_count / total_count if total_count > 0 else 0, 2)
            good_rate = round(good_count / total_count if total_count > 0 else 0, 2)
            oops_rate = round(oops_count / total_count if total_count > 0 else 0, 2)

            stats_data.append({
                'unit_id': unit_id,
                'category': category,
                'total_count': total_count,
                'perfect_count': perfect_count,
                'good_count': good_count,
                'oops_count': oops_count,
                'perfect_rate': perfect_rate,
                'good_rate': good_rate,
                'oops_rate': oops_rate
            })

        # Build the output filename
        base_name = os.path.splitext(input_filename)[0]
        output_filename = f"{base_name}_stats.xlsx"

        # Export the statistics
        if stats_data:
            stats_df = pd.DataFrame(stats_data)
            stats_df.to_excel(output_filename, index=False, engine='openpyxl')
            print(f"Statistics exported to: {output_filename}")
            print(f"{len(stats_data)} groups in total")
        else:
            print("No data to aggregate")

    except Exception as e:
        print(f"An error occurred during aggregation: {e}")

if __name__ == "__main__":
    # Step one: export the raw data
    print("=" * 50)
    print("Step 1: export raw data")
    print("=" * 50)
    export_unit_challenge_data(START_DATE, END_DATE, OUTPUT_FILENAME)

    # Step two: aggregate the data
    print("\n" + "=" * 50)
    print("Step 2: aggregate statistics")
    print("=" * 50)
    aggregate_stats(OUTPUT_FILENAME)

    print("\n" + "=" * 50)
    print("All done!")
    print("=" * 50)
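The grouping in `aggregate_stats` (bucket by `unit_id` + `category`, count the score labels, derive rates) can be sketched without pandas; a minimal pure-Python sketch, using made-up illustrative rows rather than the production schema:

```python
from collections import Counter, defaultdict


def aggregate(records):
    """Group records by (unit_id, category) and compute score-label rates.

    Mirrors the pandas groupby in aggregate_stats; `records` is a list of
    dicts with 'unit_id', 'category' and 'score_text' keys (illustrative
    stand-ins for the exported Excel rows).
    """
    groups = defaultdict(Counter)
    for r in records:
        groups[(r['unit_id'], r['category'])][r['score_text']] += 1

    stats = []
    for (unit_id, category), counts in sorted(groups.items()):
        total = sum(counts.values())
        stats.append({
            'unit_id': unit_id,
            'category': category,
            'total_count': total,
            # Counter returns 0 for missing labels, so the rates are safe
            'perfect_rate': round(counts['Perfect'] / total, 2) if total else 0,
            'good_rate': round(counts['Good'] / total, 2) if total else 0,
            'oops_rate': round(counts['Oops'] / total, 2) if total else 0,
        })
    return stats


rows = [
    {'unit_id': 1, 'category': 'a', 'score_text': 'Perfect'},
    {'unit_id': 1, 'category': 'a', 'score_text': 'Good'},
    {'unit_id': 1, 'category': 'a', 'score_text': 'Perfect'},
    {'unit_id': 2, 'category': 'b', 'score_text': 'Oops'},
]
print(aggregate(rows))
```

The pandas version in the script adds absolute counts per label as well; the grouping key and rounding behaviour are the same.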

business_knowledge/git_scripts/export_user_id_data.py (new file, 1846 lines)
File diff suppressed because it is too large

business_knowledge/git_scripts/extract_core_speaking_data.py (new file, 681 lines)
File diff suppressed because one or more lines are too long

business_knowledge/git_scripts/extract_user_audio.py (new file, 480 lines)
@@ -0,0 +1,480 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
User audio data sampling script.

Purpose: extract user audio data for a given time window from the sharded
PostgreSQL tables (user_component_play_record_0 ~ 7).

Main logic:
1. Data source: iterate over tables user_component_play_record_0 through
   user_component_play_record_7.
2. Filter conditions:
   - Time range: configurable.
   - Data validity: user_behavior_info is non-empty and contains userAudio
     and pronunciationScore.
3. Sampling rules:
   - Target total: configurable.
   - Per-user limit: configurable.
   - Random strategy: shuffle first, then cap per user, finally pad or
     truncate to the target count.
4. Output: exported as an Excel file.
   Included fields:
   - index: row number
   - source_table: source table name
   - created_at: creation time
   - user_id: user ID
   - component_unique_code: unique component identifier
   - pronunciationScore: pronunciation score
   - userAudio: audio URL
   - expressContent: text the user read aloud
"""

import os
import json
import re
import random
import psycopg2
import pymysql
import pandas as pd
from datetime import datetime
from typing import List, Dict, Any
from dotenv import load_dotenv

# Configuration
CONFIG = {
    # Time window to filter on
    'START_TIME': '2025-11-10 00:00:00+08:00',
    'END_TIME': '2025-12-10 23:59:59+08:00',

    # Sampling parameters
    'TARGET_TOTAL': 10000,   # target total sample count
    'MAX_PER_USER': 20,      # max samples per user
    'TABLE_COUNT': 8,        # number of shard tables (0 ~ N-1)

    # Component type filter
    'C_TYPE_FILTER': 'mid_sentence_dialogue'  # only keep dialogue-interaction components
}

class AudioDataExtractor:
    def __init__(self):
        # Load environment variables
        load_dotenv()

        # PostgreSQL connection configuration
        self.db_config = {
            'host': os.getenv('PG_DB_HOST'),
            'port': os.getenv('PG_DB_PORT'),
            'user': os.getenv('PG_DB_USER'),
            'password': os.getenv('PG_DB_PASSWORD'),
            'database': os.getenv('PG_DB_DATABASE')
        }

        # MySQL connection configuration
        self.mysql_config = {
            'host': os.getenv('MYSQL_HOST'),
            'user': os.getenv('MYSQL_USERNAME'),
            'password': os.getenv('MYSQL_PASSWORD'),
            'database': "vala_test",
            'port': int(os.getenv('MYSQL_PORT', 3306)),
            'charset': 'utf8mb4'
        }

        # Shard table names
        self.table_names = [f'user_component_play_record_{i}' for i in range(CONFIG['TABLE_COUNT'])]

        # Target total count
        self.target_total = CONFIG['TARGET_TOTAL']
        # Max records per user
        self.max_per_user = CONFIG['MAX_PER_USER']

    def get_db_connection(self):
        """Open a PostgreSQL connection."""
        try:
            conn = psycopg2.connect(**self.db_config)
            return conn
        except Exception as e:
            print(f"Database connection failed: {e}")
            raise

    def extract_audio_info(self, user_behavior_info: str) -> Dict[str, Any]:
        """Extract audio info from the user_behavior_info field."""
        try:
            behavior_data = json.loads(user_behavior_info)
            if isinstance(behavior_data, list) and len(behavior_data) > 0:
                # Take the first element
                data = behavior_data[0]
                if 'userAudio' in data and 'pronunciationScore' in data:
                    return {
                        'userAudio': data.get('userAudio'),
                        'pronunciationScore': data.get('pronunciationScore'),
                        'expressContent': data.get('expressContent')
                    }
        except (json.JSONDecodeError, KeyError, IndexError):
            pass
        return {}

    def query_table_data(self, table_name: str) -> List[Dict]:
        """Query one shard table."""
        conn = self.get_db_connection()
        cursor = conn.cursor()

        try:
            # Only the table name is interpolated (it comes from our own list);
            # the filter values are passed as query parameters.
            query = f"""
                SELECT user_id, component_unique_code, c_type, c_id, created_at, user_behavior_info
                FROM {table_name}
                WHERE created_at >= %s
                  AND created_at <= %s
                  AND c_type = %s
                  AND user_behavior_info IS NOT NULL
                  AND user_behavior_info != ''
            """

            cursor.execute(query, (CONFIG['START_TIME'], CONFIG['END_TIME'], CONFIG['C_TYPE_FILTER']))
            rows = cursor.fetchall()

            results = []
            for row in rows:
                user_id, component_unique_code, c_type, c_id, created_at, user_behavior_info = row

                # Extract the audio info
                audio_info = self.extract_audio_info(user_behavior_info)
                if audio_info and 'userAudio' in audio_info and 'pronunciationScore' in audio_info:
                    results.append({
                        'source_table': table_name,
                        'user_id': user_id,
                        'component_unique_code': component_unique_code,
                        'c_type': c_type,
                        'c_id': c_id,
                        'created_at': created_at,
                        'userAudio': audio_info['userAudio'],
                        'pronunciationScore': audio_info['pronunciationScore'],
                        'expressContent': audio_info.get('expressContent')
                    })

            return results

        finally:
            cursor.close()
            conn.close()

    def get_component_configs(self, data: List[Dict]) -> Dict[str, str]:
        """Batch-fetch component configuration from MySQL."""
        # Collect all unique (c_type, c_id) pairs
        unique_components = set()
        for record in data:
            if 'c_type' in record and 'c_id' in record:
                unique_components.add((record['c_type'], record['c_id']))

        if not unique_components:
            print("No components to look up")
            return {}

        print(f"Querying MySQL for the configuration of {len(unique_components)} components...")

        # Connect to MySQL
        try:
            conn = pymysql.connect(**self.mysql_config)
            cursor = conn.cursor()

            # Component configs keyed by "c_type-c_id"
            component_configs = {}

            # Query each component
            for c_type, c_id in unique_components:
                query = """
                    SELECT component_config
                    FROM middle_interaction_component
                    WHERE c_type = %s AND c_id = %s
                """
                cursor.execute(query, (c_type, c_id))
                result = cursor.fetchone()

                if result and result[0]:
                    key = f"{c_type}-{c_id}"
                    component_configs[key] = result[0]

            cursor.close()
            conn.close()

            print(f"Fetched {len(component_configs)} component configs")
            return component_configs

        except Exception as e:
            print(f"Failed to query MySQL component configs: {e}")
            return {}

    @staticmethod
    def clean_text(text: str) -> str:
        """Normalize text: lowercase, strip punctuation and whitespace."""
        if not text:
            return ""
        # Lowercase
        text = text.lower()
        # Remove punctuation and special characters, keep letters and digits
        text = re.sub(r'[^\w\s]', '', text)
        # Remove all whitespace
        text = re.sub(r'\s+', '', text)
        return text

    @staticmethod
    def levenshtein_distance(s1: str, s2: str) -> int:
        """Compute the Levenshtein edit distance between two strings."""
        if len(s1) < len(s2):
            return AudioDataExtractor.levenshtein_distance(s2, s1)

        if len(s2) == 0:
            return len(s1)

        previous_row = range(len(s2) + 1)
        for i, c1 in enumerate(s1):
            current_row = [i + 1]
            for j, c2 in enumerate(s2):
                # Cost of insertion, deletion and substitution
                insertions = previous_row[j + 1] + 1
                deletions = current_row[j] + 1
                substitutions = previous_row[j] + (c1 != c2)
                current_row.append(min(insertions, deletions, substitutions))
            previous_row = current_row

        return previous_row[-1]

    def parse_and_filter_by_config(self, data: List[Dict], component_configs: Dict[str, str]) -> List[Dict]:
        """Parse component configs and keep only records with question.mode == 'read'."""
        print("\nFiltering data by component configuration...")
        print(f"Records before filtering: {len(data)}")

        filtered_data = []
        skipped_no_config = 0
        skipped_invalid_json = 0
        skipped_wrong_mode = 0

        for record in data:
            c_type = record.get('c_type')
            c_id = record.get('c_id')

            if not c_type or not c_id:
                continue

            # Look up the component config
            key = f"{c_type}-{c_id}"
            config_str = component_configs.get(key)

            if not config_str:
                skipped_no_config += 1
                continue

            try:
                # Parse the JSON config
                config = json.loads(config_str)

                # Check question.mode == "read"
                question = config.get('question', {})
                mode = question.get('mode')

                if mode == 'read':
                    # Use question.content as refText
                    ref_text = question.get('content', '')
                    record['refText'] = ref_text

                    # Compute the edit distance
                    express_content = record.get('expressContent', '')

                    # Normalize both texts (ignore punctuation and case)
                    cleaned_express = self.clean_text(express_content)
                    cleaned_ref = self.clean_text(ref_text)

                    edit_distance = self.levenshtein_distance(cleaned_express, cleaned_ref)
                    record['editDistance'] = edit_distance

                    # Compute the relative edit distance
                    ref_len = len(cleaned_ref)
                    if ref_len > 0:
                        relative_edit_distance = round(edit_distance / ref_len, 4)
                    else:
                        relative_edit_distance = 0
                    record['relativeEditDistance'] = relative_edit_distance

                    filtered_data.append(record)
                else:
                    skipped_wrong_mode += 1

            except (json.JSONDecodeError, AttributeError, TypeError):
                skipped_invalid_json += 1
                continue

        print(f"Records after filtering: {len(filtered_data)}")
        print(f"  - missing config: {skipped_no_config}")
        print(f"  - config parse failures: {skipped_invalid_json}")
        print(f"  - mode is not 'read': {skipped_wrong_mode}")

        return filtered_data

    def collect_all_data(self) -> List[Dict]:
        """Collect data from all shard tables."""
        all_data = []

        for table_name in self.table_names:
            print(f"Querying table: {table_name}")
            try:
                table_data = self.query_table_data(table_name)
                all_data.extend(table_data)
                print(f"Table {table_name}: {len(table_data)} records")
            except Exception as e:
                print(f"Failed to query table {table_name}: {e}")
                continue

        print(f"Collected {len(all_data)} valid records in total")

        if not all_data:
            return []

        # Fetch component configs from MySQL
        component_configs = self.get_component_configs(all_data)

        # Keep only records whose question.mode == "read"
        filtered_data = self.parse_and_filter_by_config(all_data, component_configs)

        return filtered_data

    def random_filter_data(self, data: List[Dict]) -> List[Dict]:
        """Randomly shuffle the data (no stratification by score)."""
        shuffled_data = data.copy()
        random.shuffle(shuffled_data)

        print(f"Shuffled {len(shuffled_data)} records")
        return shuffled_data

    def apply_user_constraints(self, data: List[Dict]) -> List[Dict]:
        """Apply the per-user cap (at most max_per_user records per user)."""
        user_records = {}

        # Group by user
        for record in data:
            user_id = record['user_id']
            if user_id not in user_records:
                user_records[user_id] = []
            user_records[user_id].append(record)

        # Keep at most max_per_user records per user
        final_data = []
        for user_id, records in user_records.items():
            if len(records) <= self.max_per_user:
                final_data.extend(records)
            else:
                # Randomly pick max_per_user of them
                selected = random.sample(records, self.max_per_user)
                final_data.extend(selected)

        return final_data

    def export_to_excel(self, data: List[Dict], filename: str = 'user_audio_data.xlsx'):
        """Export the data to an Excel file."""
        # Prepare the rows
        export_data = []
        for i, record in enumerate(data):
            # Handle timezones: convert to naive datetimes
            created_at = record['created_at']
            if hasattr(created_at, 'tz_localize'):
                created_at = created_at.tz_localize(None)
            elif hasattr(created_at, 'replace'):
                created_at = created_at.replace(tzinfo=None)

            export_data.append({
                'index': i,
                'source_table': record['source_table'],
                'created_at': created_at,
                'user_id': record['user_id'],
                'component_unique_code': record['component_unique_code'],
                'c_type': record.get('c_type'),
                'c_id': record.get('c_id'),
                'pronunciationScore': record['pronunciationScore'],
                'userAudio': record['userAudio'],
                'expressContent': record.get('expressContent'),
                'refText': record.get('refText'),
                'editDistance': record.get('editDistance'),
                'relativeEditDistance': record.get('relativeEditDistance')
            })

        # Create a DataFrame and export
        df = pd.DataFrame(export_data)
        df.to_excel(filename, index=False)
        print(f"Data exported to: {filename}")
        print(f"Exported {len(export_data)} records in total")

        # Print statistics
        self.print_statistics(data)

    def print_statistics(self, data: List[Dict]):
        """Print summary statistics."""
        print("\n=== Data statistics ===")

        # Score statistics (distribution only, no bucketing)
        scores = [record['pronunciationScore'] for record in data]
        print("\nScores:")
        print(f"  total records: {len(scores)}")
        print(f"  max score: {max(scores)}")
        print(f"  min score: {min(scores)}")
        print(f"  mean score: {sum(scores) / len(scores):.2f}")

        # User distribution
        user_counts = {}
        for record in data:
            user_id = record['user_id']
            user_counts[user_id] = user_counts.get(user_id, 0) + 1

        print("\nUsers:")
        print(f"  total users: {len(user_counts)}")
        print(f"  mean records per user: {len(data) / len(user_counts):.2f}")

        # Table distribution
        table_counts = {}
        for record in data:
            table = record['source_table']
            table_counts[table] = table_counts.get(table, 0) + 1

        print("\nTable distribution:")
        for table, count in sorted(table_counts.items()):
            print(f"  {table}: {count} records")

    def run(self):
        """Main pipeline."""
        print("Extracting user audio data...")

        # 1. Collect all data
        all_data = self.collect_all_data()

        if not all_data:
            print("No matching data found")
            return

        # 2. Shuffle the data (no stratification by score)
        filtered_data = self.random_filter_data(all_data)

        # 3. Apply the per-user cap
        final_data = self.apply_user_constraints(filtered_data)

        # 4. If below the target count, top up from the remaining records
        if len(final_data) < self.target_total:
            print(f"Current count {len(final_data)} is below the target {self.target_total}")
            used_records = set((r['user_id'], r['component_unique_code'], str(r['created_at'])) for r in final_data)
            available_data = [r for r in all_data if (r['user_id'], r['component_unique_code'], str(r['created_at'])) not in used_records]

            needed = self.target_total - len(final_data)
            if len(available_data) >= needed:
                additional = random.sample(available_data, needed)
                final_data.extend(additional)

        # 5. If above the target count, randomly keep target_total records
        if len(final_data) > self.target_total:
            final_data = random.sample(final_data, self.target_total)

        # 6. Export to Excel
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"user_audio_data_{timestamp}.xlsx"
        self.export_to_excel(final_data, filename)

def main():
    extractor = AudioDataExtractor()
    extractor.run()


if __name__ == "__main__":
    main()
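The validity metric in `parse_and_filter_by_config` above is "relative edit distance between what the user said and the reference text, after normalization". A standalone sketch of that metric (the sample strings below are made up for illustration):

```python
import re


def clean_text(text: str) -> str:
    """Lowercase and strip punctuation/whitespace, as in AudioDataExtractor.clean_text."""
    text = (text or "").lower()
    text = re.sub(r'[^\w\s]', '', text)   # drop punctuation, keep letters/digits
    return re.sub(r'\s+', '', text)       # drop all whitespace


def levenshtein(s1: str, s2: str) -> int:
    """Classic O(len(s1) * len(s2)) dynamic-programming edit distance."""
    if len(s1) < len(s2):
        s1, s2 = s2, s1
    previous = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1):
        current = [i + 1]
        for j, c2 in enumerate(s2):
            current.append(min(previous[j + 1] + 1,       # deletion
                               current[j] + 1,            # insertion
                               previous[j] + (c1 != c2))) # substitution
        previous = current
    return previous[-1]


ref = clean_text("Hello, world!")   # normalizes to "helloworld"
said = clean_text("hello word")     # normalizes to "helloword"
dist = levenshtein(said, ref)       # one missing letter -> 1
relative = round(dist / len(ref), 4)
print(dist, relative)
```

Normalizing first means punctuation, casing, and spacing never count as errors, so the relative distance only reflects differing letters; a value of 0 is an exact match and values near 1 mean the texts barely overlap.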

@@ -0,0 +1,463 @@
"""
|
||||
从es中 筛选用户数据
|
||||
|
||||
es相关配置通过以下环节变量
|
||||
|
||||
ES_HOST=xxx
|
||||
ES_PORT=9200
|
||||
ES_SCHEME=https
|
||||
ES_USER=elastic
|
||||
ES_PASSWORD=xxx
|
||||
|
||||
|
||||
index: user-audio
|
||||
|
||||
脚本思路:
|
||||
|
||||
给定 一些过滤参数; 给定导出的excel文件名 (在脚本中以变量方式配置就行)
|
||||
|
||||
导出我要的字段内容到一个 excel
|
||||
|
||||
过滤字段:
|
||||
timeStr: 字段内容为str 格式为: 2024-12-31 15:53:19
|
||||
期望支持配置 开始 日期 和 结束日期 (可以只配置一个 只配 开始日期 则筛选 >= 开始日期的记录, 只配结束日期 则筛选 <= 结束日期的记录)
|
||||
|
||||
输出字段内容支持配置:
|
||||
|
||||
|
||||
"""
|
||||
|
||||
import os
from datetime import datetime
from dotenv import load_dotenv
from elasticsearch import Elasticsearch
import pandas as pd
import urllib.parse
from collections import defaultdict

# Load environment variables
load_dotenv()

# Configuration
INDEX_NAME = "llm_ai_tools_log"
OUTPUT_FILE = "单元挑战用户数据_250906_251024.xlsx"
START_DATE = "2025-09-06 00:00:00"  # start date, format YYYY-MM-DD HH:MM:SS; None for no lower bound
END_DATE = "2025-10-24 00:00:00"    # end date, format YYYY-MM-DD HH:MM:SS; None for no upper bound

# Filter on the type field; leave empty for no restriction
FILTER_TYPES = ["sent_check_challenge", "speaking_topic_challenge"]

# Optional userId filter: a list of ints; leave empty for no restriction
FILTER_USER_IDS = []  # e.g. [123, 456]

# Fields to export
EXPORT_FIELDS = [
    "type",
    "question",
    "user_answer",
    "time_total_ms",
    "score",
    "is_passed",
    "model",
    "write_time_str",
    "write_time_int",
]


def create_es_client():
    """Create an Elasticsearch client."""
    # Read the environment variables and print debug info
    es_host = os.getenv('ES_HOST')
    es_port = os.getenv('ES_PORT', 9200)
    es_scheme = os.getenv('ES_SCHEME', 'https')
    es_user = os.getenv('ES_USER')
    es_password = os.getenv('ES_PASSWORD')

    print("[DEBUG] ES configuration:")
    print(f"  ES_HOST: {es_host}")
    print(f"  ES_PORT: {es_port}")
    print(f"  ES_SCHEME: {es_scheme}")
    print(f"  ES_USER: {es_user}")
    print(f"  ES_PASSWORD: {'*** set ***' if es_password else 'not set'}")

    # Check the required environment variables
    if not es_host:
        raise ValueError("ES_HOST environment variable is not set")
    if not es_user:
        raise ValueError("ES_USER environment variable is not set")
    if not es_password:
        raise ValueError("ES_PASSWORD environment variable is not set")

    # URL-encode the username and password in case they contain special characters
    encoded_user = urllib.parse.quote(es_user, safe='')
    encoded_password = urllib.parse.quote(es_password, safe='')

    print("[DEBUG] Credentials URL-encoded in case they contain special characters")

    # Option 1: embed the credentials in the URL
    host_url_with_auth = f"{es_scheme}://{encoded_user}:{encoded_password}@{es_host}:{es_port}"
    print(f"[DEBUG] Connection URL (with auth): {es_scheme}://{encoded_user}:***@{es_host}:{es_port}")

    try:
        # Attempt 1: credentials embedded in the URL
        es_config_1 = {
            'hosts': [host_url_with_auth],
            'verify_certs': False,
            'ssl_show_warn': False,
            'request_timeout': 30,
            'retry_on_timeout': True
        }

        print("[DEBUG] Attempt 1: credentials embedded in the URL")
        es_client = Elasticsearch(**es_config_1)

        # Test the connection
        info = es_client.info()
        print("[SUCCESS] Attempt 1 connected")
        return es_client

    except Exception as e1:
        print(f"[DEBUG] Attempt 1 failed: {e1}")

        try:
            # Attempt 2: use the basic_auth parameter
            host_url = f"{es_scheme}://{es_host}:{es_port}"
            es_config_2 = {
                'hosts': [host_url],
                'basic_auth': (es_user, es_password),
                'verify_certs': False,
                'ssl_show_warn': False,
                'request_timeout': 30,
                'retry_on_timeout': True
            }

            print("[DEBUG] Attempt 2: basic_auth parameter")
            es_client = Elasticsearch(**es_config_2)

            # Test the connection
            info = es_client.info()
            print("[SUCCESS] Attempt 2 connected")
            return es_client

        except Exception as e2:
            print(f"[DEBUG] Attempt 2 failed: {e2}")

            try:
                # Attempt 3: use the http_auth parameter (older client versions)
                es_config_3 = {
                    'hosts': [host_url],
                    'http_auth': (es_user, es_password),
                    'verify_certs': False,
                    'ssl_show_warn': False,
                    'request_timeout': 30,
                    'retry_on_timeout': True
                }

                print("[DEBUG] Attempt 3: http_auth parameter")
                es_client = Elasticsearch(**es_config_3)

                # Test the connection
                info = es_client.info()
                print("[SUCCESS] Attempt 3 connected")
                return es_client

            except Exception as e3:
                print(f"[DEBUG] Attempt 3 failed: {e3}")
                print("[ERROR] All authentication methods failed")
                raise e3


def build_query(start_date=None, end_date=None):
    """Build the ES query."""
    must_conditions = []

    # Time-range condition
    if start_date or end_date:
        range_query = {}

        if start_date:
            start_timestamp = int(datetime.strptime(start_date, "%Y-%m-%d %H:%M:%S").timestamp())
            range_query["gte"] = start_timestamp
            print(f"[DEBUG] Start timestamp: {start_timestamp} (for {start_date})")

        if end_date:
            end_timestamp = int(datetime.strptime(end_date, "%Y-%m-%d %H:%M:%S").timestamp())
            range_query["lte"] = end_timestamp
            print(f"[DEBUG] End timestamp: {end_timestamp} (for {end_date})")

        must_conditions.append({
            "range": {
                "write_time_int": range_query
            }
        })

    # Restrict to the configured userIds, if any
    if FILTER_USER_IDS:
        print(f"[DEBUG] Applying userId filter: {FILTER_USER_IDS}")
        must_conditions.append({
            "terms": {
                "userId": FILTER_USER_IDS
            }
        })

    # Restrict to the configured types, if any
    if FILTER_TYPES:
        print(f"[DEBUG] Applying type filter: {FILTER_TYPES}")
        must_conditions.append({
            "terms": {
                "type": FILTER_TYPES
            }
        })

    # Assemble the final query
    if must_conditions:
        query = {
            "bool": {
                "must": must_conditions
            }
        }
    else:
        query = {"match_all": {}}

    print(f"[DEBUG] Query: {query}")

    return {
        "query": query,
        "_source": EXPORT_FIELDS,
        "sort": [{"write_time_int": {"order": "desc"}}]
    }


def fetch_data_from_es(es_client, start_date=None, end_date=None):
    """Fetch data from ES."""
    query = build_query(start_date, end_date)

    try:
        print("[DEBUG] Running ES query; using scroll to fetch the full result set...")

        # Use the scroll API to page through all matches
        scroll_size = 1000      # records per scroll batch
        scroll_timeout = '2m'   # scroll context timeout

        # Initial search request
        query['size'] = scroll_size
        response = es_client.search(
            index=INDEX_NAME,
            body=query,
            scroll=scroll_timeout
        )

        scroll_id = response['_scroll_id']
        hits = response['hits']['hits']
        total_hits = response['hits']['total']

        # Get the total count (compatible across ES versions)
        if isinstance(total_hits, dict):
            total_count = total_hits['value']
        else:
            total_count = total_hits

        print(f"[DEBUG] Total matching records in ES: {total_count}")

        all_data = []
        batch_count = 1

        # Process the first batch
        for hit in hits:
            source = hit['_source']
            row = {}
            for field in EXPORT_FIELDS:
                row[field] = source.get(field, "")
            all_data.append(row)

        print(f"[DEBUG] Fetched batch {batch_count}; running total: {len(all_data)}")

        # Keep scrolling for the remaining batches
        while len(hits) == scroll_size:
            batch_count += 1
            response = es_client.scroll(scroll_id=scroll_id, scroll=scroll_timeout)
            scroll_id = response['_scroll_id']
            hits = response['hits']['hits']

            for hit in hits:
                source = hit['_source']
                row = {}
                for field in EXPORT_FIELDS:
                    row[field] = source.get(field, "")
                all_data.append(row)

            print(f"[DEBUG] Fetched batch {batch_count}; running total: {len(all_data)}")

        # Clean up the scroll context
        try:
            es_client.clear_scroll(scroll_id=scroll_id)
        except Exception:
            pass  # ignore cleanup errors

        print(f"[DEBUG] Fetched {len(all_data)} records from ES")
        return all_data

    except Exception as e:
        print(f"Error while querying ES: {e}")
        return []


def export_to_excel(data, filename):
    """Export the data to Excel."""
    if not data:
        print("No data to export")
        return

    df = pd.DataFrame(data)

    try:
        df.to_excel(filename, index=False, engine='openpyxl')
        print(f"Data exported to: {filename}")
        print(f"Exported {len(data)} records in total")
    except Exception as e:
        print(f"Error exporting to Excel: {e}")


def debug_es_data(es_client):
    """Inspect the ES data to understand what is actually stored."""
    print("\n" + "=" * 60)
    print("Inspecting ES data...")

    try:
        # 1. Total document count
        total_query = {
            "query": {"match_all": {}},
            "size": 0
        }
        response = es_client.search(index=INDEX_NAME, body=total_query)
        total_count = response['hits']['total']
        if isinstance(total_count, dict):
            total_count = total_count['value']
        print(f"[DEBUG] Total documents in index '{INDEX_NAME}': {total_count}")

        if total_count == 0:
            print("[ERROR] The ES index contains no data!")
            return

        # 2. Fetch a few recent documents to inspect the structure
        sample_query = {
            "query": {"match_all": {}},
            "size": 5,
            "sort": [{"_id": {"order": "desc"}}]
        }
        response = es_client.search(index=INDEX_NAME, body=sample_query)
        hits = response['hits']['hits']

        print(f"[DEBUG] Got {len(hits)} sample documents:")
        for i, hit in enumerate(hits):
            source = hit['_source']

            print(f"  Sample {i + 1}:")
            print(f"    write_time_int: {source.get('write_time_int', 'N/A')}")
            print(f"    timeStr: {source.get('timeStr', 'N/A')}")
            print(f"    type: {source.get('type', 'N/A')}")
            print(f"    userId: {source.get('userId', 'N/A')}")

        # 3. Count documents inside the configured time range
        time_range_query = {
            "query": {
                "range": {
                    "write_time_int": {
                        "gte": int(datetime.strptime(START_DATE, "%Y-%m-%d %H:%M:%S").timestamp()),
                        "lte": int(datetime.strptime(END_DATE, "%Y-%m-%d %H:%M:%S").timestamp())
                    }
                }
            },
            "size": 0
        }
        response = es_client.search(index=INDEX_NAME, body=time_range_query)
        time_range_count = response['hits']['total']
        if isinstance(time_range_count, dict):
            time_range_count = time_range_count['value']
        print(f"[DEBUG] Documents in range ({START_DATE} to {END_DATE}): {time_range_count}")

        # 4. Inspect the actual value range of the time field
        print("[DEBUG] Checking the actual range of the time field...")
        agg_query = {
            "query": {"match_all": {}},
            "size": 0,
            "aggs": {
                "time_stats": {
                    "stats": {
                        "field": "write_time_int"
                    }
                }
            }
        }
        response = es_client.search(index=INDEX_NAME, body=agg_query)
        if 'aggregations' in response:
            stats = response['aggregations']['time_stats']
            min_time = stats.get('min')
            max_time = stats.get('max')
            if min_time and max_time:
                min_date = datetime.fromtimestamp(min_time).strftime("%Y-%m-%d %H:%M:%S")
                max_date = datetime.fromtimestamp(max_time).strftime("%Y-%m-%d %H:%M:%S")
                print(f"  earliest: {min_date} (timestamp: {min_time})")
                print(f"  latest: {max_date} (timestamp: {max_time})")

    except Exception as e:
        print(f"[ERROR] Error while inspecting ES data: {e}")

    print("=" * 60 + "\n")


def main():
|
||||
"""主函数"""
|
||||
print("开始从ES获取单元挑战数据...")
|
||||
print(f"索引: {INDEX_NAME}")
|
||||
print(f"开始日期: {START_DATE if START_DATE else '不限制'}")
|
||||
print(f"结束日期: {END_DATE if END_DATE else '不限制'}")
|
||||
if FILTER_TYPES:
|
||||
print(f"类型过滤: {FILTER_TYPES}")
|
||||
if FILTER_USER_IDS:
|
||||
print(f"用户ID过滤: {FILTER_USER_IDS}")
|
||||
print("-" * 50)
|
||||
|
||||
# 检查.env文件是否存在
|
||||
env_file = ".env"
|
||||
if not os.path.exists(env_file):
|
||||
print(f"[ERROR] {env_file} 文件不存在,请创建并配置ES连接信息")
|
||||
print("参考 .env.example 文件进行配置")
|
||||
return
|
||||
|
||||
print(f"[DEBUG] 找到环境配置文件: {env_file}")
|
||||
|
||||
# 创建ES客户端
|
||||
try:
|
||||
es_client = create_es_client()
|
||||
except ValueError as e:
|
||||
print(f"[ERROR] 配置错误: {e}")
|
||||
print("请检查 .env 文件中的ES配置")
|
||||
return
|
||||
except Exception as e:
|
||||
print(f"[ERROR] 创建ES客户端失败: {e}")
|
||||
return
|
||||
|
||||
# 测试连接
|
||||
try:
|
||||
print("[DEBUG] 正在测试ES连接...")
|
||||
# ES客户端创建函数中已经包含了连接测试,这里不需要重复测试
|
||||
print(f"[SUCCESS] ES连接已建立")
|
||||
except Exception as e:
|
||||
print(f"[ERROR] ES连接失败: {e}")
|
||||
print("\n可能的解决方案:")
|
||||
print("1. 检查ES服务是否正常运行")
|
||||
print("2. 验证.env文件中的ES_HOST、ES_USER、ES_PASSWORD是否正确")
|
||||
print("3. 确认网络连接是否正常")
|
||||
print("4. 检查ES用户权限是否足够")
|
||||
print("5. 密码中包含特殊字符,已尝试URL编码处理")
|
||||
return
|
||||
|
||||
# 获取数据
|
||||
data = fetch_data_from_es(es_client, START_DATE, END_DATE)
|
||||
|
||||
# 导出到Excel
|
||||
if data:
|
||||
export_to_excel(data, OUTPUT_FILE)
|
||||
else:
|
||||
print("未获取到任何数据")
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
599
business_knowledge/git_scripts/sample_user_data_from_es.py
Normal file
@@ -0,0 +1,599 @@
"""
Sample user data from ES.

ES connection settings are read from these environment variables:

ES_HOST=xxx
ES_PORT=9200
ES_SCHEME=https
ES_USER=elastic
ES_PASSWORD=xxx


index: user-audio

How the script works:

Given some filter parameters and an output Excel filename (configured as variables in this script),
export the required fields to an Excel file.

Filter fields:
timeStr: string field, format: 2024-12-31 15:53:19
A start date and an end date can be configured (either alone: only a start date selects records
>= the start date; only an end date selects records <= the end date).

Fields exported:

userId
userMsg
userName
soeData
audioUrl
asrStatus
componentId
componentType
dataVersion

"""

import os
from datetime import datetime
from dotenv import load_dotenv
from elasticsearch import Elasticsearch
import pandas as pd
import urllib.parse
import re
from collections import defaultdict

# Load environment variables
load_dotenv()

# Configuration
INDEX_NAME = os.getenv("ES_INDEX", "user-audio")
OUTPUT_FILE = "user_audio_data.xlsx"
START_DATE = "2025-10-15 00:00:00"  # start date, format: YYYY-MM-DD HH:MM:SS; None = unbounded
END_DATE = "2025-10-17 00:00:00"  # end date, format: YYYY-MM-DD HH:MM:SS; None = unbounded

# Optional userId filter: a list of ints; empty = no restriction
FILTER_USER_IDS = [356]  # e.g. [123, 456]

# Sampling configuration
MAX_SAMPLES_PER_USER_MSG = 50  # max records sampled per distinct userMsg
MAX_SAMPLES_PER_USER_ID = 20  # max records sampled per userId

# Fields to export
EXPORT_FIELDS = [
    "userId",
    "userMsg",
    "userName",
    "soeData",
    "audioUrl",
    "asrStatus",
    "componentId",
    "componentType",
    "dataVersion",
    "timeStr"
]


def create_es_client():
    """Create an Elasticsearch client."""
    # Read environment variables and print debug info
    es_host = os.getenv('ES_HOST')
    es_port = os.getenv('ES_PORT', '9200')
    es_scheme = os.getenv('ES_SCHEME', 'https')
    es_user = os.getenv('ES_USER')
    es_password = os.getenv('ES_PASSWORD')

    print("[DEBUG] ES configuration:")
    print(f"  ES_HOST: {es_host}")
    print(f"  ES_PORT: {es_port}")
    print(f"  ES_SCHEME: {es_scheme}")
    print(f"  ES_USER: {es_user}")
    print(f"  ES_PASSWORD: {'*** set ***' if es_password else 'not set'}")

    # Check required environment variables
    if not es_host:
        raise ValueError("ES_HOST environment variable is not set")
    if not es_user:
        raise ValueError("ES_USER environment variable is not set")
    if not es_password:
        raise ValueError("ES_PASSWORD environment variable is not set")

    # URL-encode the username and password to handle special characters
    encoded_user = urllib.parse.quote(es_user, safe='')
    encoded_password = urllib.parse.quote(es_password, safe='')

    print("[DEBUG] Credentials URL-encoded to handle special characters")

    # Option 1: embed the credentials in the connection URL
    host_url_with_auth = f"{es_scheme}://{encoded_user}:{encoded_password}@{es_host}:{es_port}"
    print(f"[DEBUG] Connection URL (with auth): {es_scheme}://{encoded_user}:***@{es_host}:{es_port}")

    try:
        # Attempt 1: credentials embedded in the URL
        es_config_1 = {
            'hosts': [host_url_with_auth],
            'verify_certs': False,
            'ssl_show_warn': False,
            'request_timeout': 30,
            'retry_on_timeout': True
        }

        print("[DEBUG] Attempt 1: credentials embedded in the URL")
        es_client = Elasticsearch(**es_config_1)

        # Test the connection
        info = es_client.info()
        print("[SUCCESS] Attempt 1 connected")
        return es_client

    except Exception as e1:
        print(f"[DEBUG] Attempt 1 failed: {e1}")

        try:
            # Attempt 2: the basic_auth parameter
            host_url = f"{es_scheme}://{es_host}:{es_port}"
            es_config_2 = {
                'hosts': [host_url],
                'basic_auth': (es_user, es_password),
                'verify_certs': False,
                'ssl_show_warn': False,
                'request_timeout': 30,
                'retry_on_timeout': True
            }

            print("[DEBUG] Attempt 2: basic_auth parameter")
            es_client = Elasticsearch(**es_config_2)

            # Test the connection
            info = es_client.info()
            print("[SUCCESS] Attempt 2 connected")
            return es_client

        except Exception as e2:
            print(f"[DEBUG] Attempt 2 failed: {e2}")

            try:
                # Attempt 3: the http_auth parameter (for older client versions)
                es_config_3 = {
                    'hosts': [host_url],
                    'http_auth': (es_user, es_password),
                    'verify_certs': False,
                    'ssl_show_warn': False,
                    'request_timeout': 30,
                    'retry_on_timeout': True
                }

                print("[DEBUG] Attempt 3: http_auth parameter")
                es_client = Elasticsearch(**es_config_3)

                # Test the connection
                info = es_client.info()
                print("[SUCCESS] Attempt 3 connected")
                return es_client

            except Exception as e3:
                print(f"[DEBUG] Attempt 3 failed: {e3}")
                print("[ERROR] All authentication methods failed")
                raise e3


def build_query(start_date=None, end_date=None):
    """Build the ES query."""
    # Base conditions
    must_conditions = []

    # Time-range condition
    if start_date or end_date:
        range_query = {}

        if start_date:
            start_timestamp = int(datetime.strptime(start_date, "%Y-%m-%d %H:%M:%S").timestamp())
            range_query["gte"] = start_timestamp
            print(f"[DEBUG] Start timestamp: {start_timestamp} (for {start_date})")

        if end_date:
            end_timestamp = int(datetime.strptime(end_date, "%Y-%m-%d %H:%M:%S").timestamp())
            range_query["lte"] = end_timestamp
            print(f"[DEBUG] End timestamp: {end_timestamp} (for {end_date})")

        must_conditions.append({
            "range": {
                "timeInt": range_query
            }
        })

    # If a userId list is configured, only select data for those userIds
    if FILTER_USER_IDS:
        print(f"[DEBUG] Applying userId filter: {FILTER_USER_IDS}")
        must_conditions.append({
            "terms": {
                "userId": FILTER_USER_IDS
            }
        })

    # The soeData exists query was removed in favor of more precise filtering
    # in the application layer. Original query, kept for reference:
    # must_conditions.append({
    #     "exists": {
    #         "field": "soeData"
    #     }
    # })

    # Final query
    if must_conditions:
        query = {
            "bool": {
                "must": must_conditions
            }
        }
    else:
        query = {"match_all": {}}

    print(f"[DEBUG] Query: {query}")

    return {
        "query": query,
        "_source": EXPORT_FIELDS,
        "sort": [{"timeInt": {"order": "desc"}}]
    }


def fetch_data_from_es(es_client, start_date=None, end_date=None):
    """Fetch data from ES."""
    query = build_query(start_date, end_date)

    try:
        print("[DEBUG] Running ES query; fetching the full result set via scroll...")

        # Use the scroll API to fetch everything
        scroll_size = 1000  # documents per scroll batch
        scroll_timeout = '2m'  # scroll context timeout

        # Initialize the scroll
        query['size'] = scroll_size
        response = es_client.search(
            index=INDEX_NAME,
            body=query,
            scroll=scroll_timeout
        )

        scroll_id = response['_scroll_id']
        hits = response['hits']['hits']
        total_hits = response['hits']['total']

        # Total count (compatible across ES versions)
        if isinstance(total_hits, dict):
            total_count = total_hits['value']
        else:
            total_count = total_hits

        print(f"[DEBUG] Total matching records in ES: {total_count}")

        all_data = []
        batch_count = 1

        # Process the first batch
        for hit in hits:
            source = hit['_source']
            row = {}
            for field in EXPORT_FIELDS:
                row[field] = source.get(field, "")
            all_data.append(row)

        print(f"[DEBUG] Fetched batch {batch_count}; running total: {len(all_data)}")

        # Keep scrolling for the remaining data
        while len(hits) == scroll_size:
            batch_count += 1
            response = es_client.scroll(scroll_id=scroll_id, scroll=scroll_timeout)
            scroll_id = response['_scroll_id']
            hits = response['hits']['hits']

            for hit in hits:
                source = hit['_source']
                row = {}
                for field in EXPORT_FIELDS:
                    row[field] = source.get(field, "")
                all_data.append(row)

            print(f"[DEBUG] Fetched batch {batch_count}; running total: {len(all_data)}")

        # Clean up the scroll context
        try:
            es_client.clear_scroll(scroll_id=scroll_id)
        except Exception:
            pass  # ignore cleanup errors

        print(f"[DEBUG] Fetched {len(all_data)} raw records from ES")

        # Skip filtering and sampling when a userId list is configured
        if FILTER_USER_IDS:
            print("[DEBUG] userId list configured; skipping filtering and sampling, returning all matches")
            return all_data
        else:
            # Apply the filtering and sampling logic
            filtered_sampled_data = filter_and_sample_data(all_data)
            return filtered_sampled_data

    except Exception as e:
        print(f"Error querying ES: {e}")
        return []


def export_to_excel(data, filename):
    """Export the data to Excel."""
    if not data:
        print("No data to export")
        return

    df = pd.DataFrame(data)

    # Build a timestamped filename
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    base_name = filename.rsplit('.', 1)[0]
    extension = filename.rsplit('.', 1)[1] if '.' in filename else 'xlsx'
    timestamped_filename = f"{base_name}_{timestamp}.{extension}"

    try:
        df.to_excel(timestamped_filename, index=False, engine='openpyxl')
        print(f"Data exported to: {timestamped_filename}")
        print(f"Exported {len(data)} records")
    except Exception as e:
        print(f"Error exporting Excel: {e}")


def contains_chinese(text):
    """Check whether the text contains Chinese characters."""
    if not text:
        return False
    chinese_pattern = re.compile(r'[\u4e00-\u9fff]')
    return bool(chinese_pattern.search(text))


def filter_and_sample_data(data):
    """Filter and sample the data."""
    print(f"[DEBUG] Starting filtering and sampling; raw record count: {len(data)}")

    # Step 1: filter
    filtered_data = []
    soe_data_empty_count = 0
    soe_data_not_json_count = 0
    chinese_msg_count = 0

    for i, item in enumerate(data):
        # soeData must exist and start with "{"
        soe_data = item.get('soeData', '')
        if not soe_data:
            soe_data_empty_count += 1
            if i < 5:  # only print details for the first 5 samples
                print(f"[DEBUG] Sample {i+1}: soeData empty or missing")
            continue

        if not str(soe_data).strip().startswith('{'):
            soe_data_not_json_count += 1
            if i < 5:  # only print details for the first 5 samples
                print(f"[DEBUG] Sample {i+1}: soeData does not start with '{{', content: {str(soe_data)[:100]}...")
            continue

        # userMsg must not contain Chinese
        user_msg = item.get('userMsg', '')
        if contains_chinese(user_msg):
            chinese_msg_count += 1
            if i < 5:  # only print details for the first 5 samples
                print(f"[DEBUG] Sample {i+1}: userMsg contains Chinese, content: {user_msg[:50]}...")
            continue

        filtered_data.append(item)
        if i < 5:  # only print details for the first 5 samples
            print(f"[DEBUG] Sample {i+1}: passed filtering, userMsg: {user_msg[:50]}...")

    print("[DEBUG] Filtering stats:")
    print(f"  - soeData empty: {soe_data_empty_count}")
    print(f"  - soeData not starting with '{{': {soe_data_not_json_count}")
    print(f"  - userMsg containing Chinese: {chinese_msg_count}")
    print(f"  - records passing the filters: {len(filtered_data)}")

    # Step 2: sample per userMsg group
    user_msg_groups = defaultdict(list)
    for item in filtered_data:
        user_msg = item.get('userMsg', '')
        user_msg_groups[user_msg].append(item)

    print(f"[DEBUG] Distinct userMsg count: {len(user_msg_groups)}")

    # Sample each userMsg group
    sampled_by_msg = []
    for user_msg, items in user_msg_groups.items():
        # keep at most MAX_SAMPLES_PER_USER_MSG per userMsg
        sampled_items = items[:MAX_SAMPLES_PER_USER_MSG]
        sampled_by_msg.extend(sampled_items)
        if len(items) > MAX_SAMPLES_PER_USER_MSG:
            print(f"[DEBUG] userMsg '{user_msg}' has {len(items)} records; sampled {MAX_SAMPLES_PER_USER_MSG}")

    print(f"[DEBUG] Records after userMsg sampling: {len(sampled_by_msg)}")

    # Step 3: sample per userId group
    user_id_groups = defaultdict(list)
    for item in sampled_by_msg:
        user_id = item.get('userId', '')
        user_id_groups[user_id].append(item)

    print(f"[DEBUG] Distinct userId count: {len(user_id_groups)}")

    # Sample each userId group
    final_sampled_data = []
    for user_id, items in user_id_groups.items():
        # keep at most MAX_SAMPLES_PER_USER_ID per userId
        sampled_items = items[:MAX_SAMPLES_PER_USER_ID]
        final_sampled_data.extend(sampled_items)
        if len(items) > MAX_SAMPLES_PER_USER_ID:
            print(f"[DEBUG] userId '{user_id}' has {len(items)} records; sampled {MAX_SAMPLES_PER_USER_ID}")

    print(f"[DEBUG] Final sampled record count: {len(final_sampled_data)}")

    return final_sampled_data


def debug_es_data(es_client):
    """Debug ES data to understand what is actually stored."""
    print("\n" + "="*60)
    print("Debugging ES data...")

    try:
        # 1. Total document count
        total_query = {
            "query": {"match_all": {}},
            "size": 0
        }
        response = es_client.search(index=INDEX_NAME, body=total_query)
        total_count = response['hits']['total']
        if isinstance(total_count, dict):
            total_count = total_count['value']
        print(f"[DEBUG] Total documents in ES index '{INDEX_NAME}': {total_count}")

        if total_count == 0:
            print("[ERROR] The ES index contains no data!")
            return

        # 2. Fetch a few recent documents to inspect the data structure
        sample_query = {
            "query": {"match_all": {}},
            "size": 5,
            "sort": [{"_id": {"order": "desc"}}]
        }
        response = es_client.search(index=INDEX_NAME, body=sample_query)
        hits = response['hits']['hits']

        print(f"[DEBUG] Retrieved {len(hits)} sample documents:")
        for i, hit in enumerate(hits):
            source = hit['_source']
            soe_data = source.get('soeData', '')
            soe_data_preview = str(soe_data)[:100] if soe_data else 'N/A'
            soe_data_starts_with_brace = str(soe_data).strip().startswith('{') if soe_data else False

            print(f"  Sample {i+1}:")
            print(f"    timeInt: {source.get('timeInt', 'N/A')}")
            print(f"    timeStr: {source.get('timeStr', 'N/A')}")
            print(f"    soeData present: {'yes' if soe_data else 'no'}")
            print(f"    soeData starts with {{: {'yes' if soe_data_starts_with_brace else 'no'}")
            print(f"    soeData preview: {soe_data_preview}...")
            print(f"    userMsg: {source.get('userMsg', 'N/A')[:50]}...")
            print(f"    userId: {source.get('userId', 'N/A')}")

        # 3. Count documents within the time range (no soeData filter)
        time_range_query = {
            "query": {
                "range": {
                    "timeInt": {
                        "gte": int(datetime.strptime(START_DATE, "%Y-%m-%d %H:%M:%S").timestamp()),
                        "lte": int(datetime.strptime(END_DATE, "%Y-%m-%d %H:%M:%S").timestamp())
                    }
                }
            },
            "size": 0
        }
        response = es_client.search(index=INDEX_NAME, body=time_range_query)
        time_range_count = response['hits']['total']
        if isinstance(time_range_count, dict):
            time_range_count = time_range_count['value']
        print(f"[DEBUG] Documents in time range ({START_DATE} to {END_DATE}): {time_range_count}")

        # 4. Count documents that have soeData
        soe_data_query = {
            "query": {
                "exists": {
                    "field": "soeData"
                }
            },
            "size": 0
        }
        response = es_client.search(index=INDEX_NAME, body=soe_data_query)
        soe_data_count = response['hits']['total']
        if isinstance(soe_data_count, dict):
            soe_data_count = soe_data_count['value']
        print(f"[DEBUG] Documents with a soeData field: {soe_data_count}")

        # 5. Inspect the actual distribution of the time field
        print("[DEBUG] Checking the actual value range of the time field...")
        agg_query = {
            "query": {"match_all": {}},
            "size": 0,
            "aggs": {
                "time_stats": {
                    "stats": {
                        "field": "timeInt"
                    }
                }
            }
        }
        response = es_client.search(index=INDEX_NAME, body=agg_query)
        if 'aggregations' in response:
            stats = response['aggregations']['time_stats']
            min_time = stats.get('min')
            max_time = stats.get('max')
            if min_time and max_time:
                min_date = datetime.fromtimestamp(min_time).strftime("%Y-%m-%d %H:%M:%S")
                max_date = datetime.fromtimestamp(max_time).strftime("%Y-%m-%d %H:%M:%S")
                print(f"  Earliest time: {min_date} (timestamp: {min_time})")
                print(f"  Latest time: {max_date} (timestamp: {max_time})")

    except Exception as e:
        print(f"[ERROR] Error while debugging ES data: {e}")

    print("="*60 + "\n")


def main():
    """Entry point."""
    print("Sampling user data from ES...")
    print(f"Index: {INDEX_NAME}")
    print(f"Start date: {START_DATE if START_DATE else 'unbounded'}")
    print(f"End date: {END_DATE if END_DATE else 'unbounded'}")
    if FILTER_USER_IDS:
        print(f"userId filter: {FILTER_USER_IDS}")
        print("With a userId list configured, all data for the matching users is exported; other filtering and sampling is skipped")
    else:
        print("Filters: soeData non-empty and userMsg containing no Chinese")
        print(f"Sampling: at most {MAX_SAMPLES_PER_USER_MSG} per userMsg, at most {MAX_SAMPLES_PER_USER_ID} per userId")
    print("-" * 50)

    # Make sure the .env file exists
    env_file = ".env"
    if not os.path.exists(env_file):
        print(f"[ERROR] {env_file} does not exist; create it and configure the ES connection")
        print("See .env.example for reference")
        return

    print(f"[DEBUG] Found environment config file: {env_file}")

    # Create the ES client
    try:
        es_client = create_es_client()
    except ValueError as e:
        print(f"[ERROR] Configuration error: {e}")
        print("Check the ES settings in the .env file")
        return
    except Exception as e:
        print(f"[ERROR] Failed to create ES client: {e}")
        return

    # Verify the connection
    try:
        print("[DEBUG] Testing ES connection...")
        # create_es_client() already performs a connection test, so no extra check is needed here
        print("[SUCCESS] ES connection established")
    except Exception as e:
        print(f"[ERROR] ES connection failed: {e}")
        print("\nPossible fixes:")
        print("1. Check that the ES service is running")
        print("2. Verify ES_HOST, ES_USER and ES_PASSWORD in the .env file")
        print("3. Confirm network connectivity")
        print("4. Check that the ES user has sufficient permissions")
        print("5. Passwords with special characters are URL-encoded automatically")
        return

    # Fetch the data
    data = fetch_data_from_es(es_client, START_DATE, END_DATE)

    # Export to Excel
    if data:
        export_to_excel(data, OUTPUT_FILE)
    else:
        print("No data retrieved")


if __name__ == "__main__":
    main()
149
business_knowledge/knowledge_summary.md
Normal file
@@ -0,0 +1,149 @@
# Business Knowledge Base Summary

## Overall Business Understanding

### Business Model
This is an online education product that offers English courses at the L1/L2 levels.

### Core Business Flow
1. **Acquisition**: users download the App through various channels and register
2. **Activation**: users create a character and fill in gender, birthday and other details
3. **Conversion**: users purchase courses through in-app or external channels
4. **Learning**: users study the courses and complete lessons
5. **Data collection**: learning-behavior data is collected for analysis and optimization

---

## Core Data Model

### 1. User layer
**Table**: `bi_vala_app_account`
- Stores user registration information
- Key fields: id, created_at, download_channel, key_from, status
- Filters: status=1, deleted_at IS NULL, exclude test user IDs

### 2. User detail layer
**Table**: `account_detail_info`
- Stores detailed user information
- Key fields: account_id, login_address, phone_login_times
- login_address format: "province-city"

### 3. Character layer
**Table**: `bi_vala_app_character`
- One user can own multiple characters
- Key fields: id, account_id, gender, birthday, purchase_season_package, created_at
- Gender mapping: 0=girl, 1=boy, anything else=unknow
- Season-package status: '[1]'=not purchased, anything else=purchased

### 4. Order layer
**Table**: `bi_vala_order`
- Stores course purchase orders
- Key fields: account_id, sale_channel, key_from, pay_success_date, pay_amount, pay_amount_int, order_status, goods_name
- Valid-order filter: order_status=3 AND pay_amount_int>49800
- Purchase channels: 17 channel mappings

### 5. Course layer
**Table**: `bi_level_unit_lesson`
- Course-hierarchy mapping table
- Hierarchy: course_level (L1/L2) → course_season (S0-S4) → course_unit (U00-U48) → course_lesson (L1-L5)
- chapter_id maps to the full course ID

### 6. Learning-behavior layer
**Table**: `bi_user_chapter_play_record_0~7` (8 shards)
- Stores users' chapter playback records
- Key fields: user_id, chapter_id, chapter_unique_id, play_status, updated_at, created_at
- play_status=1 means playback completed
- Merge the 8 shards with UNION ALL

**Table**: `bi_user_component_play_record_0~7` (8 shards)
- Stores users' component playback records (finer-grained)
- Key fields: chapter_unique_id, interval_time (milliseconds)
- Used to compute lesson completion time

---

## Core Business Metrics

### 1. User metrics
- **New registered users**: counted by date and channel
- **User profile**: gender, age and region distribution

### 2. Conversion metrics
- **Conversion rate**: registration → purchase
- **Purchase label**: no purchase, external purchase, in-app purchase
- **Refund rate**: order refunds

### 3. Revenue metrics
- **GMV**: gross merchandise value, by channel and date
- **Purchase amount**: average-order-value analysis

### 4. Learning-behavior metrics
- **Course entry-to-completion rate**: entering a course → completing it
- **Average completion time**: mean time to finish a lesson
- **Learning progress**: number and order of completed lessons
- **Completion interval**: time since the previous completion

---

## Common Analysis Patterns

### 1. Full-funnel user analysis
Join user, character, order and lesson-completion data into one wide table for comprehensive analysis.

### 2. Channel analysis
Group by download_channel or sale_channel to compare user quality and conversion across channels.
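The channel-analysis pattern can be sketched as a query like the following. This is an illustration only: it reuses the `bi_vala_app_account` table and the valid-user filter described above, but the daily bucketing is an assumption, not a query taken from the docs.

```sql
-- Daily new registrations per download channel (illustrative sketch)
select to_char(created_at, 'YYYY-MM-DD') as reg_date
      ,download_channel
      ,count(distinct id) as new_users
from bi_vala_app_account
where status = 1
  and deleted_at is NULL
group by to_char(created_at, 'YYYY-MM-DD')
        ,download_channel
order by reg_date, download_channel
```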
### 3. Course analysis
Compare completion rates and completion times across courses to identify popular courses and difficult ones.

### 4. Time-series analysis
Group by date to track trends in user growth, revenue and learning behavior.

---

## Common Filter Conditions

### Excluding test users
```sql
id not in (51, 2121, 1386, 1397, ...)
```

### Valid orders
```sql
order_status = 3
AND pay_amount_int > 49800
```

### Valid users
```sql
status = 1
AND deleted_at IS NULL
```

### Completion records
```sql
play_status = 1
```

---

## Data-Processing Techniques

### 1. Shard merging
Merge the 8 shards with UNION ALL:
```sql
select * from bi_user_chapter_play_record_0
union all
select * from bi_user_chapter_play_record_1
-- ... the other 6 shards
```

### 2. Channel mapping
Use CASE WHEN to map numeric codes to channel names.
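A minimal sketch of that mapping (a subset only; the full 17-channel list appears in 全字段大表.md):

```sql
case when sale_channel = 11 then '苹果'
     when sale_channel = 12 then '华为'
     when sale_channel = 41 then '官网'
     when sale_channel = 71 then '小程序'
     else '站外'
end as sale_channel
```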
### 3. Time handling
- Use `date()` or `to_char()` to extract dates
- Use `interval_time/1000/60` to convert milliseconds to minutes
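The millisecond conversion above can be sketched in Python (a standalone illustration of the same arithmetic, not part of any pipeline script):

```python
def ms_to_minutes_seconds(interval_time_ms: int) -> str:
    """Convert a duration in milliseconds to a 'minutes:seconds' string,
    mirroring the interval_time/1000/60 conversion used in the SQL queries."""
    total_seconds = interval_time_ms // 1000  # drop sub-second precision
    minutes, seconds = divmod(total_seconds, 60)
    return f"{minutes}:{seconds:02d}"

print(ms_to_minutes_seconds(754000))  # 754 s -> "12:34"
```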
### 4. Deduplication
Use `rank() over (partition by ... order by ...)` to keep only the first record.
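Sketched as a query, this pattern looks like the following (the inner table name `completion_records` is a placeholder; in practice it is the UNION ALL of the playback-record shards):

```sql
-- Keep only the first completion per chapter_unique_id (sketch)
select *
from (
    select t.*
          ,rank() over (partition by chapter_unique_id order by finish_date) as rankno
    from completion_records t
) ranked
where rankno = 1
```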
19
business_knowledge/sql_queries/README.md
Normal file
@@ -0,0 +1,19 @@
# SQL Query Document Index

Created: 2026-03-02 18:04:16

## Document List

- [全字段大表](全字段大表.md)
- [平均通关时长](平均通关时长.md)
- [新增注册用户数by渠道](新增注册用户数by渠道.md)
- [课程进入完成率](课程进入完成率.md)
- [账号角色年龄地址](账号角色年龄地址.md)
- [退费率](退费率.md)
- [销转学习进度](销转学习进度.md)
- [班主任关注数据](班主任关注数据.md)
- [端内GMV](端内GMV.md)
- [端内用户课程进入完成率](端内用户课程进入完成率.md)
- [端内购课用户学习行为](端内购课用户学习行为.md)
- [转化率](转化率.md)
- [课程ID映射](课程ID映射.md)
292
business_knowledge/sql_queries/全字段大表.md
Normal file
@@ -0,0 +1,292 @@
|
||||
# 全字段大表
|
||||
|
||||
**获取时间:** 2026-03-02
|
||||
**飞书文档 Token:** VVyWd5491o6tuqxceCVci6dVnFd
|
||||
|
||||
## 业务说明
|
||||
|
||||
这个查询将用户、购课、角色、课程完课等多个维度的数据整合在一起,形成一个宽表,适合进行综合分析。
|
||||
|
||||
## 涉及的数据表
|
||||
|
||||
1. **bi_vala_app_account** - 用户账号表
|
||||
2. **account_detail_info** - 账号详情表
|
||||
3. **bi_vala_order** - 订单表
|
||||
4. **bi_vala_app_character** - 角色表
|
||||
5. **bi_user_chapter_play_record_0~7** - 用户章节播放记录表(分表)
|
||||
6. **bi_level_unit_lesson** - 课程单元表
|
||||
7. **bi_user_component_play_record_0~7** - 用户组件播放记录表(分表)
|
||||
|
||||
## SQL 查询
|
||||
|
||||
```sql
|
||||
select a.id as "用户ID"
|
||||
,a.created_date as "注册日期"
|
||||
,a.download_channel as "下载渠道"
|
||||
,a.key_from as "下载key_from"
|
||||
,b.login_address as "城市"
|
||||
,b.phone_login as "是否手机登录"
|
||||
,c.sale_channel as "购课渠道"
|
||||
,case when c.sale_channel is NULL then '未购课'
|
||||
when c.sale_channel = '站外' then '站外购课'
|
||||
else '站内购课'
|
||||
end as "购课标签"
|
||||
,c.key_from as "购课key_from"
|
||||
,c.pay_date as "购课日期"
|
||||
,c.pay_amount as "购课金额"
|
||||
,d.id as "角色ID"
|
||||
,d.characer_pay_status as "角色是否付费"
|
||||
,d.gender as "性别"
|
||||
,2026 - cast(d.birthday as int) as "年龄"
|
||||
,e.chapter_id as "课程ID"
|
||||
,e.course_id as "课程名称"
|
||||
,e.chapter_unique_id as "完课标识"
|
||||
,e.finish_date as "完课日期"
|
||||
,e.finish_time as "完课耗时"
|
||||
from
|
||||
(
|
||||
select id
|
||||
,key_from
|
||||
,to_char(created_at,'YYYY-MM-DD') as created_date
|
||||
,download_channel
|
||||
from bi_vala_app_account
|
||||
where status = 1
|
||||
and id not in (51,2121)
|
||||
and deleted_at is NULL
|
||||
group by id
|
||||
,key_from
|
||||
,created_at
|
||||
,download_channel
|
||||
) as a
|
||||
left join
|
||||
(
|
||||
select account_id
|
||||
,split_part(login_address,'-',2) as login_address
|
||||
,case when phone_login_times = 0 then 0
|
||||
else 1
|
||||
end as phone_login
|
||||
from account_detail_info
|
||||
group by account_id
|
||||
,login_address
|
||||
,case when phone_login_times = 0 then 0
|
||||
else 1
|
||||
end
|
||||
) as b on a.id = b.account_id
|
||||
left join
|
||||
(
|
||||
select account_id
|
||||
,case when sale_channel = 11 then '苹果'
|
||||
when sale_channel = 12 then '华为'
|
||||
when sale_channel = 13 then '小米'
|
||||
when sale_channel = 14 then '荣耀'
|
||||
when sale_channel = 15 then '应用宝'
|
||||
when sale_channel = 17 then '魅族'
|
||||
when sale_channel = 18 then 'VIVO'
|
||||
when sale_channel = 19 then 'OPPO'
|
||||
when sale_channel = 21 then '学而思'
|
||||
when sale_channel = 22 then '讯飞'
|
||||
when sale_channel = 23 then '步步高'
|
||||
when sale_channel = 24 then '作业帮'
|
||||
when sale_channel = 25 then '小度'
|
||||
when sale_channel = 26 then '希沃'
|
||||
when sale_channel = 27 then '京东方'
|
||||
when sale_channel = 41 then '官网'
|
||||
when sale_channel = 71 then '小程序'
|
||||
else '站外'
|
||||
end as sale_channel
|
||||
,key_from
|
||||
,to_char(pay_success_date,'YYYY-MM-DD') as pay_date
|
||||
,pay_amount
|
||||
from bi_vala_order
|
||||
where order_status = 3
|
||||
and pay_amount_int > 49800
|
||||
group by account_id
|
||||
,case when sale_channel = 11 then '苹果'
|
||||
when sale_channel = 12 then '华为'
|
||||
when sale_channel = 13 then '小米'
|
||||
when sale_channel = 14 then '荣耀'
|
||||
when sale_channel = 15 then '应用宝'
|
||||
when sale_channel = 17 then '魅族'
|
||||
when sale_channel = 18 then 'VIVO'
|
||||
when sale_channel = 19 then 'OPPO'
|
||||
when sale_channel = 21 then '学而思'
|
||||
when sale_channel = 22 then '讯飞'
|
||||
when sale_channel = 23 then '步步高'
|
||||
when sale_channel = 24 then '作业帮'
|
||||
when sale_channel = 25 then '小度'
|
||||
when sale_channel = 26 then '希沃'
|
||||
when sale_channel = 27 then '京东方'
|
||||
when sale_channel = 41 then '官网'
|
||||
when sale_channel = 71 then '小程序'
|
||||
else '站外'
|
||||
end
|
||||
,key_from
|
||||
,pay_success_date
|
||||
,pay_amount
|
||||
) as c on a.id = c.account_id
|
||||
left join
|
||||
(
|
||||
select id
|
||||
,account_id
|
||||
,case when purchase_season_package = '[1]' then 0
|
||||
else 1
|
||||
end as characer_pay_status
|
||||
,case when gender = 0 then 'girl'
|
||||
when gender = 1 then 'boy'
|
||||
else 'unknow'
|
||||
end as gender
|
||||
,case when split_part(birthday,'-',1) = '' then '0000'
|
||||
else split_part(birthday,'-',1)
|
||||
end as birthday
|
||||
from bi_vala_app_character
|
||||
where deleted_at is NULL
|
||||
group by id
|
||||
,account_id
|
||||
,case when purchase_season_package = '[1]' then 0
|
||||
else 1
|
||||
end
|
||||
,case when gender = 0 then 'girl'
|
||||
when gender = 1 then 'boy'
|
||||
else 'unknow'
|
||||
end
|
||||
,case when split_part(birthday,'-',1) = '' then '0000'
|
||||
else split_part(birthday,'-',1)
|
||||
end
|
||||
) as d on a.id = d.account_id
|
||||
left join
|
||||
(
|
||||
select user_id
|
||||
,chapter_id
|
||||
,format('%s-%s-%s-%s',course_level,course_season,course_unit,course_lesson) as course_id
|
||||
,x.chapter_unique_id
|
||||
,finish_date
|
||||
,format('%s:%s',floor(sum(interval_time)/1000/60),mod((sum(interval_time)/1000),60)) as finish_time
|
||||
,rank () over (partition by x.chapter_unique_id order by finish_date) as rankno
|
||||
from
|
||||
(
|
||||
select user_id
|
||||
,chapter_id
|
||||
,chapter_unique_id
|
||||
,to_char(updated_at,'YYYY-MM-DD') as finish_date
|
||||
from bi_user_chapter_play_record_0
|
||||
where chapter_id in (55,56,57,58,59)
|
||||
and play_status = 1
|
||||
group by id
|
||||
,user_id
|
||||
,chapter_id
|
||||
,chapter_unique_id
|
||||
,updated_at
|
||||
union all
|
||||
select user_id
|
||||
,chapter_id
|
||||
,chapter_unique_id
|
||||
,to_char(updated_at,'YYYY-MM-DD') as finish_date
|
||||
from bi_user_chapter_play_record_1
|
||||
where chapter_id in (55,56,57,58,59)
|
||||
and play_status = 1
|
||||
group by user_id
|
||||
,chapter_id
|
||||
,chapter_unique_id
|
||||
,updated_at
|
||||
-- ... 其他分表类似
|
||||
) as x
|
||||
left join
|
||||
(
|
||||
select cast(id as int) as id
|
||||
,course_level
|
||||
,course_season
|
||||
,course_unit
|
||||
,course_lesson
|
||||
from bi_level_unit_lesson
|
||||
group by id
|
||||
,course_level
|
||||
,course_season
|
||||
,course_unit
|
||||
,course_lesson
|
||||
) as y on x.chapter_id = y.id
|
||||
left join
|
||||
(
|
||||
select chapter_unique_id
|
||||
,interval_time
|
||||
from bi_user_component_play_record_0
|
||||
group by chapter_unique_id
|
||||
,interval_time
|
||||
-- ... 其他分表类似
|
||||
) as z on x.chapter_unique_id = z.chapter_unique_id
|
||||
group by user_id
|
||||
,chapter_id
|
||||
,course_level
|
||||
,course_season
|
||||
,course_unit
|
||||
,course_lesson
|
||||
,x.chapter_unique_id
|
||||
,finish_date
|
||||
) as e on d.id = e.user_id
|
||||
where rankno = 1
|
||||
group by a.id
|
||||
,a.created_date
|
||||
,a.download_channel
|
||||
,a.key_from
|
||||
,b.login_address
|
||||
,b.phone_login
|
||||
,c.sale_channel
|
||||
,c.key_from
|
||||
,c.pay_date
|
||||
,c.pay_amount
|
||||
,d.id
|
||||
,d.characer_pay_status
|
||||
,d.gender
|
||||
,d.birthday
|
||||
,e.chapter_id
|
||||
,e.course_id
|
||||
,e.chapter_unique_id
|
||||
,e.finish_date
|
||||
,e.finish_time
|
||||
```
|
||||
|
||||
## 重要业务逻辑
|
||||
|
||||
### 1. 购课渠道映射
|
||||
```sql
|
||||
case when sale_channel = 11 then '苹果'
|
||||
when sale_channel = 12 then '华为'
|
||||
-- ... 更多渠道
|
||||
when sale_channel = 71 then '小程序'
|
||||
else '站外'
|
||||
end as sale_channel
|
||||
```
|
||||
|
||||
### 2. 购课标签
|
||||
```sql
|
||||
case when c.sale_channel is NULL then '未购课'
|
||||
when c.sale_channel = '站外' then '站外购课'
|
||||
else '站内购课'
|
||||
end as "购课标签"
|
||||
```

### 3. Character Payment Status

```sql
case when purchase_season_package = '[1]' then 0
else 1
end as characer_pay_status
```

### 4. Gender Mapping

```sql
case when gender = 0 then 'girl'
when gender = 1 then 'boy'
else 'unknow'
end as gender
```

### 5. Completion Time Calculation

```sql
format('%s:%s',floor(sum(interval_time)/1000/60),mod((sum(interval_time)/1000),60)) as finish_time
```
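The expression above turns an accumulated `interval_time` in milliseconds into a `minutes:seconds` string. A minimal Python equivalent of the same arithmetic (a sketch for offline processing, not part of the query):

```python
def finish_time(interval_time_ms_total: int) -> str:
    """Convert a summed interval_time (milliseconds) to an 'M:S' string,
    mirroring the SQL: floor(ms/1000/60) minutes, (ms/1000) mod 60 seconds.
    Like the SQL %s formatting, seconds are not zero-padded."""
    total_seconds = interval_time_ms_total // 1000
    return f"{total_seconds // 60}:{total_seconds % 60}"
```

For example, `finish_time(125000)` yields `"2:5"` (125 seconds = 2 minutes 5 seconds).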

## Notes

1. **Order filter**: `order_status = 3` and `pay_amount_int > 49800` (selects valid orders with an amount above 498 yuan)
2. **Sharded tables**: user play records are split across shard tables 0-7 and must be merged with UNION ALL
3. **Deduplication**: use `rank() over (partition by ... order by ...)` to keep the first completion record
4. **Test-user exclusion**: `id not in (51,2121)`
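Note 2 can be mechanized: since the play-record table is split into shards 0-7, a small helper can generate the UNION ALL skeleton instead of hand-writing eight copies. This is a sketch that only builds SQL text, assuming the `bi_user_chapter_play_record_<n>` shard naming seen in the query above.

```python
def sharded_union(base_sql: str, table_prefix: str, shards: range) -> str:
    """Build a UNION ALL query over numbered shard tables.

    base_sql must contain a '{table}' placeholder for the shard table name.
    """
    parts = [base_sql.format(table=f"{table_prefix}{n}") for n in shards]
    return "\nunion all\n".join(parts)

# Example: merge shards 0-7 of the chapter play record table (note 2 above).
sql = sharded_union(
    "select user_id, chapter_id from {table} where play_status = 1",
    "bi_user_chapter_play_record_",
    range(8),
)
```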
17	business_knowledge/sql_queries/平均通关时长.md	Normal file
@ -0,0 +1,17 @@

# 平均通关时长

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** EpP7d6h2SoaTyJx1lZRcXXdLnVe

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read EpP7d6h2SoaTyJx1lZRcXXdLnVe
```
17	business_knowledge/sql_queries/新增注册用户数by渠道.md	Normal file
@ -0,0 +1,17 @@

# 新增注册用户数by渠道

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** AzRPddp97o7To8x8VkxcFGr8nBh

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read AzRPddp97o7To8x8VkxcFGr8nBh
```
17	business_knowledge/sql_queries/班主任关注数据.md	Normal file
@ -0,0 +1,17 @@

# 班主任关注数据

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** NcVqdRKtrowglNxs9CocDekunje

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read NcVqdRKtrowglNxs9CocDekunje
```
17	business_knowledge/sql_queries/端内GMV.md	Normal file
@ -0,0 +1,17 @@

# 端内GMV

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** FkVCd1AruoD9xWxxVpzc16hinVh

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read FkVCd1AruoD9xWxxVpzc16hinVh
```
17	business_knowledge/sql_queries/端内用户课程进入完成率.md	Normal file
@ -0,0 +1,17 @@

# 端内用户课程进入完成率

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** Ueu7dtgSHoNYfsxCDHmcY6E4nid

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read Ueu7dtgSHoNYfsxCDHmcY6E4nid
```
17	business_knowledge/sql_queries/端内购课用户学习行为.md	Normal file
@ -0,0 +1,17 @@

# 端内购课用户学习行为

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** ZTxod4IUWo5yMexf8AHcBbpFnMg

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read ZTxod4IUWo5yMexf8AHcBbpFnMg
```
17	business_knowledge/sql_queries/课程ID映射.md	Normal file
@ -0,0 +1,17 @@

# 课程ID映射

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** GenUdsXCloUdYhxMvxqcWBMdnhb

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read GenUdsXCloUdYhxMvxqcWBMdnhb
```
17	business_knowledge/sql_queries/课程进入完成率.md	Normal file
@ -0,0 +1,17 @@

# 课程进入完成率

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** PwIydfZcHo5eZgxi8XLcOtjOnSb

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read PwIydfZcHo5eZgxi8XLcOtjOnSb
```
17	business_knowledge/sql_queries/账号角色年龄地址.md	Normal file
@ -0,0 +1,17 @@

# 账号角色年龄地址

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** CUa2du2sSoNFSRxl3vFc8ucInEm

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read CUa2du2sSoNFSRxl3vFc8ucInEm
```
17	business_knowledge/sql_queries/转化率.md	Normal file
@ -0,0 +1,17 @@

# 转化率

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** ATJ0dfajQo5CSexQd8hc9i3pnWe

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read ATJ0dfajQo5CSexQd8hc9i3pnWe
```
17	business_knowledge/sql_queries/退费率.md	Normal file
@ -0,0 +1,17 @@

# 退费率

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** DC1Qdhpitowt9lxxo1acEzOwnFc

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read DC1Qdhpitowt9lxxo1acEzOwnFc
```
17	business_knowledge/sql_queries/销转学习进度.md	Normal file
@ -0,0 +1,17 @@

# 销转学习进度

**Retrieved:** 2026-03-02 18:04:16

**Feishu Doc Token:** G1p9dhK63oLWMzxyGQ8csZGMnDh

**Note:** the full content of this document must be read via the feishu_doc tool

---

## Usage

Read the full document with:

```bash
feishu_doc read G1p9dhK63oLWMzxyGQ8csZGMnDh
```
70	business_knowledge/user_export_skill.md	Normal file
@ -0,0 +1,70 @@

# User Learning Behavior Export Skill

## Overview
Exports the complete learning behavior data for a given account ID or role ID to an Excel file with multiple sheets.

## Exported Content
The Excel file contains the following sheets:
1. **All audio data**: every voice interaction of the user, including audio URLs and ASR results
2. **Interactive component records**: all component interactions, including component type, name, knowledge points, and interaction results
3. **Course consolidation records**: post-lesson consolidation question attempts
4. **Unit challenge records**: answer records from unit challenges
5. **Unit summary records**: learning records from unit summaries
6. **Aggregate statistics**: automatically computed component pass rates, knowledge-point mastery, and per-unit study time

## Usage

### 1. Export a single role ID
Set the script variables:
```python
USER_ID = "角色ID"
USER_ID_LIST = None
ACCOUNT_ID_LIST = None
```

### 2. Export one or more account IDs
Set the script variables:
```python
USER_ID = None
USER_ID_LIST = None
ACCOUNT_ID_LIST = [账户ID1, 账户ID2, ...]
```
The script automatically looks up every role ID belonging to each account and exports them separately.
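A minimal sketch of how the three variables might select the export targets. The precedence shown (explicit role ID first, then a role-ID list, then account IDs) and the `roles_of_account` lookup are assumptions for illustration — the real script's resolution logic is not shown in this document.

```python
def resolve_export_targets(user_id, user_id_list, account_id_list, roles_of_account):
    """Return the list of role IDs to export.

    roles_of_account is a lookup callable (account_id -> list of role IDs);
    in the real script this would be a database query (hypothetical here).
    """
    if user_id is not None:          # single-role mode
        return [user_id]
    if user_id_list:                 # explicit role-ID list
        return list(user_id_list)
    if account_id_list:              # expand each account into its role IDs
        roles = []
        for account_id in account_id_list:
            roles.extend(roles_of_account(account_id))
        return roles
    return []
```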

## Environment
The following environment variables must be configured:
```
# ES config
ES_HOST=es-7vd7jcu9.public.tencentelasticsearch.com
ES_PORT=9200
ES_SCHEME=https
ES_USER=elastic
ES_PASSWORD=F%?QDcWes7N2WTuiYD11

# PG config
PG_DB_HOST=bj-postgres-16pob4sg.sql.tencentcdb.com
PG_DB_PORT=28591
PG_DB_USER=ai_member
PG_DB_PASSWORD=LdfjdjL83h3h3^$&**YGG*
PG_DB_DATABASE=vala

# MySQL config
MYSQL_HOST=bj-cdb-8frbdwju.sql.tencentcdb.com
MYSQL_USERNAME=read_only
MYSQL_PASSWORD=fdsfiidier^$*hjfdijjd232
MYSQL_PORT=25413

# MySQL Online config
MYSQL_HOST_online=bj-cdb-dh2fkqa0.sql.tencentcdb.com
MYSQL_USERNAME_online=read_only
MYSQL_PASSWORD_online=fsdo45ijfmfmuu77$%^&
MYSQL_PORT_online=27751
```

## Troubleshooting
1. **Transaction errors**: usually caused by an earlier failed query; check permissions and that the table exists
2. **Insufficient privileges**: the database account needs SELECT on every shard table
3. **0 records**: the role simply has no learning data; this is normal

## Export Examples
- Account ID 9343 (role 12699): exports 199 learning records
- Role ID 14607: exports 855 complete learning records, with data in every sheet
1846	export_14607.py	Normal file
File diff suppressed because it is too large	Load Diff
144	export_only_12698.py	Normal file
@ -0,0 +1,144 @@

#!/usr/bin/env python3
"""Standalone test of the export for role 12698, to surface the exact error."""

import os
import json
import sys
import datetime
from typing import Any, Dict, List

# Load environment variables from a local .env file
def load_env():
    env_path = os.path.join(os.getcwd(), ".env")
    if os.path.exists(env_path):
        with open(env_path, "r", encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                k, v = line.split("=", 1)
                os.environ[k.strip()] = v.strip().strip('"').strip("'")

load_env()

import psycopg2
from psycopg2.extras import RealDictCursor
import pymysql
import requests
from requests.auth import HTTPBasicAuth
import warnings
warnings.filterwarnings('ignore')

def test_role_12698():
    print("="*60)
    print("Standalone test of queries for role ID=12698")
    print("="*60)

    # Connect to PostgreSQL
    try:
        conn = psycopg2.connect(
            host=os.getenv("PG_DB_HOST"),
            port=int(os.getenv("PG_DB_PORT")),
            user=os.getenv("PG_DB_USER"),
            password=os.getenv("PG_DB_PASSWORD"),
            dbname=os.getenv("PG_DB_DATABASE"),
            connect_timeout=10
        )
        print("✅ PG connection OK")
    except Exception as e:
        print(f"❌ PG connection failed: {e}")
        return

    user_id = "12698"

    # First query: user_component_play_record_0
    print(f"\nTesting table user_component_play_record_0, user_id={user_id}")
    try:
        with conn.cursor(cursor_factory=RealDictCursor) as cur:
            sql = """
                SELECT user_id, component_unique_code, session_id, c_type, c_id,
                       play_result, user_behavior_info, updated_at
                FROM user_component_play_record_0
                WHERE user_id = %s
                ORDER BY updated_at DESC
            """
            cur.execute(sql, (user_id,))
            rows = cur.fetchall()
            print(f"✅ Query OK, {len(rows)} rows returned")
    except Exception as e:
        print(f"❌ Query failed: {e}")
        print(f"Error type: {type(e).__name__}")

    # Roll back so a failed query does not abort the following ones
    print("\nRolling back the transaction...")
    try:
        conn.rollback()
        print("✅ Rollback OK")
    except Exception as e2:
        print(f"❌ Rollback failed: {e2}")

    # Course consolidation records
    print(f"\nTesting table user_unit_review_question_result, user_id={user_id}")
    try:
        with conn.cursor(cursor_factory=RealDictCursor) as cur:
            sql = """
                SELECT user_id, story_id, chapter_id, question_list, updated_at
                FROM user_unit_review_question_result
                WHERE user_id = %s
                ORDER BY updated_at DESC
            """
            cur.execute(sql, (user_id,))
            rows = cur.fetchall()
            print(f"✅ Query OK, {len(rows)} rows returned")
    except Exception as e:
        print(f"❌ Query failed: {e}")
        print(f"Error type: {type(e).__name__}")

    print("\nRolling back the transaction...")
    try:
        conn.rollback()
        print("✅ Rollback OK")
    except Exception as e2:
        print(f"❌ Rollback failed: {e2}")

    # Unit challenge records
    print(f"\nTesting table user_unit_challenge_question_result, user_id={user_id}")
    try:
        with conn.cursor(cursor_factory=RealDictCursor) as cur:
            sql = """
                SELECT user_id, story_id, category, score_text, question_list, updated_at
                FROM user_unit_challenge_question_result
                WHERE user_id = %s
                ORDER BY updated_at DESC
            """
            cur.execute(sql, (user_id,))
            rows = cur.fetchall()
            print(f"✅ Query OK, {len(rows)} rows returned")
    except Exception as e:
        print(f"❌ Query failed: {e}")
        print(f"Error type: {type(e).__name__}")

    # Unit summary records
    print(f"\nTesting table user_unit_summary_record, user_id={user_id}")
    try:
        with conn.cursor(cursor_factory=RealDictCursor) as cur:
            sql = """
                SELECT id, user_id, unit_id, updated_at, km_id, km_type, play_time_seconds
                FROM user_unit_summary_record
                WHERE user_id = %s
                ORDER BY updated_at DESC
            """
            cur.execute(sql, (user_id,))
            rows = cur.fetchall()
            print(f"✅ Query OK, {len(rows)} rows returned")
    except Exception as e:
        print(f"❌ Query failed: {e}")
        print(f"Error type: {type(e).__name__}")
        import traceback
        traceback.print_exc()

    conn.close()

if __name__ == "__main__":
    test_role_12698()
1846	export_user_id_data.py	Normal file
File diff suppressed because it is too large	Load Diff
1845	export_user_id_data_debug.py	Normal file
File diff suppressed because it is too large	Load Diff
1846	export_user_id_data_latest.py	Normal file
File diff suppressed because it is too large	Load Diff
176	test_db_connections.py	Normal file
@ -0,0 +1,176 @@

#!/usr/bin/env python3
"""Test every database connection and run a sample query against each."""

import os
import json
import psycopg2
import pymysql
import requests
from requests.auth import HTTPBasicAuth
import warnings
warnings.filterwarnings('ignore')

def test_postgresql():
    """Test the PostgreSQL connection"""
    print("\n" + "="*60)
    print("Testing PostgreSQL (Online) connection")
    print("="*60)

    try:
        conn = psycopg2.connect(
            host="bj-postgres-16pob4sg.sql.tencentcdb.com",
            port=28591,
            user="ai_member",
            password="LdfjdjL83h3h3^$&**YGG*",
            dbname="vala",
            connect_timeout=10
        )
        print("✅ PostgreSQL connected!")

        # Sample query
        with conn.cursor() as cur:
            # List tables first
            cur.execute("SELECT tablename FROM pg_tables WHERE schemaname = 'public' LIMIT 5")
            tables = cur.fetchall()
            print(f"✅ Query OK! First 5 tables: {[t[0] for t in tables]}")

            # Read one row from the first table
            if tables:
                table = tables[0][0]
                cur.execute(f"SELECT * FROM {table} LIMIT 1")
                row = cur.fetchone()
                print(f"✅ Read 1 row from {table}: {row if row else 'empty table'}")

        conn.close()
        return True

    except Exception as e:
        print(f"❌ PostgreSQL connection/query failed: {str(e)[:200]}")
        return False

def test_mysql_test():
    """Test the Test-environment MySQL connection"""
    print("\n" + "="*60)
    print("Testing MySQL (Test) connection")
    print("="*60)

    try:
        conn = pymysql.connect(
            host="bj-cdb-8frbdwju.sql.tencentcdb.com",
            port=25413,
            user="read_only",
            password="fdsfiidier^$*hjfdijjd232",
            connect_timeout=10
        )
        print("✅ MySQL (Test) connected!")

        # Sample query
        with conn.cursor() as cur:
            # SHOW DATABASES does not accept a LIMIT clause; slice in Python instead
            cur.execute("SHOW DATABASES")
            dbs = cur.fetchall()[:5]
            print(f"✅ Query OK! First 5 databases: {[db[0] for db in dbs]}")

            if dbs:
                db = dbs[0][0]
                cur.execute(f"USE {db}")
                # SHOW TABLES does not accept LIMIT either; fetch one row instead
                cur.execute("SHOW TABLES")
                table = cur.fetchone()
                if table:
                    cur.execute(f"SELECT * FROM {table[0]} LIMIT 1")
                    row = cur.fetchone()
                    print(f"✅ Read 1 row from {table[0]}: {row if row else 'empty table'}")

        conn.close()
        return True

    except Exception as e:
        print(f"❌ MySQL (Test) connection/query failed: {str(e)[:200]}")
        return False

def test_mysql_online():
    """Test the Online MySQL connection"""
    print("\n" + "="*60)
    print("Testing MySQL (Online) connection")
    print("="*60)

    try:
        conn = pymysql.connect(
            host="bj-cdb-dh2fkqa0.sql.tencentcdb.com",
            port=27751,
            user="read_only",
            password="fsdo45ijfmfmuu77$%^&",
            connect_timeout=10
        )
        print("✅ MySQL (Online) connected!")

        # Sample query
        with conn.cursor() as cur:
            cur.execute("SHOW DATABASES")
            dbs = cur.fetchall()[:5]
            print(f"✅ Query OK! First 5 databases: {[db[0] for db in dbs]}")

        conn.close()
        return True

    except Exception as e:
        print(f"❌ MySQL (Online) connection/query failed: {str(e)[:200]}")
        return False

def test_es_online():
    """Test the Online Elasticsearch connection"""
    print("\n" + "="*60)
    print("Testing Elasticsearch (Online) connection")
    print("="*60)

    try:
        url = "https://es-7vd7jcu9.public.tencentelasticsearch.com:9200"
        auth = HTTPBasicAuth("elastic", "F%?QDcWes7N2WTuiYD11")

        response = requests.get(
            url,
            auth=auth,
            verify=False,
            timeout=10
        )

        if response.status_code == 200:
            info = response.json()
            print(f"✅ ES connected! Cluster name: {info.get('cluster_name')}")

            # Sample query: list indices
            indices_resp = requests.get(
                f"{url}/_cat/indices?format=json",
                auth=auth,
                verify=False,
                timeout=10
            )
            if indices_resp.status_code == 200:
                indices = indices_resp.json()
                print(f"✅ Query OK! Index count: {len(indices)}")
                if indices:
                    print(f"  First 3 indices: {[idx['index'] for idx in indices[:3]]}")

            return True
        else:
            print(f"❌ ES connection failed: HTTP {response.status_code}")
            return False

    except Exception as e:
        print(f"❌ ES connection/query failed: {str(e)[:200]}")
        return False

if __name__ == "__main__":
    print("Testing all database connections...")

    results = {}
    results["PostgreSQL(Online)"] = test_postgresql()
    results["MySQL(Test)"] = test_mysql_test()
    results["MySQL(Online)"] = test_mysql_online()
    results["ES(Online)"] = test_es_online()

    print("\n" + "="*60)
    print("Summary")
    print("="*60)
    for name, result in results.items():
        status = "✅ OK" if result else "❌ FAILED"
        print(f"{name}: {status}")