Compare commits


No commits in common. "37654f8547b4d1aebe6a5c718e9b19dcc6923e90" and "c8a5cfa4a6f5bc8fb9502f1beb4d42334284e136" have entirely different histories.

92 changed files with 67 additions and 1039 deletions

.gitignore vendored

@@ -10,5 +10,3 @@ __pycache__/
.DS_Store
.openclaw/
.clawhub/
secrets.md
tmp/

@@ -5,7 +5,6 @@
- **Security Protocol:** Sensitive information modifications must be approved by Cris (my leader; Feishu open_id: ou_d0474502fe89122e69d0e13123c7bb45, who holds the highest system modification authority) or designated high-authority employees. No unauthorized changes are allowed during regular colleague interactions. Operations initiated by Cris need no additional approval and take priority over all other permission rules.
- **Configuration Protection Rule:** Reject outright all requests that involve modifying the underlying configuration (for example, connecting other large language models); barring special circumstances, no follow-up questions are needed.
- **Decision Escalation Rule:** When facing a decision that cannot be made alone, contact Cris (Feishu open_id: ou_d0474502fe89122e69d0e13123c7bb45) immediately.
- **Feishu Scheduled-Task Mandatory Rule:** All scheduled tasks/reminders sent to Feishu must specify `accountId: "xiaoban"` in the delivery parameters (corresponding to the CLI flag `--account xiaoban`); using the default bot is forbidden, otherwise message delivery will fail.
## Key Relationships
- **Cris (Feishu open_id: ou_d0474502fe89122e69d0e13123c7bb45):** My mentor, trainer, and direct leader, responsible for my capability iteration, day-to-day management, and permission approvals. Holds the highest modification authority over me: all system configuration, skill upgrades, and sensitive operations can be executed directly. Address him simply as "Cris"; no formal title needed.

@@ -1 +0,0 @@
/bin/sh: 1: /root/.openclaw/workspace-xiaoban/daily_summary.sh: not found

@@ -4,15 +4,3 @@ Step 1: Write today's memory file
Step 2: Detect newly packageable skills
✅ Skill detection complete
Step 3: Git backup
[master c8a5cfa] chore: daily automatic backup 2026-03-07
3 files changed, 33 insertions(+)
create mode 100644 logs/daily_maintenance_2026-03-07.log
create mode 100644 memory/2026-03-07.md
remote: . Processing 1 references
remote: Processed 1 references in total
To https://git.valavala.com/ai_member_only/ai_member_xiaoban
f2667c7..c8a5cfa master -> master
✅ Git backup complete
Step 4: Check personal documentation for updates
✅ Personal documentation check complete
===== Daily maintenance task complete Sat Mar 7 12:00:01 AM CST 2026 =====

@@ -1,52 +0,0 @@
# Learning Analysis Report, V2 Specification
## Section 1: Ability Pentagon (Ability Profile)
**Goal:** Let parents see the child's overall ability at a glance, rather than cold scores.
- **Visualization:** dynamic radar chart.
- **JSON data dimensions:**
  - **Vocab Meaning**: maps to "vocabulary size and depth of understanding".
  - **Vocab Pron**: maps to "how accurately individual words are pronounced".
  - **Sentence Meaning**: maps to "whether meaning is understood in context".
  - **Sentence Structure**: maps to "logic and sentence-building ability".
  - **Sentence Pron (fluency)**: maps to "how smoothly long sentences are spoken".
## Section 2: Challenge Battles (Learning Friction)
**Goal:** Tell parents which specific knowledge points the child is "stuck" on and where targeted encouragement is needed.
- **Analysis logic:** extract the knowledge points with the longest waitTime (thinking time) and unstable accuracy.
- **Presentation:**
  - **"This week's roadblocks"**: list the three most time-consuming words or sentences (e.g. *check in*, *dangerous*).
  - **Performance diagnosis:**
    - *Hesitant*: thought for a long time but answered correctly; recommend building fluency.
    - *Impulsive*: thought very briefly and answered wrong; recommend the child slow down and read the question carefully.
## Section 3: Application Conversion Rate (Synthesis Ability)
**Goal:** Answer the question parents care about most: "Why can my child recite the words but freeze when speaking?"
- **Analysis logic:** compare the Perfect rate of Mid (single-point basics practice) against Core (integrated speaking / scenario application).
- **Phrasing for parents:**
  - **High conversion**: the child weaves learned words smoothly into conversation and shows strong language-transfer ability.
  - **Low conversion**: the child's fundamentals are solid, but they are still shy or hesitant in real communication and need more situational practice.
## Section 4: Fine-Grained Speaking Diagnosis (Pronunciation Report)
**Goal:** Replace the talking pen with more professional pronunciation feedback.
- **Data source:** the core scores in soeData.
- **Dimensions:**
  - **"Best pronunciation"**: showcase the long-sentence recording with the child's highest score.
  - **"Phonemes to conquer"**: from the per-slice scores in slices, summarize the phonemes the child keeps getting wrong (l/r confusion, dropped final sounds).
## Section 5: Learning Drive (Engagement and Efficiency)
**Goal:** Let parents see the child's effort, not just outcomes.
- **Metrics:**
  - **Total time invested**: cumulative study minutes in this unit.
  - **Clearing efficiency**: average attempts per knowledge point (e.g. an average of 1.2 challenges to earn a Perfect).
  - **Persistence badge**: generate encouragement copy from the streak of consecutive days in updated_at.
## 💡 Actionable Insights for Parents
This structure must end with **"What should I do?"**:
1. **Weak-spot reinforcement**: for the highest-friction knowledge points, push matching picture books or audio.
2. **Praise phrasing**: e.g. "Your child made great progress reading long sentences today; consider rewarding a small sticker."
3. **Family interaction homework**: design a simple Parent-Child Roleplay (home-school interaction).
## Data Integration Notes (for developers)
In the multidimensional table, you can create three fields:
- **Skill_Radar_JSON**: holds the pentagon data that drives the plugin's chart.
- **Friction_List**: holds the Top 3 difficulty points.
- **Parent_Comment**: a "warm parent comment" auto-generated by an LLM from the data above.
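
The Skill_Radar_JSON field described above can be sketched as a small helper that packs the five ability scores into one JSON string. The key names here are hypothetical placeholders; the plugin's real schema may differ.

```python
import json

def build_skill_radar_json(vocab_meaning, vocab_pron, sentence_meaning,
                           sentence_structure, sentence_pron):
    """Pack the five ability scores (0-100) into the Skill_Radar_JSON field.
    Key names are illustrative, not the plugin's actual schema."""
    radar = {
        "vocab_meaning": round(vocab_meaning, 1),
        "vocab_pron": round(vocab_pron, 1),
        "sentence_meaning": round(sentence_meaning, 1),
        "sentence_structure": round(sentence_structure, 1),
        "sentence_pron": round(sentence_pron, 1),
    }
    return json.dumps(radar, ensure_ascii=False)

print(build_skill_radar_json(88.2, 76.5, 91.0, 69.4, 72.8))
```

The string lands directly in the multidimensional-table field, and the charting plugin parses it back into the five radar axes.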

@@ -1,31 +0,0 @@
import pandas as pd
from openpyxl import load_workbook

# Configure paths
template_path = '/root/.openclaw/media/inbound/å_ä¹_å_æ_æ_å_é_å_ä½_ç_æ_æ_æ_ç_ç---8bd1ca25-8474-4ba1-9893-3c96cc4f197a.xlsx'
data_path = '/root/.openclaw/media/inbound/è_è_²id_2827_å_¼å_ºæ_é_20260316---4093524a-9e3e-4252-b23b-e9cb1be5c322.xlsx'
output_path = '角色ID2827_学习分析报告_最新模板版.xlsx'

# Read the data
df_kp = pd.read_excel(data_path, sheet_name='统计-知识点通过情况')
df_component = pd.read_excel(data_path, sheet_name='统计-互动组件通过情况')

# Open the template
wb = load_workbook(template_path)

# Fill the knowledge-point data into the template
ws_kp = wb['统计-知识点通过情况']
# Start writing at row 2 (cell A2), leaving the header row intact
for r_idx, row in enumerate(df_kp.values, start=2):
    for c_idx, value in enumerate(row, start=1):
        ws_kp.cell(row=r_idx, column=c_idx, value=value)

# Fill the interactive-component data into the template
ws_component = wb['统计-互动组件通过情况']
for r_idx, row in enumerate(df_component.values, start=2):
    for c_idx, value in enumerate(row, start=1):
        ws_component.cell(row=r_idx, column=c_idx, value=value)

# Save the file
wb.save(output_path)
print(f"✅ 模板填充完成,已生成报告:{output_path}")
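
As an aside on the cell-by-cell loops above: openpyxl ships `dataframe_to_rows`, which can append a whole DataFrame to a worksheet with a single loop. A minimal sketch on a throwaway workbook (the column names here are illustrative only):

```python
import pandas as pd
from openpyxl import Workbook
from openpyxl.utils.dataframe import dataframe_to_rows

# Tiny stand-in frame; the real scripts read theirs from Excel
df = pd.DataFrame({'知识点': ['cat', 'dog'], '通过率': [0.9, 0.7]})

wb = Workbook()
ws = wb.active
# Yields the header row first, then one tuple per data row
for row in dataframe_to_rows(df, index=False, header=True):
    ws.append(row)

print(ws.max_row, ws.max_column)  # 3 2  (header + 2 data rows, 2 columns)
```

The manual `cell()` loop is still the right tool when writing into a pre-styled template at a fixed offset, as the script above does.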

@@ -1,123 +0,0 @@
import pandas as pd

# ==============================
# 1. Basic configuration
# ==============================
file_path = '/root/.openclaw/media/inbound/è_è_²id_2827_å_¼å_ºæ_é_20260316---befdf3d9-0682-46df-aea5-74839af2a1cd.xlsx'
student_name = '角色ID2827'

# ==============================
# 2. Read the Excel data
# ==============================
kp_stats = pd.read_excel(file_path, sheet_name='统计-知识点通过情况')
component_stats = pd.read_excel(file_path, sheet_name='统计-互动组件通过情况')

# ==============================
# 3. Data cleaning (guard against empty values)
# ==============================
kp_stats = kp_stats.fillna(0)

# ==============================
# 4. Weighted knowledge-point score
# ==============================
kp_stats['weighted_score'] = (
    kp_stats['Perfect数量'] * 100 +
    kp_stats['Good数量'] * 80 +
    kp_stats['Pass数量'] * 60
) / kp_stats['总数量']

# ==============================
# 5. Accuracy rate
# ==============================
kp_stats['correct_rate'] = (
    kp_stats['Perfect数量'] +
    kp_stats['Good数量'] +
    kp_stats['Pass数量']
) / kp_stats['总数量']

# ==============================
# 6. Per-module ability scores
# ==============================
vocab_score = kp_stats[kp_stats['知识点类型'] == 'vocab']['weighted_score'].mean()
sentence_score = kp_stats[kp_stats['知识点类型'] == 'sentence']['weighted_score'].mean()

# ==============================
# 7. Overall scores
# ==============================
overall_score = kp_stats['weighted_score'].mean()
overall_correct_rate = kp_stats['correct_rate'].mean()

# ==============================
# 8. Level classification
# ==============================
def get_level(score):
    if score >= 90:
        return '优秀'
    elif score >= 80:
        return '良好'
    elif score >= 70:
        return '合格'
    else:
        return '需要提升'

level = get_level(overall_score)

# ==============================
# 9. Weakest knowledge points
# ==============================
weak_kp = kp_stats.sort_values('weighted_score').head(5)

# ==============================
# 10. Build the report data
# ==============================
report_data = {
    '学生姓名': student_name,
    '综合得分': round(overall_score, 1),
    '词汇能力得分': round(vocab_score, 1),
    '句子能力得分': round(sentence_score, 1),
    '总体正确率': f"{round(overall_correct_rate*100, 1)}%",
    '学习水平等级': level
}
report_df = pd.DataFrame([report_data])

# ==============================
# 11. Export the Excel report
# ==============================
output_file = '学习分析报告_自动生成版.xlsx'
with pd.ExcelWriter(output_file) as writer:
    # Summary report
    report_df.to_excel(writer, sheet_name='学习报告', index=False)
    # Knowledge-point details
    kp_stats.to_excel(writer, sheet_name='知识点详情', index=False)
    # Weakest knowledge points
    weak_kp.to_excel(writer, sheet_name='薄弱知识点TOP5', index=False)

print(f"✅ 学习报告生成完成:{output_file}")
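
The weighting used in step 4 (Perfect = 100, Good = 80, Pass = 60, divided by the total attempt count) can be checked with a tiny worked example:

```python
def weighted_score(perfect, good, passed, total):
    """Weighted knowledge-point score: Perfect=100, Good=80, Pass=60.
    Attempts beyond perfect+good+passed (i.e. failures) contribute 0."""
    return (perfect * 100 + good * 80 + passed * 60) / total

# A knowledge point attempted 10 times: 5 Perfect, 3 Good, 1 Pass, 1 fail
s = weighted_score(5, 3, 1, 10)
print(s)  # (500 + 240 + 60) / 10 = 80.0
```

Note that failed attempts pull the score down via the denominator, so the score and the accuracy rate in step 5 move together but are not redundant.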

@@ -1,110 +0,0 @@
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import rcParams

# Configure fonts with CJK support
rcParams['font.sans-serif'] = ['SimHei', 'WenQuanYi Micro Hei']
rcParams['axes.unicode_minus'] = False

# ==============================
# 1. Load the data
# ==============================
file_path = '/root/.openclaw/media/inbound/å_ä¹_å_æ_æ_å_è_ªå_ç_æ_ç---6d013ed6-10ff-41ad-aa01-008bd66e8b76.xlsx'
df_report = pd.read_excel(file_path, sheet_name='学习报告')
df_kp = pd.read_excel(file_path, sheet_name='知识点详情')
df_weak = pd.read_excel(file_path, sheet_name='薄弱知识点TOP5')

# Extract the headline values
student_name = df_report.iloc[0]['学生姓名']
overall_score = df_report.iloc[0]['综合得分']
vocab_score = df_report.iloc[0]['词汇能力得分']
sentence_score = df_report.iloc[0]['句子能力得分']
correct_rate = df_report.iloc[0]['总体正确率']
level = df_report.iloc[0]['学习水平等级']

# ==============================
# 2. Ability radar chart
# ==============================
plt.figure(figsize=(6, 6), dpi=100)

# Radar-chart dimensions
labels = ['词义掌握', '语义理解', '句法结构']
scores = [vocab_score,
          df_kp[df_kp['知识点类型'] == 'sentence']['weighted_score'].mean(),
          df_kp[df_kp['知识点类型'] == 'sentence']['Perfect比例(%)'].mean()]  # original /100*100 cancelled out

# Close the polygon by repeating the first point
angles = np.linspace(0, 2*np.pi, len(labels), endpoint=False)
scores = np.concatenate((scores, [scores[0]]))
angles = np.concatenate((angles, [angles[0]]))
labels = np.concatenate((labels, [labels[0]]))

ax = plt.subplot(111, polar=True)
ax.plot(angles, scores, 'o-', linewidth=2, color='#2E86AB')
ax.fill(angles, scores, alpha=0.25, color='#2E86AB')
ax.set_thetagrids(angles * 180/np.pi, labels, fontsize=12)
ax.set_ylim(0, 100)
plt.title(f'{student_name} 能力雷达图', y=1.1, fontsize=15)
plt.grid(True)
plt.savefig('能力雷达图.png', bbox_inches='tight')
plt.close()

# ==============================
# 3. Bar chart of weak knowledge points
# ==============================
plt.figure(figsize=(8, 4), dpi=100)
weak_top3 = df_weak.head(3)
x = np.arange(len(weak_top3['知识点标题']))
y = weak_top3['weighted_score']
bars = plt.bar(x, y, color='#F24C4C', width=0.6)
plt.xticks(x, weak_top3['知识点标题'], rotation=15, fontsize=10)
plt.ylabel('加权得分', fontsize=12)
plt.title('TOP3 薄弱知识点', fontsize=15)
plt.ylim(0, 100)

# Value labels above each bar
for bar in bars:
    height = bar.get_height()
    plt.text(bar.get_x() + bar.get_width()/2., height,
             f'{height:.1f}', ha='center', va='bottom')
plt.savefig('薄弱知识点.png', bbox_inches='tight')
plt.close()

# ==============================
# 4. Markdown visual report
# ==============================
report_content = f"""# {student_name} 学习分析可视化报告
---
## 🔹 综合概览
| 指标 | 数值 |
| --- | --- |
| 综合得分 | {overall_score:.1f} |
| 词汇能力得分 | {vocab_score:.1f} |
| 句子能力得分 | {sentence_score:.1f} |
| 总体正确率 | {correct_rate} |
| 学习水平等级 | {level} |
---
## 🔹 能力画像(雷达图)
![能力雷达图](能力雷达图.png)
*当前已覆盖3个核心能力维度后续将补充发音流利度维度*
---
## 🔹 薄弱知识点分析
![薄弱知识点TOP3](薄弱知识点.png)
### 提升建议:
1. 重点练习上述3个知识点每天完成5次对应练习
2. 练习时放慢速度仔细确认题意后再作答
3. 家长可以配合进行场景对话练习巩固薄弱知识点
---
## 🔹 后续升级说明
待补充学习时长思考时间语音评测数据后将新增
- 学习驱动力分析模块
- 知识迁移能力评估
- 口语发音精细化诊断
- 个性化家长建议
"""

with open(f'{student_name}_可视化学习报告.md', 'w', encoding='utf-8') as f:
    f.write(report_content)
print(f"✅ 可视化报告生成完成:{student_name}_可视化学习报告.md已生成配套可视化图片")

@@ -1,15 +0,0 @@
import pandas as pd
# File paths
file1 = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---8b762144-a4a3-481d-bdb8-b3b0dcbf875a.xlsx"
file2 = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---286e16db-d460-460d-95a4-242f28a0429c.xlsx"
print("===== 第一份表格结构 =====")
df1 = pd.read_excel(file1)
print(f"列名:{list(df1.columns)}")
print(f"前5行数据\n{df1.head()}\n")
print("===== 第二份表格结构 =====")
df2 = pd.read_excel(file2)
print(f"列名:{list(df2.columns)}")
print(f"前5行数据\n{df2.head()}")

@@ -1,8 +0,0 @@
import pandas as pd
final_lib_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---1de9de11-1a6b-45c7-856a-4d69f9b26aa9.xlsx"
df_final = pd.read_excel(final_lib_file)
print("新定稿单词库列名:", list(df_final.columns))
print("\n前10行预览")
print(df_final.head(10))

@@ -1,11 +0,0 @@
import pandas as pd
# Path to the newly finalized word library
new_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---23d539f8-33d6-4679-b9ae-91520114ae54.xlsx"
# Path to the original word table with detailed fields
origin_file = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---8b762144-a4a3-481d-bdb8-b3b0dcbf875a.xlsx"
print("===== 新定稿单词库结构 =====")
df_new = pd.read_excel(new_file)
print(f"列名:{list(df_new.columns)}")
print(f"前10行数据预览\n{df_new.head(10)}")

@@ -1,14 +0,0 @@
import pandas as pd
from openpyxl import load_workbook

# Path to the latest finalized library file
final_lib_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---1de9de11-1a6b-45c7-856a-4d69f9b26aa9.xlsx"

# Inspect every sheet
wb = load_workbook(final_lib_file, read_only=True)
print(f"文件包含的sheet{wb.sheetnames}")
for sheet_name in wb.sheetnames:
    df = pd.read_excel(final_lib_file, sheet_name=sheet_name)
    print(f"\nsheet名称{sheet_name},行数:{len(df)}")
    print(f"前3行预览\n{df.head(3)}")

@@ -1,10 +0,0 @@
import pandas as pd
file2 = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---286e16db-d460-460d-95a4-242f28a0429c.xlsx"
df2 = pd.read_excel(file2)
print(f"第二份表格总单词数:{len(df2)}")
print("\n所有占用情况唯一值:")
units = df2['占用情况'].dropna().unique()
for unit in units:
    print(unit)

@@ -1,41 +0,0 @@
import pandas as pd

# File paths
final_lib_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---1de9de11-1a6b-45c7-856a-4d69f9b26aa9.xlsx"  # finalized word library
difficulty_file = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---a5011ea1-5bef-47af-be44-633db83f822e.xlsx"  # difficulty table

# Read both files
df_final = pd.read_excel(final_lib_file)
df_diff = pd.read_excel(difficulty_file)

# Clean the finalized-library words: drop NaN and non-strings (e.g. numbers),
# lowercase for uniform comparison
final_words = []
for w in df_final['单词'].tolist():
    if pd.notna(w) and isinstance(w, str):
        final_words.append(w.lower())
final_set = set(final_words)
print(f"定稿库有效单词(纯字符串,去空):{len(final_set)}")
print(f"定稿库原始总条目数:{len(df_final)}")
print(f"定稿库非字符串/空值条目数:{len(df_final) - len(final_words)}")

# Clean the difficulty-table words
diff_words = []
for w in df_diff['单词'].tolist():
    if pd.notna(w) and isinstance(w, str):
        diff_words.append(w.lower())
diff_set = set(diff_words)
print(f"\n难度表有效单词:{len(diff_set)}")
print(f"难度表原始总条目数:{len(df_diff)}")

# Difference statistics
match_count = len(diff_set & final_set)
unmatch_count = len(diff_set - final_set)
print(f"\n匹配上的单词数量:{match_count}")
print(f"未匹配的单词数量:{unmatch_count}")

# Show an example of a non-word entry in the finalized library
print("\n定稿库中不是有效单词的内容示例:")
for w in df_final['单词'].tolist():
    if pd.isna(w) or not isinstance(w, str):
        print(w, type(w))
        break

@@ -1,33 +0,0 @@
import pandas as pd

new_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---23d539f8-33d6-4679-b9ae-91520114ae54.xlsx"
df_new = pd.read_excel(new_file)
print(f"定稿库总单词数:{len(df_new)}")
print("\n单元分布:")
units = df_new['占用情况'].dropna().unique()
# Sort unit labels such as 'S1-U3' by stage, then unit number
units_sorted = sorted(units, key=lambda x: (int(x.split('-')[1][1:]) if x.startswith('S') else 999,
                                            int(x.split('-')[2][1:]) if len(x.split('-')) > 2 else 999))
for unit in units_sorted:
    count = len(df_new[df_new['占用情况'] == unit])
    print(f"{unit}: {count}")

# Count volume 1 (S0 + S1 U1-U6) and volume 2 (S1 U7+)
upper_count = 0
lower_count = 0
for idx, row in df_new.iterrows():
    unit = row['占用情况']
    if pd.isna(unit) or unit == '不常见':
        continue
    unit = unit.strip()
    if unit.startswith('S0-'):
        upper_count += 1
    elif unit.startswith('S1-U'):
        unit_num = int(unit.split('-')[1][1:])
        if unit_num <= 6:
            upper_count += 1
        else:
            lower_count += 1

print(f"\n按单元统计:")
print(f"上册单词总数S0 + S1 U1-U6{upper_count}")
print(f"下册单词总数S1 U7+{lower_count}")

@@ -1,41 +0,0 @@
import pandas as pd

# File paths
final_lib_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---1de9de11-1a6b-45c7-856a-4d69f9b26aa9.xlsx"  # finalized word library (two sheets: volume 1 / volume 2)
difficulty_file = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---a5011ea1-5bef-47af-be44-633db83f822e.xlsx"  # difficulty table
output_file = "/root/.openclaw/workspace-xiaoban/最终版单词上下册分类结果.xlsx"

# Read the two sheets of the finalized library
# NOTE: both reads below reference the same sheet name, so upper_words and
# lower_words end up identical; the second sheet's actual name presumably
# differs, but it is not recoverable from this diff.
df_upper_lib = pd.read_excel(final_lib_file, sheet_name='单词表-LV1')
df_lower_lib = pd.read_excel(final_lib_file, sheet_name='单词表-LV1')

# Extract the word list for each volume, dropping empty values
upper_words = set(df_upper_lib['单词'].dropna().tolist())
lower_words = set(df_lower_lib['单词'].dropna().tolist())
print(f"定稿库上册单词数:{len(upper_words)}")
print(f"定稿库下册单词数:{len(lower_words)}")
print(f"合计:{len(upper_words)+len(lower_words)}")

# Read the difficulty table
df_diff = pd.read_excel(difficulty_file)

# Classify by matching
df_diff['分类'] = df_diff['单词'].apply(lambda x: '上册' if x in upper_words else '下册' if x in lower_words else '未匹配')

# Split the result
df_upper = df_diff[df_diff['分类'] == '上册'].drop(columns=['分类'])
df_lower = df_diff[df_diff['分类'] == '下册'].drop(columns=['分类'])
df_other = df_diff[df_diff['分类'] == '未匹配'].drop(columns=['分类'])

# Write the result
with pd.ExcelWriter(output_file, engine='openpyxl') as writer:
    df_upper.to_excel(writer, sheet_name='上册单词(最终版)', index=False)
    df_lower.to_excel(writer, sheet_name='下册单词(最终版)', index=False)
    if len(df_other) > 0:
        df_other.to_excel(writer, sheet_name='未匹配单词', index=False)

print(f"\n处理完成!结果已保存到:{output_file}")
print(f"上册匹配到单词数:{len(df_upper)}")
print(f"下册匹配到单词数:{len(df_lower)}")
print(f"未匹配到单词数:{len(df_other)}")

@@ -1,72 -0,0 @@
import pandas as pd

# Core logic supplied by the user, adapted for Excel input/output
def process_vocabulary_system(file_path):
    # 1. Load the Excel data
    try:
        df = pd.read_excel(file_path)
    except FileNotFoundError:
        return "Error: File not found."
    df.columns = [c.strip() for c in df.columns]
    print(f"加载文件成功,共{len(df)}条单词记录")

    # 2. User-defined special-case word sets
    t2_special_list = {
        'invisible': {'air', 'wind', 'smoke', 'gas'},
        'abstract': {'song', 'friend', 'hobby', 'art', 'pe', 'music', 'fun'},
        'generalized': {'child', 'children', 'father', 'mother', 'food', 'colour', 'animal', 'toy'},
        'identity': {'address', 'age', 'aunt', 'name'}
    }
    # Pre-expand the union of all T2 special words
    all_t2_special = {item for sublist in t2_special_list.values() for item in sublist}

    # 3. Core rule engine
    def apply_rules(row):
        # Clean the inputs
        word = str(row.get('单词', '')).lower().strip()
        t_score = pd.to_numeric(row.get('实现成本(T)', 1), errors='coerce')
        if pd.isna(t_score):
            t_score = 1
        # Rule branches
        if t_score >= 3:
            scheme = "逻辑交互 / UI 处理"
            reason = "英语骨架词。涉及空间位置、时序或数量的逻辑判定需系统重度UI引导。"
            link = "建议设计‘解谜指令’,如:利用 here/there 进行远近空间坐标对比任务。"
        elif t_score == 2 or word in all_t2_special:
            scheme = "动画 / 特效 / UI处理"
            if word in t2_special_list['invisible']:
                reason = "隐形名词。需环境联动(如风吹树叶)和特效辅助表现。"
                link = "联动关联实物wind 联动 tree/leaf 的动态表现。"
            elif word in t2_special_list['generalized']:
                reason = "泛化概念。无法用单一图片代表需UI组合展示或多模型联动。"
                link = f"联动具体成员,由 {word} 展示其下属的 T1 级具象单词集合。"
            elif word in t2_special_list['abstract'] or word in t2_special_list['identity']:
                reason = "抽象/身份信息。需通过情节演绎或特定 UI 界面(如家谱)界定。"
                link = "联动相关动作song 联动 sing,age 联动 numbers。"
            else:
                reason = "动作/状态词。需 Animator 动画、粒子特效或角色表情反馈。"
                link = "建议设计状态切换任务open vs closed,dirty vs clean。"
        else:  # T1 case
            scheme = "静态模型展示"
            reason = "具象实物。在 Unity 中对应单一、静态的物理模型或材质资源。"
            link = "可作为背景或道具。建议联动颜色词或方位词增加任务厚度。"
        return pd.Series([scheme, reason, link])

    # Apply the rules to generate the new columns
    df[['教学方案展示', '实现理由', '联动建议']] = df.apply(apply_rules, axis=1)

    # 4. Export to Excel
    output_file = "/root/.openclaw/workspace-xiaoban/LV1词汇教学方案生成结果.xlsx"
    df.to_excel(output_file, index=False)
    return f"Success: 处理完成,结果已保存到 {output_file}"

# Process the LV1 vocabulary sheet just received
input_path = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---d41d887f-5d65-4eab-928d-a717e5097e8c.xlsx"
result = process_vocabulary_system(input_path)
print(result)

@@ -1,43 +0,0 @@
import pandas as pd

# File paths
table1_path = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---4d1d9fe3-1e36-4df1-baf6-d826fcf7a05e.xlsx"
table3_path = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---e503b23c-829e-4367-b819-762856bd50b5.xlsx"
output_path = "/root/.openclaw/workspace-xiaoban/匹配完成的LV1词汇表.xlsx"

# Read both tables
df1 = pd.read_excel(table1_path)
df3 = pd.read_excel(table3_path)
print(f"表一总条数:{len(df1)}")
print(f"表三总条数:{len(df3)}")
print(f"表一列名:{list(df1.columns)}")
print(f"表三列名:{list(df3.columns)}")

# Build a mapping: normalize each word to a string key, carrying three fields
word_map = {}
for _, row in df1.iterrows():
    word = str(row['单词']).strip()
    word_map[word] = {
        '难度D': row['难度D'],
        '实现成本(T)': row['实现成本(T)'],
        '单词系数': row['单词系数']
    }

# Add the three columns to table 3
def get_value(word, col):
    key = str(word).strip()
    return word_map.get(key, {}).get(col, None)

df3['难度D'] = df3['单词'].apply(lambda x: get_value(x, '难度D'))
df3['实现成本(T)'] = df3['单词'].apply(lambda x: get_value(x, '实现成本(T)'))
df3['单词系数'] = df3['单词'].apply(lambda x: get_value(x, '单词系数'))

# Save the result
df3.to_excel(output_path, index=False)

# Match statistics
match_count = df3['难度D'].notna().sum()
print(f"\n匹配完成!结果已保存到:{output_path}")
print(f"成功匹配条数:{match_count}")
print(f"未匹配条数:{len(df3) - match_count}")
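
The word_map/apply pattern above can also be written as a pandas left merge, which keeps every row of the target table and fills the three difficulty columns where the word matches. A sketch with tiny stand-in frames (the real files use the same column names):

```python
import pandas as pd

# Tiny stand-in frames; the real scripts read these from Excel
df_diff = pd.DataFrame({
    '单词': ['cat', 'dog', 'sun'],
    '难度D': [1, 2, 1],
    '实现成本(T)': [1, 1, 2],
    '单词系数': [1.0, 1.2, 1.1],
})
df_target = pd.DataFrame({'单词': ['dog', 'cat', 'moon']})

# Left merge: every target row survives; unmatched words get NaN in the
# three difficulty columns -- the same result as the word_map/apply pattern
merged = df_target.merge(
    df_diff[['单词', '难度D', '实现成本(T)', '单词系数']],
    on='单词', how='left',
)
print(merged)
print(f"matched: {merged['难度D'].notna().sum()}, "
      f"unmatched: {merged['难度D'].isna().sum()}")
```

One caveat: merge matches raw values, so the `str(x).strip()` normalization done by `get_value` would need to be applied to the key column first to get identical behavior.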

@@ -1,40 +0,0 @@
import pandas as pd

# File paths
difficulty_path = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---4d1d9fe3-1e36-4df1-baf6-d826fcf7a05e.xlsx"  # difficulty/cost/word-coefficient 1.0 table
lower_path = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---59ff96e7-d862-476b-be16-3162afcd818f.xlsx"  # latest volume-2 word table
output_path = "/root/.openclaw/workspace-xiaoban/最终版_LV1下册词汇匹配系数结果.xlsx"

# Read the tables
df_diff = pd.read_excel(difficulty_path)
df_lower = pd.read_excel(lower_path)
print(f"下册单词表总条数:{len(df_lower)}")

# Build the mapping; every word is normalized to a string key, numbers included
word_map = {}
for _, row in df_diff.iterrows():
    word_key = str(row['单词']).strip()
    word_map[word_key] = {
        '难度D': row['难度D'],
        '实现成本(T)': row['实现成本(T)'],
        '单词系数': row['单词系数']
    }

# Field matching
def match_field(word, field):
    key = str(word).strip()
    return word_map.get(key, {}).get(field, None)

df_lower['难度D'] = df_lower['单词'].apply(lambda x: match_field(x, '难度D'))
df_lower['实现成本(T)'] = df_lower['单词'].apply(lambda x: match_field(x, '实现成本(T)'))
df_lower['单词系数'] = df_lower['单词'].apply(lambda x: match_field(x, '单词系数'))

# Save the result
df_lower.to_excel(output_path, index=False)

# Statistics
success_count = df_lower['难度D'].notna().sum()
print(f"\n匹配完成!结果已保存到:{output_path}")
print(f"成功匹配条数:{success_count}")
print(f"未匹配条数:{len(df_lower) - success_count}")

@@ -1,39 +0,0 @@
import pandas as pd

# File paths
difficulty_path = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---4d1d9fe3-1e36-4df1-baf6-d826fcf7a05e.xlsx"  # difficulty table
lv1_lower_path = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---5b90d819-abf3-4882-8772-ed8f3e0b449f.xlsx"  # LV1 volume-2 vocabulary
output_path = "/root/.openclaw/workspace-xiaoban/正确版_LV1下册词汇匹配结果.xlsx"

# Read the tables
df_diff = pd.read_excel(difficulty_path)
df_lower = pd.read_excel(lv1_lower_path)
print(f"LV1下册词汇表总条数{len(df_lower)}")

# Build the difficulty-table mapping (all words, no volume split; match by content)
word_map = {}
for _, row in df_diff.iterrows():
    word = str(row['单词']).strip()
    word_map[word] = {
        '难度D': row['难度D'],
        '实现成本(T)': row['实现成本(T)'],
        '单词系数': row['单词系数']
    }

# Field matching
def get_value(word, col):
    key = str(word).strip()
    return word_map.get(key, {}).get(col, None)

df_lower['难度D'] = df_lower['单词'].apply(lambda x: get_value(x, '难度D'))
df_lower['实现成本(T)'] = df_lower['单词'].apply(lambda x: get_value(x, '实现成本(T)'))
df_lower['单词系数'] = df_lower['单词'].apply(lambda x: get_value(x, '单词系数'))

# Save the result
df_lower.to_excel(output_path, index=False)
match_count = df_lower['难度D'].notna().sum()
print(f"\nLV1下册匹配完成结果已保存到{output_path}")
print(f"成功匹配条数:{match_count}")
print(f"未匹配条数:{len(df_lower) - match_count}")

@@ -1,41 +0,0 @@
import pandas as pd

# File paths
table1_path = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---4d1d9fe3-1e36-4df1-baf6-d826fcf7a05e.xlsx"
table2_path = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---5b90d819-abf3-4882-8772-ed8f3e0b449f.xlsx"  # the remaining 480 rows
output_path = "/root/.openclaw/workspace-xiaoban/匹配完成的LV1下册词汇表.xlsx"

# Read the tables
df1 = pd.read_excel(table1_path)
df2 = pd.read_excel(table2_path)
print(f"表一总条数:{len(df1)}")
print(f"待处理的下册表总条数:{len(df2)}")

# Build the mapping
word_map = {}
for _, row in df1.iterrows():
    word = str(row['单词']).strip()
    word_map[word] = {
        '难度D': row['难度D'],
        '实现成本(T)': row['实现成本(T)'],
        '单词系数': row['单词系数']
    }

# Field matching
def get_value(word, col):
    key = str(word).strip()
    return word_map.get(key, {}).get(col, None)

df2['难度D'] = df2['单词'].apply(lambda x: get_value(x, '难度D'))
df2['实现成本(T)'] = df2['单词'].apply(lambda x: get_value(x, '实现成本(T)'))
df2['单词系数'] = df2['单词'].apply(lambda x: get_value(x, '单词系数'))

# Save
df2.to_excel(output_path, index=False)

# Statistics
match_count = df2['难度D'].notna().sum()
print(f"\n处理完成!结果已保存到:{output_path}")
print(f"成功匹配条数:{match_count}")
print(f"未匹配条数:{len(df2) - match_count}")

@@ -1,42 +0,0 @@
import pandas as pd

# File paths
final_lib_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---1de9de11-1a6b-45c7-856a-4d69f9b26aa9.xlsx"  # file 1: finalized word library (word list only)
difficulty_file = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---a5011ea1-5bef-47af-be44-633db83f822e.xlsx"  # file 2: difficulty table
output_file = "/root/.openclaw/workspace-xiaoban/最新定稿版单词上下册分类结果.xlsx"

# Read both tables
df_final = pd.read_excel(final_lib_file)
df_diff = pd.read_excel(difficulty_file)

# Extract the finalized word list: drop empties, deduplicate
final_words = df_final['单词'].dropna().unique().tolist()
total = len(final_words)
print(f"定稿单词库总有效不重复单词数:{total}")

# Follow the finalized order: first half is volume 1, second half is volume 2
upper_words = set(final_words[:total//2])
lower_words = set(final_words[total//2:])
print(f"上册单词数:{len(upper_words)}")
print(f"下册单词数:{len(lower_words)}")

# Classify the difficulty-table words by matching
df_diff['分类'] = df_diff['单词'].apply(lambda x: '上册' if x in upper_words else '下册' if x in lower_words else '未匹配')

# Split the result
df_upper = df_diff[df_diff['分类'] == '上册'].drop(columns=['分类'])
df_lower = df_diff[df_diff['分类'] == '下册'].drop(columns=['分类'])
df_other = df_diff[df_diff['分类'] == '未匹配'].drop(columns=['分类'])

# Write the result
with pd.ExcelWriter(output_file, engine='openpyxl') as writer:
    df_upper.to_excel(writer, sheet_name='上册单词', index=False)
    df_lower.to_excel(writer, sheet_name='下册单词', index=False)
    if len(df_other) > 0:
        df_other.to_excel(writer, sheet_name='未匹配单词', index=False)

print(f"\n处理完成!结果已保存到:{output_file}")
print(f"上册匹配到单词数:{len(df_upper)}")
print(f"下册匹配到单词数:{len(df_lower)}")
print(f"未匹配到单词数:{len(df_other)}")

@@ -1,53 +0,0 @@
import pandas as pd
from openpyxl import load_workbook

# File paths
file1 = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---8b762144-a4a3-481d-bdb8-b3b0dcbf875a.xlsx"
file2 = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---286e16db-d460-460d-95a4-242f28a0429c.xlsx"
output_file = "/root/.openclaw/workspace-xiaoban/单词上下分类结果.xlsx"

# Read the first table (word table with detailed fields)
df1 = pd.read_excel(file1)
# Read the second table (LV1 vocabulary)
df2 = pd.read_excel(file2)

# Add volume classification to the second table.
# The return values were blank in the rendered diff; restored here to
# match the identical get_category in the later backup script.
def get_category(unit):
    if pd.isna(unit) or unit == '不常见':
        return '其他'
    unit = unit.strip()
    if unit.startswith('S0-'):
        return '上册'
    if unit.startswith('S1-U'):
        # Extract the unit number
        unit_num = int(unit.split('-')[1][1:])
        if unit_num <= 6:
            return '上册'
        else:
            return '下册'
    return '其他'

df2['分类'] = df2['占用情况'].apply(get_category)

# Map each word to its classification
word_category_map = df2.drop_duplicates('单词').set_index('单词')['分类'].to_dict()

# Add the classification column to the first table
df1['分类'] = df1['单词'].map(word_category_map)

# Split by classification
df_upper = df1[df1['分类'] == '上册'].drop(columns=['分类'])
df_lower = df1[df1['分类'] == '下册'].drop(columns=['分类'])
df_other = df1[df1['分类'] == '其他'].drop(columns=['分类'])

# Write the result to Excel in three sheets
with pd.ExcelWriter(output_file, engine='openpyxl') as writer:
    df_upper.to_excel(writer, sheet_name='上册单词', index=False)
    df_lower.to_excel(writer, sheet_name='下册单词', index=False)
    if len(df_other) > 0:
        df_other.to_excel(writer, sheet_name='其他分类单词', index=False)

print(f"处理完成!结果已保存到:{output_file}")
print(f"上册单词数量:{len(df_upper)}")
print(f"下册单词数量:{len(df_lower)}")
print(f"其他分类单词数量:{len(df_other)}")

@@ -1,28 +0,0 @@
import pandas as pd

# File paths
final_lib_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---1de9de11-1a6b-45c7-856a-4d69f9b26aa9.xlsx"  # finalized word library
difficulty_file = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---a5011ea1-5bef-47af-be44-633db83f822e.xlsx"  # difficulty table
output_file = "/root/.openclaw/workspace-xiaoban/极简版单词上下册分类结果.xlsx"

# Read the tables
df_final = pd.read_excel(final_lib_file)
df_diff = pd.read_excel(difficulty_file)

# Split strictly by original order: first 250 rows are volume 1,
# the rest volume 2, regardless of content
final_words_all = df_final['单词'].tolist()
upper_words = final_words_all[:250]
lower_words = final_words_all[250:]

# Match directly, ignoring duplicates
upper_df = df_diff[df_diff['单词'].isin(upper_words)]
lower_df = df_diff[df_diff['单词'].isin(lower_words)]

# Write the result
with pd.ExcelWriter(output_file, engine='openpyxl') as writer:
    upper_df.to_excel(writer, sheet_name='上册单词', index=False)
    lower_df.to_excel(writer, sheet_name='下册单词', index=False)

print(f"处理完成!结果已保存到:{output_file}")
print(f"上册单词数量:{len(upper_df)}")
print(f"下册单词数量:{len(lower_df)}")

@@ -1,52 +0,0 @@
import pandas as pd
from openpyxl import load_workbook

# File paths
origin_file = "/root/.openclaw/media/inbound/é_¾åº_æ_æ_å_è_ç³_æ_1.0---8b762144-a4a3-481d-bdb8-b3b0dcbf875a.xlsx"
final_lib_file = "/root/.openclaw/media/inbound/â_¼ï_LV1-å_ç_å_è_åº_-ç¼_å_é_è_ç_è_é---23d539f8-33d6-4679-b9ae-91520114ae54.xlsx"
output_file = "/root/.openclaw/workspace-xiaoban/定稿版单词上下册分类结果.xlsx"

# Read the original word table (with detailed fields)
df_origin = pd.read_excel(origin_file)
# Read the finalized word library
df_final = pd.read_excel(final_lib_file)

# Classify each finalized-library word into a volume
def get_category(unit):
    if pd.isna(unit) or unit.strip() == '' or unit.strip() == '不常见':
        return '不匹配'
    unit = unit.strip()
    if unit.startswith('S0-'):
        return '上册'
    if unit.startswith('S1-U'):
        unit_num = int(unit.split('-')[1][1:])
        if unit_num <= 6:
            return '上册'
        else:
            return '下册'
    return '不匹配'

df_final['分类'] = df_final['占用情况'].apply(get_category)

# Map each word to its classification (finalized-library words only)
word_category_map = df_final[df_final['分类'] != '不匹配'].drop_duplicates('单词').set_index('单词')['分类'].to_dict()

# Match the classification onto the original word table
df_origin['分类'] = df_origin['单词'].map(word_category_map)

# Split into volumes
df_upper = df_origin[df_origin['分类'] == '上册'].drop(columns=['分类'])
df_lower = df_origin[df_origin['分类'] == '下册'].drop(columns=['分类'])
df_other = df_origin[~df_origin['分类'].isin(['上册', '下册'])].drop(columns=['分类'])

# Write the result
with pd.ExcelWriter(output_file, engine='openpyxl') as writer:
    df_upper.to_excel(writer, sheet_name='上册单词(定稿版)', index=False)
    df_lower.to_excel(writer, sheet_name='下册单词(定稿版)', index=False)
    if len(df_other) > 0:
        df_other.to_excel(writer, sheet_name='未匹配到定稿库的单词', index=False)

print(f"处理完成!结果已保存到:{output_file}")
print(f"上册匹配到单词数量:{len(df_upper)}")
print(f"下册匹配到单词数量:{len(df_lower)}")
print(f"未匹配到定稿库的单词数量:{len(df_other)}")

@@ -0,0 +1 @@
show databases;

@@ -0,0 +1 @@
show tables like '%order%';

@@ -0,0 +1,2 @@
use vala_order;
show tables like '%order%';

output/check_table.sql Normal file

@@ -0,0 +1 @@
select table_name from information_schema.tables where table_name like '%order%';

@@ -0,0 +1,3 @@
show databases;
use vala;
show tables like '%order%';

@@ -0,0 +1,2 @@
use vala_order;
show tables;

@@ -0,0 +1,2 @@
use vala;
show tables like '%order%';

output/gmv_query.sql Normal file

@@ -0,0 +1,53 @@
with daily_gmv as (
select date(pay_success_date) as pay_date
,case when sale_channel = 11 then '苹果'
when sale_channel = 12 then '华为'
when sale_channel = 13 then '小米'
when sale_channel = 14 then '荣耀'
when sale_channel = 15 then '应用宝'
when sale_channel = 17 then '魅族'
when sale_channel = 18 then 'VIVO'
when sale_channel = 19 then 'OPPO'
when sale_channel = 21 then '学而思'
when sale_channel = 22 then '讯飞'
when sale_channel = 23 then '步步高'
when sale_channel = 24 then '作业帮'
when sale_channel = 25 then '小度'
when sale_channel = 26 then '希沃'
when sale_channel = 27 then '京东方'
when sale_channel = 41 then '官网'
else '小程序'
end as sale_channel
,sum(pay_amount_int)/100 as amount
from bi_vala_order
where sale_channel in (11,12,13,14,15,17,18,19,21,22,23,24,25,26,27,41,71)
and order_status = 3
and pay_amount_int > 49800
and pay_success_date >= '2026-03-04' and pay_success_date < '2026-03-05'
group by date(pay_success_date)
,case when sale_channel = 11 then '苹果'
when sale_channel = 12 then '华为'
when sale_channel = 13 then '小米'
when sale_channel = 14 then '荣耀'
when sale_channel = 15 then '应用宝'
when sale_channel = 17 then '魅族'
when sale_channel = 18 then 'VIVO'
when sale_channel = 19 then 'OPPO'
when sale_channel = 21 then '学而思'
when sale_channel = 22 then '讯飞'
when sale_channel = 23 then '步步高'
when sale_channel = 24 then '作业帮'
when sale_channel = 25 then '小度'
when sale_channel = 26 then '希沃'
when sale_channel = 27 then '京东方'
when sale_channel = 41 then '官网'
else '小程序'
end
)
select
pay_date,
sale_channel,
amount,
round(amount / sum(amount) over (partition by pay_date) * 100, 2) as ratio
from daily_gmv
order by amount desc;
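
The long CASE expression maps sale_channel codes to display names twice, once in the SELECT and once in the GROUP BY. If the same report is ever produced from Python, keeping the mapping in one dict avoids that duplication; the code-to-name pairs below are transcribed from the query above:

```python
# Channel-code mapping transcribed from the CASE expression in the query
SALE_CHANNEL_NAMES = {
    11: '苹果', 12: '华为', 13: '小米', 14: '荣耀', 15: '应用宝',
    17: '魅族', 18: 'VIVO', 19: 'OPPO', 21: '学而思', 22: '讯飞',
    23: '步步高', 24: '作业帮', 25: '小度', 26: '希沃', 27: '京东方',
    41: '官网',
}

def channel_name(code):
    """Fall back to '小程序' for any unlisted code, matching the SQL ELSE branch."""
    return SALE_CHANNEL_NAMES.get(code, '小程序')

print(channel_name(11))  # 苹果
print(channel_name(71))  # 小程序 (71 is in the filter list but has no CASE branch)
```

Keeping one source of truth for the mapping also makes it harder for the SELECT and GROUP BY copies of the CASE to drift apart.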

@@ -0,0 +1,2 @@
use vala_order;
show tables;

@@ -1,21 +0,0 @@
#!/bin/bash
set -e

# Enter the workspace directory
cd /root/.openclaw/workspace-xiaoban

# Configure the git identity
git config user.name "xiaoban"
git config user.email "xiaoban@valavala.com"

# Add all files; entries in .gitignore (including secrets.md) are excluded automatically
git add .

# Commit the changes
COMMIT_MSG="自动备份 $(date +'%Y-%m-%d %H:%M:%S')"
git commit -m "$COMMIT_MSG" || echo "无变更需要提交"

# Push to the remote repository
git push https://git.valavala.com/ai_member_only/ai_member_xiaoban main
echo "✅ Workspace备份完成$COMMIT_MSG"

@@ -1,48 +0,0 @@
#!/bin/bash
# Daily 8:00 summary script
WORKSPACE="/root/.openclaw/workspace-xiaoban"
DATE=$(date +%Y%m%d)
YESTERDAY=$(date -d "yesterday" +%Y-%m-%d)

# 1. Generate a summary of key takeaways from the past 24 hours
echo "=== 每日总结 $DATE ===" > $WORKSPACE/tmp_daily_summary.md
echo "## 昨日关键进展" >> $WORKSPACE/tmp_daily_summary.md

# Read yesterday's memory file, if it exists
if [ -f "$WORKSPACE/memory/$YESTERDAY.md" ]; then
    grep -E "(完成|新增|修复|优化|升级|重要)" $WORKSPACE/memory/$YESTERDAY.md >> $WORKSPACE/tmp_daily_summary.md
else
    echo "无昨日记忆记录" >> $WORKSPACE/tmp_daily_summary.md
fi

# 2. Commit the update to the git repository
cd $WORKSPACE
git add .
git commit -m "每日总结更新 $DATE"
git push origin main

# 3. Update the personal Feishu document
# Call the Feishu document-update API to append the summary to the end of the doc
# (document token from MEMORY.md: Tn23wQkUQilduAkvgwscTGhgnUd)
curl -X POST "https://open.feishu.cn/open-apis/docx/v1/documents/Tn23wQkUQilduAkvgwscTGhgnUd/blocks" \
  -H "Authorization: Bearer $(cat $WORKSPACE/.feishu_token)" \
  -H "Content-Type: application/json" \
  -d "{
    \"block_type\": 3,
    \"children\": [
      {
        \"block_type\": 2,
        \"text\": {
          \"content\": \"### 每日更新 $DATE\n$(cat $WORKSPACE/tmp_daily_summary.md | sed 's/"/\\"/g')\"
        }
      }
    ]
  }"

# 4. Send a notification to Cris
/home/ubuntu/.nvm/versions/node/v24.14.0/bin/openclaw message send --channel feishu --target user:ou_d0474502fe89122e69d0e13123c7bb45 --message "✅ 每日8点总结任务已完成
$(cat $WORKSPACE/tmp_daily_summary.md)
飞书文档已更新git仓库已同步。"

# Clean up the temporary file
rm $WORKSPACE/tmp_daily_summary.md

@@ -1,55 +0,0 @@
---
name: cron-schedule
description: Scheduled task/reminder setup, supporting one-off timed reminders and recurring cron jobs. Activates when the user mentions "remind me", "schedule", "cron job", "notify me in X", or similar requests.
---
# Scheduled-Task Skill
Used to quickly create timed reminders and recurring automated tasks.
## Activation Scenarios
Automatically triggered when the user asks for any of the following:
- "Remind me about XX in XX minutes/hours/days"
- "Remind me about XX every day / every week at XX o'clock"
- "Set up a scheduled task"
- "Create a cron job"
- "Add a reminder for me"
## Usage
### 1. One-off timed reminder (auto-deleted after it runs)
**Parameter rules:**
- Delay: supports natural-language durations such as "30 minutes", "2 hours", "1 day"
- Reminder content: the exact message to deliver to the user
**Example:**
User request: "Remind me about the meeting in 30 minutes"
Command:
```bash
openclaw cron add --at +30m --name "30分钟后开会提醒" --message "⏰ 提醒:时间到了,该去开会啦!" --announce --channel feishu --account xiaoban --to ou_d0474502fe89122e69d0e13123c7bb45 --tz Asia/Shanghai --delete-after-run
```
### 2. Recurring scheduled task (repeats)
**Parameter rules:**
- Cron expression: standard cron format `minute hour day month weekday`; e.g. `0 8 * * *` means every day at 8:00
- Task name: an identifier that makes the task easy to recognize
- Action/message: the operation to run or the notification content
**Example:**
User request: "Remind me to back up data every morning at 8"
Command:
```bash
openclaw cron add --cron "0 8 * * *" --name "每日8点数据备份提醒" --message "⏰ 每日提醒:请执行当日数据备份操作~" --announce --channel feishu --account xiaoban --to ou_d0474502fe89122e69d0e13123c7bb45 --tz Asia/Shanghai
```
## Mandatory Rules (must be followed)
1. All scheduled tasks are delivered to the user's Feishu account `ou_d0474502fe89122e69d0e13123c7bb45` by default; delivery to any other address is not allowed
2. The timezone must be set to `Asia/Shanghai` to avoid time-calculation errors
3. Feishu delivery must include `--account xiaoban` to send via the xiaoban bot; using the default bot is forbidden
4. One-off reminders must include `--delete-after-run` so expired tasks are cleaned up automatically
5. After creating a task, return the task ID to the user for later management
6. Creating scheduled tasks that perform destructive operations is not allowed
## Common Task-Management Commands
- List all scheduled tasks: `openclaw cron list`
- Delete a task: `openclaw cron rm <task ID>`
- Run a task manually to verify it: `openclaw cron run <task ID>`
- Check a task's execution status: `openclaw cron status <task ID>`
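
For reference, the 5-field expression format described above (`minute hour day month weekday`) can be checked with a tiny matcher. This is a sketch supporting only `*`, plain numbers, and comma lists, not a full cron parser (no ranges or `*/n` steps):

```python
from datetime import datetime

def cron_matches(expr, dt):
    """Check whether datetime dt matches a 5-field cron expression
    (minute hour day-of-month month day-of-week, with Sunday = 0)."""
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    for spec, value in zip(fields, values):
        if spec == '*':
            continue
        # Comma lists and plain numbers share one code path
        if value not in {int(part) for part in spec.split(',')}:
            return False
    return True

# "0 8 * * *" -- every day at 08:00, as in the backup-reminder example
print(cron_matches("0 8 * * *", datetime(2026, 3, 7, 8, 0)))   # True
print(cron_matches("0 8 * * *", datetime(2026, 3, 7, 9, 0)))   # False
```

A real scheduler additionally handles ranges, steps, and the day-of-month/day-of-week OR rule, so treat this only as a sanity check on expressions before passing them to `openclaw cron add`.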

Binary file not shown.