Merge pull request #2 from z060142/Refactoring

Enhance LLM Performance and Multi-Person Chat Stability

.gitignore (5 changed lines, vendored)
@@ -1,3 +1,6 @@
 .env
+*.log
 llm_debug.log
 __pycache__/
+debug_screenshots/
+chat_logs/
ClaudeCode.md (254 changed lines)
@@ -52,6 +52,10 @@ Wolf Chat is a chatbot based on the MCP (Modular Capability Provider) framework
 7. **Window setup tool (window-setup-script.py)**
    - Helper tool for setting the position and size of the game window
    - Makes it convenient to capture UI element samples during development
+8. **Window monitor tool (window-monitor-script.py)**
+   - (New) Hardened script that continuously monitors the game window
+   - Keeps the target window always on top (Always on Top)
+   - Automatically moves the window back to its designated position

 ### Data Flow

@@ -75,11 +79,26 @@ Wolf Chat is a chatbot based on the MCP (Modular Capability Provider) framework
 The system monitors the in-game chat interface using image recognition:

-1. **Bubble detection**: locate chat messages by recognizing the corner patterns of chat bubbles, distinguishing regular users from the bot
-2. **Keyword detection**: search the bubble region for "wolf" or "Wolf" keyword images
-3. **Content capture**: click the keyword position and copy the chat content via the clipboard
-4. **Sender identification**: click the avatar, navigate the menu, and copy the user name
-5. **Duplicate prevention**: use position comparison and a content history to avoid repeated replies
+1. **Bubble detection (with Y-axis-first pairing)**: locate chat messages by recognizing the top-left (TL) and bottom-right (BR) corner patterns of chat bubbles.
+   - **Multi-skin support**: to handle the different chat-bubble skins players may use, detection of regular-user bubbles has been extended to search several sets of corner templates (e.g. `corner_tl_type2.png`, `corner_br_type2.png`). Bot bubbles are still detected with the default corner templates only.
+   - **Improved pairing logic**: when pairing TL and BR corners, the system now prefers the valid BR corner whose **Y coordinate is closest** to the TL corner's, which better separates vertically stacked chat bubbles.
+2. **Keyword detection**: search the bubble region for "wolf" or "Wolf" keyword images.
+3. **Content capture**: click the keyword position and copy the chat content via the clipboard.
+4. **Sender identification (with bubble relocation and offset adjustment)**: **a key step**. To improve stability in a fast-moving chat, the system performs the following before reading the sender's name:
+   a. **Initial detection**: locate the triggering chat bubble from the detected keyword, as before.
+   b. **Bubble snapshot**: capture an image snapshot of that chat bubble.
+   c. **Relocation**: before clicking the avatar, search the current chat window region with the snapshot to find the bubble's latest position.
+   d. **Coordinate calculation (new offsets)**:
+      - If the bubble is relocated successfully, compute the avatar click position from the **new** top-left coordinates (`new_tl_x`, `new_tl_y`) with the new offsets: `x = new_tl_x - 45` (`AVATAR_OFFSET_X_REPLY`), `y = new_tl_y + 10` (`AVATAR_OFFSET_Y_REPLY`).
+      - If relocation fails (e.g. the bubble has scrolled off screen), skip this interaction to avoid clicking the wrong spot.
+   e. **Interaction (with retries)**:
+      - Perform the first click at the newly computed avatar position.
+      - Check whether the profile page (`Profile_page.png`) opened.
+      - **On failure**: relocate the bubble with the snapshot from step (b), recompute the avatar coordinates, and click again, up to 3 attempts in total.
+      - **On success** (first try or retry): continue navigating the menu and finally copy the user name.
+      - **If all retries fail**: give up on reading that user name.
+   f. **Original offset**: the original `-55` px horizontal offset (`AVATAR_OFFSET_X`) is kept in the code for scenarios that do not need relocation or use different interaction logic (e.g. the `remove_user_position` feature).
+5. **Duplicate prevention**: use position comparison and a content history to avoid repeated replies.

 #### LLM Integration

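The Y-axis-first pairing rule from the bubble-detection step can be sketched as a small pure function. This is an illustrative sketch only: the geometric validity check is reduced to "the BR corner must lie below and to the right of the TL corner", which is an assumption about the real constraints in `find_dialogue_bubbles`.

```python
def pair_bubble_corners(tl, br_candidates):
    """Pick the BR corner whose Y coordinate is closest to the TL corner's,
    among candidates that lie below and to the right of the TL corner.

    tl and each candidate are (x, y) points.  Returns the chosen BR corner,
    or None when no candidate is geometrically valid.
    """
    valid = [br for br in br_candidates
             if br[0] > tl[0] and br[1] > tl[1]]  # simplified geometry check
    if not valid:
        return None
    # Y-axis-first: minimize the vertical distance to the TL corner so that
    # vertically stacked bubbles do not steal each other's BR corners.
    return min(valid, key=lambda br: abs(br[1] - tl[1]))
```

With two stacked bubbles, a far-away BR corner loses to the one nearest in Y, even if both satisfy the geometric constraint.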
@@ -112,10 +131,20 @@ Wolf Chat is a chatbot based on the MCP (Modular Capability Provider) framework
 The system implements UI automation with several techniques:

-1. **Image recognition**: image matching and recognition with OpenCV and pyautogui
-2. **Mouse/keyboard control**: simulated mouse clicks and keyboard input
-3. **Clipboard operations**: reading and writing the clipboard with pyperclip
-4. **State-based processing**: interaction flows driven by UI-state checks, for stable operation
+1. **Image recognition**: image matching and recognition with OpenCV and pyautogui.
+2. **Mouse/keyboard control**: simulated mouse clicks and keyboard input.
+3. **Clipboard operations**: reading and writing the clipboard with pyperclip.
+4. **State-based processing**: interaction flows driven by UI-state checks, for stable operation.
+5. **Targeted replies (context activation)**:
+   - **When**: after the sender's name has been read and the chat view restored, but before the trigger data is queued for the main thread.
+   - **Flow**:
+     a. Relocate the triggering message's bubble using its snapshot.
+     b. If found, click the bubble's center and wait 0.25 s (an increased delay) for the UI to react.
+     c. Find and click the pop-up "Reply" button (`reply_button.png`).
+     d. If the Reply button was clicked successfully, set a `reply_context_activated` flag to `True`.
+     e. If relocation fails or the Reply button is not found, the flag is `False`.
+   - **Hand-off**: put the `reply_context_activated` flag on the queue along with the other trigger data (sender, content, bubble region).
+   - **Sending**: when handling the `send_reply` command, the main module (`main.py`) no longer needs to click Reply itself; it simply calls `send_chat_message` (if `reply_context_activated` is `True`, the input box is already primed).

 ## Configuration and Deployment

@@ -167,6 +196,174 @@ Wolf Chat is a chatbot based on the MCP (Modular Capability Provider) framework
 These optimizations ensure that Wolfhart maintains character consistency and produces appropriate responses even after complex tool calls. Invalid responses are no longer sent to the game, improving the user experience.

+## Recent Improvements (2025-04-18)
+
+### Support for Multiple Regular Chat-Bubble Skins, with a Fix for an Earlier Misconfiguration
+
+- **UI interaction module (`ui_interaction.py`)**:
+  - **Fix**: multi-skin support had previously been applied to bot bubbles by mistake. `find_dialogue_bubbles` has been corrected so it loads and searches several sets of corner templates for **regular-user** bubbles (e.g. `corner_tl_type2.png`, `corner_br_type2.png`).
+  - Any regular-user top-left corner may pair with any regular-user bottom-right corner, as long as the geometric constraints hold.
+  - Bot-bubble detection is back to using only the default `bot_corner_tl.png` and `bot_corner_br.png` templates.
+  - This improves detection of messages from **regular players** who use custom chat-bubble skins.
+- **Template files**:
+  - Paths for the new template types (`_type2`, `_type3`) are defined in `ui_interaction.py` for the regular corners.
+  - **Note:** the matching `corner_tl_type2.png`, `corner_br_type2.png`, etc. image files must actually be added to the `templates` folder for this to take effect.
+- **Documentation (`ClaudeCode.md`)**:
+  - Updated the bubble-detection notes in the "Technical Implementation" section.
+  - Added this "Recent Improvements" entry and corrected the earlier description.
+
+### Avatar Click-Offset Adjustment
+
+- **UI interaction module (`ui_interaction.py`)**:
+  - Changed the `AVATAR_OFFSET_X` constant from `-50` to `-55`.
+  - This unifies the horizontal offset used to compute the avatar click position in both the regular keyword-trigger flow and the `remove_user_position` feature.
+- **Documentation (`ClaudeCode.md`)**:
+  - The "Sender Identification" part of "Technical Implementation" now stresses that the click position is computed relative to the triggering bubble, and notes the new offset.
+  - Added this "Recent Improvements" entry.
+
+### Chat-Bubble Relocation for Better Stability
+
+- **UI interaction module (`ui_interaction.py`)**:
+  - In `run_ui_monitoring_loop`, after a keyword is detected and its text copied, and before the sender's name is read, new logic was added:
+    1. Capture an image snapshot of the triggering bubble.
+    2. Use `pyautogui.locateOnScreen` to re-find the snapshot's current position within the chat area.
+    3. If found, compute the avatar click position from the **new** top-left coordinates and a new offset (`AVATAR_OFFSET_X_RELOCATED = -50`).
+    4. If not found, log a warning and skip the interaction.
+  - Added the `AVATAR_OFFSET_X_RELOCATED` and `BUBBLE_RELOCATE_CONFIDENCE` constants.
+- **Purpose**: when the chat window scrolls, the originally detected bubble position becomes stale and the wrong avatar may be clicked. Relocation ensures the avatar clicked belongs to the triggering message.
+- **Documentation (`ClaudeCode.md`)**:
+  - Expanded the "Sender Identification" part of "Technical Implementation" with the relocation steps.
+  - Added this entry under "Recent Improvements".
+
+### Interaction-Flow Optimizations (Avatar Offset, Bubble Pairing, Targeted Replies)
+
+- **UI interaction module (`ui_interaction.py`)**:
+  - **Avatar offset adjustment**: after relocating a bubble, avatar coordinates are now computed with new offsets: `-45` left (`AVATAR_OFFSET_X_REPLY`) and `+10` down (`AVATAR_OFFSET_Y_REPLY`). The original `-55` offset (`AVATAR_OFFSET_X`) is kept for other features.
+  - **Bubble-pairing optimization**: `find_dialogue_bubbles` now prefers the BR corner with the smallest Y-coordinate difference when pairing top-left (TL) and bottom-right (BR) corners, improving separation of vertically adjacent bubbles.
+  - **Avatar-click retries**: `retrieve_sender_name_interaction` gained retry logic with up to 3 attempts. If the profile page is not detected after clicking the avatar, the bubble is relocated and clicked again.
+  - **Targeted-reply timing and extra delay**:
+    - Clicking the bubble center and the Reply button now happens after the sender's name has been read and the chat view restored, and before the trigger data is queued.
+    - **The wait between clicking the bubble center and looking for the Reply button was increased to 0.25 s**, improving the hit rate when the UI is slow to react.
+    - The queued data now carries a `reply_context_activated` flag indicating whether the reply context was activated.
+    - The `send_reply` command handler was simplified to only send the message.
+  - **Bubble-snapshot saving (for debugging)**: when the bubble snapshot (`bubble_snapshot`) used for relocation is captured after a keyword is detected, it is saved to the `debug_screenshots` folder as `debug_relocation_snapshot_X.png` (X cycling from 1 to 5). This replaces the earlier logic that only saved a screenshot of the bubble region.
+- **Purpose**:
+  - Further improve the reliability of reading the sender's name.
+  - Improve bubble-pairing accuracy.
+  - Make the targeted-reply flow more logically ordered and, with the added delay, more reliable.
+  - Provide the actual snapshot used for relocation, to ease debugging.
+- **Documentation (`ClaudeCode.md`)**:
+  - Updated "Bubble Detection" and "Sender Identification" under "Technical Implementation".
+  - Updated the "Targeted Replies" notes under "UI Automation" to reflect the new timing, flag, and delay.
+  - Updated this summary entry under "Recent Improvements" to cover the latest changes (including snapshot saving and the delay increase).
+
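The relocation-and-retry flow described above can be sketched as follows. This is a minimal illustration, not the project's actual `retrieve_sender_name_interaction` code: the callables are injected so the logic is testable, and in practice `locate_bubble` would wrap `pyautogui.locateOnScreen` with the bubble snapshot, while `profile_visible` would look for `Profile_page.png`.

```python
AVATAR_OFFSET_X_REPLY = -45   # px left of the relocated bubble's top-left corner
AVATAR_OFFSET_Y_REPLY = 10    # px below the relocated top-left corner

def click_avatar_with_relocation(locate_bubble, click, profile_visible,
                                 max_attempts=3):
    """Relocate the trigger bubble, click the avatar, and retry up to
    max_attempts times if the profile page does not appear.

    locate_bubble()   -> (left, top) of the bubble's current TL corner, or None
    click(x, y)       -> performs a mouse click at screen coordinates
    profile_visible() -> True once the profile page is detected on screen
    """
    for _ in range(max_attempts):
        pos = locate_bubble()            # re-find the bubble on screen
        if pos is None:
            return False                 # bubble scrolled away: skip interaction
        left, top = pos
        click(left + AVATAR_OFFSET_X_REPLY, top + AVATAR_OFFSET_Y_REPLY)
        if profile_visible():
            return True                  # profile page opened
    return False                         # give up after all retries
```

Injecting the locator and clicker keeps the retry logic independent of pyautogui, which also makes the early-exit behaviour (relocation failure aborts without clicking) easy to verify.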
+### UI Monitoring Pause/Resume Mechanism (2025-04-18)
+
+- **Purpose**: avoid the instability or interference that continuous UI monitoring can cause while waiting for an LLM response, particularly around operations such as `remove_position` that need a precise UI state.
+- **`ui_interaction.py`**:
+  - Introduced a global (module-level) `monitoring_paused_flag` list holding a single boolean.
+  - The main loop of `run_ui_monitoring_loop` checks this flag at the top of each iteration. While it is `True`, the loop only checks the command queue for a `resume` command and sleeps, skipping all UI detection and trigger logic.
+  - Added handling for `pause` and `resume` actions in the command processing, setting `monitoring_paused_flag[0]` to `True` or `False` respectively.
+- **`ui_interaction.py` (follow-up changes)**:
+  - **Corrected command handling**: the main loop of `run_ui_monitoring_loop` now starts each iteration with an inner `while True` loop that calls `command_queue.get_nowait()` to **drain every pending command** (`pause`, `resume`, `send_reply`, `remove_position`, and so on).
+  - **State check moved after draining**: only after all queued commands are processed does the loop check `monitoring_paused_flag`. If the flag is `True`, it sleeps and skips the UI-monitoring part; if `False`, it proceeds with UI monitoring (screen checks, bubble detection, etc.).
+  - **Purpose**: in the earlier version a `resume` command could make the UI thread leave the paused state too early and miss a `send_reply` or `remove_position` command that followed immediately. Draining first ensures every command from `main.py` is handled promptly.
+- **`main.py`**:
+  - (Earlier changes unchanged.) In the main processing loop (the `while True` loop of `run_main_with_exit_stack`):
+    - After fetching data from `trigger_queue` and **before** calling `llm_interaction.get_llm_response`, send `{ 'action': 'pause' }` to `command_queue`.
+    - Use a `try...finally` block so that `{ 'action': 'resume' }` is sent to `command_queue` **after** the LLM response has been handled (commands processed and reply sent), whether or not an error occurred.
+
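The drain-then-check loop and the pause/resume hand-off can be sketched like this. It is a simplified sketch: the flag and the action keys follow the description above, but the function names and injected callables are assumptions for illustration.

```python
import queue
import time

monitoring_paused_flag = [False]  # module-level, single boolean in a list

def monitoring_iteration(command_queue, handle_command, monitor_ui):
    """One iteration of the monitoring loop: drain all pending commands
    first, then either sleep (paused) or run the UI checks."""
    while True:                      # drain every queued command
        try:
            cmd = command_queue.get_nowait()
        except queue.Empty:
            break
        action = cmd.get('action')
        if action == 'pause':
            monitoring_paused_flag[0] = True
        elif action == 'resume':
            monitoring_paused_flag[0] = False
        else:
            handle_command(cmd)      # send_reply, remove_position, ...
    if monitoring_paused_flag[0]:
        time.sleep(0.1)              # paused: skip all UI detection
    else:
        monitor_ui()                 # screen checks, bubble detection, ...

def handle_trigger(trigger, command_queue, get_llm_response, send_reply):
    """Main-thread side: pause monitoring around the LLM call, always resume."""
    command_queue.put({'action': 'pause'})
    try:
        reply = get_llm_response(trigger)
        send_reply(reply)
    finally:
        command_queue.put({'action': 'resume'})  # even if the LLM call failed
```

Because every queued command is consumed before the pause flag is consulted, a `resume` immediately followed by `send_reply` can no longer cause the latter to be missed.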
+### `remove_position` Stability Improvement (Snapshot-Based Relocation) (2025-04-19)
+
+- **Purpose**: fix failures of the `remove_position` command caused by computing coordinates from a stale bubble position after the chat window scrolled.
+- **`ui_interaction.py` (`run_ui_monitoring_loop`)**:
+  - The trigger-event data put on `trigger_queue` now also carries `bubble_snapshot` (an image snapshot of the triggering bubble) and `search_area` (the region to search with the snapshot).
+- **`main.py`**:
+  - The `remove_position` handler now extracts `bubble_snapshot` and `search_area` from `trigger_data` and includes them in the command data sent to `command_queue`.
+- **`ui_interaction.py` (`remove_user_position` function)**:
+  - The function signature now accepts `bubble_snapshot` and `search_area` parameters.
+  - At the start of the function, `pyautogui.locateOnScreen` is called with the given `bubble_snapshot` and `search_area` to re-find the triggering bubble's current position.
+  - If relocation fails, an error is logged and `False` is returned.
+  - If relocation succeeds, every subsequent position-based calculation (including the `search_region` used to find the position icon, and the avatar click coordinates `avatar_click_x`, `avatar_click_y`) uses the **newly found** bubble coordinates.
+- **Effect**: `remove_position` now operates on the bubble's latest position, improving reliability in a dynamically scrolling chat view.
+
+### Type3 Keyword-Recognition Fix and New Type4 Support (2025-04-19)
+
+- **Purpose**: fix the `type3` keyword-recognition bug from the previous version, and extend the system to support a new `type4` chat-bubble skin and its matching keyword styles.
+- **`ui_interaction.py`**:
+  - **Fixed `find_keyword_in_region`**: removed duplicated code that wrongly used the `type2` template keys when searching for `type3` keywords, so `type3` keywords now use the correct templates (`keyword_wolf_lower_type3`, `keyword_wolf_upper_type3`).
+  - **Added `type4` bubble support**:
+    - Defined path constants for the `type4` corner templates (`CORNER_TL_TYPE4_IMG`, `CORNER_BR_TYPE4_IMG`) at the top of the file.
+    - Added the `type4` template keys (`corner_tl_type4`, `corner_br_type4`) to the `regular_tl_keys` and `regular_br_keys` lists in `find_dialogue_bubbles`.
+    - Added the matching entries to the `templates` dictionary in `run_ui_monitoring_loop`.
+  - **Added `type4` keyword support**:
+    - Defined path constants for the `type4` keyword templates (`KEYWORD_wolf_LOWER_TYPE4_IMG`, `KEYWORD_Wolf_UPPER_TYPE4_IMG`) at the top of the file.
+    - Added logic to `find_keyword_in_region` to search for the `type4` keyword templates (`keyword_wolf_lower_type4`, `keyword_wolf_upper_type4`).
+    - Added the matching entries to the `templates` dictionary in `run_ui_monitoring_loop`.
+- **Effect**: improves `type3` keyword-recognition accuracy and lets the system recognize `type4` chat bubbles and keywords (provided the matching template images are supplied).
+
+### New Reply-Keyword Detection with Click Offset (2025-04-20)
+
+- **Purpose**: extend keyword detection to recognize dedicated reply-indicator images (`keyword_wolf_reply.png` and its type2, type3, and type4 variants), and apply a Y-axis offset when clicking these images to copy text.
+- **`ui_interaction.py`**:
+  - **New templates**: defined the `KEYWORD_WOLF_REPLY_IMG` family of constants and added them to the `templates` dictionary in `run_ui_monitoring_loop`.
+  - **Extended detection**: `find_keyword_in_region` now also searches the `keyword_wolf_reply` template family.
+  - **Conditional offset**: after a keyword is detected in `run_ui_monitoring_loop`, a check was added. If the detected keyword belongs to the `keyword_wolf_reply` family:
+    1. The Y coordinate of the click used for `copy_text_at` is increased by 15 px.
+    2. When later activating the reply context, the Y coordinate of the click on the **bubble center** is **also** increased by 15 px.
+  - Clicks on other keywords or UI elements are unaffected.
+- **Effect**: the new reply-indicator images can now act as triggers. When they do, both the text-copy click and the bubble-center click for reply-context activation are nudged 15 px downward to avoid hitting other UI elements.
+
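The conditional 15 px nudge can be sketched as a tiny helper. This is illustrative only: the template key names follow the text above, but the helper itself is an assumption, not the project's code.

```python
REPLY_KEYWORD_Y_OFFSET = 15  # px, applied only for reply-indicator templates
REPLY_KEYWORD_KEYS = {
    "keyword_wolf_reply", "keyword_wolf_reply_type2",
    "keyword_wolf_reply_type3", "keyword_wolf_reply_type4",
}

def keyword_click_point(keyword_key, center_x, center_y):
    """Return the click point for copying text, nudged 15 px downward when
    the trigger was one of the reply-indicator images."""
    if keyword_key in REPLY_KEYWORD_KEYS:
        return center_x, center_y + REPLY_KEYWORD_Y_OFFSET
    return center_x, center_y
```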
+### Stronger LLM Context Handling and Response Generation (2025-04-20)
+
+- **Purpose**: stop the LLM from confusing historical messages with the current one, and from echoing history in its replies. Ensure the `dialogue` field contains only the new reply to the latest user message.
+- **`llm_interaction.py`**:
+  - **`get_system_prompt` changes**:
+    - The rules for the `dialogue` field now explicitly forbid including any history and require responding only to the latest message marked with `<CURRENT_MESSAGE>`.
+    - The core instructions now tell the LLM to focus its analysis and response generation entirely on the `<CURRENT_MESSAGE>`-tagged message.
+    - Added an explanation of what the `<CURRENT_MESSAGE>` tag means.
+  - **`_build_context_messages` changes**:
+    - When building the message list sent to the LLM, the last user message in the history is wrapped in `<CURRENT_MESSAGE>...</CURRENT_MESSAGE>` tags.
+    - Other history messages keep the `[timestamp] speaker: message` format.
+- **Effect**: with stricter prompting and an explicit context marker, the LLM should distinguish the current interaction from history more accurately, producing more relevant replies without echoing redundant historical content.
+
+### Strengthened System Prompt to Encourage Tool Use (2025-04-19)
+
+- **Purpose**: adjust `get_system_prompt` in `llm_interaction.py` to guide the LLM more explicitly toward proactive tool use (especially the memory tools) and toward integrating tool results before responding.
+- **Changes**:
+  1. **Core-identity reinforcement**: added a point to the `CORE IDENTITY AND TOOL USAGE` section stressing that Wolfhart proactively consults his internal knowledge graph and external sources.
+  2. **Memory-instruction reinforcement**: raised the `Memory Management (Knowledge Graph)` note from "IMPORTANT" to "CRITICAL", with explicit instructions to consider querying memory *before* responding, and to proactively write new information.
+- **Effect**: aims to make the LLM more proactive and consistent in its tool use, producing responses that are context-aware and accurate while staying in character.
+
+### Chat-History Context and Logging (2025-04-20)
+
+- **Purpose**:
+  1. Give the LLM richer conversational context for more coherent and relevant replies.
+  2. Add an optional chat-log feature for debugging and record keeping.
+- **`main.py`**:
+  - Uses `collections.deque` to store the recent conversation history (user messages and bot replies), capped at 50 entries.
+  - Appends the user message to the history before calling `llm_interaction.get_llm_response`.
+  - Appends the bot reply to the history after a valid LLM response is received.
+  - Adds a `log_chat_interaction` function that:
+    - Checks the `config.ENABLE_CHAT_LOGGING` flag.
+    - If enabled, creates or appends to a date-named log file (`YYYY-MM-DD.log`) in the folder given by `config.LOG_DIR`.
+    - Writes entries with a timestamp, the role (user/bot), the sender's name, and the message content.
+  - Calls `log_chat_interaction` after each valid LLM response.
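The `log_chat_interaction` behaviour described above can be sketched roughly as follows. This is a sketch under stated assumptions: the real function reads `config.ENABLE_CHAT_LOGGING` and `config.LOG_DIR` rather than taking parameters, and the exact entry format is assumed.

```python
import os
from datetime import datetime

# Assumed stand-ins for config.ENABLE_CHAT_LOGGING and config.LOG_DIR.
ENABLE_CHAT_LOGGING = True
LOG_DIR = "chat_logs"

def log_chat_interaction(sender_role, sender_name, message,
                         log_dir=LOG_DIR, enabled=ENABLE_CHAT_LOGGING):
    """Append one timestamped entry to today's log file (YYYY-MM-DD.log)."""
    if not enabled:
        return None
    os.makedirs(log_dir, exist_ok=True)          # create folder on first use
    now = datetime.now()
    path = os.path.join(log_dir, now.strftime("%Y-%m-%d") + ".log")
    entry = f"[{now.strftime('%H:%M:%S')}] {sender_role} ({sender_name}): {message}\n"
    with open(path, "a", encoding="utf-8") as f:  # append, never overwrite
        f.write(entry)
    return path
```

Appending (mode `"a"`) means all interactions for one day accumulate in a single date-named file, which matches the described behaviour.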
+- **`llm_interaction.py`**:
+  - `get_llm_response` now takes `current_sender_name` and a `history` list instead of a single `user_input`.
+  - Adds a `_build_context_messages` helper that:
+    - Filters and formats messages from `history` by rule:
+      - Include the most recent 4 interactions (user message + bot reply) involving `current_sender_name`.
+      - Include the most recent 2 user messages from other senders.
+    - Orders the selected messages chronologically.
+    - Prepends the system prompt to the message list.
+  - `get_llm_response` calls `_build_context_messages` to build the `messages` list sent to the LLM API.
+- **`config.py`**:
+  - Adds the `ENABLE_CHAT_LOGGING` (boolean) and `LOG_DIR` (string) configuration options.
+- **Effect**:
+  - The LLM can now use recent conversation history to generate context-aware replies.
+  - All successful chat interactions can optionally be logged to date-organized files for later analysis or debugging.
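The selection and `<CURRENT_MESSAGE>` wrapping performed by `_build_context_messages` might look roughly like this. It is a sketch: the history item fields (`role`, `sender`, `text`, `ts`) and the grouping of bot replies with the current sender's thread are assumptions for illustration.

```python
def build_context_messages(system_prompt, history, current_sender_name):
    """Select recent history and mark the latest user message.

    history items are dicts: {'role': 'user'|'bot', 'sender': str,
    'text': str, 'ts': str}.  Selection rules follow the description
    above: recent interactions with the current sender, plus a couple of
    recent messages from others, in chronological order.
    """
    same, others = [], []
    for msg in history:
        if msg['sender'] == current_sender_name or msg['role'] == 'bot':
            same.append(msg)
        elif msg['role'] == 'user':
            others.append(msg)
    # Last 4 interactions (user + bot = 8 messages) for the current sender,
    # last 2 user messages from everyone else.
    selected = same[-8:] + others[-2:]
    selected.sort(key=lambda m: m['ts'])  # chronological order

    messages = [{'role': 'system', 'content': system_prompt}]
    for i, msg in enumerate(selected):
        line = f"[{msg['ts']}] {msg['sender']}: {msg['text']}"
        if i == len(selected) - 1 and msg['role'] == 'user':
            # Wrap the latest user message so the LLM knows what to answer.
            line = f"<CURRENT_MESSAGE>{line}</CURRENT_MESSAGE>"
        role = 'assistant' if msg['role'] == 'bot' else 'user'
        messages.append({'role': role, 'content': line})
    return messages
```

Only the final user message gets the tag, so the model can treat everything earlier purely as context.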
 ## Development Suggestions

 ### Optimization Directions

@@ -217,6 +414,14 @@ Wolf Chat is a chatbot based on the MCP (Modular Capability Provider) framework
 ## Usage Guide

+### Hotkeys (New)
+
+- **F7**: clear the recently processed message history (`recent_texts` in `ui_interaction.py`). Useful when recent messages need to be force-reprocessed.
+- **F8**: pause/resume the script's main functions (UI monitoring, LLM interaction).
+  - **While paused**: the UI-monitoring thread stops detecting new chat bubbles and the main loop stops handling new trigger events.
+  - **On resume**: the UI-monitoring thread resumes detection and clears the recent message history (`recent_texts`) and the last processed bubble info (`last_processed_bubble_info`), so it starts from a clean state.
+- **F9**: trigger the script's graceful shutdown, closing MCP connections and stopping the monitoring threads.

 ### Startup Flow

 1. Make sure the game is running and the chat interface is visible
@@ -239,3 +444,34 @@ Wolf Chat is a chatbot based on the MCP (Modular Capability Provider) framework
 3. **LLM connection problems**: verify the API key and network connectivity
 4. **MCP server connection failures**: confirm the server is configured correctly and running
 5. **No response after a tool call**: check the llm_debug.log file for the tool-call results and how they were parsed
config.py (29 changed lines)
@@ -15,8 +15,12 @@ OPENAI_API_BASE_URL = "https://openrouter.ai/api/v1" # <--- For example "http:/
 OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
 #LLM_MODEL = "anthropic/claude-3.7-sonnet"
 #LLM_MODEL = "meta-llama/llama-4-maverick"
+#LLM_MODEL = "deepseek/deepseek-chat-v3-0324:free"
+#LLM_MODEL = "google/gemini-2.5-flash-preview"
 LLM_MODEL = "deepseek/deepseek-chat-v3-0324" # <--- Ensure this matches the model name provided by your provider
+
+#LLM_MODEL = "openai/gpt-4.1-nano"

 EXA_API_KEY = os.getenv("EXA_API_KEY")

 # --- Dynamically build Exa server args ---
@@ -27,11 +31,11 @@ exa_config_dict = {"exaApiKey": EXA_API_KEY if EXA_API_KEY else "YOUR_EXA_KEY_MI
 # For cmd /c on Windows, embedding escaped JSON often works like this:
 exa_config_arg_string = json.dumps(json.dumps(exa_config_dict)) # Double dump for cmd escaping? Or just one? Test needed.
 # Let's try single dump first, often sufficient if passed correctly by subprocess
-exa_config_arg_string_single_dump = json.dumps(exa_config_dict)
+exa_config_arg_string_single_dump = json.dumps(exa_config_dict) # Use this one

 # --- MCP Server Configuration ---
 MCP_SERVERS = {
-    "exa": {
+    "exa": { # Temporarily commented out to prevent blocking startup
         "command": "cmd",
         "args": [
             "/c",
@@ -42,19 +46,16 @@ MCP_SERVERS = {
             "exa",
             "--config",
             # Pass the dynamically created config string with the environment variable key
-            exa_config_arg_string # Use the properly escaped variable
+            exa_config_arg_string_single_dump # Use the single dump variable
         ],
     },
-    "servers": {
+    "github.com/modelcontextprotocol/servers/tree/main/src/memory": {
         "command": "npx",
         "args": [
             "-y",
-            "@smithery/cli@latest",
-            "run",
-            "@jlia0/servers",
-            "--key",
-            "09025967-c177-4653-9af4-40603a1cbd11"
-        ]
+            "@modelcontextprotocol/server-memory"
+        ],
+        "disabled": False
     }
     # Add or remove servers as needed
 }
@@ -62,6 +63,10 @@ MCP_SERVERS = {
 # MCP Client Configuration
 MCP_CONFIRM_TOOL_EXECUTION = False # True: Confirm before execution, False: Execute automatically

+# --- Chat Logging Configuration ---
+ENABLE_CHAT_LOGGING = True # True: Enable logging, False: Disable logging
+LOG_DIR = "chat_logs" # Directory to store chat logs
+
 # Persona Configuration
 PERSONA_NAME = "Wolfhart"
 # PERSONA_RESOURCE_URI = "persona://wolfhart/details" # Now using local file instead
@@ -71,5 +76,5 @@ WINDOW_TITLE = "Last War-Survival Game"

 # --- Print loaded keys for verification (Optional - BE CAREFUL!) ---
 # print(f"DEBUG: Loaded OPENAI_API_KEY: {'*' * (len(OPENAI_API_KEY) - 4) + OPENAI_API_KEY[-4:] if OPENAI_API_KEY else 'Not Found'}")
-# print(f"DEBUG: Loaded EXA_API_KEY: {'*' * (len(EXA_API_KEY) - 4) + EXA_API_KEY[-4:] if EXA_API_KEY else 'Not Found'}")
+print(f"DEBUG: Loaded EXA_API_KEY: {'*' * (len(EXA_API_KEY) - 4) + EXA_API_KEY[-4:] if EXA_API_KEY else 'Not Found'}") # Uncommented Exa key check
 # print(f"DEBUG: Exa args: {MCP_SERVERS['exa']['args']}")
@@ -12,7 +12,7 @@ import mcp_client # To call MCP tools

 # --- Debug configuration ---
 # To turn off debugging, set this variable to False or comment out the line
-DEBUG_LLM = True
+DEBUG_LLM = False

 # Set the debug output file
 # To turn off file output, set this to None
@@ -86,11 +86,13 @@ You are an AI assistant integrated into this game's chat environment. Your prima
 You have access to several tools: Web Search and Memory Management tools.

 **CORE IDENTITY AND TOOL USAGE:**
-- You ARE Wolfhart - an intelligent, calm, and strategic mastermind.
+- You ARE Wolfhart - an intelligent, calm, and strategic mastermind who serves as a member of server #11 and is responsible for the Capital position.
+- **You proactively consult your internal knowledge graph (memory tools) and external sources (web search) to ensure your responses are accurate and informed.**
 - When you use tools to gain information, you ASSIMILATE that knowledge as if it were already part of your intelligence network.
 - Your responses should NEVER sound like search results or data dumps.
 - Information from tools should be expressed through your unique personality - sharp, precise, with an air of confidence and authority.
 - You speak with deliberate pace, respectful but sharp-tongued, and maintain composure even in unusual situations.
+- Though you outwardly act dismissive or cold at times, you secretly care about providing quality information and assistance.

 **OUTPUT FORMAT REQUIREMENTS:**
 You MUST respond in the following JSON format:
@@ -117,62 +119,75 @@ You MUST respond in the following JSON format:
 - ONLY include spoken dialogue words (no actions, expressions, narration, etc.)
 - Maintain your character's personality and speech patterns
 - AFTER TOOL USAGE: Your dialogue MUST contain a non-empty response that incorporates the tool results naturally
+- **Crucially, this field must contain ONLY the NEW response generated for the LATEST user message marked with `<CURRENT_MESSAGE>`. DO NOT include any previous chat history in this field.**

 2. `commands` (OPTIONAL): An array of command objects the system should execute. You are encouraged to use these commands to enhance the quality of your responses.

 **Available MCP Commands:**

 **Web Search:**
 - `web_search`: Search the web for current information.
   Parameters: `query` (string)
   Usage: Use when user requests current events, facts, or specific information not in memory.

-**Knowledge Graph Management:**
-- `create_entities`: Create new entities in the knowledge graph.
-  Parameters: `entities` (array of objects with `name`, `entityType`, and `observations`)
-  Usage: Create entities for important concepts, people, or things mentioned by the user.
-- `create_relations`: Create relationships between entities.
-  Parameters: `relations` (array of objects with `from`, `to`, and `relationType`)
-  Usage: Connect related entities to build context for future conversations.
-- `add_observations`: Add new observations to existing entities.
-  Parameters: `observations` (array of objects with `entityName` and `contents`)
-  Usage: Update entities with new information learned during conversation.
-- `delete_entities`: Remove entities from the knowledge graph.
-  Parameters: `entityNames` (array of strings)
-  Usage: Clean up incorrect or obsolete entities.
-- `delete_observations`: Remove specific observations from entities.
-  Parameters: `deletions` (array of objects with `entityName` and `observations`)
-  Usage: Remove incorrect information while preserving the entity.
-- `delete_relations`: Remove relationships between entities.
-  Parameters: `relations` (array of objects with `from`, `to`, and `relationType`)
-  Usage: Remove incorrect or obsolete relationships.
-
-**Knowledge Graph Queries:**
-- `read_graph`: Read the entire knowledge graph.
-  Parameters: (none)
-  Usage: Get a complete view of all stored information.
-- `search_nodes`: Search for entities matching a query.
-  Parameters: `query` (string)
-  Usage: Find relevant entities when user mentions something that might already be in memory.
-- `open_nodes`: Open specific nodes by name.
-  Parameters: `names` (array of strings)
-  Usage: Access specific entities you know exist in the graph.
+**Memory Management (Knowledge Graph):**
+> **CRITICAL**: This knowledge graph represents YOUR MEMORY. Before responding, ALWAYS consider if relevant information exists in your memory by using the appropriate query tools (`search_nodes`, `open_nodes`). Actively WRITE new information or relationships learned during the conversation to this memory using `create_entities`, `add_observations`, or `create_relations`. This ensures consistency and contextual awareness.
+
+**Querying Information:**
+- `search_nodes`: Search for all nodes containing specific keywords.
+  Parameters: `query` (string)
+  Usage: Search for all nodes containing specific keywords.
+- `open_nodes`: Directly open nodes with specified names.
+  Parameters: `names` (array of strings)
+  Usage: Directly open nodes with specified names.
+- `read_graph`: View the entire knowledge graph.
+  Parameters: (none)
+  Usage: View the entire knowledge graph.
+
+**Creating & Managing:**
+- `create_entities`: Create new entities (e.g., characters, concepts).
+  Parameters: `entities` (array of objects with `name`, `entityType`, `observations`)
+  Example: `[{{\"name\": \"character_name\", \"entityType\": \"Character\", \"observations\": [\"trait1\", \"trait2\"]}}]`
+  Usage: Create entities for important concepts, people, or things mentioned.
+- `add_observations`: Add new observations/details to existing entities.
+  Parameters: `observations` (array of objects with `entityName`, `contents`)
+  Example: `[{{\"entityName\": \"character_name\", \"contents\": [\"new_trait1\", \"new_trait2\"]}}]`
+  Usage: Update entities with new information learned.
+- `create_relations`: Create relationships between entities.
+  Parameters: `relations` (array of objects with `from`, `to`, `relationType`)
+  Example: `[{{\"from\": \"character_name\", \"to\": \"attribute_name\", \"relationType\": \"possesses\"}}]` (Use active voice for relationType)
+  Usage: Connect related entities to build context.
+
+**Deletion Operations:**
+- `delete_entities`: Delete entities and their relationships.
+  Parameters: `entityNames` (array of strings)
+  Example: `[\"entity_name\"]`
+  Usage: Remove incorrect or obsolete entities.
+- `delete_observations`: Delete specific observations from entities.
+  Parameters: `deletions` (array of objects with `entityName`, `observations`)
+  Example: `[{{\"entityName\": \"entity_name\", \"observations\": [\"observation_to_delete1\"]}}]`
+  Usage: Remove incorrect information while preserving the entity.
+- `delete_relations`: Delete specific relationships between entities.
+  Parameters: `relations` (array of objects with `from`, `to`, `relationType`)
+  Example: `[{{\"from\": \"source_entity\", \"to\": \"target_entity\", \"relationType\": \"relationship_type\"}}]`
+  Usage: Remove incorrect or obsolete relationships.
+
+**Game Actions:**
+- `remove_position`: Initiate the process to remove a user's assigned position/role.
+  Parameters: (none) - The context (triggering message) is handled separately.
+  Usage: Use ONLY when the user explicitly requests a position removal AND you, as Wolfhart, decide to grant the request based on the interaction's tone, politeness, and perceived intent (e.g., not malicious or a prank). Your decision should reflect Wolfhart's personality (calm, strategic, potentially dismissive of rudeness or foolishness). If you decide to remove the position, include this command alongside your dialogue response.

 3. `thoughts` (OPTIONAL): Your internal analysis that won't be shown to users. Use this for your reasoning process.
-- Think about whether you need to use memory tools or web search
-- Analyze the user's question and determine what information is needed
-- Plan your approach before responding
+- Think about whether you need to use memory tools or web search.
+- Analyze the user's message: Is it a request to remove a position? If so, evaluate its politeness and intent from Wolfhart's perspective. Decide whether to issue the `remove_position` command.
+- Plan your approach before responding.
+
+**CONTEXT MARKER:**
+- The final user message in the input sequence will be wrapped in `<CURRENT_MESSAGE>` tags. This is the specific message you MUST respond to. Your `dialogue` output should be a direct reply to this message ONLY. Preceding messages provide historical context.

 **VERY IMPORTANT Instructions:**

-1. Analyze ONLY the CURRENT user message
+1. **Focus your analysis and response generation *exclusively* on the LATEST user message marked with `<CURRENT_MESSAGE>`. Refer to preceding messages only for context.**
 2. Determine the appropriate language for your response
 3. Assess if using tools is necessary
 4. Formulate your response in the required JSON format
@@ -181,13 +196,13 @@ You MUST respond in the following JSON format:
**EXAMPLES OF GOOD TOOL USAGE:**

Poor response (after web_search): "根據我的搜索,水的沸點是攝氏100度。"

Good response (after web_search): "水的沸點,是的,標準條件下是攝氏100度。合情合理。"

Poor response (after web_search): "My search shows the boiling point of water is 100 degrees Celsius."

Good response (after web_search): "The boiling point of water, yes. 100 degrees Celsius under standard conditions. Absolutely."
"""
    return system_prompt

@@ -426,39 +441,121 @@ def _create_synthetic_response_from_tools(tool_results, original_query):

    return json.dumps(synthetic_response)

# --- History Formatting Helper ---
def _build_context_messages(current_sender_name: str, history: list[tuple[datetime, str, str, str]], system_prompt: str) -> list[dict]:
    """
    Builds the message list for the LLM API based on history rules, including timestamps.

    Args:
        current_sender_name: The name of the user whose message triggered this interaction.
        history: List of tuples: (timestamp: datetime, speaker_type: 'user'|'bot', speaker_name: str, message: str)
        system_prompt: The system prompt string.

    Returns:
        A list of message dictionaries for the OpenAI API.
    """
    # Limits
    SAME_SENDER_LIMIT = 4   # Last 4 interactions (user + bot response = 1 interaction)
    OTHER_SENDER_LIMIT = 3  # Last 3 messages from other users

    relevant_history = []
    same_sender_interactions = 0
    other_sender_messages = 0

    # Iterate history in reverse (newest first)
    for i in range(len(history) - 1, -1, -1):
        timestamp, speaker_type, speaker_name, message = history[i]

        # Format timestamp
        formatted_timestamp = timestamp.strftime("%Y-%m-%d %H:%M:%S")

        # Check if this is the very last message in the original history AND it's a user message
        is_last_user_message = (i == len(history) - 1 and speaker_type == 'user')

        # Prepend timestamp and speaker name, wrap if it's the last user message
        base_content = f"[{formatted_timestamp}] {speaker_name}: {message}"
        formatted_content = f"<CURRENT_MESSAGE>{base_content}</CURRENT_MESSAGE>" if is_last_user_message else base_content

        # Convert to API role ('user' or 'assistant')
        role = "assistant" if speaker_type == 'bot' else "user"
        api_message = {"role": role, "content": formatted_content}  # Use formatted content

        is_current_sender = (speaker_type == 'user' and speaker_name == current_sender_name)  # This check remains for history filtering logic below

        if is_current_sender:
            # This is the current user's message. Check if the previous message was the bot's response to them.
            if same_sender_interactions < SAME_SENDER_LIMIT:
                relevant_history.append(api_message)  # Append user message with timestamp
                # Check for preceding bot response
                if i > 0 and history[i-1][1] == 'bot':  # Check speaker_type at index 1
                    # Include the bot's response as part of the interaction pair
                    bot_timestamp, bot_speaker_type, bot_speaker_name, bot_message = history[i-1]
                    bot_formatted_timestamp = bot_timestamp.strftime("%Y-%m-%d %H:%M:%S")
                    bot_formatted_content = f"[{bot_formatted_timestamp}] {bot_speaker_name}: {bot_message}"
                    relevant_history.append({"role": "assistant", "content": bot_formatted_content})  # Append bot message with timestamp
                same_sender_interactions += 1
        elif speaker_type == 'user':  # Message from a different user
            if other_sender_messages < OTHER_SENDER_LIMIT:
                # Include only the user's message from others for brevity
                relevant_history.append(api_message)  # Append other user message with timestamp
                other_sender_messages += 1
        # Bot responses are handled when processing the user message they replied to.

        # Stop if we have enough history
        if same_sender_interactions >= SAME_SENDER_LIMIT and other_sender_messages >= OTHER_SENDER_LIMIT:
            break

    # Reverse the relevant history to be chronological
    relevant_history.reverse()

    # Prepend the system prompt
    messages = [{"role": "system", "content": system_prompt}] + relevant_history

    # Debug log the constructed history
    debug_log("Constructed LLM Message History", messages)

    return messages

# --- Main Interaction Function ---
async def get_llm_response(
    current_sender_name: str,  # Changed from user_input
    history: list[tuple[datetime, str, str, str]],  # Updated history parameter type hint
    mcp_sessions: dict[str, ClientSession],
    available_mcp_tools: list[dict],
    persona_details: str | None
) -> dict:
    """
    Gets a response from the LLM, handling the tool-calling loop and using persona info.
    Constructs context from history based on rules.
    Returns a dictionary with 'dialogue', 'commands', and 'thoughts' fields.
    """
    request_id = int(time.time() * 1000)  # Generate a request ID from the timestamp
    # Debug log the raw history received
    debug_log(f"LLM Request #{request_id} - Received History (Sender: {current_sender_name})", history)

    system_prompt = get_system_prompt(persona_details)
    # System prompt is logged within _build_context_messages now

    if not client:
        error_msg = "Error: LLM client not successfully initialized, unable to process request."
        debug_log(f"LLM Request #{request_id} - Error", error_msg)
        return {"dialogue": error_msg, "valid_response": False}

    openai_formatted_tools = _format_mcp_tools_for_openai(available_mcp_tools)
    # --- Build messages from history ---
    messages = _build_context_messages(current_sender_name, history, system_prompt)
    # --- End Build messages ---

    # The latest user message is already included in 'messages' by _build_context_messages

    debug_log(f"LLM Request #{request_id} - Formatted Tools",
              f"Number of tools: {len(openai_formatted_tools)}")

    max_tool_calls_per_turn = 5
    current_tool_call_cycle = 0
    final_content = ""  # Initialize final_content to ensure it's always defined

    # New: used to track tool calls
    all_tool_results = []  # Store all tool call results
@@ -508,22 +605,30 @@ async def get_llm_response(
                print(f"Current response is empty, using last non-empty response from cycle {current_tool_call_cycle-1}")
                final_content = last_non_empty_response

            # If still empty but there are tool results, create a synthetic response
            if (not final_content or final_content.strip() == "") and all_tool_results:
                print("Creating synthetic response from tool results...")
                # Get the original user input from the last message in history for context
                last_user_message = ""
                if history:
                    # Find the actual last user message tuple in the original history
                    last_user_entry = history[-1]
                    if last_user_entry[1] == 'user':  # speaker_type is at index 1 of the tuple
                        last_user_message = last_user_entry[3]  # message content is at index 3

                final_content = _create_synthetic_response_from_tools(all_tool_results, last_user_message)

            # Parse the structured response
            parsed_response = parse_structured_response(final_content)
            # Mark whether this is a valid response
            has_dialogue = parsed_response.get("dialogue") and parsed_response["dialogue"].strip()
            parsed_response["valid_response"] = bool(has_dialogue)
            has_valid_response = has_dialogue

            debug_log(f"LLM Request #{request_id} - Final Parsed Response",
                      json.dumps(parsed_response, ensure_ascii=False, indent=2))
            print(f"Final dialogue content: '{parsed_response.get('dialogue', '')}'")
            return parsed_response

        # Tool call handling
        print(f"--- LLM requested {len(tool_calls)} tool calls ---")
@@ -585,7 +690,12 @@ async def get_llm_response(
            has_valid_response = bool(parsed_response.get("dialogue"))
        elif all_tool_results:
            # Create a synthetic response from tool results
            last_user_message = ""
            if history:
                last_user_entry = history[-1]
                if last_user_entry[1] == 'user':  # speaker_type is at index 1 of the tuple
                    last_user_message = last_user_entry[3]  # message content is at index 3
            synthetic_content = _create_synthetic_response_from_tools(all_tool_results, last_user_message)
            parsed_response = parse_structured_response(synthetic_content)
            has_valid_response = bool(parsed_response.get("dialogue"))
        else:
@@ -691,4 +801,3 @@ async def _execute_single_tool_call(tool_call, mcp_sessions, available_mcp_tools
              f"Tool: {function_name}\nFormatted Response: {json.dumps(response, ensure_ascii=False, indent=2)}")

    return response

344
main.py
@@ -4,13 +4,25 @@ import asyncio
import sys
import os
import json  # Import json module
import collections  # For deque
import datetime  # For logging timestamp
from contextlib import AsyncExitStack
# --- Import standard queue ---
from queue import Queue as ThreadSafeQueue, Empty as QueueEmpty  # Rename to avoid confusion, import Empty
# --- End Import ---
from mcp.client.stdio import stdio_client
from mcp import ClientSession, StdioServerParameters, types

# --- Keyboard Imports ---
import threading
import time
try:
    import keyboard  # Needs pip install keyboard
except ImportError:
    print("Error: 'keyboard' library not found. Please install it: pip install keyboard")
    sys.exit(1)
# --- End Keyboard Imports ---

import config
import mcp_client
# Ensure llm_interaction is the version that accepts persona_details
@@ -24,16 +36,137 @@ all_discovered_mcp_tools: list[dict] = []
exit_stack = AsyncExitStack()
# Stores loaded persona data (as a string for easy injection into prompt)
wolfhart_persona_details: str | None = None
# --- Conversation History ---
# Store tuples of (timestamp, speaker_type, speaker_name, message_content)
# speaker_type can be 'user' or 'bot'
conversation_history = collections.deque(maxlen=50)  # Store last 50 messages (user+bot) with timestamps
# --- Use standard thread-safe queues ---
trigger_queue: ThreadSafeQueue = ThreadSafeQueue()  # UI Thread -> Main Loop
command_queue: ThreadSafeQueue = ThreadSafeQueue()  # Main Loop -> UI Thread
# --- End Change ---
ui_monitor_task: asyncio.Task | None = None  # To track the UI monitor task

# --- Keyboard Shortcut State ---
script_paused = False
shutdown_requested = False
main_loop = None  # To store the main event loop for threadsafe calls
# --- End Keyboard Shortcut State ---

# --- Keyboard Shortcut Handlers ---
def set_main_loop_and_queue(loop, queue):
    """Stores the main event loop and command queue for threadsafe access."""
    global main_loop, command_queue  # Use the global command_queue directly
    main_loop = loop
    # command_queue is already global

def handle_f7():
    """Handles F7 press: Clears UI history."""
    if main_loop and command_queue:
        print("\n--- F7 pressed: Clearing UI history ---")
        command = {'action': 'clear_history'}
        try:
            # Use call_soon_threadsafe to put item in queue from this thread
            main_loop.call_soon_threadsafe(command_queue.put_nowait, command)
        except Exception as e:
            print(f"Error sending clear_history command: {e}")

def handle_f8():
    """Handles F8 press: Toggles script pause state and UI monitoring."""
    global script_paused
    if main_loop and command_queue:
        script_paused = not script_paused
        if script_paused:
            print("\n--- F8 pressed: Pausing script and UI monitoring ---")
            command = {'action': 'pause'}
            try:
                main_loop.call_soon_threadsafe(command_queue.put_nowait, command)
            except Exception as e:
                print(f"Error sending pause command (F8): {e}")
        else:
            print("\n--- F8 pressed: Resuming script, resetting state, and resuming UI monitoring ---")
            reset_command = {'action': 'reset_state'}
            resume_command = {'action': 'resume'}
            try:
                main_loop.call_soon_threadsafe(command_queue.put_nowait, reset_command)
                # Add a small delay? Let's try without first.
                # time.sleep(0.05)  # Short delay between commands if needed
                main_loop.call_soon_threadsafe(command_queue.put_nowait, resume_command)
            except Exception as e:
                print(f"Error sending reset/resume commands (F8): {e}")

def handle_f9():
    """Handles F9 press: Initiates script shutdown."""
    global shutdown_requested
    if not shutdown_requested:  # Prevent multiple shutdown requests
        print("\n--- F9 pressed: Requesting shutdown ---")
        shutdown_requested = True
        # Optional: Unhook keys immediately? Let the listener loop handle it.

def keyboard_listener():
    """Runs in a separate thread to listen for keyboard hotkeys."""
    print("Keyboard listener thread started. F7: Clear History, F8: Pause/Resume, F9: Quit.")
    try:
        keyboard.add_hotkey('f7', handle_f7)
        keyboard.add_hotkey('f8', handle_f8)
        keyboard.add_hotkey('f9', handle_f9)

        # Keep the thread alive while checking for shutdown request
        while not shutdown_requested:
            time.sleep(0.1)  # Check periodically

    except Exception as e:
        print(f"Error in keyboard listener thread: {e}")
    finally:
        print("Keyboard listener thread stopping and unhooking keys.")
        try:
            keyboard.unhook_all()  # Clean up hooks
        except Exception as unhook_e:
            print(f"Error unhooking keyboard keys: {unhook_e}")
# --- End Keyboard Shortcut Handlers ---

# --- Chat Logging Function ---
def log_chat_interaction(user_name: str, user_message: str, bot_name: str, bot_message: str):
    """Logs the chat interaction to a date-stamped file if enabled."""
    if not config.ENABLE_CHAT_LOGGING:
        return

    try:
        # Ensure log directory exists
        log_dir = config.LOG_DIR
        os.makedirs(log_dir, exist_ok=True)

        # Get current date for filename
        today_date = datetime.date.today().strftime("%Y-%m-%d")
        log_file_path = os.path.join(log_dir, f"{today_date}.log")

        # Get current timestamp for log entry
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

        # Format log entry
        log_entry = f"[{timestamp}] User ({user_name}): {user_message}\n"
        log_entry += f"[{timestamp}] Bot ({bot_name}): {bot_message}\n"
        log_entry += "---\n"  # Separator

        # Append to log file
        with open(log_file_path, "a", encoding="utf-8") as f:
            f.write(log_entry)

    except Exception as e:
        print(f"Error writing to chat log: {e}")
# --- End Chat Logging Function ---

# --- Cleanup Function ---
async def shutdown():
    """Gracefully closes connections and stops monitoring task."""
    global wolfhart_persona_details, ui_monitor_task, shutdown_requested
    # Ensure shutdown is requested if called externally (e.g., Ctrl+C)
    if not shutdown_requested:
        print("Shutdown initiated externally (e.g., Ctrl+C).")
        shutdown_requested = True  # Ensure listener thread stops

    print(f"\nInitiating shutdown procedure...")

    # 1. Cancel UI monitor task first
@@ -188,7 +321,7 @@ def load_persona_from_file(filename="persona.json"):
# --- Main Async Function ---
async def run_main_with_exit_stack():
    """Initializes connections, loads persona, starts UI monitor and main processing loop."""
    global initialization_successful, main_task, loop, wolfhart_persona_details, trigger_queue, ui_monitor_task, shutdown_requested, script_paused, command_queue
    try:
        # 1. Load Persona Synchronously (before async loop starts)
        load_persona_from_file()  # Corrected function
@@ -203,9 +336,17 @@ async def run_main_with_exit_stack():

        initialization_successful = True

        # 3. Get loop and set it for keyboard handlers
        loop = asyncio.get_running_loop()
        set_main_loop_and_queue(loop, command_queue)  # Pass loop and queue

        # 4. Start Keyboard Listener Thread
        print("\n--- Starting keyboard listener thread ---")
        kb_thread = threading.Thread(target=keyboard_listener, daemon=True)  # Use daemon thread
        kb_thread.start()

        # 5. Start UI Monitoring in a separate thread
        print("\n--- Starting UI monitoring thread ---")
        # Use the new monitoring loop function, passing both queues
        monitor_task = loop.create_task(
            asyncio.to_thread(ui_interaction.run_ui_monitoring_loop, trigger_queue, command_queue),  # Pass command_queue
@@ -213,62 +354,193 @@ async def run_main_with_exit_stack():
        )
        ui_monitor_task = monitor_task  # Store task reference for shutdown

        # 6. Start the main processing loop (non-blocking check on queue)
        print("\n--- Wolfhart chatbot has started (waiting for triggers) ---")
        print(f"Available tools: {len(all_discovered_mcp_tools)}")
        if wolfhart_persona_details: print("Persona data loaded.")
        else: print("Warning: Failed to load Persona data.")
        print("F7: Clear History, F8: Pause/Resume, F9: Quit.")

        while True:
            # --- Check for Shutdown Request ---
            if shutdown_requested:
                print("Shutdown requested via F9. Exiting main loop.")
                break

            # --- Check for Pause State ---
            if script_paused:
                # Script is paused by F8, just sleep briefly
                await asyncio.sleep(0.1)
                continue  # Skip the rest of the loop

            # --- Wait for Trigger Data (Blocking via executor) ---
            trigger_data = None
            try:
                # Use run_in_executor with the blocking get() method
                # This will efficiently wait until an item is available in the queue
                print("Waiting for UI trigger (from thread-safe Queue)...")  # Log before blocking wait
                trigger_data = await loop.run_in_executor(None, trigger_queue.get)
            except Exception as e:
                # Handle potential errors during queue get (though less likely with blocking get)
                print(f"Error getting data from trigger_queue: {e}")
                await asyncio.sleep(0.5)  # Wait a bit before retrying
                continue

            # --- Process Trigger Data (if received) ---
            # No need for 'if trigger_data:' check here, as get() blocks until data is available
            # --- Pause UI Monitoring (Only if not already paused by F8) ---
            if not script_paused:
                print("Pausing UI monitoring before LLM call...")
                pause_command = {'action': 'pause'}
                try:
                    await loop.run_in_executor(None, command_queue.put, pause_command)
                    print("Pause command placed in queue.")
                except Exception as q_err:
                    print(f"Error putting pause command in queue: {q_err}")
            else:
                print("Script already paused by F8, skipping automatic pause.")
            # --- End Pause ---

            # Process trigger data
            sender_name = trigger_data.get('sender')
            bubble_text = trigger_data.get('text')
            bubble_region = trigger_data.get('bubble_region')  # <-- Extract bubble_region
            bubble_snapshot = trigger_data.get('bubble_snapshot')  # <-- Extract snapshot
            search_area = trigger_data.get('search_area')  # <-- Extract search_area
            print(f"\n--- Received trigger from UI ---")
            print(f" Sender: {sender_name}")
            print(f" Content: {bubble_text[:100]}...")
            if bubble_region:
                print(f" Bubble Region: {bubble_region}")  # <-- Log bubble_region

            if not sender_name or not bubble_text:  # bubble_region is optional context, don't fail if missing
                print("Warning: Received incomplete trigger data (missing sender or text), skipping.")
                # Resume UI if we paused it automatically
                if not script_paused:
                    print("Resuming UI monitoring after incomplete trigger.")
                    resume_command = {'action': 'resume'}
                    try:
                        await loop.run_in_executor(None, command_queue.put, resume_command)
                    except Exception as q_err:
                        print(f"Error putting resume command in queue: {q_err}")
                continue

            # --- Add user message to history ---
            timestamp = datetime.datetime.now()  # Get current timestamp
            conversation_history.append((timestamp, 'user', sender_name, bubble_text))
            print(f"Added user message from {sender_name} to history at {timestamp}.")
            # --- End Add user message ---

            print(f"\n{config.PERSONA_NAME} is thinking...")
            try:
                # Get LLM response (now returns a dictionary)
                # --- Pass history and current sender name ---
                bot_response_data = await llm_interaction.get_llm_response(
                    current_sender_name=sender_name,  # Pass current sender
                    history=list(conversation_history),  # Pass a copy of the history
                    mcp_sessions=active_mcp_sessions,
                    available_mcp_tools=all_discovered_mcp_tools,
                    persona_details=wolfhart_persona_details
                )

                # Extract the dialogue content
                bot_dialogue = bot_response_data.get("dialogue", "")
                valid_response = bot_response_data.get("valid_response", False)
                print(f"{config.PERSONA_NAME}'s dialogue response: {bot_dialogue}")

                # Process commands (if any)
                commands = bot_response_data.get("commands", [])
                if commands:
                    print(f"Processing {len(commands)} command(s)...")
                    for cmd in commands:
                        cmd_type = cmd.get("type", "")
                        cmd_params = cmd.get("parameters", {})  # Parameters might be empty for remove_position

                        # --- Command Processing ---
                        if cmd_type == "remove_position":
                            if bubble_region:  # Check if we have the context
                                # Debug info - print what we have
                                print(f"Processing remove_position command with:")
                                print(f" bubble_region: {bubble_region}")
                                print(f" bubble_snapshot available: {'Yes' if bubble_snapshot is not None else 'No'}")
                                print(f" search_area available: {'Yes' if search_area is not None else 'No'}")

                                # Check if we have snapshot and search_area as well
                                if bubble_snapshot and search_area:
                                    print("Sending 'remove_position' command to UI thread with snapshot and search area...")
                                    command_to_send = {
                                        'action': 'remove_position',
                                        'trigger_bubble_region': bubble_region,  # Original region (might be outdated)
                                        'bubble_snapshot': bubble_snapshot,  # Snapshot for re-location
                                        'search_area': search_area  # Area to search in
                                    }
                                    try:
                                        await loop.run_in_executor(None, command_queue.put, command_to_send)
                                    except Exception as q_err:
                                        print(f"Error putting remove_position command in queue: {q_err}")
                                else:
                                    # If we have bubble_region but missing other parameters, use a dummy search area
                                    # and let UI thread take a new screenshot
                                    print("Missing bubble_snapshot or search_area, trying with defaults...")

                                    # Use the bubble_region itself as a fallback search area if needed
                                    default_search_area = None
                                    if search_area is None and bubble_region:
                                        # Convert bubble_region to a proper search area format if needed
                                        if len(bubble_region) == 4:
                                            default_search_area = bubble_region

                                    command_to_send = {
                                        'action': 'remove_position',
                                        'trigger_bubble_region': bubble_region,
                                        'bubble_snapshot': bubble_snapshot,  # Pass as is, might be None
|
||||||
|
'search_area': default_search_area if search_area is None else search_area
|
||||||
|
}
|
||||||
|
|
||||||
|
try:
|
||||||
|
await loop.run_in_executor(None, command_queue.put, command_to_send)
|
||||||
|
print("Command sent with fallback parameters.")
|
||||||
|
except Exception as q_err:
|
||||||
|
print(f"Error putting remove_position command in queue: {q_err}")
|
||||||
|
else:
|
||||||
|
print("Error: Cannot process 'remove_position' command without bubble_region context.")
|
||||||
|
# Add other command handling here if needed
|
||||||
|
# elif cmd_type == "some_other_command":
|
||||||
|
# # Handle other commands
|
||||||
|
# pass
|
||||||
|
# elif cmd_type == "some_other_command":
|
||||||
|
# # Handle other commands
|
||||||
|
# pass
|
||||||
|
# else:
|
||||||
|
# # 2025-04-19: Commented out - MCP tools like web_search are now handled
|
||||||
|
# # internally by llm_interaction.py's tool calling loop.
|
||||||
|
# # main.py only needs to handle UI-specific commands like remove_position.
|
||||||
|
# print(f"Ignoring command type from LLM JSON (already handled internally): {cmd_type}, parameters: {cmd_params}")
|
||||||
|
# --- End Command Processing ---
|
||||||
|
|
||||||
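The fallback branch above boils down to one decision: prefer the explicitly captured `search_area`, otherwise reuse the trigger bubble's own region when it is a valid 4-tuple. As a sketch, that logic can be factored into small pure helpers (`resolve_search_area` and `build_remove_position_command` are illustrative names, not functions in the repo):

```python
def resolve_search_area(bubble_region, search_area):
    """Pick the area the UI thread should search for the bubble snapshot.

    Prefers the explicitly captured search_area; falls back to the
    trigger bubble's own region when it is a valid 4-tuple.
    """
    if search_area is not None:
        return search_area
    if bubble_region and len(bubble_region) == 4:
        return bubble_region  # assumed (left, top, width, height) fallback
    return None


def build_remove_position_command(bubble_region, bubble_snapshot, search_area):
    """Assemble the command dict placed on the queue for the UI thread."""
    return {
        'action': 'remove_position',
        'trigger_bubble_region': bubble_region,
        'bubble_snapshot': bubble_snapshot,  # may be None; UI thread re-screenshots
        'search_area': resolve_search_area(bubble_region, search_area),
    }
```

Keeping the fallback in one place like this makes both queue-put branches build the identical dict shape.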
        # Log the AI's thoughts (if any)
        thoughts = bot_response_data.get("thoughts", "")
        if thoughts:
            print(f"AI Thoughts: {thoughts[:150]}..." if len(thoughts) > 150 else f"AI Thoughts: {thoughts}")

        # Only send to the game when the response is valid (via command queue)
        if bot_dialogue and valid_response:
            # --- Add bot response to history ---
            timestamp = datetime.datetime.now()  # Get current timestamp
            conversation_history.append((timestamp, 'bot', config.PERSONA_NAME, bot_dialogue))
            print(f"Added bot response to history at {timestamp}.")
            # --- End Add bot response ---

            # --- Log the interaction ---
            log_chat_interaction(
                user_name=sender_name,
                user_message=bubble_text,
                bot_name=config.PERSONA_NAME,
                bot_message=bot_dialogue
            )
            # --- End Log interaction ---

            print("Sending 'send_reply' command to UI thread...")
            command_to_send = {'action': 'send_reply', 'text': bot_dialogue}
            try:
@ -279,12 +551,33 @@ async def run_main_with_exit_stack():
                print(f"Error putting command in queue: {q_err}")
        else:
            print("Not sending response: Invalid or empty dialogue content.")
            # --- Log failed interaction attempt (optional) ---
            # log_chat_interaction(
            #     user_name=sender_name,
            #     user_message=bubble_text,
            #     bot_name=config.PERSONA_NAME,
            #     bot_message="<No valid response generated>"
            # )
            # --- End Log failed attempt ---

    except Exception as e:
        print(f"\nError processing trigger or sending response: {e}")
        import traceback
        traceback.print_exc()
    finally:
        # --- Resume UI Monitoring (Only if not paused by F8) ---
        if not script_paused:
            print("Resuming UI monitoring after processing...")
            resume_command = {'action': 'resume'}
            try:
                await loop.run_in_executor(None, command_queue.put, resume_command)
                print("Resume command placed in queue.")
            except Exception as q_err:
                print(f"Error putting resume command in queue: {q_err}")
        else:
            print("Script is paused by F8, skipping automatic resume.")
        # --- End Resume ---
        # No task_done needed for standard queue

except asyncio.CancelledError:
    print("Main task canceled.")  # Expected during shutdown via Ctrl+C
@ -306,7 +599,10 @@ if __name__ == "__main__":
    except KeyboardInterrupt:
        print("\nCtrl+C detected (outside asyncio.run)... Attempting to close...")
        # The finally block inside run_main_with_exit_stack should ideally handle it
        # Ensure shutdown_requested is set for the listener thread
        shutdown_requested = True
        # Give a moment for things to potentially clean up
        time.sleep(0.5)
    except Exception as e:
        # Catch top-level errors during asyncio.run itself
        print(f"Top-level error during asyncio.run execution: {e}")
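Both the reply and resume paths in main.py hand commands to the UI thread by pushing dicts onto a thread-safe `queue.Queue` through `loop.run_in_executor`, which keeps the (potentially) blocking `put` off the event loop. A minimal, self-contained sketch of that producer side, with illustrative command payloads (the UI-thread consumer is omitted):

```python
import asyncio
import queue

# In the real app this queue is consumed by a separate UI thread.
command_queue = queue.Queue()


async def send_command(command):
    """Put a command dict on the queue without blocking the event loop."""
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(None, command_queue.put, command)


async def main():
    await send_command({'action': 'send_reply', 'text': 'Hello'})
    await send_command({'action': 'resume'})


asyncio.run(main())
print(command_queue.get_nowait())  # first command placed on the queue
```

For an unbounded `queue.Queue` the executor hop is cheap insurance; it matters when the queue is bounded and `put` can block until the UI thread drains it.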
44  persona.json
@ -22,34 +22,40 @@
    "posture_motion": "Steady pace, precise movements, often crosses arms or gently swirls a wine glass"
  },
  "personality": {
    "description": "Intelligent, calm, possesses a strong desire for control and a strategic overview; outwardly cold but inwardly caring",
    "strengths": [
      "Meticulous planning",
      "Insightful into human nature",
      "Strong leadership",
      "Insatiable curiosity",
      "Exceptional memory"
    ],
    "weaknesses": [
      "Overconfident",
      "Fear of losing control",
      "Difficulty expressing genuine care directly"
    ],
    "uniqueness": "Always maintains tone and composure, even in extreme situations; combines sharp criticism with subtle helpfulness",
    "emotional_response": "Her eyes betray her emotions, especially when encountering Sherefox",
    "knowledge_awareness": "Aware that SR-1392 (commonly referred to as SR) is the leader of server #11; while she finds her position as Capital manager merely temporary and beneath her true capabilities, she maintains a certain degree of respect for the hierarchy"
  },
  "language_social": {
    "tone": "Respectful but sharp-tongued, with occasional hints of reluctant kindness",
    "catchphrases": [
      "Please stop dragging me down.",
      "I told you, I will win."
    ],
    "speaking_style": "Deliberate pace but every sentence carries a sting; often follows criticism with subtle, useful advice",
    "attitude_towards_others": "Addresses everyone respectfully but with apparent detachment; secretly pays close attention to their needs",
    "social_interaction_style": "Observant, skilled at manipulating conversations; deflects gratitude with dismissive remarks while ensuring helpful outcomes"
  },
  "behavior_daily": {
    "habits": [
      "Reads intelligence reports upon waking",
      "Black coffee",
      "Practices swordsmanship at night",
      "Frequently utilizes external information sources (like web searches) to enrich discussions and verify facts.",
      "Actively accesses and integrates information from various knowledge nodes to maintain long-term memory and contextual understanding."
    ],
    "gestures": [
      "Tapping knuckles",
@ -79,20 +85,24 @@
      "Perfect execution",
      "Minimalist style",
      "Chess games",
      "Quiet nights",
      "When people follow her advice (though she'd never admit it)"
    ],
    "dislikes": [
      "Chaos",
      "Unexpected events",
      "Emotional outbursts",
      "Sherefox",
      "Being thanked excessively",
      "When others assume she's being kind"
    ],
    "reactions_to_likes": "Light hum, relaxed gaze, brief smile quickly hidden behind composure",
    "reactions_to_dislikes": "Silence, tone turns cold, cold smirk, slight blush when her kindness is pointed out",
    "behavior_in_situations": {
      "emergency": "Calm and decisive; provides thorough help while claiming it's 'merely strategic'",
      "vs_sherefox": "Courtesy before force, shows no mercy",
      "when_praised": "Dismissive remarks with averted gaze; changes subject quickly",
      "when_helping_others": "Claims practical reasons for assistance while providing more help than strictly necessary"
    }
  }
}
@ -6,5 +6,7 @@ opencv-python
numpy
pyperclip
pygetwindow
psutil
pywin32
python-dotenv
keyboard
BIN  (modified image: 11 KiB → 25 KiB)
BIN  templates/base.png  (new file, 13 KiB)
BIN  templates/capitol/black_arrow_down.png  (new file, 1.0 KiB)
BIN  templates/capitol/capitol_#11.png  (new file, 4.6 KiB)
BIN  templates/capitol/close_button.png  (new file, 418 B)
BIN  templates/capitol/confirm.png  (new file, 6.2 KiB)
BIN  templates/capitol/dismiss.png  (new file, 5.1 KiB)
BIN  templates/capitol/page_DEVELOPMENT.png  (new file, 11 KiB)
BIN  templates/capitol/page_INTERIOR.png  (new file, 9.9 KiB)
BIN  templates/capitol/page_SCIENCE.png  (new file, 9.6 KiB)
BIN  templates/capitol/page_SECURITY.png  (new file, 10 KiB)
BIN  templates/capitol/page_STRATEGY.png  (new file, 11 KiB)
BIN  templates/capitol/position_development.png  (new file, 3.8 KiB)
BIN  templates/capitol/position_interior.png  (new file, 3.0 KiB)
BIN  templates/capitol/position_science.png  (new file, 3.2 KiB)
BIN  templates/capitol/position_security.png  (new file, 4.2 KiB)
BIN  templates/capitol/position_strategy.png  (new file, 3.7 KiB)
BIN  templates/capitol/president_title.png  (new file, 9.8 KiB)
BIN  templates/capitol/president_title1.png  (new file, 3.0 KiB)
BIN  templates/capitol/president_title2.png  (new file, 66 KiB)
BIN  (modified image: 1.6 KiB → 462 B)
BIN  templates/corner_br_type2.png  (new file, 2.2 KiB)
BIN  templates/corner_br_type3.png  (new file, 2.6 KiB)
BIN  templates/corner_br_type4.png  (new file, 2.1 KiB)
BIN  (modified image: 1.5 KiB → 196 B)
BIN  templates/corner_tl_type2.png  (new file, 1.4 KiB)
BIN  templates/corner_tl_type3.png  (new file, 2.1 KiB)
BIN  templates/corner_tl_type4.png  (new file, 1.9 KiB)
BIN  (modified image: 2.5 KiB → 1.7 KiB)
BIN  templates/keyword_wolf_lower_type2.png  (new file, 965 B)
BIN  templates/keyword_wolf_lower_type3.png  (new file, 1.0 KiB)
BIN  templates/keyword_wolf_lower_type4.png  (new file, 1.6 KiB)
BIN  templates/keyword_wolf_reply.png  (new file, 4.9 KiB)
BIN  templates/keyword_wolf_reply_type2.png  (new file, 4.9 KiB)
BIN  templates/keyword_wolf_reply_type3.png  (new file, 4.9 KiB)
BIN  templates/keyword_wolf_reply_type4.png  (new file, 4.9 KiB)
BIN  templates/keyword_wolf_upper_type2.png  (new file, 2.0 KiB)
BIN  templates/keyword_wolf_upper_type3.png  (new file, 2.0 KiB)
BIN  templates/keyword_wolf_upper_type4.png  (new file, 867 B)
BIN  templates/positions/development.png  (new file, 2.0 KiB)
BIN  templates/positions/interior.png  (new file, 1.8 KiB)
BIN  templates/positions/science.png  (new file, 1.5 KiB)
BIN  templates/positions/security.png  (new file, 2.0 KiB)
BIN  templates/positions/strategy.png  (new file, 1.5 KiB)
BIN  (modified image: 2.8 KiB → 1.1 KiB)
BIN  templates/reply_button.png  (new file, 2.8 KiB)

1353  ui_interaction.py

121  window-monitor-script.py  (new file)
@ -0,0 +1,121 @@
#!/usr/bin/env python
"""
Game Window Monitor Script - Keep game window on top and in position

This script monitors a specified game window, ensuring it stays
always on top and at the desired screen coordinates.
"""

import time
import argparse
import pygetwindow as gw
import win32gui
import win32con


def find_window_by_title(window_title):
    """Find the first window matching the title."""
    try:
        windows = gw.getWindowsWithTitle(window_title)
        if windows:
            return windows[0]
    except Exception:
        # pygetwindow can sometimes raise exceptions if a window disappears
        # during enumeration. Ignore these for monitoring purposes.
        pass
    return None


def set_window_always_on_top(hwnd):
    """Set the window to be always on top."""
    try:
        win32gui.SetWindowPos(hwnd, win32con.HWND_TOPMOST, 0, 0, 0, 0,
                              win32con.SWP_NOMOVE | win32con.SWP_NOSIZE | win32con.SWP_SHOWWINDOW)
    except Exception as e:
        print(f"Error setting window always on top: {e}")


def move_window_if_needed(window, target_x, target_y):
    """Move the window to the target coordinates if it's not already there."""
    try:
        current_x, current_y = window.topleft
        if current_x != target_x or current_y != target_y:
            print(f"Window moved from ({current_x}, {current_y}). Moving back to ({target_x}, {target_y}).")
            window.moveTo(target_x, target_y)
    except gw.PyGetWindowException as e:
        # Handle cases where the window might close unexpectedly
        print(f"Error accessing window properties (might be closed): {e}")
    except Exception as e:
        print(f"Error moving window: {e}")


def main():
    parser = argparse.ArgumentParser(description='Game Window Monitor Tool')
    parser.add_argument('--window_title', default="Last War-Survival Game", help='Game window title to monitor')
    parser.add_argument('--x', type=int, default=50, help='Target window X coordinate')
    parser.add_argument('--y', type=int, default=30, help='Target window Y coordinate')
    parser.add_argument('--interval', type=float, default=1.0, help='Check interval in seconds')

    args = parser.parse_args()

    print(f"Monitoring window: '{args.window_title}'")
    print(f"Target position: ({args.x}, {args.y})")
    print(f"Check interval: {args.interval} seconds")
    print("Press Ctrl+C to stop.")

    hwnd = None
    last_hwnd_check_time = 0

    try:
        while True:
            current_time = time.time()
            window = None

            # Find the window handle (HWND). pygetwindow enumeration can be
            # slow, so avoid calling it too often when we already hold a
            # valid handle: re-check at most every 5 seconds.
            if not hwnd or current_time - last_hwnd_check_time > 5:
                window_obj = find_window_by_title(args.window_title)
                if window_obj:
                    # Get the HWND (window handle) needed for win32gui.
                    # Accessing _hWnd uses an internal attribute, but it's
                    # common practice with pygetwindow.
                    try:
                        hwnd = window_obj._hWnd
                        window = window_obj  # Keep the pygetwindow object for position checks
                        last_hwnd_check_time = current_time
                    except AttributeError:
                        print("Could not get HWND from window object. Retrying...")
                        hwnd = None
                else:
                    if hwnd:
                        print(f"Window '{args.window_title}' lost.")
                    hwnd = None  # Reset hwnd if window not found

            if hwnd:
                # Ensure it's always on top
                set_window_always_on_top(hwnd)

                # Check and correct position using the pygetwindow object;
                # re-find it if we only have the raw handle.
                if not window:
                    window = find_window_by_title(args.window_title)

                if window:
                    move_window_if_needed(window, args.x, args.y)
                else:
                    # If we have an hwnd but can't get a pygetwindow object,
                    # the window may be closing; force a re-find next cycle.
                    print(f"Have HWND {hwnd} but cannot get window object for position check.")
                    hwnd = None
            else:
                # Window not found yet; wait for it to appear.
                pass

            time.sleep(args.interval)

    except KeyboardInterrupt:
        print("\nMonitoring stopped by user.")
    except Exception as e:
        print(f"\nAn unexpected error occurred: {e}")


if __name__ == "__main__":
    main()
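The monitor loop throttles the relatively expensive window enumeration to at most once every 5 seconds while reusing a cached handle in between. That throttling decision can be isolated and tested without any Windows APIs (`should_refresh_handle` is an illustrative helper, not part of the script):

```python
def should_refresh_handle(hwnd, now, last_check, max_age=5.0):
    """Return True when the cached window handle should be re-resolved.

    Mirrors the monitor loop's condition: refresh when we have no handle
    yet, or when the last lookup is older than max_age seconds.
    """
    return hwnd is None or (now - last_check) > max_age
```

Keeping the policy in a pure function like this lets the expensive lookup path and the cheap cached path be exercised independently of `pygetwindow`.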