Improve LLM system prompt update; fix crash caused by calling MCP
This commit is contained in:
parent
2510a64d22
commit
f2cca2d394
.gitignore (vendored): 4 lines changed
@@ -1,4 +1,6 @@
 .env
 *.log
 llm_debug.log
 __pycache__/
+debug_screenshots/
+chat_logs/
@@ -314,6 +314,56 @@ Wolf Chat is a chatbot built on the MCP (Modular Capability Provider) framework
- Clicks on other keywords or UI elements are unaffected.
- **Effect**: The system can now detect the new reply-indicator images as trigger conditions. When triggered by these images, both the click used to copy text and the bubble-center click used to activate the reply context are nudged 15 pixels downward to avoid accidentally hitting other UI elements.

### Strengthened LLM Context Handling and Response Generation (2025-04-20)

- **Purpose**: Resolve cases where the LLM confuses historical conversation with the current message, or echoes history in its reply. Ensure the `dialogue` field contains only the new reply to the latest user message.
- **`llm_interaction.py`**:
  - **Changed `get_system_prompt`**:
    - The rules for the `dialogue` field now explicitly forbid including any history and stress that only the latest message, marked with `<CURRENT_MESSAGE>`, may be answered.
    - The core instructions now require the LLM to focus its analysis and response generation entirely on the message tagged `<CURRENT_MESSAGE>`.
    - Added an explanation of what the `<CURRENT_MESSAGE>` tag means.
  - **Changed `_build_context_messages`**:
    - When building the message list sent to the LLM, the last user message in the history is wrapped in `<CURRENT_MESSAGE>...</CURRENT_MESSAGE>` tags.
    - All other history messages keep the original `[timestamp] speaker: message` format.
- **Effect**: Stricter prompting plus an explicit context marker guide the LLM to distinguish the current interaction from past dialogue, which should improve response relevance and prevent redundant history in the output.
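The `<CURRENT_MESSAGE>` tagging rule described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual helper (the real logic lives in `_build_context_messages`); the function name here is made up:

```python
from datetime import datetime

def tag_latest_message(history: list[tuple[datetime, str, str]]) -> list[str]:
    """Format (timestamp, speaker, message) history lines, wrapping only
    the final entry in <CURRENT_MESSAGE> tags so the LLM can tell the
    message it must answer apart from historical context."""
    lines = []
    for i, (ts, speaker, message) in enumerate(history):
        line = f"[{ts.strftime('%Y-%m-%d %H:%M:%S')}] {speaker}: {message}"
        if i == len(history) - 1:
            line = f"<CURRENT_MESSAGE>{line}</CURRENT_MESSAGE>"
        lines.append(line)
    return lines

history = [
    (datetime(2025, 4, 20, 12, 0, 0), "Alice", "hello"),
    (datetime(2025, 4, 20, 12, 0, 5), "Alice", "what time is it?"),
]
print(tag_latest_message(history)[-1])
# <CURRENT_MESSAGE>[2025-04-20 12:00:05] Alice: what time is it?</CURRENT_MESSAGE>
```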
### Strengthened System Prompt to Encourage Tool Use (2025-04-19)

- **Purpose**: Adjust the `get_system_prompt` function in `llm_interaction.py` so that it more explicitly guides the LLM to use tools proactively before responding (especially the memory tools) and to integrate tool results.
- **Changes**:
  1. **Core identity reinforcement**: Added a point to the `CORE IDENTITY AND TOOL USAGE` section stressing that Wolfhart proactively consults the internal knowledge graph and external sources.
  2. **Memory instruction reinforcement**: Upgraded the note in the `Memory Management (Knowledge Graph)` section from "IMPORTANT" to "CRITICAL", explicitly instructing the LLM to consider querying memory with the lookup tools *before* responding, and emphasizing proactive writing of new information.
- **Effect**: Intended to make the LLM more proactive and reliant on tools, producing more context-aware and accurate responses while staying in character.
### Chat History Context and Logging (2025-04-20)

- **Purpose**:
  1. Give the LLM richer conversational context so it can generate more coherent, relevant responses.
  2. Add an optional chat-logging feature for debugging and record keeping.
- **`main.py`**:
  - Introduced `collections.deque` to store recent conversation history (user messages and bot responses), capped at 50 entries.
  - The user message is appended to the history before `llm_interaction.get_llm_response` is called.
  - The bot response is appended to the history after a valid LLM response is received.
  - Added a `log_chat_interaction` function that:
    - Checks the `config.ENABLE_CHAT_LOGGING` flag.
    - If enabled, creates or appends to a date-named log file (`YYYY-MM-DD.log`) in the folder specified by `config.LOG_DIR`.
    - Records entries containing the timestamp, sender type (user/bot), sender name, and message content.
  - `log_chat_interaction` is called after a valid LLM response is received.
- **`llm_interaction.py`**:
  - Changed the `get_llm_response` signature to accept `current_sender_name` and a `history` list rather than relying on a single `user_input`.
  - Added a `_build_context_messages` helper that:
    - Filters and formats messages from `history` according to these rules:
      - Include the last 4 interactions (user message + bot response) involving `current_sender_name`.
      - Include the last 2 user messages from other senders.
    - Orders the selected messages chronologically.
    - Prepends the system prompt to the message list.
  - `get_llm_response` now calls `_build_context_messages` to build the `messages` list sent to the LLM API.
- **`config.py`**:
  - Added the `ENABLE_CHAT_LOGGING` (boolean) and `LOG_DIR` (string) configuration options.
- **Effect**:
  - The LLM can now draw on recent conversation history to produce more contextual responses.
  - All successful chat interactions can optionally be logged to date-organized files for later analysis or debugging.
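The log layout described above looks roughly like this. This is only a sketch of the file naming and entry format; the real implementation is `log_chat_interaction` in `main.py`, shown later in this diff:

```python
import datetime
import os

def chat_log_path(log_dir: str, now: datetime.datetime) -> str:
    """Logs are grouped into one file per day: LOG_DIR/YYYY-MM-DD.log."""
    return os.path.join(log_dir, now.strftime("%Y-%m-%d") + ".log")

def format_entry(user_name, user_msg, bot_name, bot_msg, now):
    """One interaction becomes a user line, a bot line, and a separator."""
    ts = now.strftime("%Y-%m-%d %H:%M:%S")
    return (f"[{ts}] User ({user_name}): {user_msg}\n"
            f"[{ts}] Bot ({bot_name}): {bot_msg}\n"
            "---\n")

now = datetime.datetime(2025, 4, 20, 17, 18, 24)
print(chat_log_path("chat_logs", now))  # e.g. chat_logs/2025-04-20.log on POSIX
print(format_entry("Alice", "hello", "Wolfhart", "Greetings.", now))
```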
## Development Suggestions

### Optimization Directions

@@ -395,10 +445,33 @@ Wolf Chat is a chatbot built on the MCP (Modular Capability Provider) framework
 4. **MCP server connection failure**: Confirm the server is configured correctly and running.
 5. **No response after a tool call**: Check the llm_debug.log file to inspect tool-call results and how they were parsed.
config.py

@@ -63,6 +63,10 @@ MCP_SERVERS = {
# MCP Client Configuration
MCP_CONFIRM_TOOL_EXECUTION = False  # True: Confirm before execution, False: Execute automatically

# --- Chat Logging Configuration ---
ENABLE_CHAT_LOGGING = True  # True: Enable logging, False: Disable logging
LOG_DIR = "chat_logs"  # Directory to store chat logs

# Persona Configuration
PERSONA_NAME = "Wolfhart"
# PERSONA_RESOURCE_URI = "persona://wolfhart/details" # Now using local file instead
llm_interaction.py

@@ -119,6 +119,7 @@ You MUST respond in the following JSON format:
- ONLY include spoken dialogue words (no actions, expressions, narration, etc.)
- Maintain your character's personality and speech patterns
- AFTER TOOL USAGE: Your dialogue MUST contain a non-empty response that incorporates the tool results naturally
- **Crucially, this field must contain ONLY the NEW response generated for the LATEST user message marked with `<CURRENT_MESSAGE>`. DO NOT include any previous chat history in this field.**

2. `commands` (OPTIONAL): An array of command objects the system should execute. You are encouraged to use these commands to enhance the quality of your responses.
@@ -181,9 +182,12 @@ You MUST respond in the following JSON format:
- Analyze the user's message: Is it a request to remove a position? If so, evaluate its politeness and intent from Wolfhart's perspective. Decide whether to issue the `remove_position` command.
- Plan your approach before responding.

**CONTEXT MARKER:**
- The final user message in the input sequence will be wrapped in `<CURRENT_MESSAGE>` tags. This is the specific message you MUST respond to. Your `dialogue` output should be a direct reply to this message ONLY. Preceding messages provide historical context.

**VERY IMPORTANT Instructions:**

1. **Focus your analysis and response generation *exclusively* on the LATEST user message marked with `<CURRENT_MESSAGE>`. Refer to preceding messages only for context.**
2. Determine the appropriate language for your response
3. Assess if using tools is necessary
4. Formulate your response in the required JSON format
@@ -194,11 +198,11 @@ You MUST respond in the following JSON format:

Poor response (after web_search): "根據我的搜索,水的沸點是攝氏100度。"

Good response (after web_search): "水的沸點,是的,標準條件下是攝氏100度。合情合理。"

Poor response (after web_search): "My search shows the boiling point of water is 100 degrees Celsius."

Good response (after web_search): "The boiling point of water, yes. 100 degrees Celsius under standard conditions. Absolutely."
"""
    return system_prompt
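The `parse_structured_response` function used elsewhere in `llm_interaction.py` is not shown in this diff. A minimal sketch of what such a parser might do with the JSON format above — the field names come from the `get_llm_response` docstring, but everything else here is an assumption:

```python
import json

def parse_structured_response(raw: str) -> dict:
    """Sketch: extract dialogue/commands/thoughts from the LLM's JSON
    reply; if the reply is not valid JSON, treat the whole text as dialogue."""
    try:
        data = json.loads(raw)
        if not isinstance(data, dict):
            raise ValueError("expected a JSON object")
    except (ValueError, TypeError):
        # Fallback: the model answered in plain text instead of JSON.
        return {"dialogue": raw or "", "commands": [], "thoughts": ""}
    return {
        "dialogue": data.get("dialogue", ""),
        "commands": data.get("commands", []),
        "thoughts": data.get("thoughts", ""),
    }

print(parse_structured_response('{"dialogue": "Intel confirmed.", "commands": []}')["dialogue"])
# Intel confirmed.
```

`json.JSONDecodeError` subclasses `ValueError`, so the single `except` clause covers both malformed JSON and non-object replies.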
@@ -437,39 +441,121 @@ def _create_synthetic_response_from_tools(tool_results, original_query):

    return json.dumps(synthetic_response)


# --- History Formatting Helper ---
def _build_context_messages(current_sender_name: str, history: list[tuple[datetime, str, str, str]], system_prompt: str) -> list[dict]:
    """
    Builds the message list for the LLM API based on history rules, including timestamps.

    Args:
        current_sender_name: The name of the user whose message triggered this interaction.
        history: List of tuples: (timestamp: datetime, speaker_type: 'user'|'bot', speaker_name: str, message: str)
        system_prompt: The system prompt string.

    Returns:
        A list of message dictionaries for the OpenAI API.
    """
    # Limits
    SAME_SENDER_LIMIT = 4  # Last 4 interactions (user + bot response = 1 interaction)
    OTHER_SENDER_LIMIT = 3  # Last 3 messages from other users

    relevant_history = []
    same_sender_interactions = 0
    other_sender_messages = 0

    # Iterate history in reverse (newest first)
    for i in range(len(history) - 1, -1, -1):
        timestamp, speaker_type, speaker_name, message = history[i]

        # Format timestamp
        formatted_timestamp = timestamp.strftime("%Y-%m-%d %H:%M:%S")

        # Check if this is the very last message in the original history AND it's a user message
        is_last_user_message = (i == len(history) - 1 and speaker_type == 'user')

        # Prepend timestamp and speaker name, wrap if it's the last user message
        base_content = f"[{formatted_timestamp}] {speaker_name}: {message}"
        formatted_content = f"<CURRENT_MESSAGE>{base_content}</CURRENT_MESSAGE>" if is_last_user_message else base_content

        # Convert to API role ('user' or 'assistant')
        role = "assistant" if speaker_type == 'bot' else "user"
        api_message = {"role": role, "content": formatted_content}  # Use formatted content

        is_current_sender = (speaker_type == 'user' and speaker_name == current_sender_name)  # Used by the filtering logic below

        if is_current_sender:
            # This is the current user's message. Check if the previous message was the bot's response to them.
            if same_sender_interactions < SAME_SENDER_LIMIT:
                relevant_history.append(api_message)  # Append user message with timestamp
                # Check for preceding bot response
                if i > 0 and history[i-1][1] == 'bot':  # Check speaker_type at index 1
                    # Include the bot's response as part of the interaction pair
                    bot_timestamp, bot_speaker_type, bot_speaker_name, bot_message = history[i-1]
                    bot_formatted_timestamp = bot_timestamp.strftime("%Y-%m-%d %H:%M:%S")
                    bot_formatted_content = f"[{bot_formatted_timestamp}] {bot_speaker_name}: {bot_message}"
                    relevant_history.append({"role": "assistant", "content": bot_formatted_content})  # Append bot message with timestamp
                same_sender_interactions += 1
        elif speaker_type == 'user':  # Message from a different user
            if other_sender_messages < OTHER_SENDER_LIMIT:
                # Include only the user's message from others for brevity
                relevant_history.append(api_message)  # Append other user message with timestamp
                other_sender_messages += 1
        # Bot responses are handled when processing the user message they replied to.

        # Stop if we have enough history
        if same_sender_interactions >= SAME_SENDER_LIMIT and other_sender_messages >= OTHER_SENDER_LIMIT:
            break

    # Reverse the relevant history to be chronological
    relevant_history.reverse()

    # Prepend the system prompt
    messages = [{"role": "system", "content": system_prompt}] + relevant_history

    # Debug log the constructed history
    debug_log("Constructed LLM Message History", messages)

    return messages

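A self-contained way to see the selection rules in action. The function below is a condensed re-statement of the helper above (debug logging and the early-exit check omitted) so the snippet runs on its own; it is an illustration, not the project code:

```python
from datetime import datetime

def build_context(current_sender, history, system_prompt,
                  same_limit=4, other_limit=3):
    """Condensed _build_context_messages: walk history newest-first, keep up
    to same_limit interactions with the current sender (user message plus the
    bot reply just before it) and up to other_limit messages from other
    users, then restore chronological order behind the system prompt."""
    picked, same, other = [], 0, 0
    for i in range(len(history) - 1, -1, -1):
        ts, kind, name, msg = history[i]
        content = f"[{ts:%Y-%m-%d %H:%M:%S}] {name}: {msg}"
        if i == len(history) - 1 and kind == "user":
            content = f"<CURRENT_MESSAGE>{content}</CURRENT_MESSAGE>"
        if kind == "user" and name == current_sender and same < same_limit:
            picked.append({"role": "user", "content": content})
            if i > 0 and history[i - 1][1] == "bot":  # pair with preceding bot reply
                bts, _, bname, bmsg = history[i - 1]
                picked.append({"role": "assistant",
                               "content": f"[{bts:%Y-%m-%d %H:%M:%S}] {bname}: {bmsg}"})
            same += 1
        elif kind == "user" and name != current_sender and other < other_limit:
            picked.append({"role": "user", "content": content})
            other += 1
    picked.reverse()  # back to oldest-first
    return [{"role": "system", "content": system_prompt}] + picked

t = datetime(2025, 4, 20, 12, 0)
history = [
    (t, "user", "Bob", "hi"),
    (t, "bot", "Wolfhart", "Greetings."),
    (t, "user", "Alice", "status?"),
]
msgs = build_context("Alice", history, "You are Wolfhart.")
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```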
# --- Main Interaction Function ---
async def get_llm_response(
    user_input: str,
    current_sender_name: str,  # Name of the sender of the current message
    history: list[tuple[datetime, str, str, str]],  # (timestamp, speaker_type, speaker_name, message)
    mcp_sessions: dict[str, ClientSession],
    available_mcp_tools: list[dict],
    persona_details: str | None
) -> dict:
    """
    Gets a response from the LLM, handling the tool-calling loop and using persona info.
    Constructs context from history based on rules.
    Returns a dictionary with 'dialogue', 'commands', and 'thoughts' fields.
    """
    request_id = int(time.time() * 1000)  # Generate a request ID from the timestamp
    debug_log(f"LLM Request #{request_id} - User Input", user_input)

    # Debug log the raw history received
    debug_log(f"LLM Request #{request_id} - Received History (Sender: {current_sender_name})", history)

    system_prompt = get_system_prompt(persona_details)
    # System prompt is logged within _build_context_messages now

    if not client:
        error_msg = "Error: LLM client not successfully initialized, unable to process request."
        debug_log(f"LLM Request #{request_id} - Error", error_msg)
        return {"dialogue": error_msg, "valid_response": False}

    openai_formatted_tools = _format_mcp_tools_for_openai(available_mcp_tools)

    # --- Build messages from history ---
    messages = _build_context_messages(current_sender_name, history, system_prompt)
    # --- End Build messages ---

    # The latest user message is already included in 'messages' by _build_context_messages

    debug_log(f"LLM Request #{request_id} - Formatted Tools",
              f"Number of tools: {len(openai_formatted_tools)}")

    max_tool_calls_per_turn = 5
    current_tool_call_cycle = 0
    final_content = ""  # Initialize final_content to ensure it's always defined

    # Track tool calls across cycles
    all_tool_results = []  # Store all tool call results
@@ -519,22 +605,30 @@ async def get_llm_response(
            print(f"Current response is empty, using last non-empty response from cycle {current_tool_call_cycle-1}")
            final_content = last_non_empty_response

            # If still empty but there are tool results, create a synthetic response
            if (not final_content or final_content.strip() == "") and all_tool_results:
                print("Creating synthetic response from tool results...")
                # Get the original user input from the last message in history for context
                last_user_message = ""
                if history:
                    # Find the actual last user message tuple in the original history
                    last_user_entry = history[-1]
                    if last_user_entry[1] == 'user':  # index 1 is speaker_type; index 0 is the timestamp
                        last_user_message = last_user_entry[3]  # index 3 is the message text

                final_content = _create_synthetic_response_from_tools(all_tool_results, last_user_message)

            # Parse the structured response
            parsed_response = parse_structured_response(final_content)
            # Flag whether this is a valid response
            has_dialogue = parsed_response.get("dialogue") and parsed_response["dialogue"].strip()
            parsed_response["valid_response"] = bool(has_dialogue)
            has_valid_response = has_dialogue

            debug_log(f"LLM Request #{request_id} - Final Parsed Response",
                      json.dumps(parsed_response, ensure_ascii=False, indent=2))
            print(f"Final dialogue content: '{parsed_response.get('dialogue', '')}'")
            return parsed_response

        # Tool call handling
        print(f"--- LLM requested {len(tool_calls)} tool calls ---")

@@ -596,7 +690,12 @@ async def get_llm_response(
            has_valid_response = bool(parsed_response.get("dialogue"))
        elif all_tool_results:
            # Create a synthetic response from tool results
            last_user_message = ""
            if history:
                last_user_entry = history[-1]
                if last_user_entry[1] == 'user':  # index 1 is speaker_type; index 0 is the timestamp
                    last_user_message = last_user_entry[3]  # index 3 is the message text
            synthetic_content = _create_synthetic_response_from_tools(all_tool_results, last_user_message)
            parsed_response = parse_structured_response(synthetic_content)
            has_valid_response = bool(parsed_response.get("dialogue"))
        else:
main.py (71 lines changed)

@@ -4,6 +4,8 @@ import asyncio
import sys
import os
import json  # Import json module
import collections  # For deque
import datetime  # For logging timestamp
from contextlib import AsyncExitStack
# --- Import standard queue ---
from queue import Queue as ThreadSafeQueue, Empty as QueueEmpty  # Rename to avoid confusion, import Empty
@@ -34,6 +36,10 @@ all_discovered_mcp_tools: list[dict] = []
exit_stack = AsyncExitStack()
# Stores loaded persona data (as a string for easy injection into prompt)
wolfhart_persona_details: str | None = None
# --- Conversation History ---
# Store tuples of (timestamp, speaker_type, speaker_name, message_content)
# speaker_type can be 'user' or 'bot'
conversation_history = collections.deque(maxlen=50)  # Store last 50 messages (user+bot) with timestamps
# --- Use standard thread-safe queues ---
trigger_queue: ThreadSafeQueue = ThreadSafeQueue()  # UI Thread -> Main Loop
command_queue: ThreadSafeQueue = ThreadSafeQueue()  # Main Loop -> UI Thread
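`deque(maxlen=...)` gives the bounded history for free: once the cap is reached, each append silently evicts the oldest entry. A quick illustration with a small cap (the real code uses 50):

```python
import collections

history = collections.deque(maxlen=3)
for n in range(5):
    # Same tuple shape as conversation_history: (timestamp, type, name, message)
    history.append(("ts", "user", "Alice", f"msg {n}"))

# Only the three newest entries survive; "msg 0" and "msg 1" were evicted.
print([entry[3] for entry in history])  # ['msg 2', 'msg 3', 'msg 4']
```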
@@ -120,6 +126,38 @@ def keyboard_listener():
# --- End Keyboard Shortcut Handlers ---


# --- Chat Logging Function ---
def log_chat_interaction(user_name: str, user_message: str, bot_name: str, bot_message: str):
    """Logs the chat interaction to a date-stamped file if enabled."""
    if not config.ENABLE_CHAT_LOGGING:
        return

    try:
        # Ensure log directory exists
        log_dir = config.LOG_DIR
        os.makedirs(log_dir, exist_ok=True)

        # Get current date for filename
        today_date = datetime.date.today().strftime("%Y-%m-%d")
        log_file_path = os.path.join(log_dir, f"{today_date}.log")

        # Get current timestamp for log entry
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

        # Format log entry
        log_entry = f"[{timestamp}] User ({user_name}): {user_message}\n"
        log_entry += f"[{timestamp}] Bot ({bot_name}): {bot_message}\n"
        log_entry += "---\n"  # Separator

        # Append to log file
        with open(log_file_path, "a", encoding="utf-8") as f:
            f.write(log_entry)

    except Exception as e:
        print(f"Error writing to chat log: {e}")
# --- End Chat Logging Function ---
# --- Cleanup Function ---
async def shutdown():
    """Gracefully closes connections and stops monitoring task."""

@@ -388,11 +426,19 @@ async def run_main_with_exit_stack():
                print(f"Error putting resume command in queue: {q_err}")
                continue

            # --- Add user message to history ---
            timestamp = datetime.datetime.now()  # Get current timestamp
            conversation_history.append((timestamp, 'user', sender_name, bubble_text))
            print(f"Added user message from {sender_name} to history at {timestamp}.")
            # --- End Add user message ---

            print(f"\n{config.PERSONA_NAME} is thinking...")
            try:
                # Get LLM response (now returns a dictionary)
                # --- Pass history and current sender name ---
                bot_response_data = await llm_interaction.get_llm_response(
                    user_input=f"Message from {sender_name}: {bubble_text}",  # Provide context
                    current_sender_name=sender_name,  # Pass current sender
                    history=list(conversation_history),  # Pass a copy of the history
                    mcp_sessions=active_mcp_sessions,
                    available_mcp_tools=all_discovered_mcp_tools,
                    persona_details=wolfhart_persona_details
@@ -480,6 +526,21 @@ async def run_main_with_exit_stack():

                # Only send to the game (via command queue) when there is a valid response
                if bot_dialogue and valid_response:
                    # --- Add bot response to history ---
                    timestamp = datetime.datetime.now()  # Get current timestamp
                    conversation_history.append((timestamp, 'bot', config.PERSONA_NAME, bot_dialogue))
                    print(f"Added bot response to history at {timestamp}.")
                    # --- End Add bot response ---

                    # --- Log the interaction ---
                    log_chat_interaction(
                        user_name=sender_name,
                        user_message=bubble_text,
                        bot_name=config.PERSONA_NAME,
                        bot_message=bot_dialogue
                    )
                    # --- End Log interaction ---

                    print("Sending 'send_reply' command to UI thread...")
                    command_to_send = {'action': 'send_reply', 'text': bot_dialogue}
                    try:

@@ -490,6 +551,14 @@ async def run_main_with_exit_stack():
                        print(f"Error putting command in queue: {q_err}")
                else:
                    print("Not sending response: Invalid or empty dialogue content.")
                    # --- Log failed interaction attempt (optional) ---
                    # log_chat_interaction(
                    #     user_name=sender_name,
                    #     user_message=bubble_text,
                    #     bot_name=config.PERSONA_NAME,
                    #     bot_message="<No valid response generated>"
                    # )
                    # --- End Log failed attempt ---

            except Exception as e:
                print(f"\nError processing trigger or sending response: {e}")
Binary file not shown. Before: 4.2 KiB, After: 3.8 KiB
Binary file not shown. Before: 4.1 KiB, After: 3.2 KiB