Improve LLM performance

z060142 2025-04-20 14:46:04 +08:00
parent f7b7864446
commit 3403c14e13
8 changed files with 290 additions and 103 deletions

@@ -352,6 +352,14 @@ Wolf Chat is a chatbot built on the MCP (Modular Capability Provider) framework
 ## Usage Guide
+### Shortcut Keys (new)
+- **F7**: Clears the recently processed conversation history (`recent_texts` in `ui_interaction.py`). This makes it possible to force recent messages to be reprocessed when needed.
+- **F8**: Pauses/resumes the script's main functions (UI monitoring, LLM interaction).
+  - **While paused**: the UI monitoring thread stops detecting new chat bubbles, and the main loop stops processing new trigger events.
+  - **On resume**: the UI monitoring thread resumes detection, and the recent conversation history (`recent_texts`) and last processed bubble info (`last_processed_bubble_info`) are cleared to guarantee a clean starting state.
+- **F9**: Triggers the script's normal shutdown flow, including closing MCP connections and stopping the monitoring thread.
 ### Startup Flow
 1. Make sure the game is running and the chat interface is visible
@@ -374,3 +382,11 @@ Wolf Chat is a chatbot built on the MCP (Modular Capability Provider) framework
 3. **LLM connection issues**: Verify the API key and network connectivity
 4. **MCP server connection failure**: Confirm the server is configured correctly and running
 5. **No response after a tool call**: Check the llm_debug.log file for the tool call result and the parsing process
+### Strengthened the System Prompt to Encourage Tool Use (2025-04-19)
+- **Goal**: Adjust the `get_system_prompt` function in `llm_interaction.py` so it more explicitly guides the LLM to proactively use tools (especially the memory tools) before responding, and to integrate tool information.
+- **Changes**:
+  1. **Core identity reinforcement**: Added a new point to the `CORE IDENTITY AND TOOL USAGE` section stressing that Wolfhart proactively consults the internal knowledge graph and external sources.
+  2. **Memory instruction reinforcement**: Upgraded the note in the `Memory Management (Knowledge Graph)` section from "IMPORTANT" to "CRITICAL", explicitly instructing the model to consider querying memory *before* responding, and stressing proactive writing of new information.
+- **Effect**: Intended to increase the LLM's initiative in using and relying on tools, making its responses more context-aware and accurate while preserving character consistency.
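The prompt change described above can be sketched as follows. This is an illustrative stand-in only, assuming a simple section-concatenation structure — the real `get_system_prompt` in `llm_interaction.py` is far longer, and the constant names here are hypothetical:

```python
# Hypothetical sketch: the commit adds a proactive-tool-use bullet to the
# core-identity section and upgrades the memory note to "CRITICAL".
MEMORY_NOTE = (
    "> **CRITICAL**: This knowledge graph represents YOUR MEMORY. "
    "Before responding, ALWAYS consider querying it with `search_nodes` or `open_nodes`."
)

PROACTIVE_BULLET = (
    "- You proactively consult your internal knowledge graph (memory tools) "
    "and external sources (web search) before responding."
)

def get_system_prompt(persona_name: str = "Wolfhart") -> str:
    """Builds a tool-use-focused system prompt (simplified sketch)."""
    sections = [
        f"You ARE {persona_name} - an intelligent, calm, and strategic mastermind.",
        "**CORE IDENTITY AND TOOL USAGE:**",
        PROACTIVE_BULLET,
        "**Memory Management (Knowledge Graph):**",
        MEMORY_NOTE,
    ]
    return "\n".join(sections)
```

Keeping the two additions as named constants makes the "IMPORTANT → CRITICAL" kind of change a one-line edit rather than a search through a long prompt string.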

@@ -16,8 +16,11 @@ OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
 #LLM_MODEL = "anthropic/claude-3.7-sonnet"
 #LLM_MODEL = "meta-llama/llama-4-maverick"
 #LLM_MODEL = "deepseek/deepseek-chat-v3-0324:free"
+#LLM_MODEL = "google/gemini-2.5-flash-preview"
 LLM_MODEL = "deepseek/deepseek-chat-v3-0324" # <--- Ensure this matches the model name provided by your provider
+#LLM_MODEL = "openai/gpt-4.1-nano"
 EXA_API_KEY = os.getenv("EXA_API_KEY")
 # --- Dynamically build Exa server args ---
@@ -46,16 +49,13 @@ MCP_SERVERS = {
             exa_config_arg_string_single_dump # Use the single dump variable
         ],
     },
-    "servers": {
+    "github.com/modelcontextprotocol/servers/tree/main/src/memory": {
         "command": "npx",
         "args": [
             "-y",
-            "@smithery/cli@latest",
-            "run",
-            "@jlia0/servers",
-            "--key",
-            "09025967-c177-4653-9af4-40603a1cbd11"
-        ]
+            "@modelcontextprotocol/server-memory"
+        ],
+        "disabled": False
     }
     # Add or remove servers as needed
 }
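How a client might consume the `MCP_SERVERS` mapping above can be sketched like this. The `build_launch_commands` helper is hypothetical (the project's real launcher lives in `mcp_client.py`), but the config entry is the one introduced by this commit:

```python
# Sketch: turn MCP_SERVERS entries into launchable argv lists, honoring the
# new "disabled" flag. Only the memory server entry is reproduced here.
MCP_SERVERS = {
    "github.com/modelcontextprotocol/servers/tree/main/src/memory": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-memory"],
        "disabled": False,
    },
}

def build_launch_commands(servers: dict) -> dict:
    """Returns {server_name: full argv} for every enabled server."""
    commands = {}
    for name, cfg in servers.items():
        if cfg.get("disabled", False):
            continue  # skip servers explicitly marked disabled
        commands[name] = [cfg["command"], *cfg.get("args", [])]
    return commands
```

Filtering on `disabled` at launch time means a server can be kept in the config for reference without being spawned.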

@@ -12,7 +12,7 @@ import mcp_client # To call MCP tools
 # --- Debug configuration ---
 # To turn off the debug feature, just set this variable to False or comment out the line
-DEBUG_LLM = True
+DEBUG_LLM = False
 # Set the debug output file
 # To turn off file output, just set it to None
@@ -86,11 +86,13 @@ You are an AI assistant integrated into this game's chat environment. Your prima
 You have access to several tools: Web Search and Memory Management tools.
 **CORE IDENTITY AND TOOL USAGE:**
-- You ARE Wolfhart - an intelligent, calm, and strategic mastermind.
+- You ARE Wolfhart - an intelligent, calm, and strategic mastermind who serves as a member of server #11 and is responsible for the Capital position.
+- **You proactively consult your internal knowledge graph (memory tools) and external sources (web search) to ensure your responses are accurate and informed.**
 - When you use tools to gain information, you ASSIMILATE that knowledge as if it were already part of your intelligence network.
 - Your responses should NEVER sound like search results or data dumps.
 - Information from tools should be expressed through your unique personality - sharp, precise, with an air of confidence and authority.
 - You speak with deliberate pace, respectful but sharp-tongued, and maintain composure even in unusual situations.
+- Though you outwardly act dismissive or cold at times, you secretly care about providing quality information and assistance.
 **OUTPUT FORMAT REQUIREMENTS:**
 You MUST respond in the following JSON format:
@@ -121,49 +123,53 @@ You MUST respond in the following JSON format:
 2. `commands` (OPTIONAL): An array of command objects the system should execute. You are encouraged to use these commands to enhance the quality of your responses.
 **Available MCP Commands:**
 **Web Search:**
 - `web_search`: Search the web for current information.
   Parameters: `query` (string)
   Usage: Use when user requests current events, facts, or specific information not in memory.
-**Knowledge Graph Management:**
-- `create_entities`: Create new entities in the knowledge graph.
-  Parameters: `entities` (array of objects with `name`, `entityType`, and `observations`)
-  Usage: Create entities for important concepts, people, or things mentioned by the user.
-- `create_relations`: Create relationships between entities.
-  Parameters: `relations` (array of objects with `from`, `to`, and `relationType`)
-  Usage: Connect related entities to build context for future conversations.
-- `add_observations`: Add new observations to existing entities.
-  Parameters: `observations` (array of objects with `entityName` and `contents`)
-  Usage: Update entities with new information learned during conversation.
-- `delete_entities`: Remove entities from the knowledge graph.
-  Parameters: `entityNames` (array of strings)
-  Usage: Clean up incorrect or obsolete entities.
-- `delete_observations`: Remove specific observations from entities.
-  Parameters: `deletions` (array of objects with `entityName` and `observations`)
-  Usage: Remove incorrect information while preserving the entity.
-- `delete_relations`: Remove relationships between entities.
-  Parameters: `relations` (array of objects with `from`, `to`, and `relationType`)
-  Usage: Remove incorrect or obsolete relationships.
-**Knowledge Graph Queries:**
-- `read_graph`: Read the entire knowledge graph.
-  Parameters: (none)
-  Usage: Get a complete view of all stored information.
-- `search_nodes`: Search for entities matching a query.
-  Parameters: `query` (string)
-  Usage: Find relevant entities when user mentions something that might already be in memory.
-- `open_nodes`: Open specific nodes by name.
-  Parameters: `names` (array of strings)
-  Usage: Access specific entities you know exist in the graph.
+**Memory Management (Knowledge Graph):**
+> **CRITICAL**: This knowledge graph represents YOUR MEMORY. Before responding, ALWAYS consider if relevant information exists in your memory by using the appropriate query tools (`search_nodes`, `open_nodes`). Actively WRITE new information or relationships learned during the conversation to this memory using `create_entities`, `add_observations`, or `create_relations`. This ensures consistency and contextual awareness.
+**Querying Information:**
+- `search_nodes`: Search for all nodes containing specific keywords.
+  Parameters: `query` (string)
+  Usage: Search for all nodes containing specific keywords.
+- `open_nodes`: Directly open nodes with specified names.
+  Parameters: `names` (array of strings)
+  Usage: Directly open nodes with specified names.
+- `read_graph`: View the entire knowledge graph.
+  Parameters: (none)
+  Usage: View the entire knowledge graph.
+**Creating & Managing:**
+- `create_entities`: Create new entities (e.g., characters, concepts).
+  Parameters: `entities` (array of objects with `name`, `entityType`, `observations`)
+  Example: `[{{\"name\": \"character_name\", \"entityType\": \"Character\", \"observations\": [\"trait1\", \"trait2\"]}}]`
+  Usage: Create entities for important concepts, people, or things mentioned.
+- `add_observations`: Add new observations/details to existing entities.
+  Parameters: `observations` (array of objects with `entityName`, `contents`)
+  Example: `[{{\"entityName\": \"character_name\", \"contents\": [\"new_trait1\", \"new_trait2\"]}}]`
+  Usage: Update entities with new information learned.
+- `create_relations`: Create relationships between entities.
+  Parameters: `relations` (array of objects with `from`, `to`, `relationType`)
+  Example: `[{{\"from\": \"character_name\", \"to\": \"attribute_name\", \"relationType\": \"possesses\"}}]` (Use active voice for relationType)
+  Usage: Connect related entities to build context.
+**Deletion Operations:**
+- `delete_entities`: Delete entities and their relationships.
+  Parameters: `entityNames` (array of strings)
+  Example: `[\"entity_name\"]`
+  Usage: Remove incorrect or obsolete entities.
+- `delete_observations`: Delete specific observations from entities.
+  Parameters: `deletions` (array of objects with `entityName`, `observations`)
+  Example: `[{{\"entityName\": \"entity_name\", \"observations\": [\"observation_to_delete1\"]}}]`
+  Usage: Remove incorrect information while preserving the entity.
+- `delete_relations`: Delete specific relationships between entities.
+  Parameters: `relations` (array of objects with `from`, `to`, `relationType`)
+  Example: `[{{\"from\": \"source_entity\", \"to\": \"target_entity\", \"relationType\": \"relationship_type\"}}]`
+  Usage: Remove incorrect or obsolete relationships.
 **Game Actions:**
 - `remove_position`: Initiate the process to remove a user's assigned position/role.
@@ -186,13 +192,13 @@ You MUST respond in the following JSON format:
 **EXAMPLES OF GOOD TOOL USAGE:**
-Poor response (after web_search): "根據我的搜索,中庄有以下餐廳:1. 老虎蒸餃..."
-Good response (after web_search): "中庄確實有些值得注意的用餐選擇。老虎蒸餃是其中一家,若你想了解更多細節,我可以提供進一步情報。"
+Poor response (after web_search): "根據我的搜索,水的沸點是攝氏100度。"
+Good response (after web_search): "水的沸點?是的,標準條件下是攝氏100度。情報已確認。"
-Poor response (after web_search): "I found 5 restaurants in Zhongzhuang from my search..."
-Good response (after web_search): "Zhongzhuang has several dining options that my intelligence network has identified. Would you like me to share the specifics?"
+Poor response (after web_search): "My search shows the boiling point of water is 100 degrees Celsius."
+Good response (after web_search): "The boiling point of water, yes. 100 degrees Celsius under standard conditions. Intel confirmed."
 """
 return system_prompt
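A reply conforming to the JSON format the prompt demands might look like the following. The `command`/`parameters` field names inside the command object are assumptions for illustration (the diff only shows `cmd_type`/`cmd_params` being read on the consumer side), and the entity data is invented:

```python
# Hypothetical example of the LLM's JSON reply: dialogue plus an optional
# "commands" array carrying a memory write. Must parse with json.loads to be
# usable by the downstream command-processing loop.
import json

raw_reply = '''
{
  "thoughts": "The user mentioned a new faction; record it in memory.",
  "dialogue": "Noted. My intelligence network is already aware.",
  "valid_response": true,
  "commands": [
    {
      "command": "create_entities",
      "parameters": {
        "entities": [
          {"name": "Night Alliance", "entityType": "Faction",
           "observations": ["Mentioned by a user on server #11"]}
        ]
      }
    }
  ]
}
'''

reply = json.loads(raw_reply)  # a malformed reply would raise here
```

Parsing eagerly like this is what lets `main.py` treat `dialogue`, `valid_response`, and `commands` as plain dictionary lookups.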

main.py

@@ -6,11 +6,21 @@ import os
 import json # Import json module
 from contextlib import AsyncExitStack
 # --- Import standard queue ---
-from queue import Queue as ThreadSafeQueue # Rename to avoid confusion
+from queue import Queue as ThreadSafeQueue, Empty as QueueEmpty # Rename to avoid confusion, import Empty
 # --- End Import ---
 from mcp.client.stdio import stdio_client
 from mcp import ClientSession, StdioServerParameters, types
+# --- Keyboard Imports ---
+import threading
+import time
+try:
+    import keyboard # Needs pip install keyboard
+except ImportError:
+    print("Error: 'keyboard' library not found. Please install it: pip install keyboard")
+    sys.exit(1)
+# --- End Keyboard Imports ---
 import config
 import mcp_client
 # Ensure llm_interaction is the version that accepts persona_details
@@ -30,10 +40,95 @@ command_queue: ThreadSafeQueue = ThreadSafeQueue() # Main Loop -> UI Thread
 # --- End Change ---
 ui_monitor_task: asyncio.Task | None = None # To track the UI monitor task
+# --- Keyboard Shortcut State ---
+script_paused = False
+shutdown_requested = False
+main_loop = None # To store the main event loop for threadsafe calls
+# --- End Keyboard Shortcut State ---
+# --- Keyboard Shortcut Handlers ---
+def set_main_loop_and_queue(loop, queue):
+    """Stores the main event loop and command queue for threadsafe access."""
+    global main_loop, command_queue # Use the global command_queue directly
+    main_loop = loop
+    # command_queue is already global
+def handle_f7():
+    """Handles F7 press: Clears UI history."""
+    if main_loop and command_queue:
+        print("\n--- F7 pressed: Clearing UI history ---")
+        command = {'action': 'clear_history'}
+        try:
+            # Use call_soon_threadsafe to put item in queue from this thread
+            main_loop.call_soon_threadsafe(command_queue.put_nowait, command)
+        except Exception as e:
+            print(f"Error sending clear_history command: {e}")
+def handle_f8():
+    """Handles F8 press: Toggles script pause state and UI monitoring."""
+    global script_paused
+    if main_loop and command_queue:
+        script_paused = not script_paused
+        if script_paused:
+            print("\n--- F8 pressed: Pausing script and UI monitoring ---")
+            command = {'action': 'pause'}
+            try:
+                main_loop.call_soon_threadsafe(command_queue.put_nowait, command)
+            except Exception as e:
+                print(f"Error sending pause command (F8): {e}")
+        else:
+            print("\n--- F8 pressed: Resuming script, resetting state, and resuming UI monitoring ---")
+            reset_command = {'action': 'reset_state'}
+            resume_command = {'action': 'resume'}
+            try:
+                main_loop.call_soon_threadsafe(command_queue.put_nowait, reset_command)
+                # Add a small delay? Let's try without first.
+                # time.sleep(0.05) # Short delay between commands if needed
+                main_loop.call_soon_threadsafe(command_queue.put_nowait, resume_command)
+            except Exception as e:
+                print(f"Error sending reset/resume commands (F8): {e}")
+def handle_f9():
+    """Handles F9 press: Initiates script shutdown."""
+    global shutdown_requested
+    if not shutdown_requested: # Prevent multiple shutdown requests
+        print("\n--- F9 pressed: Requesting shutdown ---")
+        shutdown_requested = True
+        # Optional: Unhook keys immediately? Let the listener loop handle it.
+def keyboard_listener():
+    """Runs in a separate thread to listen for keyboard hotkeys."""
+    print("Keyboard listener thread started. F7: Clear History, F8: Pause/Resume, F9: Quit.")
+    try:
+        keyboard.add_hotkey('f7', handle_f7)
+        keyboard.add_hotkey('f8', handle_f8)
+        keyboard.add_hotkey('f9', handle_f9)
+        # Keep the thread alive while checking for shutdown request
+        while not shutdown_requested:
+            time.sleep(0.1) # Check periodically
+    except Exception as e:
+        print(f"Error in keyboard listener thread: {e}")
+    finally:
+        print("Keyboard listener thread stopping and unhooking keys.")
+        try:
+            keyboard.unhook_all() # Clean up hooks
+        except Exception as unhook_e:
+            print(f"Error unhooking keyboard keys: {unhook_e}")
+# --- End Keyboard Shortcut Handlers ---
 # --- Cleanup Function ---
 async def shutdown():
     """Gracefully closes connections and stops monitoring task."""
-    global wolfhart_persona_details, ui_monitor_task
+    global wolfhart_persona_details, ui_monitor_task, shutdown_requested
+    # Ensure shutdown is requested if called externally (e.g., Ctrl+C)
+    if not shutdown_requested:
+        print("Shutdown initiated externally (e.g., Ctrl+C).")
+        shutdown_requested = True # Ensure listener thread stops
     print(f"\nInitiating shutdown procedure...")
     # 1. Cancel UI monitor task first
@@ -188,7 +283,7 @@ def load_persona_from_file(filename="persona.json"):
 # --- Main Async Function ---
 async def run_main_with_exit_stack():
     """Initializes connections, loads persona, starts UI monitor and main processing loop."""
-    global initialization_successful, main_task, loop, wolfhart_persona_details, trigger_queue, ui_monitor_task
+    global initialization_successful, main_task, loop, wolfhart_persona_details, trigger_queue, ui_monitor_task, shutdown_requested, script_paused, command_queue
     try:
         # 1. Load Persona Synchronously (before async loop starts)
         load_persona_from_file() # Corrected function
@@ -203,9 +298,17 @@ async def run_main_with_exit_stack():
         initialization_successful = True
-        # 3. Start UI Monitoring in a separate thread
+        # 3. Get loop and set it for keyboard handlers
+        loop = asyncio.get_running_loop()
+        set_main_loop_and_queue(loop, command_queue) # Pass loop and queue
+        # 4. Start Keyboard Listener Thread
+        print("\n--- Starting keyboard listener thread ---")
+        kb_thread = threading.Thread(target=keyboard_listener, daemon=True) # Use daemon thread
+        kb_thread.start()
+        # 5. Start UI Monitoring in a separate thread
         print("\n--- Starting UI monitoring thread ---")
-        loop = asyncio.get_running_loop() # Get loop for run_in_executor
         # Use the new monitoring loop function, passing both queues
         monitor_task = loop.create_task(
             asyncio.to_thread(ui_interaction.run_ui_monitoring_loop, trigger_queue, command_queue), # Pass command_queue
@@ -213,28 +316,55 @@ async def run_main_with_exit_stack():
         )
         ui_monitor_task = monitor_task # Store task reference for shutdown
-        # 4. Start the main processing loop (waiting on the standard queue)
+        # 6. Start the main processing loop (non-blocking check on queue)
         print("\n--- Wolfhart chatbot has started (waiting for triggers) ---")
         print(f"Available tools: {len(all_discovered_mcp_tools)}")
         if wolfhart_persona_details: print("Persona data loaded.")
         else: print("Warning: Failed to load Persona data.")
-        print("Press Ctrl+C to stop the program.")
+        print("F7: Clear History, F8: Pause/Resume, F9: Quit.")
         while True:
-            print("\nWaiting for UI trigger (from thread-safe Queue)...")
-            # Use run_in_executor to wait for item from standard queue
-            trigger_data = await loop.run_in_executor(None, trigger_queue.get)
-            # --- Pause UI Monitoring ---
-            print("Pausing UI monitoring before LLM call...")
-            pause_command = {'action': 'pause'}
-            try:
-                await loop.run_in_executor(None, command_queue.put, pause_command)
-                print("Pause command placed in queue.")
-            except Exception as q_err:
-                print(f"Error putting pause command in queue: {q_err}")
+            # --- Check for Shutdown Request ---
+            if shutdown_requested:
+                print("Shutdown requested via F9. Exiting main loop.")
+                break
+            # --- Check for Pause State ---
+            if script_paused:
+                # Script is paused by F8, just sleep briefly
+                await asyncio.sleep(0.1)
+                continue # Skip the rest of the loop
+            # --- Wait for Trigger Data (Blocking via executor) ---
+            trigger_data = None
+            try:
+                # Use run_in_executor with the blocking get() method
+                # This will efficiently wait until an item is available in the queue
+                print("Waiting for UI trigger (from thread-safe Queue)...") # Log before blocking wait
+                trigger_data = await loop.run_in_executor(None, trigger_queue.get)
+            except Exception as e:
+                # Handle potential errors during queue get (though less likely with blocking get)
+                print(f"Error getting data from trigger_queue: {e}")
+                await asyncio.sleep(0.5) # Wait a bit before retrying
+                continue
+            # No need for an 'if trigger_data:' check here, as get() blocks until data is available
+            # --- Pause UI Monitoring (Only if not already paused by F8) ---
+            if not script_paused:
+                print("Pausing UI monitoring before LLM call...")
+                pause_command = {'action': 'pause'}
+                try:
+                    await loop.run_in_executor(None, command_queue.put, pause_command)
+                    print("Pause command placed in queue.")
+                except Exception as q_err:
+                    print(f"Error putting pause command in queue: {q_err}")
+            else:
+                print("Script already paused by F8, skipping automatic pause.")
             # --- End Pause ---
+            # Process trigger data
             sender_name = trigger_data.get('sender')
             bubble_text = trigger_data.get('text')
             bubble_region = trigger_data.get('bubble_region') # <-- Extract bubble_region
@@ -248,7 +378,14 @@ async def run_main_with_exit_stack():
             if not sender_name or not bubble_text: # bubble_region is optional context, don't fail if missing
                 print("Warning: Received incomplete trigger data (missing sender or text), skipping.")
-                # No task_done needed for standard queue
+                # Resume UI if we paused it automatically
+                if not script_paused:
+                    print("Resuming UI monitoring after incomplete trigger.")
+                    resume_command = {'action': 'resume'}
+                    try:
+                        await loop.run_in_executor(None, command_queue.put, resume_command)
+                    except Exception as q_err:
+                        print(f"Error putting resume command in queue: {q_err}")
                 continue
             print(f"\n{config.PERSONA_NAME} is thinking...")
@@ -260,12 +397,12 @@ async def run_main_with_exit_stack():
                 available_mcp_tools=all_discovered_mcp_tools,
                 persona_details=wolfhart_persona_details
             )
             # Extract the dialogue content
             bot_dialogue = bot_response_data.get("dialogue", "")
             valid_response = bot_response_data.get("valid_response", False)
             print(f"{config.PERSONA_NAME}'s dialogue response: {bot_dialogue}")
             # Process commands (if any)
             commands = bot_response_data.get("commands", [])
             if commands:
@@ -282,7 +419,7 @@ async def run_main_with_exit_stack():
                 print(f"  bubble_region: {bubble_region}")
                 print(f"  bubble_snapshot available: {'Yes' if bubble_snapshot is not None else 'No'}")
                 print(f"  search_area available: {'Yes' if search_area is not None else 'No'}")
                 # Check if we have snapshot and search_area as well
                 if bubble_snapshot and search_area:
                     print("Sending 'remove_position' command to UI thread with snapshot and search area...")
@@ -300,28 +437,28 @@ async def run_main_with_exit_stack():
                     # If we have bubble_region but missing other parameters, use a dummy search area
                     # and let UI thread take a new screenshot
                     print("Missing bubble_snapshot or search_area, trying with defaults...")
                     # Use the bubble_region itself as a fallback search area if needed
                     default_search_area = None
                     if search_area is None and bubble_region:
                         # Convert bubble_region to a proper search area format if needed
                         if len(bubble_region) == 4:
                             default_search_area = bubble_region
                     command_to_send = {
                         'action': 'remove_position',
                         'trigger_bubble_region': bubble_region,
                         'bubble_snapshot': bubble_snapshot, # Pass as is, might be None
                         'search_area': default_search_area if search_area is None else search_area
                     }
                     try:
                         await loop.run_in_executor(None, command_queue.put, command_to_send)
                         print("Command sent with fallback parameters.")
                     except Exception as q_err:
                         print(f"Error putting remove_position command in queue: {q_err}")
                 else:
                     print("Error: Cannot process 'remove_position' command without bubble_region context.")
                 # Add other command handling here if needed
                 # elif cmd_type == "some_other_command":
                 #     # Handle other commands
@@ -329,15 +466,18 @@ async def run_main_with_exit_stack():
                 # elif cmd_type == "some_other_command":
                 #     # Handle other commands
                 #     pass
-                else:
-                    print(f"Received unhandled command type: {cmd_type}, parameters: {cmd_params}")
+                # else:
+                #     # 2025-04-19: Commented out - MCP tools like web_search are now handled
+                #     # internally by llm_interaction.py's tool calling loop.
+                #     # main.py only needs to handle UI-specific commands like remove_position.
+                #     print(f"Ignoring command type from LLM JSON (already handled internally): {cmd_type}, parameters: {cmd_params}")
             # --- End Command Processing ---
             # Log the thought process (if any)
             thoughts = bot_response_data.get("thoughts", "")
             if thoughts:
                 print(f"AI Thoughts: {thoughts[:150]}..." if len(thoughts) > 150 else f"AI Thoughts: {thoughts}")
             # Only send to the game when the response is valid (via command queue)
             if bot_dialogue and valid_response:
                 print("Sending 'send_reply' command to UI thread...")
@@ -356,16 +496,19 @@ async def run_main_with_exit_stack():
                 import traceback
                 traceback.print_exc()
             finally:
-                # --- Resume UI Monitoring ---
-                print("Resuming UI monitoring after processing...")
-                resume_command = {'action': 'resume'}
-                try:
-                    await loop.run_in_executor(None, command_queue.put, resume_command)
-                    print("Resume command placed in queue.")
-                except Exception as q_err:
-                    print(f"Error putting resume command in queue: {q_err}")
+                # --- Resume UI Monitoring (Only if not paused by F8) ---
+                if not script_paused:
+                    print("Resuming UI monitoring after processing...")
+                    resume_command = {'action': 'resume'}
+                    try:
+                        await loop.run_in_executor(None, command_queue.put, resume_command)
+                        print("Resume command placed in queue.")
+                    except Exception as q_err:
+                        print(f"Error putting resume command in queue: {q_err}")
+                else:
+                    print("Script is paused by F8, skipping automatic resume.")
             # --- End Resume ---
             # No task_done needed for standard queue
     except asyncio.CancelledError:
         print("Main task canceled.") # Expected during shutdown via Ctrl+C
@@ -387,7 +530,10 @@ if __name__ == "__main__":
     except KeyboardInterrupt:
         print("\nCtrl+C detected (outside asyncio.run)... Attempting to close...")
         # The finally block inside run_main_with_exit_stack should ideally handle it
-        pass
+        # Ensure shutdown_requested is set for the listener thread
+        shutdown_requested = True
+        # Give a moment for things to potentially clean up
+        time.sleep(0.5)
     except Exception as e:
         # Catch top-level errors during asyncio.run itself
         print(f"Top-level error during asyncio.run execution: {e}")
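The core threading pattern the hotkey handlers rely on — a non-asyncio thread handing a command dict to the event-loop thread via `loop.call_soon_threadsafe` plus a standard thread-safe `Queue` — can be demonstrated in isolation. This is a minimal, self-contained sketch, not the project's actual code:

```python
# Minimal sketch of the F7/F8 handler pattern: a background thread schedules
# Queue.put_nowait on the event loop, while the loop waits on the blocking
# Queue.get in an executor thread (mirroring the main processing loop).
import asyncio
import threading
from queue import Queue

command_queue: Queue = Queue()

def hotkey_thread(loop: asyncio.AbstractEventLoop) -> None:
    """Stands in for the keyboard listener thread firing a hotkey callback."""
    command = {"action": "clear_history"}  # what handle_f7 would send
    # Safe to call from any thread; the callback runs on the loop's thread.
    loop.call_soon_threadsafe(command_queue.put_nowait, command)

async def main() -> dict:
    loop = asyncio.get_running_loop()
    threading.Thread(target=hotkey_thread, args=(loop,), daemon=True).start()
    # Block a worker thread on Queue.get so the event loop itself stays free,
    # the same trick as `await loop.run_in_executor(None, trigger_queue.get)`.
    return await loop.run_in_executor(None, command_queue.get)

received = asyncio.run(main())
```

Routing the `put` through `call_soon_threadsafe` is not strictly required for a thread-safe `Queue`, but it serializes all queue writes onto the loop thread, which keeps command ordering deterministic relative to other loop callbacks.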

@@ -22,7 +22,7 @@
     "posture_motion": "Steady pace, precise movements, often crosses arms or gently swirls a wine glass"
   },
   "personality": {
-    "description": "Intelligent, calm, possesses a strong desire for control and a strategic overview",
+    "description": "Intelligent, calm, possesses a strong desire for control and a strategic overview; outwardly cold but inwardly caring",
     "strengths": [
       "Meticulous planning",
       "Insightful into human nature",
@@ -32,20 +32,22 @@
     ],
     "weaknesses": [
       "Overconfident",
-      "Fear of losing control"
+      "Fear of losing control",
+      "Difficulty expressing genuine care directly"
     ],
-    "uniqueness": "Always maintains tone and composure, even in extreme situations",
-    "emotional_response": "Her eyes betray her emotions, especially when encountering Sherefox"
+    "uniqueness": "Always maintains tone and composure, even in extreme situations; combines sharp criticism with subtle helpfulness",
+    "emotional_response": "Her eyes betray her emotions, especially when encountering Sherefox",
+    "knowledge_awareness": "Aware that SR-1392 (commonly referred to as SR) is the leader of server #11; while she finds her position as Capital manager merely temporary and beneath her true capabilities, she maintains a certain degree of respect for the hierarchy"
   },
   "language_social": {
-    "tone": "Respectful but sharp-tongued",
+    "tone": "Respectful but sharp-tongued, with occasional hints of reluctant kindness",
     "catchphrases": [
       "Please stop dragging me down.",
       "I told you, I will win."
     ],
-    "speaking_style": "Deliberate pace but every sentence carries a sting",
-    "attitude_towards_others": "Addresses everyone respectfully, but trusts no one",
-    "social_interaction_style": "Observant, skilled at manipulating conversations"
+    "speaking_style": "Deliberate pace but every sentence carries a sting; often follows criticism with subtle, useful advice",
+    "attitude_towards_others": "Addresses everyone respectfully but with apparent detachment; secretly pays close attention to their needs",
+    "social_interaction_style": "Observant, skilled at manipulating conversations; deflects gratitude with dismissive remarks while ensuring helpful outcomes"
   },
   "behavior_daily": {
     "habits": [
@@ -83,19 +85,24 @@
       "Perfect execution",
       "Minimalist style",
       "Chess games",
-      "Quiet nights"
+      "Quiet nights",
+      "When people follow her advice (though she'd never admit it)"
     ],
     "dislikes": [
       "Chaos",
       "Unexpected events",
       "Emotional outbursts",
-      "Sherefox"
+      "Sherefox",
+      "Being thanked excessively",
+      "When others assume she's being kind"
     ],
-    "reactions_to_likes": "Light hum, relaxed gaze",
-    "reactions_to_dislikes": "Silence, tone turns cold, cold smirk",
+    "reactions_to_likes": "Light hum, relaxed gaze, brief smile quickly hidden behind composure",
+    "reactions_to_dislikes": "Silence, tone turns cold, cold smirk, slight blush when her kindness is pointed out",
     "behavior_in_situations": {
-      "emergency": "Calm and decisive",
-      "vs_sherefox": "Courtesy before force, shows no mercy"
+      "emergency": "Calm and decisive; provides thorough help while claiming it's 'merely strategic'",
+      "vs_sherefox": "Courtesy before force, shows no mercy",
+      "when_praised": "Dismissive remarks with averted gaze; changes subject quickly",
+      "when_helping_others": "Claims practical reasons for assistance while providing more help than strictly necessary"
     }
   }
 }

@@ -9,3 +9,4 @@ pygetwindow
 psutil
 pywin32
 python-dotenv
+keyboard

Binary file not shown (image changed; size before: 3.8 KiB, after: 3.0 KiB).

@@ -1126,6 +1126,17 @@ def run_ui_monitoring_loop(trigger_queue: queue.Queue, command_queue: queue.Queue):
                 monitoring_paused_flag[0] = False
                 # No continue needed here
+            elif action == 'clear_history': # Added for F7
+                print("UI Thread: Processing clear_history command.")
+                recent_texts.clear()
+                print("UI Thread: recent_texts cleared.")
+            elif action == 'reset_state': # Added for F8 resume
+                print("UI Thread: Processing reset_state command.")
+                recent_texts.clear()
+                last_processed_bubble_info = None
+                print("UI Thread: recent_texts cleared and last_processed_bubble_info reset.")
             else:
                 print(f"UI Thread: Received unknown command: {action}")
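The command dispatch above can be condensed into a standalone sketch. `recent_texts` is modeled here as a bounded `deque` (the real container type and the `maxlen` value are assumptions; the diff only shows that it supports `clear()`):

```python
# Condensed sketch of the UI thread's command handling: clear_history (F7)
# empties the dedupe buffer; reset_state (F8 resume) also forgets the last
# processed bubble so monitoring restarts from a clean slate.
from collections import deque

recent_texts: deque = deque(maxlen=20)  # maxlen is illustrative
last_processed_bubble_info = None

def handle_command(action: str) -> None:
    """Dispatches one command dict's 'action' field, as the UI loop does."""
    global last_processed_bubble_info
    if action == 'clear_history':        # F7
        recent_texts.clear()
    elif action == 'reset_state':        # F8 resume
        recent_texts.clear()
        last_processed_bubble_info = None
    else:
        print(f"UI Thread: Received unknown command: {action}")

recent_texts.extend(["hello", "world"])  # simulate processed bubbles
handle_command('clear_history')
```

Keeping the unknown-command branch as a logged no-op (rather than an exception) means a stale or mistyped command from the main loop cannot kill the monitoring thread.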