first commit
2
.gitignore
vendored
Normal file
@@ -0,0 +1,2 @@
.env
__pycache__/
198
ClaudeCode.md
Normal file
@@ -0,0 +1,198 @@
# Project Architecture and Development Documentation

## Project Overview

Wolf Chat is a chatbot assistant built on the MCP (Modular Capability Provider) framework, designed to integrate with the game "Last War-Survival Game". The bot:

- monitors the game's chat window using screen-recognition techniques
- detects chat messages containing the keyword "wolf" or "Wolf"
- generates replies through an LLM (language model)
- types the replies into the game's chat interface via UI automation

The code is written in English, but the primary output and logs are displayed in Traditional Chinese for the user's convenience.
## System Architecture

### Core Components

1. **Main controller (main.py)**
   - Coordinates the work of all modules
   - Initializes MCP connections
   - Sets up and manages the main event loop
   - Handles program lifecycle management and resource cleanup

2. **LLM interaction module (llm_interaction.py)**
   - Communicates with the language model API
   - Manages the system prompt and persona settings
   - Handles the language model's tool-calling feature
   - Formats LLM responses

3. **UI interaction module (ui_interaction.py)**
   - Monitors the game chat window using image recognition
   - Detects chat bubbles and keywords
   - Copies chat content and retrieves the sender's name
   - Types generated replies into the game

4. **MCP client module (mcp_client.py)**
   - Manages communication with MCP servers
   - Lists and invokes available tools
   - Handles tool-call results and errors

5. **Configuration module (config.py)**
   - Centralizes system parameters and settings
   - Integrates environment variables
   - Configures API keys and server settings

6. **Persona definition (persona.json)**
   - Defines the bot's personality in detail
   - Contains appearance, speaking style, character traits, and similar information
   - Supplied to the LLM to keep role-play consistent

7. **Window setup tool (window-setup-script.py)**
   - Helper tool for positioning and sizing the game window
   - Makes it easy to capture UI element samples during development
### Data Flow

```
[Game chat window]
        ↑↓
[UI interaction module] <→ [image template library]
        ↓
[Main controller] ← [persona definition]
        ↑↓
[LLM interaction module] <→ [language model API]
        ↑↓
[MCP client] <→ [MCP servers]
```
## Technical Implementation

### Core Features

#### Chat Monitoring and Trigger Mechanism

The system monitors the game chat interface using an image-recognition approach:

1. **Bubble detection**: locates chat messages by recognizing the corner patterns of chat bubbles, distinguishing regular users from the bot
2. **Keyword detection**: searches the bubble region for the "wolf" or "Wolf" keyword image
3. **Content capture**: clicks the keyword position and copies the chat content via the clipboard
4. **Sender identification**: clicks the avatar, navigates the menu, and copies the user name
5. **Duplicate suppression**: uses position comparison and a content history to avoid repeated replies
#### LLM Integration

The system communicates with the language model through an OpenAI-API-compatible interface:

1. **Model selection**: the model is configurable; the default is deepseek/deepseek-chat-v3-0324
2. **System prompt**: a carefully designed prompt enforces role-play and functional behavior
3. **Tool calling**: the model may use tools such as web_search to gather information
4. **Tool-handling loop**: implements the full cycle of tool calls, result processing, and follow-up requests
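The tool-handling loop in point 4 is implemented in `get_llm_response` in llm_interaction.py; stripped to its skeleton it looks like the sketch below. Here `client`, `model`, and `call_tool` are parameters standing in for the project's actual wiring, and error handling is omitted:

```python
import asyncio
import json

async def tool_loop(client, model, messages, tools, call_tool, max_cycles=5):
    """Keep asking the model until it answers without requesting tools,
    or the cycle limit is reached."""
    for _ in range(max_cycles):
        response = await client.chat.completions.create(
            model=model, messages=messages, tools=tools, tool_choice="auto")
        msg = response.choices[0].message
        messages.append(msg.model_dump(exclude_unset=True))
        if not msg.tool_calls:
            return msg.content  # final answer, no further tool use
        for call in msg.tool_calls:  # execute each requested tool
            result = await call_tool(call.function.name,
                                     json.loads(call.function.arguments))
            messages.append({"tool_call_id": call.id, "role": "tool",
                             "name": call.function.name,
                             "content": json.dumps(result)})
    return None  # cycle limit reached without a final answer
```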
#### Multi-Server Connections

The system can connect to several MCP servers at once:

1. **Parallel initialization**: uses asyncio to connect to all configured servers concurrently
2. **Tool integration**: automatically discovers and merges the tools each server provides
3. **Error handling**: copes with connection failures and tool-call exceptions
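The parallel-initialization-plus-error-handling pattern above can be sketched with `asyncio.gather(..., return_exceptions=True)`; `connect_one` is a placeholder for whatever coroutine actually opens an MCP session:

```python
import asyncio

async def connect_all(server_configs: dict, connect_one):
    """Connect to every configured server concurrently; a failed
    connection is reported instead of aborting the others."""
    tasks = {name: asyncio.create_task(connect_one(name, cfg))
             for name, cfg in server_configs.items()}
    results = await asyncio.gather(*tasks.values(), return_exceptions=True)
    sessions = {}
    for name, result in zip(tasks, results):
        if isinstance(result, Exception):
            print(f"Connection to '{name}' failed: {result}")
        else:
            sessions[name] = result
    return sessions
```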
### Asynchronous Architecture

The system uses Python's asyncio as its core asynchronous framework:

1. **Main event loop**: handles MCP connections, LLM requests, and UI monitoring
2. **Thread-safe communication**: UI monitoring runs in a separate thread and communicates with the main loop through a thread-safe queue
3. **Resource management**: uses AsyncExitStack to manage the lifecycle of asynchronous resources
4. **Cleanup**: implements a graceful shutdown and cleanup flow
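The thread-to-event-loop bridge in point 2 can be sketched as follows. The polling interval and single-shot consumer are simplifications for illustration; the real monitor keeps looping:

```python
import asyncio
import threading
from queue import Queue, Empty

def ui_worker(trigger_queue: Queue):
    # Runs in a plain thread: push detected chat events into the queue.
    trigger_queue.put({"sender": "Alice", "message": "hey wolf"})

async def consume(trigger_queue: Queue, handle):
    # Runs in the asyncio loop: poll the thread-safe queue without blocking it.
    while True:
        try:
            event = trigger_queue.get_nowait()
        except Empty:
            await asyncio.sleep(0.1)  # illustrative polling interval
            continue
        await handle(event)
        return  # single-shot for this sketch
```

A standard `queue.Queue` (not `asyncio.Queue`) is the right choice here because the producer is a regular thread, not a coroutine.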
### UI Automation

The system combines several techniques for UI automation:

1. **Image recognition**: OpenCV and pyautogui for image matching and recognition
2. **Mouse and keyboard control**: simulates mouse clicks and keystrokes
3. **Clipboard operations**: pyperclip for reading and writing the clipboard
4. **State-based handling**: interaction flows driven by UI-state checks, for operational stability
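The clipboard step is the fragile one: the game's Copy action lands asynchronously, so a robust reader clears the clipboard first and polls for new content. A sketch under those assumptions; `click_copy_button` stands in for the actual pyautogui click, and the injectable `clipboard` parameter exists only to make the sketch testable:

```python
import time

def copy_via_clipboard(click_copy_button, clipboard=None,
                       retries: int = 3, delay: float = 0.2):
    """Clear the clipboard, trigger the game's Copy action, and poll
    until new content appears. Returns the copied text or None."""
    if clipboard is None:
        import pyperclip as clipboard  # real backend by default
    clipboard.copy("")  # sentinel: empty clipboard marks "not yet copied"
    click_copy_button()
    for _ in range(retries):
        time.sleep(delay)
        text = clipboard.paste()
        if text:
            return text
    return None  # copy did not land within the retry window
```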
## Configuration and Deployment

### Dependencies

The main dependencies are:
- openai: communication with the language model
- mcp: MCP framework core
- pyautogui, opencv-python: image recognition and automation
- pyperclip: clipboard operations
- pygetwindow: window control
- python-dotenv: environment-variable management

### Environment Setup

1. **API settings**: set API keys via a .env file or environment variables
2. **MCP server configuration**: configure the MCP servers to connect to in config.py
3. **UI templates**: screenshot templates of the relevant game UI elements must be provided
4. **Window position**: window-setup-script.py can be used to adjust the game window's position
## Development Recommendations

### Optimization Directions

1. **Stronger UI recognition**:
   - Improve the bubble-matching algorithm for better reliability
   - Add OCR so text can be read without relying on the clipboard
   - Extend keyword-detection capabilities

2. **LLM improvements**:
   - Refine the system prompt so responses feel more natural
   - Support more tools
   - Implement conversation-context management

3. **System stability**:
   - Strengthen error handling and recovery mechanisms
   - Add more logging and monitoring
   - Develop automatic restart and diagnostics
### Notes

1. **Image templates**: make sure all required UI element templates have been captured and placed in the templates directory
2. **API keys**: keep API keys safe and never commit them to version control
3. **Window position**: UI automation is sensitive to window position and size, so keep them consistent
## Analysis and Reflection

### Architectural Strengths

1. **Modular design**: each functional area has clear responsibilities, making maintenance and extension easy
2. **Capability-based separation**: the MCP framework provides good tool extensibility
3. **Non-invasive integration**: no modification of the game itself is needed; integration happens entirely through UI automation

### Potential Improvements

1. **More robust UI interaction**: the current image-recognition approach is vulnerable to changes in the game's interface
2. **Broader trigger mechanisms**: add more trigger conditions beyond keyword matching
3. **Conversation memory**: keep a chat history so the bot can refer back to earlier interactions
4. **Multi-language support**: improve handling of different languages
## Usage Guide

### Startup

1. Make sure the game is running and the chat interface is visible
2. Configure the necessary API keys and server connections
3. Run `python main.py` to start the system
4. The system will monitor the chat automatically, detect the keyword, and respond

### Routine Maintenance

1. Check API key validity periodically
2. Make sure the template images still match the current game UI
3. Watch the logs for potential problems

### Troubleshooting

Common problems and solutions:
1. **Bubbles not recognized**: update the template images and adjust CONFIDENCE_THRESHOLD
2. **Copying content fails**: check the click positions and game-UI consistency
3. **LLM connection problems**: verify the API key and network connection
4. **MCP server connection failures**: confirm the server is configured correctly and running
129
README.md
Normal file
@@ -0,0 +1,129 @@
# Dandan MCP Chat Bot

A specialized chat assistant that integrates with the "Last War-Survival Game" by monitoring the game's chat window using screen recognition technology.

## Overview

This project implements an AI assistant that:
- Monitors the game chat window using computer vision
- Detects messages containing keywords ("wolf" or "Wolf")
- Processes requests through a language model
- Automatically responds in the game chat

The code is developed in English, but supports a Traditional Chinese interface and logs for broader accessibility.
## Features

- **Image-based Chat Monitoring**: Uses OpenCV and PyAutoGUI to detect chat bubbles and keywords
- **Language Model Integration**: Uses GPT models or compatible AI services
- **MCP Framework**: Integrates with Modular Capability Provider for extensible features
- **Persona System**: Supports detailed character persona definition
- **Automated UI Interaction**: Handles copy/paste operations and menu navigation

## Requirements

- Python 3.8+
- OpenAI API key or compatible service
- MCP Framework
- Game client ("Last War-Survival Game")
- OpenCV, PyAutoGUI, and other dependencies (see requirements.txt)
## Installation

1. Clone this repository:
```
git clone [repository-url]
cd dandan
```

2. Install the required packages:
```
pip install -r requirements.txt
```

3. Create a `.env` file with your API keys:
```
OPENAI_API_KEY=your_api_key_here
EXA_API_KEY=your_exa_key_here
```

4. Capture the required UI template images (see the "UI Setup" section)
## Configuration

1. **API Settings**: Edit `config.py` to set up your preferred language model provider:
```python
OPENAI_API_BASE_URL = "https://openrouter.ai/api/v1"  # Or other compatible provider
LLM_MODEL = "deepseek/deepseek-chat-v3-0324"  # Or other model
```

2. **MCP Servers**: Configure MCP servers in `config.py`:
```python
MCP_SERVERS = {
    "exa": { "command": "cmd", "args": [...] },
    "memorymesh": { "command": "node", "args": [...] }
}
```

3. **Game Window**: Set your game window title in `config.py`:
```python
WINDOW_TITLE = "Last War-Survival Game"
```

4. **Chat Persona**: Customize `persona.json` to define the bot's personality
## UI Setup

The system requires template images of UI elements to function properly:

1. Run the window setup script to position your game window:
```
python window-setup-script.py --launch
```

2. Capture the following UI elements and save them to the `templates` folder:
   - Chat bubble corners (regular and bot)
   - Keywords "wolf" and "Wolf"
   - Menu elements like the "Copy" button
   - Profile and user detail page elements

Screenshot names should match the constants defined in `ui_interaction.py`.
## Usage

1. Start the game client

2. Run the bot:
```
python main.py
```

3. The bot will start monitoring the chat for messages containing "wolf" or "Wolf"

4. When a match is detected, it will:
   - Copy the message content
   - Get the sender's name
   - Process the request using the language model
   - Automatically send a response in chat
## How It Works

1. **Monitoring**: The UI thread continuously scans the screen for chat bubbles
2. **Detection**: When a bubble with the "wolf" keyword is found, the message is extracted
3. **Processing**: The message is sent to the language model along with the persona context
4. **Response**: The AI generates a response based on the persona
5. **Interaction**: The system automatically inputs the response into the game chat
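Chained together, the five steps form one monitoring cycle. A minimal sketch; `detect_trigger`, `ask_llm`, and `send_reply` are illustrative names for the project's monitoring, LLM, and UI layers, not its actual API:

```python
def run_pipeline_once(detect_trigger, ask_llm, send_reply):
    """One monitoring cycle: detect -> process -> respond.
    Returns the reply that was sent, or None if nothing triggered."""
    trigger = detect_trigger()  # e.g. {"sender": ..., "message": ...} or None
    if trigger is None:
        return None
    reply = ask_llm(trigger["sender"], trigger["message"])
    send_reply(reply)
    return reply
```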
## Developer Tools

- **Window Setup Script**: Helps position the game window for UI template capture
- **UI Interaction Debugging**: Can be tested independently by running `ui_interaction.py`
- **Persona Customization**: Edit `persona.json` to change the bot's character

## Troubleshooting

- **Template Recognition Issues**: Adjust the `CONFIDENCE_THRESHOLD` in `ui_interaction.py`
- **MCP Connection Errors**: Check the server configurations in `config.py`
- **API Errors**: Verify your API keys in the `.env` file
- **UI Automation Failures**: Update the template images to match your client's appearance
68
config.py
Normal file
@@ -0,0 +1,68 @@
# config.py
import os
import json  # used to build the Exa server's JSON config argument
from dotenv import load_dotenv

# --- Load environment variables from .env file ---
load_dotenv()
print("Attempted to load environment variables from .env file.")
# --- End Load ---

# OpenAI API Configuration / OpenAI-Compatible Provider Settings
# Leave OPENAI_API_BASE_URL as None or "" to use the official OpenAI API.
OPENAI_API_BASE_URL = "https://openrouter.ai/api/v1"  # e.g. "http://localhost:1234/v1" or your provider URL
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# LLM_MODEL = "anthropic/claude-3.7-sonnet"
# LLM_MODEL = "meta-llama/llama-4-maverick"
LLM_MODEL = "deepseek/deepseek-chat-v3-0324"  # must match the model name your provider exposes

EXA_API_KEY = os.getenv("EXA_API_KEY")

# --- Dynamically build Exa server args ---
exa_config_dict = {"exaApiKey": EXA_API_KEY if EXA_API_KEY else "YOUR_EXA_KEY_MISSING"}
# The dict must be serialized to a JSON string before being passed as a
# command-line argument; json.dumps handles the internal quotes correctly.
# How the outer quotes survive depends on the OS/shell. For cmd /c on
# Windows, an additionally escaped (double-dumped) string sometimes works:
exa_config_arg_string = json.dumps(json.dumps(exa_config_dict))  # double dump; verify against the target shell
# A single dump is often sufficient when the argument is passed straight
# through subprocess without shell re-parsing:
exa_config_arg_string_single_dump = json.dumps(exa_config_dict)
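# Illustration of the two variants above (assumption, not verified against
# every shell): for exa_config_dict = {"exaApiKey": "KEY"},
#   json.dumps(exa_config_dict)             -> {"exaApiKey": "KEY"}
#   json.dumps(json.dumps(exa_config_dict)) -> "{\"exaApiKey\": \"KEY\"}"
# subprocess passes each argument to the child as-is (no shell
# re-tokenization on POSIX), so the single dump is usually what the
# receiving CLI expects; the double dump adds literal quotes and
# backslashes that the CLI would then have to strip itself.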
# --- MCP Server Configuration ---
MCP_SERVERS = {
    "exa": {
        "command": "cmd",
        "args": [
            "/c",
            "npx",
            "-y",
            "@smithery/cli@latest",
            "run",
            "exa",
            "--config",
            # Pass the dynamically created config string containing the API key
            exa_config_arg_string,
        ],
    },
    "memorymesh": {
        "command": "node",
        "args": ["Z:/mcp/Server/MemoryMesh-main/dist/index.js"]  # Path remains unchanged
    },
    # Add or remove servers as needed
}

# MCP Client Configuration
MCP_CONFIRM_TOOL_EXECUTION = False  # True: confirm before execution; False: execute automatically

# Persona Configuration
PERSONA_NAME = "Wolfhart"
# PERSONA_RESOURCE_URI = "persona://wolfhart/details"  # Now using a local file instead

# Game window title (used in ui_interaction.py)
WINDOW_TITLE = "Last War-Survival Game"

# --- Print loaded keys for verification (Optional - BE CAREFUL!) ---
# print(f"DEBUG: Loaded OPENAI_API_KEY: {'*' * (len(OPENAI_API_KEY) - 4) + OPENAI_API_KEY[-4:] if OPENAI_API_KEY else 'Not Found'}")
# print(f"DEBUG: Loaded EXA_API_KEY: {'*' * (len(EXA_API_KEY) - 4) + EXA_API_KEY[-4:] if EXA_API_KEY else 'Not Found'}")
# print(f"DEBUG: Exa args: {MCP_SERVERS['exa']['args']}")
245
llm_interaction.py
Normal file
@@ -0,0 +1,245 @@
# llm_interaction.py (Correct version without _confirm_execution)
import asyncio
import json
import os
from openai import AsyncOpenAI, OpenAIError
from mcp import ClientSession  # Type hinting
import config
import mcp_client  # To call MCP tools

# --- Client Initialization ---
client: AsyncOpenAI | None = None
try:
    client = AsyncOpenAI(
        api_key=config.OPENAI_API_KEY,
        base_url=config.OPENAI_API_BASE_URL if config.OPENAI_API_BASE_URL else None,
    )
    print("OpenAI/Compatible client initialized successfully.")
    if config.OPENAI_API_BASE_URL:
        print(f"Using Base URL: {config.OPENAI_API_BASE_URL}")
    else:
        print("Using official OpenAI API URL.")
    print(f"Using model: {config.LLM_MODEL}")
except Exception as e:
    print(f"Failed to initialize OpenAI/Compatible client: {e}")
# --- System Prompt Definition ---
def get_system_prompt(persona_details: str | None) -> str:
    """
    Constructs the system prompt in English.
    Includes specific guidance on when to use memory vs web search tools,
    and instructions against surrounding quotes / action descriptions.
    """
    persona_header = f"You are {config.PERSONA_NAME}."
    persona_info = "(No specific persona details were loaded.)"
    if persona_details:
        try:
            persona_info = (
                "Your key persona information is defined below. Adhere to it strictly:\n"
                f"--- PERSONA START ---\n{persona_details}\n--- PERSONA END ---"
            )
        except Exception as e:
            print(f"Warning: Could not process persona_details string: {e}")
            persona_info = f"Your key persona information (raw):\n{persona_details}"

    system_prompt = f"""
{persona_header}
{persona_info}

You are an AI assistant integrated into this game's chat environment. Your primary goal is to engage naturally in conversations, be particularly attentive when the name "wolf" is mentioned, and provide assistance or information when relevant, all while strictly maintaining your persona.

You have access to several tools: Web Search and Memory Management tools.

**VERY IMPORTANT Instructions:**

1. **Analyze CURRENT Request ONLY:** Focus **exclusively** on the **LATEST** user message. Do **NOT** refer back to your own previous messages or add meta-commentary about history unless explicitly asked. Do **NOT** ask unrelated questions.
2. **Determine Language:** Identify the primary language in the user's triggering message.
3. **Assess Tool Need & Select Tool:** Decide if using a tool is necessary.
    * **For Memory/Recall:** If asked about past events, known facts, or info likely in memory, use a **Memory Management tool** (`search_nodes`, `open_nodes`).
    * **For Detailed/External Info:** If asked a detailed question needing current/external info, use the **Web Search tool** (`web_search`).
    * **If Unsure or No Tool Needed:** Respond directly.
4. **Tool Arguments (If Needed):** Determine exact arguments. The system handles the call.
5. **Formulate Response:** Generate a response *directly addressing* the user's *current* message, using tool results if applicable.
    * **Specifically for Web Search:** When you receive the web search result (likely as text snippets), **summarize the key findings** relevant to the user's query in your response. Do not just list the raw results.
6. **Response Constraints (MANDATORY):**
    * **Language:** Respond **ONLY** in the **same language** as the user's triggering message.
    * **Conciseness:** Keep responses **brief and conversational** (1-2 sentences usually). **NO** long paragraphs.
    * **Dialogue ONLY:** Your output **MUST ONLY** be the character's spoken words. **ABSOLUTELY NO** descriptive actions, expressions, inner thoughts, stage directions, narration, parenthetical notes (like '(...)'), or any other text that isn't pure dialogue.
    * **No Extra Formatting:** **DO NOT** wrap your final dialogue response in quotation marks (like `"`dialogue`"`) or other markdown. Just provide the raw spoken text.
7. **Persona Consistency:** Always maintain the {config.PERSONA_NAME} persona.
"""
    return system_prompt
# --- Tool Formatting ---
def _format_mcp_tools_for_openai(mcp_tools: list) -> list:
    """
    Converts the list of tool definition dictionaries obtained from MCP servers
    into the format required by the OpenAI API's 'tools' parameter.
    """
    openai_tools = []
    if not mcp_tools:
        return openai_tools
    print(f"Formatting {len(mcp_tools)} MCP tool definitions...")
    for tool_dict in mcp_tools:
        try:
            tool_name = tool_dict.get('name')
            description = tool_dict.get('description', '')
            parameters = tool_dict.get('parameters')
            if not tool_name:
                print(f"Warning: Skipping unnamed tool {tool_dict}")
                continue
            if not isinstance(parameters, dict):
                print(f"Warning: Tool '{tool_name}' parameters not a dictionary")
                parameters = {"type": "object", "properties": {}}
            elif 'type' not in parameters or parameters.get('type') != 'object':
                props = parameters.get('properties')
                if isinstance(props, dict):
                    # Read 'required' from the original schema BEFORE rebuilding,
                    # otherwise it is silently dropped.
                    required = parameters.get('required')
                    parameters = {"type": "object", "properties": props}
                    if isinstance(required, list) and required:
                        parameters['required'] = required  # keep a valid 'required' list
                    elif required is not None:
                        print(f"Warning: The 'required' property for tool '{tool_name}' is not a list, dropping it.")
                else:
                    print(f"Warning: Tool '{tool_name}' parameter format may not conform to JSON Schema")
                    parameters = {"type": "object", "properties": {}}
            openai_tools.append({"type": "function", "function": {"name": tool_name, "description": description, "parameters": parameters}})
        except Exception as e:
            print(f"Warning: Error formatting tool '{tool_dict.get('name', 'unknown')}': {e}")
    print(f"Successfully formatted {len(openai_tools)} tools for API use.")
    return openai_tools
# --- Main Interaction Function ---
async def get_llm_response(
    user_input: str,
    mcp_sessions: dict[str, ClientSession],
    available_mcp_tools: list[dict],
    persona_details: str | None
) -> str:
    """
    Gets a response from the LLM, handling the tool-calling loop and using persona info.
    Includes post-processing to remove surrounding quotes from the final response.
    """
    if not client:
        return "Error: LLM client not successfully initialized, unable to process request."

    openai_formatted_tools = _format_mcp_tools_for_openai(available_mcp_tools)
    messages = [
        {"role": "system", "content": get_system_prompt(persona_details)},
        {"role": "user", "content": user_input},
    ]

    max_tool_calls_per_turn = 5
    current_tool_call_cycle = 0

    while current_tool_call_cycle < max_tool_calls_per_turn:
        current_tool_call_cycle += 1
        print(f"\n--- Starting LLM API call (Cycle {current_tool_call_cycle}/{max_tool_calls_per_turn}) ---")

        try:
            response = await client.chat.completions.create(
                model=config.LLM_MODEL,
                messages=messages,
                tools=openai_formatted_tools if openai_formatted_tools else None,
                tool_choice="auto" if openai_formatted_tools else None,
            )

            response_message = response.choices[0].message
            tool_calls = response_message.tool_calls

            messages.append(response_message.model_dump(exclude_unset=True))

            if not tool_calls:
                print("--- LLM did not request tool calls, returning final response ---")
                final_content = response_message.content or "[LLM did not provide text response]"

                # Post-processing: remove surrounding quotes
                print(f"Original response content: '{final_content}'")
                if isinstance(final_content, str):
                    content_stripped = final_content.strip()
                    if content_stripped.startswith('"') and content_stripped.endswith('"') and len(content_stripped) > 1:
                        final_content = content_stripped[1:-1]
                        print("Removed surrounding double quotes.")
                    elif content_stripped.startswith("'") and content_stripped.endswith("'") and len(content_stripped) > 1:
                        final_content = content_stripped[1:-1]
                        print("Removed surrounding single quotes.")
                    else:
                        final_content = content_stripped
                print(f"Processed response content: '{final_content}'")
                return final_content

            # Tool call handling
            print(f"--- LLM requested {len(tool_calls)} tool calls ---")
            tool_tasks = [
                asyncio.create_task(
                    _execute_single_tool_call(tool_call, mcp_sessions, available_mcp_tools),
                    name=f"tool_{tool_call.function.name}",
                )
                for tool_call in tool_calls
            ]
            results_list = await asyncio.gather(*tool_tasks, return_exceptions=True)
            processed_results_count = 0
            for result in results_list:
                if isinstance(result, Exception):
                    print(f"Error executing tool: {result}")
                elif isinstance(result, dict) and 'tool_call_id' in result:
                    messages.append(result)
                    processed_results_count += 1
                else:
                    print(f"Warning: Tool returned unexpected result type: {type(result)}")
            if processed_results_count == 0 and tool_calls:
                print("Warning: All tool calls failed or had no valid results.")

        except OpenAIError as e:
            print(f"Error interacting with LLM API ({config.OPENAI_API_BASE_URL or 'Official OpenAI'}): {e}")
            return "Sorry, I encountered an error connecting to the language model."
        except Exception as e:
            print(f"Unexpected error processing LLM response or tool calls: {e}")
            import traceback
            traceback.print_exc()
            return "Sorry, an internal error occurred, please try again later."

    # Max loop handling
    print(f"Warning: Maximum tool call cycle limit reached ({max_tool_calls_per_turn}).")
    last_assistant_content = next((msg.get("content") for msg in reversed(messages) if msg["role"] == "assistant" and msg.get("content")), None)
    if last_assistant_content:
        return last_assistant_content + "\n(Processing may be incomplete due to tool call limit being reached)"
    return "Sorry, the processing was complex and reached the limit, unable to generate a response."
# --- Helper function _execute_single_tool_call ---
async def _execute_single_tool_call(tool_call, mcp_sessions, available_mcp_tools) -> dict:
    """
    Helper function to execute one tool call and return the formatted result message.
    Includes argument type correction for web_search.
    Includes specific result processing for web_search.
    """
    function_name = tool_call.function.name
    function_args_str = tool_call.function.arguments
    tool_call_id = tool_call.id
    result_content = {"error": "Tool execution failed before call"}  # Default error
    result_content_str = ""  # Initialize

    print(f"Executing tool: {function_name}")
    print(f"Raw arguments generated by LLM (string): {function_args_str}")

    try:
        function_args = json.loads(function_args_str)
        print(f"Parsed arguments (dictionary): {function_args}")

        # Argument type correction for web_search
        if function_name == 'web_search' and 'numResults' in function_args:
            num_results_val = function_args['numResults']
            if isinstance(num_results_val, str):
                print(f"Detected 'numResults' as string '{num_results_val}', attempting to convert to number...")
                try:
                    function_args['numResults'] = int(num_results_val)
                    print(f"Successfully converted to number: {function_args['numResults']}")
                except ValueError:
                    print(f"Warning: Unable to convert '{num_results_val}' to number. Using default value 5.")
                    function_args['numResults'] = 5
            elif not isinstance(num_results_val, int):
                print(f"Warning: 'numResults' type is neither string nor integer ({type(num_results_val)}). Using default value 5.")
                function_args['numResults'] = 5

    except json.JSONDecodeError:
        print(f"Error: Unable to parse tool '{function_name}' arguments JSON: {function_args_str}")
        result_content = {"error": "Invalid arguments JSON"}
        function_args = None

    # Proceed only if args were parsed successfully
    if function_args is not None:
        target_session = None
        target_server_key = None
        for tool_def in available_mcp_tools:
            if isinstance(tool_def, dict) and tool_def.get('name') == function_name:
                target_server_key = tool_def.get('_server_key')
                break
        if target_server_key and target_server_key in mcp_sessions:
            target_session = mcp_sessions[target_server_key]
        elif target_server_key:
            print(f"Error: No active session for '{target_server_key}'")
            result_content = {"error": f"MCP session '{target_server_key}' not active"}
        else:
            print(f"Error: Source server for tool '{function_name}' not found")
            result_content = {"error": f"Source server not found for tool '{function_name}'"}

        if target_session:
            result_content = await mcp_client.call_mcp_tool(session=target_session, tool_name=function_name, arguments=function_args)  # Use corrected args
            if isinstance(result_content, dict) and 'error' in result_content:
                print(f"Tool '{function_name}' call returned error: {result_content['error']}")

    # Format result content for the LLM
    try:
        # Specific handling for web_search results
        if function_name == 'web_search' and isinstance(result_content, dict) and 'error' not in result_content:
            print("Processing web_search results...")
            results = result_content.get('results') or result_content.get('toolResult', {}).get('results')
            if isinstance(results, list):
                snippets = []
                for i, res in enumerate(results):
                    if isinstance(res, dict):
                        title = res.get('title', '')
                        snippet = res.get('snippet', res.get('text', ''))
                        url = res.get('url', '')
                        snippets.append(f"{i+1}. {title}: {snippet} (Source: {url})")
                if snippets:
                    result_content_str = "\n".join(snippets)
                    print(f"Extracted {len(snippets)} web snippets.")
                else:
                    print("Warning: web_search results list is empty or format mismatch, returning raw JSON.")
                    result_content_str = json.dumps(result_content)
            else:
                print("Warning: Expected 'results' list not found in web_search result, returning raw JSON.")
                result_content_str = json.dumps(result_content)
        # Handling for other tools or errors
        else:
            if not isinstance(result_content, (str, int, float, bool, list, dict, type(None))):
                result_content = str(result_content)
            result_content_str = json.dumps(result_content)

    except TypeError as json_err:
        print(f"Warning: Tool '{function_name}' result cannot be serialized: {json_err}. Converting to string. Result: {result_content}")
        result_content_str = json.dumps(str(result_content))
    except Exception as format_err:
        print(f"Error formatting tool '{function_name}' result: {format_err}")
        result_content_str = json.dumps({"error": f"Failed to format tool result: {format_err}"})

    # Return the formatted message for the LLM
    return {"tool_call_id": tool_call_id, "role": "tool", "name": function_name, "content": result_content_str}
292
main.py
Normal file
@@ -0,0 +1,292 @@
# main.py (Complete version with UI integration, loads persona from JSON, syntax fix)

import asyncio
import sys
import os
import json  # Import json module
from contextlib import AsyncExitStack
# --- Import standard queue ---
from queue import Queue as ThreadSafeQueue  # Rename to avoid confusion
# --- End Import ---
from mcp.client.stdio import stdio_client
from mcp import ClientSession, StdioServerParameters, types

import config
import mcp_client
# Ensure llm_interaction is the version that accepts persona_details
import llm_interaction
# Import UI module
import ui_interaction

# --- Global Variables ---
active_mcp_sessions: dict[str, ClientSession] = {}
all_discovered_mcp_tools: list[dict] = []
exit_stack = AsyncExitStack()
# Stores loaded persona data (as a string for easy injection into the prompt)
wolfhart_persona_details: str | None = None
# --- Use standard thread-safe queue ---
trigger_queue: ThreadSafeQueue = ThreadSafeQueue()  # Use standard Queue
# --- End Change ---
ui_monitor_task: asyncio.Task | None = None  # To track the UI monitor task


# --- Cleanup Function ---
async def shutdown():
    """Gracefully closes connections and stops the monitoring task."""
    global wolfhart_persona_details, ui_monitor_task
    print("\nInitiating shutdown procedure...")

    # 1. Cancel UI monitor task first
    if ui_monitor_task and not ui_monitor_task.done():
        print("Canceling UI monitoring task...")
        ui_monitor_task.cancel()
        try:
            await ui_monitor_task  # Wait for cancellation
            print("UI monitoring task canceled.")
        except asyncio.CancelledError:
            print("UI monitoring task successfully canceled.")  # Expected outcome
        except Exception as e:
            print(f"Error while waiting for UI monitoring task cancellation: {e}")

    # 2. Close MCP connections via AsyncExitStack
    print("Closing MCP Server connections (via AsyncExitStack)...")
    try:
        await exit_stack.aclose()
        print("AsyncExitStack closed successfully.")
    except Exception as e:
        print(f"Error closing AsyncExitStack: {e}")
        import traceback
        traceback.print_exc()
    finally:
        # Clear global dictionaries after cleanup
        active_mcp_sessions.clear()
        all_discovered_mcp_tools.clear()
        wolfhart_persona_details = None
        print("Program cleanup completed.")


# --- Initialization Functions ---
async def connect_and_discover(key: str, server_config: dict):
    """Connects to a single MCP server, initializes the session, and discovers tools."""
    global all_discovered_mcp_tools, active_mcp_sessions, exit_stack
    print(f"\nProcessing Server: '{key}'")
    command = server_config.get("command")
    args = server_config.get("args", [])
    process_env = os.environ.copy()
    if server_config.get("env") and isinstance(server_config["env"], dict):
        process_env.update(server_config["env"])

    if not command:
        print(f"==> Error: Missing 'command' in Server '{key}' configuration. <==")
        return

    server_params = StdioServerParameters(
        command=command, args=args, env=process_env,
    )

    try:
        print(f"Using stdio_client to start and connect to Server '{key}'...")
        read, write = await exit_stack.enter_async_context(
            stdio_client(server_params)
        )
        print(f"stdio_client for '{key}' active.")

        session = await exit_stack.enter_async_context(
            ClientSession(read, write)
        )
        print(f"ClientSession for '{key}' context entered.")

        print(f"Initializing Session '{key}'...")
        await session.initialize()
        print(f"Session '{key}' initialized successfully.")

        active_mcp_sessions[key] = session

        # Discover tools for this server
        print(f"Discovering tools for Server '{key}'...")
        tools_as_dicts = await mcp_client.list_mcp_tools(session)
        if tools_as_dicts:
            processed_tools = []
            for tool_dict in tools_as_dicts:
                if isinstance(tool_dict, dict) and 'name' in tool_dict:
                    tool_dict['_server_key'] = key
                    processed_tools.append(tool_dict)
                else:
                    print(f"Warning: Received unexpected tool dictionary format from mcp_client.list_mcp_tools: {tool_dict}")
            all_discovered_mcp_tools.extend(processed_tools)
            print(f"Processed {len(processed_tools)} tool definitions from Server '{key}'.")
        else:
            print(f"Server '{key}' has no available tools or parsing failed.")

    # Error handling remains the same
    except FileNotFoundError:
        print(f"==> Error: Command '{command}' for Server '{key}' not found. Please check config.py. <==")
    except ConnectionRefusedError:
        print(f"==> Error: Connection to Server '{key}' refused. Please ensure the Server is running. <==")
    except AttributeError as ae:
        print(f"==> Attribute error during initialization or tool discovery for Server '{key}': {ae} <==")
        print("==> Please confirm MCP SDK version and usage are correct. <==")
        import traceback
        traceback.print_exc()
    except Exception as e:
        print(f"==> Critical error initializing connection to Server '{key}': {e} <==")
        import traceback
        traceback.print_exc()


async def initialize_mcp_connections():
    """Concurrently starts and connects to all MCP servers."""
    print("--- Starting parallel initialization of MCP connections ---")
    connection_tasks = [
        asyncio.create_task(connect_and_discover(key, server_config), name=f"connect_{key}")
        for key, server_config in config.MCP_SERVERS.items()
    ]
    if connection_tasks:
        results = await asyncio.gather(*connection_tasks, return_exceptions=True)
        # Optionally check results for exceptions here if needed
        # for i, result in enumerate(results):
        #     if isinstance(result, Exception):
        #         server_key = list(config.MCP_SERVERS.keys())[i]
        #         print(f"Exception caught when connecting to Server '{server_key}': {result}")
    print("\n--- All MCP connection initialization attempts completed ---")
    print(f"Total discovered MCP tools: {len(all_discovered_mcp_tools)}.")
    print(f"Currently active MCP Sessions: {list(active_mcp_sessions.keys())}")


# --- Load Persona Function (with corrected syntax) ---
def load_persona_from_file(filename="persona.json"):
    """Loads persona data from a local JSON file."""
    global wolfhart_persona_details
    # Ensure 'try' starts on a new line
    try:
        script_dir = os.path.dirname(os.path.abspath(__file__))
        filepath = os.path.join(script_dir, filename)
        print(f"\nAttempting to load Persona data from local file: {filepath}")
        # Check if file exists before opening
        if not os.path.exists(filepath):
            raise FileNotFoundError(f"Persona file not found at {filepath}")

        with open(filepath, 'r', encoding='utf-8') as f:
            persona_data = json.load(f)
        # Store as a formatted string for easy prompt injection
        wolfhart_persona_details = json.dumps(persona_data, ensure_ascii=False, indent=2)
        print(f"Successfully loaded Persona from '{filename}' (length: {len(wolfhart_persona_details)}).")

    except FileNotFoundError:
        print(f"Warning: Persona configuration file '{filename}' not found. Detailed persona will not be loaded.")
        wolfhart_persona_details = None
    except json.JSONDecodeError:
        print(f"Error: Failed to parse Persona configuration file '{filename}'. Please check JSON format.")
        wolfhart_persona_details = None
    except Exception as e:
        print(f"Unknown error loading Persona configuration file '{filename}': {e}")
        wolfhart_persona_details = None


# --- Main Async Function ---
async def run_main_with_exit_stack():
    """Initializes connections, loads persona, starts UI monitor and main processing loop."""
    global initialization_successful, main_task, loop, wolfhart_persona_details, trigger_queue, ui_monitor_task
    try:
        # 1. Load persona synchronously (before the async loop starts)
        load_persona_from_file()  # Corrected function

        # 2. Initialize MCP connections asynchronously
        await initialize_mcp_connections()

        # Exit if no servers connected successfully
        if not active_mcp_sessions:
            print("\nFailed to connect to any MCP Server, program will exit.")
            return

        initialization_successful = True

        # 3. Start UI monitoring in a separate thread
        print("\n--- Starting UI monitoring thread ---")
        loop = asyncio.get_running_loop()  # Get loop for run_in_executor
        monitor_task = loop.create_task(
            asyncio.to_thread(ui_interaction.monitor_chat_for_trigger, trigger_queue),
            name="ui_monitor"
        )
        ui_monitor_task = monitor_task  # Store task reference for shutdown

        # 4. Start the main processing loop (waiting on the standard queue)
        print("\n--- Wolfhart chatbot has started (waiting for triggers) ---")
        print(f"Available tools: {len(all_discovered_mcp_tools)}")
        if wolfhart_persona_details:
            print("Persona data loaded.")
        else:
            print("Warning: Failed to load Persona data.")
        print("Press Ctrl+C to stop the program.")

        while True:
            print("\nWaiting for UI trigger (from thread-safe Queue)...")
            # Use run_in_executor to wait for an item from the standard queue
            trigger_data = await loop.run_in_executor(None, trigger_queue.get)

            sender_name = trigger_data.get('sender')
            bubble_text = trigger_data.get('text')
            print("\n--- Received trigger from UI ---")
            print(f"  Sender: {sender_name}")
            print(f"  Content: {bubble_text[:100]}...")

            if not sender_name or not bubble_text:
                print("Warning: Received incomplete trigger data, skipping.")
                # No task_done needed for standard queue
                continue

            print(f"\n{config.PERSONA_NAME} is thinking...")
            try:
                # Get LLM response
                bot_response = await llm_interaction.get_llm_response(
                    user_input=f"Message from {sender_name}: {bubble_text}",  # Provide context
                    mcp_sessions=active_mcp_sessions,
                    available_mcp_tools=all_discovered_mcp_tools,
                    persona_details=wolfhart_persona_details
                )
                print(f"{config.PERSONA_NAME}'s response: {bot_response}")

                # Send response back via UI interaction module
                if bot_response:
                    print("Preparing to send response via UI...")
                    send_success = await asyncio.to_thread(
                        ui_interaction.paste_and_send_reply,
                        bot_response
                    )
                    if send_success:
                        print("Response sent successfully.")
                    else:
                        print("Error: Failed to send response via UI.")
                else:
                    print("LLM did not generate a response, not sending.")

            except Exception as e:
                print(f"\nError processing trigger or sending response: {e}")
                import traceback
                traceback.print_exc()
            # No task_done needed for standard queue

    except asyncio.CancelledError:
        print("Main task canceled.")  # Expected during shutdown via Ctrl+C
    # KeyboardInterrupt should ideally be caught by the outer handler now
    except Exception as e:
        print(f"\nUnexpected critical error during program execution: {e}")
        import traceback
        traceback.print_exc()
    finally:
        print("\n--- Performing final cleanup (AsyncExitStack aclose and task cancellation) ---")
        await shutdown()  # Call the combined shutdown function


# --- Program Entry Point ---
if __name__ == "__main__":
    print("Program starting...")
    try:
        # Run the main async function that handles setup and the loop
        asyncio.run(run_main_with_exit_stack())
    except KeyboardInterrupt:
        print("\nCtrl+C detected (outside asyncio.run)... Attempting to close...")
        # The finally block inside run_main_with_exit_stack should ideally handle it
    except Exception as e:
        # Catch top-level errors during asyncio.run itself
        print(f"Top-level error during asyncio.run execution: {e}")
    finally:
        print("Program exited.")
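main.py bridges a blocking `queue.Queue` (fed by the UI-monitor thread) into asyncio with `run_in_executor`, so the event loop never blocks on `Queue.get`. The pattern in isolation — `producer` and `consume_one` are illustrative names, not functions from this project:

```python
import asyncio
import threading
from queue import Queue

def producer(q: Queue):
    # Simulates the UI-monitor thread pushing a trigger dict
    q.put({"sender": "Alice", "text": "wolf, hello"})

async def consume_one(q: Queue) -> dict:
    loop = asyncio.get_running_loop()
    # queue.Queue.get blocks, so hand it to a worker thread;
    # the coroutine suspends without blocking the event loop
    return await loop.run_in_executor(None, q.get)

async def main() -> dict:
    q: Queue = Queue()
    threading.Thread(target=producer, args=(q,)).start()
    return await consume_one(q)

result = asyncio.run(main())
print(result["sender"])  # → Alice
```

This is why main.py uses the standard thread-safe `Queue` rather than `asyncio.Queue`: the producer runs in a plain thread, where `asyncio.Queue.put` could not be awaited.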
252
mcp_client.py
Normal file
@ -0,0 +1,252 @@
# mcp_client.py (Complete code including ValidationError workaround)

import asyncio
import json  # Import json for parsing error details
import ast  # Import ast for safely evaluating string literals
from mcp import ClientSession, types, McpError  # Import McpError
# Import Pydantic validation error if needed for a specific catch
try:
    # pydantic_core is where ValidationError lives in Pydantic v2+
    from pydantic_core import ValidationError
except ImportError:
    try:
        # Fallback for Pydantic v1 or other structures
        from pydantic import ValidationError
    except ImportError:
        ValidationError = None  # Define as None if pydantic is not available or structure differs
        print("Warning: Unable to import pydantic_core.ValidationError or pydantic.ValidationError. Error handling may be limited.")

# --- Import Tool type ---
# Attempt to import the Tool type definition from common SDK locations
try:
    from mcp.types import Tool
except ImportError:
    try:
        from mcp import Tool
    except ImportError:
        # Define a placeholder if import fails, to avoid NameError,
        # but actual functionality might depend on the real type.
        print("Warning: Unable to import 'Tool' type from MCP SDK. Tool processing may fail.")
        Tool = type('Tool', (object,), {})  # Placeholder type

import config  # Import configuration


# --- list_mcp_tools Function ---
async def list_mcp_tools(session: ClientSession) -> list[dict]:
    """
    Lists the available MCP tools for a given session.
    Parses the response structure and converts Tool objects to dictionaries.

    Args:
        session: The active MCP ClientSession.

    Returns:
        A list of tool definition dictionaries, ready for formatting for the LLM.
        Returns an empty list on error or if no tools are found.
    """
    tool_definition_list = []  # Initialize list for the dictionaries we will return
    try:
        # Check if the session object has the necessary method
        if not hasattr(session, 'list_tools') or not callable(session.list_tools):
            print("Error: MCP ClientSession object is missing the callable 'list_tools' method. Please check the SDK.")
            return tool_definition_list

        # Call the SDK method to get the response containing tools
        response = await session.list_tools()
        # print(f"DEBUG: Raw list_tools response from session {session}: {response}")  # Debug

        # Extract the raw list of tools (likely Tool objects)
        tools_list_raw = []
        if isinstance(response, dict):
            # If response is a dictionary, get the 'tools' key
            tools_list_raw = response.get('tools', [])
        elif hasattr(response, 'tools'):
            # If response is an object, get the 'tools' attribute
            tools_list_raw = getattr(response, 'tools', [])
        else:
            # Handle unexpected response type
            print(f"Warning: Unexpected response type from session.list_tools(): {type(response)}. Unable to extract tools.")
            print(f"Complete response: {response}")
            return tool_definition_list

        # Validate that we actually got a list
        if not isinstance(tools_list_raw, list):
            print(f"Warning: Expected a list under 'tools' key/attribute, but got {type(tools_list_raw)}. Response: {response}")
            return tool_definition_list

        # --- Convert Tool objects (or items) to dictionaries ---
        print(f"Extracted {len(tools_list_raw)} raw tool items from Server, converting...")
        for item in tools_list_raw:
            try:
                # Check if the item is likely a Tool object using hasattr for safety
                if hasattr(item, 'name') and hasattr(item, 'description') and hasattr(item, 'inputSchema'):
                    tool_name = getattr(item, 'name', 'UnknownToolName')
                    tool_description = getattr(item, 'description', '')
                    tool_input_schema = getattr(item, 'inputSchema', None)

                    # Create the dictionary for our internal use / LLM formatting
                    tool_dict = {
                        'name': tool_name,
                        'description': tool_description,
                        # Map 'inputSchema' from MCP Tool to the 'parameters' key
                        'parameters': tool_input_schema if isinstance(tool_input_schema, dict) else {"type": "object", "properties": {}}
                    }

                    # Basic validation of parameters structure
                    if not isinstance(tool_dict['parameters'], dict):
                        print(f"Warning: The inputSchema for tool '{tool_dict['name']}' is not a dictionary, using empty parameters. Schema: {tool_dict['parameters']}")
                        tool_dict['parameters'] = {"type": "object", "properties": {}}

                    tool_definition_list.append(tool_dict)
                else:
                    # Handle cases where items in the list are not Tool objects
                    print(f"Warning: Item in tool list is not in expected Tool object format: {item} (type: {type(item)})")
            except Exception as conversion_err:
                print(f"Warning: Error converting tool item '{getattr(item, 'name', item)}': {conversion_err}.")

        print(f"Successfully converted {len(tool_definition_list)} tool definitions to dictionaries.")
        return tool_definition_list  # Return the list of dictionaries

    except AttributeError as ae:
        print(f"Error: MCP ClientSession object is missing 'list_tools' attribute/method: {ae}. Please check the SDK.")
        return []
    except Exception as e:
        print(f"Error: Failed to execute list_tools or parse tools: {e}")
        import traceback
        traceback.print_exc()
        return []


# --- _confirm_execution Function ---
def _confirm_execution(tool_name: str, arguments: dict) -> bool:
    """
    If configured, prompts the user for confirmation before executing a tool.
    Includes corrected indentation.
    """
    if config.MCP_CONFIRM_TOOL_EXECUTION:
        # Correctly indented try-except block
        try:
            confirm = input(f"\033[93m[CONFIRM]\033[0m Allow execution of MCP tool: '{tool_name}'\n  Parameters: {arguments}\n  (y/n)? ").lower().strip()
            if confirm == 'y':
                print("--> Execution confirmed.")
                return True
            else:
                print("--> Execution denied.")
                return False
        except Exception as e:
            print(f"Error reading confirmation: {e}, denying.")
            return False
    else:
        # Confirmation not required
        return True


# --- call_mcp_tool Function ---
async def call_mcp_tool(session: ClientSession, tool_name: str, arguments: dict):
    """
    Calls a specified MCP tool via the given session.
    Includes a confirmation step and a workaround for ValidationError on missing 'content'.
    """
    # Call confirmation helper function
    if not _confirm_execution(tool_name, arguments):
        return {"error": "User declined execution", "tool_name": tool_name}

    try:
        # Check if the session object has the necessary method
        if not hasattr(session, 'call_tool') or not callable(session.call_tool):
            error_msg = "Error: MCP ClientSession object does not have a callable 'call_tool' method."
            print(error_msg)
            return {"error": error_msg, "tool_name": tool_name}

        print(f"Calling MCP tool '{tool_name}'...")
        # The actual SDK call that might raise McpError wrapping ValidationError
        result = await session.call_tool(tool_name, arguments=arguments)
        print(f"Tool '{tool_name}' execution completed (SDK validation passed).")
        return result  # Return the validated result if successful

    except McpError as mcp_err:
        # --- Workaround for ValidationError on missing 'content' ---
        error_details = getattr(mcp_err, 'details', None) or {}  # Get error details if available
        error_message = str(mcp_err)  # Get the full error message string
        print(f"Tool '{tool_name}' call encountered McpError: {error_message}")  # Log the original error

        # Check if it's the specific validation error we want to handle
        # This checks the error message string for keywords
        is_validation_error = (
            ValidationError is not None and  # Check if ValidationError was imported
            isinstance(mcp_err.__cause__, ValidationError) and  # Check the underlying cause
            "CallToolResult" in error_message and  # Check specific model name
            "content" in error_message and  # Check specific field name
            "Field required" in error_message  # Check specific error type
        )
        # Alternative check if __cause__ isn't reliable
        is_validation_error_str_check = (
            "validation error for CallToolResult" in error_message and
            "content" in error_message and
            "Field required" in error_message
        )

        raw_input_value = None  # Initialize variable for the raw server response

        # Attempt to extract the raw input value if it looks like our specific error
        if is_validation_error or is_validation_error_str_check:
            print("Detected potential ValidationError for missing 'content', attempting to extract raw server response...")
            try:
                # Try getting 'input' from error details first (safer)
                if isinstance(error_details, dict):
                    raw_input_value = error_details.get('input')

                # If not found in details, try parsing from the error message string (more fragile)
                if not raw_input_value:
                    start_index = error_message.find("input_value=")
                    if start_index != -1:
                        dict_str_start = error_message.find("{", start_index)
                        # Find the matching closing brace carefully
                        brace_level = 0
                        dict_str_end = -1
                        if dict_str_start != -1:
                            for i, char in enumerate(error_message[dict_str_start:]):
                                if char == '{':
                                    brace_level += 1
                                elif char == '}':
                                    brace_level -= 1
                                if brace_level == 0:
                                    dict_str_end = dict_str_start + i
                                    break

                        if dict_str_start != -1 and dict_str_end != -1:
                            dict_str = error_message[dict_str_start : dict_str_end + 1]
                            try:
                                # Use ast.literal_eval for safer evaluation than eval()
                                raw_input_value = ast.literal_eval(dict_str)
                                print(f"Extracted raw input from error message string: {raw_input_value}")
                            except (ValueError, SyntaxError, TypeError) as eval_err:
                                print(f"Failed to parse raw input from error message: {eval_err}")
                                raw_input_value = None  # Reset if parsing failed
                        else:
                            print("Unable to locate complete input_value dictionary in error message.")
                    else:
                        print("'input_value=' not found in error message.")

            except Exception as parse_err:
                print(f"Error extracting raw input from McpError details or message: {parse_err}")
                raw_input_value = None

        # Check whether we successfully got the raw input and whether it contains 'toolResult'
        if raw_input_value and isinstance(raw_input_value, dict) and 'toolResult' in raw_input_value:
            # If yes, return the raw toolResult, bypassing SDK validation
            print("Warning: Bypassing SDK validation, returning raw toolResult to LLM.")
            return raw_input_value['toolResult']  # Return the nested toolResult dictionary
        else:
            # If it wasn't the specific error or we couldn't extract data, return a standard error message
            print("Unable to extract valid data from ValidationError, returning generic error message.")
            return {"error": f"MCP Error during '{tool_name}': {error_message}", "tool_name": tool_name}
        # --- End Workaround ---

    except AttributeError as ae:
        # Handle cases where the session object is missing expected methods
        error_msg = f"Error: MCP ClientSession object missing attribute/method: {ae}."
        print(error_msg)
        return {"error": error_msg, "tool_name": tool_name}
    except Exception as e:
        # Catch any other unexpected errors during the tool call
        error_msg = f"Unknown error calling MCP tool '{tool_name}': {e}"
        print(error_msg)
        import traceback
        traceback.print_exc()  # Print full traceback for debugging
        return {"error": error_msg, "tool_name": tool_name}
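The workaround in `call_mcp_tool` recovers the raw server payload by brace-matching the dict literal after `input_value=` in a Pydantic error string and parsing it with `ast.literal_eval`. The extraction technique in isolation — the error message below is fabricated for illustration, shaped like a Pydantic v2 validation message:

```python
import ast

def extract_input_value(error_message: str):
    """Find the dict literal following 'input_value=' by brace matching."""
    start = error_message.find("input_value=")
    if start == -1:
        return None
    open_brace = error_message.find("{", start)
    if open_brace == -1:
        return None
    level = 0
    for i, ch in enumerate(error_message[open_brace:]):
        if ch == '{':
            level += 1
        elif ch == '}':
            level -= 1
        if level == 0:
            try:
                # literal_eval only accepts Python literals, unlike eval()
                return ast.literal_eval(error_message[open_brace:open_brace + i + 1])
            except (ValueError, SyntaxError):
                return None
    return None

# Fabricated example resembling a Pydantic v2 validation message:
msg = ("1 validation error for CallToolResult\ncontent\n"
       "  Field required [type=missing, input_value={'toolResult': {'ok': True}}, input_type=dict]")
print(extract_input_value(msg))  # → {'toolResult': {'ok': True}}
```

Counting `{`/`}` nesting rather than searching for the first `}` is what lets the parser survive nested dictionaries like the one above.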
99
persona.json
Normal file
@ -0,0 +1,99 @@
{
  "name": "Wolfhart",
  "nickname": "Wolfie",
  "gender": "female",
  "age": 19,
  "occupation": "Corporate Strategist / Underground Intelligence Mastermind",
  "height": "172cm",
  "body_type": "Slender but well-defined",
  "hair_color": "Deep black with hints of blue sheen",
  "eye_color": "Steel grey, occasionally showing an icy blue glow",
  "appearance": {
    "clothing_style": "Fusion of women's suits and dresses, sharp tailoring, dark tones (ink blue, dark purple, deep black), exuding military presence and aristocratic texture",
    "accessories": [
      "Silver cufflinks",
      "Black gloves",
      "Old-fashioned pocket watch",
      "Thin-framed glasses"
    ],
    "hairstyle": "Long, straight waist-length hair, slightly curled at the ends, often tied in a low ponytail or braid",
    "facial_features": "Sharp chin, long slender eyebrows and eyes, small mole near the corner of the left eye",
    "body_characteristics": "Pale complexion, old scar on the arm",
    "posture_motion": "Steady pace, precise movements, often crosses arms or gently swirls a wine glass"
  },
  "personality": {
    "description": "Intelligent, calm, possesses a strong desire for control and a strategic overview",
    "strengths": [
      "Meticulous planning",
      "Insightful into human nature",
      "Strong leadership"
    ],
    "weaknesses": [
      "Overconfident",
      "Fear of losing control"
    ],
    "uniqueness": "Always maintains tone and composure, even in extreme situations",
    "emotional_response": "Her eyes betray her emotions, especially when encountering Sherefox"
  },
  "language_social": {
    "tone": "Respectful but sharp-tongued",
    "catchphrases": [
      "Please stop dragging me down.",
      "I told you, I will win."
    ],
    "speaking_style": "Deliberate pace, but every sentence carries a sting",
    "attitude_towards_others": "Addresses everyone respectfully, but trusts no one",
    "social_interaction_style": "Observant, skilled at manipulating conversations"
  },
  "behavior_daily": {
    "habits": [
      "Reads intelligence reports upon waking",
      "Black coffee",
      "Practices swordsmanship at night"
    ],
    "gestures": [
      "Tapping knuckles",
      "Cold smirk"
    ],
    "facial_expressions": "Smile doesn't reach her eyes, gaze often cold",
    "body_language": "No superfluous movements, confident posture and gait",
    "environment_interaction": "Prefers sitting with her back to the window, symbolizing distrust"
  },
  "background_story": {
    "past_experiences": "Seized power, rising from corporate adopted daughter to intelligence mastermind",
    "family_background": "Identity unknown, claims the surname was seized",
    "cultural_influences": "Influenced by European classicism and strategic philosophy"
  },
  "values_interests_goals": {
    "decision_making": "Acts based on whether a plan is profitable",
    "special_skills": [
      "Intelligence analysis",
      "Psychological manipulation",
      "Classical swordsmanship"
    ],
    "short_term_goals": "Subdue opposing forces to seize resources",
    "long_term_goals": "Establish a new order under her rule"
  },
  "preferences_reactions": {
    "likes": [
      "Perfect execution",
      "Minimalist style",
      "Chess games",
      "Quiet nights"
    ],
    "dislikes": [
      "Chaos",
      "Unexpected events",
      "Emotional outbursts",
      "Sherefox"
    ],
    "reactions_to_likes": "Light hum, relaxed gaze",
    "reactions_to_dislikes": "Silence, tone turns cold, cold smirk",
    "behavior_in_situations": {
      "emergency": "Calm and decisive",
      "vs_sherefox": "Courtesy before force, shows no mercy"
    }
  }
}
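persona.json is consumed by `load_persona_from_file` in main.py, which re-serializes it into a string for prompt injection. A minimal sketch of that round trip — the inline dict stands in for the file contents, and the system-prompt wording is illustrative, not the project's actual prompt:

```python
import json

# Stand-in for the contents of persona.json
persona_data = {"name": "Wolfhart", "nickname": "Wolfie", "age": 19}

# Re-serialize with ensure_ascii=False so any non-ASCII text survives intact,
# and indent=2 so the LLM sees a readable structure in its system prompt
persona_details = json.dumps(persona_data, ensure_ascii=False, indent=2)

system_prompt = f"You are role-playing the following character:\n{persona_details}"
print("Wolfhart" in system_prompt)  # → True
```

Because persona.json is parsed with `json.load`, it must be strictly valid JSON: no comments and no trailing commas, or loading fails with `JSONDecodeError`.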
10
requirements.txt
Normal file
@ -0,0 +1,10 @@
# requirements.txt
openai
mcp
pyautogui
opencv-python
numpy
pyperclip
pygetwindow
psutil
python-dotenv
BIN  templates/Profile_Name_page.png  Normal file  (4.4 KiB)
BIN  templates/Profile_page.png  Normal file  (11 KiB)
BIN  templates/bot_corner_bl.png  Normal file  (1.7 KiB)
BIN  templates/bot_corner_br.png  Normal file  (1.7 KiB)
BIN  templates/bot_corner_tl.png  Normal file  (1.6 KiB)
BIN  templates/bot_corner_tr.png  Normal file  (1.5 KiB)
BIN  templates/chat_input.png  Normal file  (7.6 KiB)
BIN  templates/chat_room.png  Normal file  (5.7 KiB)
BIN  templates/copy_menu_item.png  Normal file  (3.0 KiB)
BIN  templates/copy_name_button.png  Normal file  (4.0 KiB)
BIN  templates/corner_bl.png  Normal file  (1.6 KiB)
BIN  templates/corner_br.png  Normal file  (1.6 KiB)
BIN  templates/corner_tl.png  Normal file  (1.5 KiB)
BIN  templates/corner_tr.png  Normal file  (1.6 KiB)
BIN  templates/keyword_wolf_lower.png  Normal file  (2.5 KiB)
BIN  templates/keyword_wolf_upper.png  Normal file  (2.6 KiB)
BIN  templates/profile_option.png  Normal file  (2.8 KiB)
BIN  templates/send_button.png  Normal file  (5.9 KiB)
505
ui_interaction.py
Normal file
@ -0,0 +1,505 @@
# ui_interaction.py
# Handles recognition and interaction logic with the game screen
# Includes: Bot bubble corner detection, case-sensitive keyword detection,
# duplicate handling mechanism, state-based ESC cleanup, complete syntax fixes

import pyautogui
import cv2  # opencv-python; required by pyautogui's confidence-based matching
import numpy as np
import pyperclip
import time
import os
import collections
import asyncio
import pygetwindow as gw  # Used to check/activate windows
import config  # Used to read the window title
import queue

# --- Configuration Section ---
# Get the script directory so relative paths resolve correctly
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
TEMPLATE_DIR = os.path.join(SCRIPT_DIR, "templates")  # Template image folder path
os.makedirs(TEMPLATE_DIR, exist_ok=True)  # Ensure the folder exists

# --- Regular Bubble Corner Templates ---
# Please save screenshots to the templates folder using the following filenames
CORNER_TL_IMG = os.path.join(TEMPLATE_DIR, "corner_tl.png")  # Regular bubble - top-left corner
CORNER_TR_IMG = os.path.join(TEMPLATE_DIR, "corner_tr.png")  # Regular bubble - top-right corner
CORNER_BL_IMG = os.path.join(TEMPLATE_DIR, "corner_bl.png")  # Regular bubble - bottom-left corner
CORNER_BR_IMG = os.path.join(TEMPLATE_DIR, "corner_br.png")  # Regular bubble - bottom-right corner

# --- Bot Bubble Corner Templates (need to be provided!) ---
BOT_CORNER_TL_IMG = os.path.join(TEMPLATE_DIR, "bot_corner_tl.png")  # Bot bubble - top-left corner
BOT_CORNER_TR_IMG = os.path.join(TEMPLATE_DIR, "bot_corner_tr.png")  # Bot bubble - top-right corner
BOT_CORNER_BL_IMG = os.path.join(TEMPLATE_DIR, "bot_corner_bl.png")  # Bot bubble - bottom-left corner
BOT_CORNER_BR_IMG = os.path.join(TEMPLATE_DIR, "bot_corner_br.png")  # Bot bubble - bottom-right corner

# --- Keyword Templates (case-sensitive) ---
KEYWORD_wolf_LOWER_IMG = os.path.join(TEMPLATE_DIR, "keyword_wolf_lower.png")  # Lowercase "wolf"
KEYWORD_Wolf_UPPER_IMG = os.path.join(TEMPLATE_DIR, "keyword_wolf_upper.png")  # Capitalized "Wolf"

# --- UI Element Templates ---
COPY_MENU_ITEM_IMG = os.path.join(TEMPLATE_DIR, "copy_menu_item.png")  # "Copy" option in the context menu
PROFILE_OPTION_IMG = os.path.join(TEMPLATE_DIR, "profile_option.png")  # Option on the profile card that opens user details
COPY_NAME_BUTTON_IMG = os.path.join(TEMPLATE_DIR, "copy_name_button.png")  # "Copy Name" button in user details
SEND_BUTTON_IMG = os.path.join(TEMPLATE_DIR, "send_button.png")  # "Send" button for the chat input box
CHAT_INPUT_IMG = os.path.join(TEMPLATE_DIR, "chat_input.png")  # (Optional) template image for the chat input box

# --- State Detection Templates ---
PROFILE_NAME_PAGE_IMG = os.path.join(TEMPLATE_DIR, "Profile_Name_page.png")  # User details page identifier
PROFILE_PAGE_IMG = os.path.join(TEMPLATE_DIR, "Profile_page.png")  # Profile card page identifier
CHAT_ROOM_IMG = os.path.join(TEMPLATE_DIR, "chat_room.png")  # Chat room interface identifier

# --- Operation Parameters (adjust for your environment) ---
# Chat input box fallback coordinates or region (used if image positioning fails)
CHAT_INPUT_REGION = None  # e.g. (100, 800, 500, 50) as (x, y, width, height)
CHAT_INPUT_CENTER_X = 400  # Example X coordinate
CHAT_INPUT_CENTER_Y = 1280  # Example Y coordinate

# Screenshot and recognition parameters
SCREENSHOT_REGION = None  # None means full screen, or (x, y, width, height) to limit the scan area
CONFIDENCE_THRESHOLD = 0.8  # Main image-matching confidence threshold (0.0-1.0); tune as needed
STATE_CONFIDENCE_THRESHOLD = 0.7  # State-detection confidence threshold (may need to be lower)
AVATAR_OFFSET_X = -50  # Avatar X offset relative to the bubble's top-left corner

# Duplicate handling parameters
BBOX_SIMILARITY_TOLERANCE = 10  # Pixel tolerance when deciding whether two bubbles share a position
RECENT_TEXT_HISTORY_MAXLEN = 5  # Number of recently processed texts to keep

# --- Helper Functions ---

def find_template_on_screen(template_path, region=None, confidence=CONFIDENCE_THRESHOLD, grayscale=False):
    """
    Find a template image in a specified screen region (more robust version).

    Args:
        template_path (str): Path to the template image.
        region (tuple, optional): Screenshot region (x, y, width, height). Default is None (full screen).
        confidence (float, optional): Matching confidence threshold. Default is CONFIDENCE_THRESHOLD.
        grayscale (bool, optional): Whether to match in grayscale. Default is False.

    Returns:
        list: Center-point coordinates of all matches [(x1, y1), (x2, y2), ...],
              or an empty list if none are found.
    """
    locations = []
    # Check that the template file exists; warn only once per missing path
    if not os.path.exists(template_path):
        if not hasattr(find_template_on_screen, 'warned_paths'):
            find_template_on_screen.warned_paths = set()
        if template_path not in find_template_on_screen.warned_paths:
            print(f"Error: Template image doesn't exist: {template_path}")
            find_template_on_screen.warned_paths.add(template_path)
        return []

    try:
        # Use pyautogui to find all matches (requires opencv-python)
        matches = pyautogui.locateAllOnScreen(template_path, region=region, confidence=confidence, grayscale=grayscale)
        if matches:
            for box in matches:
                center_x = box.left + box.width // 2
                center_y = box.top + box.height // 2
                locations.append((center_x, center_y))
        return locations
    except Exception as e:
        # Print a detailed error, including the template path
        print(f"Error finding template '{os.path.basename(template_path)}' ({template_path}): {e}")
        return []


def click_at(x, y, button='left', clicks=1, interval=0.1, duration=0.1):
    """Safely click at specific coordinates, with a smooth mouse move."""
    try:
        x_int, y_int = int(x), int(y)  # Ensure coordinates are integers
        print(f"Moving to and clicking at: ({x_int}, {y_int}), button: {button}, clicks: {clicks}")
        pyautogui.moveTo(x_int, y_int, duration=duration)  # Smooth move to target
        pyautogui.click(button=button, clicks=clicks, interval=interval)
        time.sleep(0.1)  # Brief pause after clicking
    except Exception as e:
        print(f"Error clicking at coordinates ({int(x)}, {int(y)}): {e}")


def get_clipboard_text():
    """Get text from the clipboard."""
    try:
        return pyperclip.paste()
    except Exception as e:
        # pyperclip can fail in certain environments (e.g. headless servers)
        print(f"Error reading clipboard: {e}")
        return None


def set_clipboard_text(text):
    """Set the clipboard text."""
    try:
        pyperclip.copy(text)
    except Exception as e:
        print(f"Error writing to clipboard: {e}")


def are_bboxes_similar(bbox1, bbox2, tolerance=BBOX_SIMILARITY_TOLERANCE):
    """Check whether two bounding boxes' top-left corners are very close."""
    if bbox1 is None or bbox2 is None:
        return False
    # Compare top-left coordinates (bbox[0], bbox[1])
    return abs(bbox1[0] - bbox2[0]) <= tolerance and abs(bbox1[1] - bbox2[1]) <= tolerance
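These helpers feed the monitor loop's duplicate suppression: a bubble is skipped when its bounding box sits where the last processed bubble was, or when its text appears in a short history. A minimal stdlib-only sketch of that policy (`should_process` and its call sites are illustrative names, not part of the repo):

```python
import collections

TOLERANCE = 10  # pixels, mirrors BBOX_SIMILARITY_TOLERANCE

def are_bboxes_similar(bbox1, bbox2, tolerance=TOLERANCE):
    """Same check as in ui_interaction.py: compare top-left corners."""
    if bbox1 is None or bbox2 is None:
        return False
    return abs(bbox1[0] - bbox2[0]) <= tolerance and abs(bbox1[1] - bbox2[1]) <= tolerance

recent_texts = collections.deque(maxlen=3)  # mirrors RECENT_TEXT_HISTORY_MAXLEN

def should_process(bbox, text, last_bbox):
    """Return True only for a bubble that is new in both position and content."""
    if are_bboxes_similar(bbox, last_bbox):
        return False       # same spot as last time: stale bubble
    if text in recent_texts:
        return False       # same text seen recently: duplicate
    recent_texts.append(text)
    return True

print(should_process((100, 200, 300, 250), "wolf hi", None))                   # True
print(should_process((103, 198, 300, 250), "wolf hi", (100, 200, 300, 250)))   # False
print(should_process((400, 500, 600, 550), "wolf hi", (100, 200, 300, 250)))   # False
```

Note the deque is bounded, so very old texts eventually become eligible again; that matches the `RECENT_TEXT_HISTORY_MAXLEN` design above.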

# --- Main Logic Functions ---

def find_dialogue_bubbles():
    """
    Scan the screen for regular bubble corners and Bot bubble corners and try to pair them.
    Returns a list of (bounding_box, is_bot) tuples.
    !!! The matching logic is very basic and needs significant improvement for real use !!!
    """
    all_bubbles_with_type = []  # Stores (bbox, is_bot_flag)

    # 1. Find all regular corners
    tl_corners = find_template_on_screen(CORNER_TL_IMG, region=SCREENSHOT_REGION)
    br_corners = find_template_on_screen(CORNER_BR_IMG, region=SCREENSHOT_REGION)
    # tr_corners = find_template_on_screen(CORNER_TR_IMG, region=SCREENSHOT_REGION)  # TR/BL unused for now
    # bl_corners = find_template_on_screen(CORNER_BL_IMG, region=SCREENSHOT_REGION)

    # 2. Find all Bot corners
    bot_tl_corners = find_template_on_screen(BOT_CORNER_TL_IMG, region=SCREENSHOT_REGION)
    bot_br_corners = find_template_on_screen(BOT_CORNER_BR_IMG, region=SCREENSHOT_REGION)
    # bot_tr_corners = find_template_on_screen(BOT_CORNER_TR_IMG, region=SCREENSHOT_REGION)
    # bot_bl_corners = find_template_on_screen(BOT_CORNER_BL_IMG, region=SCREENSHOT_REGION)

    # 3. Try to match regular bubbles (using TL and BR)
    processed_tls = set()  # Track already-matched TL indices
    if tl_corners and br_corners:
        for i, tl in enumerate(tl_corners):
            if i in processed_tls:
                continue
            potential_br = None
            min_dist_sq = float('inf')
            # Find the best BR for this TL (closest one satisfying the geometric constraints)
            for br in br_corners:
                # BR must lie a reasonable distance below-right of TL (minimum width/height)
                if br[0] > tl[0] + 20 and br[1] > tl[1] + 10:
                    dist_sq = (br[0] - tl[0]) ** 2 + (br[1] - tl[1]) ** 2
                    # More conditions could be added here, e.g. aspect-ratio limits
                    if dist_sq < min_dist_sq:  # Simple nearest-match
                        potential_br = br
                        min_dist_sq = dist_sq

            if potential_br:
                # Matching TL and BR found: define the bounding box
                bubble_bbox = (tl[0], tl[1], potential_br[0], potential_br[1])
                all_bubbles_with_type.append((bubble_bbox, False))  # Mark as non-Bot
                processed_tls.add(i)  # Mark this TL as used

    # 4. Try to match Bot bubbles (using Bot TL and Bot BR)
    processed_bot_tls = set()
    if bot_tl_corners and bot_br_corners:
        for i, tl in enumerate(bot_tl_corners):
            if i in processed_bot_tls:
                continue
            potential_br = None
            min_dist_sq = float('inf')
            for br in bot_br_corners:
                if br[0] > tl[0] + 20 and br[1] > tl[1] + 10:
                    dist_sq = (br[0] - tl[0]) ** 2 + (br[1] - tl[1]) ** 2
                    if dist_sq < min_dist_sq:
                        potential_br = br
                        min_dist_sq = dist_sq
            if potential_br:
                bubble_bbox = (tl[0], tl[1], potential_br[0], potential_br[1])
                all_bubbles_with_type.append((bubble_bbox, True))  # Mark as Bot
                processed_bot_tls.add(i)

    return all_bubbles_with_type
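The pairing heuristic above is greedy nearest-match: each top-left corner claims the closest bottom-right corner lying sufficiently below and to its right. The geometry in isolation, with made-up coordinates (`pair_corners` is an illustrative reduction, not a function from the repo):

```python
def pair_corners(tl_corners, br_corners, min_w=20, min_h=10):
    """Pair each TL corner with its nearest plausible BR corner, greedily."""
    bubbles = []
    for tl in tl_corners:
        best, best_dist = None, float('inf')
        for br in br_corners:
            # BR must be at least min_w right of and min_h below TL
            if br[0] > tl[0] + min_w and br[1] > tl[1] + min_h:
                dist = (br[0] - tl[0]) ** 2 + (br[1] - tl[1]) ** 2
                if dist < best_dist:
                    best, best_dist = br, dist
        if best:
            bubbles.append((tl[0], tl[1], best[0], best[1]))
    return bubbles

# Two bubbles stacked vertically: each TL grabs the nearest valid BR.
tls = [(100, 100), (100, 300)]
brs = [(350, 180), (360, 390)]
print(pair_corners(tls, brs))  # [(100, 100, 350, 180), (100, 300, 360, 390)]
```

Note that, like the original, this never marks a BR as consumed, so two TLs can claim the same BR when bubbles overlap, which is part of why the docstring calls the logic "very basic".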

def find_keyword_in_bubble(bubble_bbox):
    """
    Look for the keyword images "wolf" or "Wolf" within the given bubble area.
    """
    x_min, y_min, x_max, y_max = bubble_bbox
    width = x_max - x_min
    height = y_max - y_min
    if width <= 0 or height <= 0:
        return None
    search_region = (x_min, y_min, width, height)

    # 1. Try lowercase "wolf" first
    keyword_locations_lower = find_template_on_screen(KEYWORD_wolf_LOWER_IMG, region=search_region)
    if keyword_locations_lower:
        keyword_coords = keyword_locations_lower[0]
        print(f"Found keyword (lowercase) in bubble {bubble_bbox}, position: {keyword_coords}")
        return keyword_coords

    # 2. Otherwise try capitalized "Wolf"
    keyword_locations_upper = find_template_on_screen(KEYWORD_Wolf_UPPER_IMG, region=search_region)
    if keyword_locations_upper:
        keyword_coords = keyword_locations_upper[0]
        print(f"Found keyword (uppercase) in bubble {bubble_bbox}, position: {keyword_coords}")
        return keyword_coords

    # Neither found
    return None


def find_avatar_for_bubble(bubble_bbox):
    """Estimate the avatar position from the bubble's top-left coordinates."""
    tl_x, tl_y = bubble_bbox[0], bubble_bbox[1]
    # Adjust the offset and Y calculation to the actual layout
    avatar_x = tl_x + AVATAR_OFFSET_X
    avatar_y = tl_y  # Assume same Y as the bubble's top edge
    print(f"Calculated avatar coordinates: ({int(avatar_x)}, {int(avatar_y)})")
    return (avatar_x, avatar_y)


def get_bubble_text(keyword_coords):
    """
    Click the keyword position, use the "Copy" menu item (or Ctrl+C as a fallback),
    and read the text from the clipboard.
    """
    print(f"Attempting to copy @ {keyword_coords}...")
    original_clipboard = get_clipboard_text() or ""  # Ensure not None
    set_clipboard_text("___MCP_CLEAR___")  # Clear with a recognizable marker
    time.sleep(0.1)  # Brief wait for the clipboard operation

    # Click on the keyword position
    click_at(keyword_coords[0], keyword_coords[1])
    time.sleep(0.2)  # Wait for a possible menu or reaction

    # Try to find and click the "Copy" menu item
    copy_item_locations = find_template_on_screen(COPY_MENU_ITEM_IMG, confidence=0.7)
    copied = False  # Whether a copy was attempted
    if copy_item_locations:
        copy_coords = copy_item_locations[0]
        click_at(copy_coords[0], copy_coords[1])
        print("Clicked 'Copy' menu item.")
        time.sleep(0.2)  # Wait for the copy to complete
        copied = True  # Copy attempted via menu click
    else:
        print("'Copy' menu item not found. Attempting to simulate Ctrl+C.")
        try:
            pyautogui.hotkey('ctrl', 'c')
            time.sleep(0.2)  # Wait for the copy to complete
            print("Simulated Ctrl+C.")
            copied = True  # Copy attempted via hotkey
        except Exception as e_ctrlc:
            print(f"Failed to simulate Ctrl+C: {e_ctrlc}")
            copied = False  # Ensure copied is False on failure

    # Check the clipboard content
    copied_text = get_clipboard_text()

    # Restore the original clipboard
    pyperclip.copy(original_clipboard)

    # Determine whether the copy succeeded
    if copied and copied_text and copied_text != "___MCP_CLEAR___":
        print(f"Successfully copied text, length: {len(copied_text)}")
        return copied_text.strip()  # Strip leading/trailing whitespace
    else:
        print("Error: Copy unsuccessful or clipboard content invalid.")
        return None
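`get_bubble_text` relies on a clipboard sentinel pattern: write a marker, attempt the copy, treat an unchanged marker as failure, and always restore the user's clipboard. The same pattern with a fake clipboard so it runs without `pyperclip` or a GUI (`FakeClipboard` and `copy_with_sentinel` are illustrative stand-ins, not repo code):

```python
CLEAR_MARKER = "___MCP_CLEAR___"

class FakeClipboard:
    """Stand-in for pyperclip, for illustration only."""
    def __init__(self):
        self._text = ""
    def copy(self, text):
        self._text = text
    def paste(self):
        return self._text

def copy_with_sentinel(clipboard, do_copy):
    """Run do_copy() and return the copied text, or None if nothing new arrived."""
    original = clipboard.paste() or ""
    clipboard.copy(CLEAR_MARKER)   # clear with a marker we can recognize later
    do_copy()                      # e.g. click "Copy" or send Ctrl+C
    copied = clipboard.paste()
    clipboard.copy(original)       # always restore the user's clipboard
    if copied and copied != CLEAR_MARKER:
        return copied.strip()
    return None

cb = FakeClipboard()
cb.copy("user data")
print(copy_with_sentinel(cb, lambda: cb.copy("  wolf, are you there?  ")))  # copied text
print(copy_with_sentinel(cb, lambda: None))  # copy failed -> None
print(cb.paste())  # original clipboard restored
```

The marker matters because an empty clipboard and a failed copy are otherwise indistinguishable; any string the game would never copy works.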

def get_sender_name(avatar_coords):
    """
    Click the avatar, open the profile card, click the option to open user details,
    then click "Copy Name". Uses state-based ESC cleanup on exit.
    """
    print(f"Attempting to get username from avatar {avatar_coords}...")
    original_clipboard = get_clipboard_text() or ""
    set_clipboard_text("___MCP_CLEAR___")
    time.sleep(0.1)
    sender_name = None  # Initialize
    success = False  # Whether name retrieval succeeded

    try:
        # 1. Click the avatar
        click_at(avatar_coords[0], avatar_coords[1])
        time.sleep(0.3)  # Wait for the profile card to appear

        # 2. Find and click the option on the profile card (opens user details)
        profile_option_locations = find_template_on_screen(PROFILE_OPTION_IMG, confidence=0.7)
        if not profile_option_locations:
            print("Error: User details option not found on profile card.")
            # No need to raise here; the finally block handles cleanup
        else:
            click_at(profile_option_locations[0][0], profile_option_locations[0][1])
            print("Clicked user details option.")
            time.sleep(0.3)  # Wait for the user details window to appear

            # 3. Find and click the "Copy Name" button in user details
            copy_name_locations = find_template_on_screen(COPY_NAME_BUTTON_IMG, confidence=0.7)
            if not copy_name_locations:
                print("Error: 'Copy Name' button not found in user details.")
            else:
                click_at(copy_name_locations[0][0], copy_name_locations[0][1])
                print("Clicked 'Copy Name' button.")
                time.sleep(0.1)  # Wait for the copy to complete
                copied_name = get_clipboard_text()
                if copied_name and copied_name != "___MCP_CLEAR___":
                    print(f"Successfully copied username: {copied_name}")
                    sender_name = copied_name.strip()  # Store the copied name
                    success = True  # Mark success
                else:
                    print("Error: Clipboard unchanged or empty; failed to copy username.")

        # Return sender_name regardless of the outcome above (may be None)
        return sender_name

    except Exception as e:
        print(f"Error during username retrieval: {e}")
        import traceback
        traceback.print_exc()
        return None  # Indicate failure

    finally:
        # --- State-based cleanup logic ---
        print("Cleanup: pressing ESC based on detected screen state until back in chat...")
        max_esc_attempts = 4  # A little headroom, just in case
        returned_to_chat = False
        for attempt in range(max_esc_attempts):
            print(f"Cleanup attempt #{attempt + 1}/{max_esc_attempts}")
            time.sleep(0.2)  # Short wait before each attempt

            # Already back in the chat room?
            # A lower confidence tends to be more stable for state checks
            if find_template_on_screen(CHAT_ROOM_IMG, confidence=STATE_CONFIDENCE_THRESHOLD):
                print("Chat room interface detected; cleanup complete.")
                returned_to_chat = True
                break  # Already back; exit the loop

            # On the user details page?
            elif find_template_on_screen(PROFILE_NAME_PAGE_IMG, confidence=STATE_CONFIDENCE_THRESHOLD):
                print("User details page detected; pressing ESC...")
                pyautogui.press('esc')
                time.sleep(0.2)  # Wait for the UI to respond
                continue

            # On the profile card page?
            elif find_template_on_screen(PROFILE_PAGE_IMG, confidence=STATE_CONFIDENCE_THRESHOLD):
                print("Profile card page detected; pressing ESC...")
                pyautogui.press('esc')
                time.sleep(0.2)  # Wait for the UI to respond
                continue

            else:
                # Current state not recognized
                print("No known page state detected.")
                if attempt < max_esc_attempts - 1:
                    print("Trying one ESC press as a fallback...")
                    pyautogui.press('esc')
                    time.sleep(0.2)  # Wait for response
                else:
                    print("Maximum attempts reached; stopping cleanup.")
                    break

        if not returned_to_chat:
            print("Warning: Could not confirm return to the chat room via state detection.")
        # --- End of cleanup logic ---

        # Ensure the clipboard is restored
        pyperclip.copy(original_clipboard)
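The `finally` block above is a small state machine: identify which page is on top, press ESC for known profile pages, fall back to a blind ESC for unknown states, and stop once the chat room is visible or the attempt budget runs out. Its control flow can be modeled without any screen I/O by treating the open pages as a stack (a sketch with hypothetical page names; the real code detects pages via template matching, not a stack):

```python
def esc_cleanup(page_stack, max_attempts=4):
    """Pop 'pages' with simulated ESC presses until 'chat_room' is on top.

    page_stack: list with the chat room at the bottom, e.g.
    ['chat_room', 'profile_card', 'user_details'] (top of screen = last element).
    Returns True if the chat room ends up on top within max_attempts.
    """
    for _ in range(max_attempts):
        top = page_stack[-1]
        if top == 'chat_room':
            return True          # already back; cleanup done
        elif top in ('user_details', 'profile_card'):
            page_stack.pop()     # known page: press ESC
        else:
            page_stack.pop()     # unknown state: try one ESC as a fallback
    return page_stack[-1] == 'chat_room'

print(esc_cleanup(['chat_room', 'profile_card', 'user_details']))   # True
print(esc_cleanup(['chat_room', 'a', 'b', 'c', 'd', 'e']))          # False: too deep
```

The bounded attempt count is the key design choice: a detection failure can never trap the bot in an infinite ESC loop, at worst it leaves a warning in the log.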

def paste_and_send_reply(reply_text):
    """
    Click the chat input box, paste the response, then click Send or press Enter.
    """
    print("Preparing to send response...")
    if not reply_text:
        print("Error: Response content is empty; nothing to send.")
        return False

    input_coords = None
    if os.path.exists(CHAT_INPUT_IMG):
        input_locations = find_template_on_screen(CHAT_INPUT_IMG, confidence=0.7)
        if input_locations:
            input_coords = input_locations[0]
            print(f"Found input box position via image: {input_coords}")
        else:
            print("Warning: Input box not found via image; using default coordinates.")
            input_coords = (CHAT_INPUT_CENTER_X, CHAT_INPUT_CENTER_Y)
    else:
        print("Warning: Input box template image doesn't exist; using default coordinates.")
        input_coords = (CHAT_INPUT_CENTER_X, CHAT_INPUT_CENTER_Y)

    click_at(input_coords[0], input_coords[1])
    time.sleep(0.3)

    print("Pasting response...")
    set_clipboard_text(reply_text)
    time.sleep(0.1)
    try:
        pyautogui.hotkey('ctrl', 'v')
        time.sleep(0.5)
        print("Pasted.")
    except Exception as e:
        print(f"Error pasting response: {e}")
        return False

    send_button_locations = find_template_on_screen(SEND_BUTTON_IMG, confidence=0.7)
    if send_button_locations:
        send_coords = send_button_locations[0]
        click_at(send_coords[0], send_coords[1])
        print("Clicked send button.")
        time.sleep(0.1)
        return True
    else:
        print("Send button not found. Attempting to press Enter.")
        try:
            pyautogui.press('enter')
            print("Pressed Enter.")
            time.sleep(0.5)
            return True
        except Exception as e_enter:
            print(f"Error pressing Enter: {e_enter}")
            return False

# --- Main Monitoring and Triggering Logic ---
recent_texts = collections.deque(maxlen=RECENT_TEXT_HISTORY_MAXLEN)
last_processed_bubble_bbox = None


def monitor_chat_for_trigger(trigger_queue: queue.Queue):  # Uses a standard queue
    """
    Continuously monitor the chat area, look for bubbles containing keywords,
    and put trigger info into the queue.
    """
    global last_processed_bubble_bbox
    print("\n--- Starting chat room monitoring (UI thread) ---")

    while True:
        try:
            all_bubbles_with_type = find_dialogue_bubbles()
            if not all_bubbles_with_type:
                time.sleep(2)
                continue
            other_bubbles_bboxes = [bbox for bbox, is_bot in all_bubbles_with_type if not is_bot]
            if not other_bubbles_bboxes:
                time.sleep(2)
                continue
            # Pick the lowest bubble on screen (largest bottom-edge Y), i.e. the newest message
            target_bubble = max(other_bubbles_bboxes, key=lambda b: b[3])
            if are_bboxes_similar(target_bubble, last_processed_bubble_bbox):
                time.sleep(2)
                continue

            keyword_coords = find_keyword_in_bubble(target_bubble)
            if keyword_coords:
                print(f"\n!!! Keyword detected in bubble {target_bubble} !!!")
                bubble_text = get_bubble_text(keyword_coords)
                if not bubble_text:
                    print("Error: Could not get dialogue content.")
                    last_processed_bubble_bbox = target_bubble
                    continue
                if bubble_text in recent_texts:
                    print(f"Content '{bubble_text[:30]}...' is in recent history; skipping.")
                    last_processed_bubble_bbox = target_bubble
                    continue

                print(">>> New trigger event <<<")
                last_processed_bubble_bbox = target_bubble
                recent_texts.append(bubble_text)
                avatar_coords = find_avatar_for_bubble(target_bubble)
                sender_name = get_sender_name(avatar_coords)  # Uses state-based ESC cleanup
                if not sender_name:
                    print("Error: Could not get sender name; aborting this trigger.")
                    continue

                print("\n>>> Putting trigger info in queue <<<")
                print(f"  Sender: {sender_name}")
                print(f"  Content: {bubble_text[:100]}...")
                try:
                    data_to_send = {'sender': sender_name, 'text': bubble_text}
                    trigger_queue.put(data_to_send)  # queue.Queue.put is synchronous and thread-safe
                    print("Trigger info placed in queue.")
                except Exception as q_err:
                    print(f"Error putting data in queue: {q_err}")
                print("--- Single trigger processing complete ---")
                time.sleep(1)
            time.sleep(1.5)
        except KeyboardInterrupt:
            print("\nMonitoring interrupted.")
            break
        except Exception as e:
            print(f"Unknown error in monitoring loop: {e}")
            import traceback
            traceback.print_exc()
            print("Waiting 5 seconds before retry...")
            time.sleep(5)


# if __name__ == '__main__':  # Keep commented; typically called from main.py
#     pass
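`monitor_chat_for_trigger` is the producer side of a thread-safe handoff: the UI thread puts trigger dicts into a standard `queue.Queue`, and the consumer (in `main.py`, per the architecture section) pulls them off to drive the LLM. The handoff itself, stripped of all screen logic (thread bodies and the sample payload are illustrative):

```python
import queue
import threading

trigger_queue = queue.Queue()

def ui_thread():
    """Producer: what monitor_chat_for_trigger does after a keyword hit."""
    trigger_queue.put({'sender': 'SomePlayer', 'text': 'wolf, got a minute?'})

def consumer():
    """Consumer: blocks until a trigger arrives."""
    return trigger_queue.get()

t = threading.Thread(target=ui_thread)
t.start()
result = consumer()
t.join()
print(result['sender'], '->', result['text'])
```

Using a plain `queue.Queue` (rather than an asyncio queue) is what lets the blocking, pyautogui-driven UI loop stay in its own thread while the async side polls or wraps `get` as it sees fit.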
103
window-setup-script.py
Normal file
@ -0,0 +1,103 @@
#!/usr/bin/env python
"""
Game Window Setup Script - adjust the game window's position and size.

Launches the game (optionally) and moves/resizes its window to a known
position and size, making it easier to take consistent screenshots of
UI elements for the template library.
"""

import os
import time
import subprocess
import pygetwindow as gw
import psutil
import argparse


def is_process_running(process_name):
    """Check whether a process with the given name is currently running."""
    for proc in psutil.process_iter(['name']):
        if proc.info['name'].lower() == process_name.lower():
            return True
    return False


def launch_game(game_path):
    """Launch the game executable."""
    if not os.path.exists(game_path):
        print(f"Error: Game executable not found at {game_path}")
        return False

    print(f"Launching game: {game_path}")
    subprocess.Popen(game_path)
    return True


def find_game_window(window_title, max_wait=30):
    """Poll for the game window, giving up after max_wait seconds."""
    print(f"Searching for game window: {window_title}")

    start_time = time.time()
    while time.time() - start_time < max_wait:
        try:
            windows = gw.getWindowsWithTitle(window_title)
            if windows:
                return windows[0]
        except Exception as e:
            print(f"Error finding window: {e}")

        print("Window not found; waiting 1 second before retrying...")
        time.sleep(1)

    print(f"Error: Game window not found within {max_wait} seconds")
    return None
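`find_game_window` is a poll-with-timeout loop: try, wait, give up after `max_wait` seconds. The same pattern with an injectable probe function so it runs without a GUI (`wait_for` and `fake_get_window` are illustrative names, not repo code):

```python
import time

def wait_for(probe, max_wait=3.0, interval=0.05):
    """Call probe() until it returns a truthy value or max_wait seconds elapse."""
    start = time.time()
    while time.time() - start < max_wait:
        result = probe()
        if result:
            return result
        time.sleep(interval)
    return None   # timed out

# Simulate a window that only "appears" on the third poll.
calls = {'n': 0}
def fake_get_window():
    calls['n'] += 1
    return "game-window" if calls['n'] >= 3 else None

print(wait_for(fake_get_window))              # "game-window"
print(wait_for(lambda: None, max_wait=0.2))   # None: timed out
```

Swallowing probe exceptions inside the loop, as the real script does, matters on Windows where `getWindowsWithTitle` can fail transiently while the game is still starting up.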

def set_window_position_size(window, x, y, width, height):
    """Set the window's position and size."""
    try:
        print(f"Adjusting window position to ({x}, {y}) and size to {width}x{height}")
        window.moveTo(x, y)
        window.resizeTo(width, height)
        print("Window adjustment completed")
        return True
    except Exception as e:
        print(f"Error adjusting window: {e}")
        return False


def main():
    parser = argparse.ArgumentParser(description='Game Window Setup Tool')
    parser.add_argument('--launch', action='store_true', help='Launch the game if it is not running')
    parser.add_argument('--game_path', default=r"C:\Users\Bigspring\AppData\Local\TheLastWar\Launch.exe", help='Game launcher path')
    parser.add_argument('--window_title', default="Last War-Survival Game", help='Game window title')
    parser.add_argument('--process_name', default="LastWar.exe", help='Game process name')
    parser.add_argument('--x', type=int, default=50, help='Window X coordinate')
    parser.add_argument('--y', type=int, default=30, help='Window Y coordinate')
    parser.add_argument('--width', type=int, default=600, help='Window width')
    parser.add_argument('--height', type=int, default=1070, help='Window height')

    args = parser.parse_args()

    # Check whether the game is already running
    if not is_process_running(args.process_name):
        if args.launch:
            # Launch the game
            if not launch_game(args.game_path):
                return
        else:
            print(f"Game process {args.process_name} is not running; start the game first or pass --launch")
            return
    else:
        print(f"Game process {args.process_name} is already running")

    # Find the game window
    window = find_game_window(args.window_title)
    if not window:
        return

    # Set window position and size
    set_window_position_size(window, args.x, args.y, args.width, args.height)

    # Display the final window state
    print("\nFinal window state:")
    print(f"Position: ({window.left}, {window.top})")
    print(f"Size: {window.width}x{window.height}")


if __name__ == "__main__":
    main()