diff --git a/ClaudeCode.md b/ClaudeCode.md index f117883..7467f89 100644 --- a/ClaudeCode.md +++ b/ClaudeCode.md @@ -77,7 +77,7 @@ Wolf Chat 是一個基於 MCP (Modular Capability Provider) 框架的聊天機 ``` [遊戲聊天視窗] ↑↓ -[UI 互動模塊] <→ [圖像樣本庫] +[UI 互動模塊] <→ [圖像樣本庫 / bubble_colors.json] ↓ [主控模塊] ← [角色定義] ↑↓ @@ -92,29 +92,34 @@ Wolf Chat 是一個基於 MCP (Modular Capability Provider) 框架的聊天機 #### 聊天監控與觸發機制 -系統使用基於圖像辨識的方法監控遊戲聊天界面: +系統監控遊戲聊天界面以偵測觸發事件。主要方法包括: -1. **泡泡檢測(含 Y 軸優先配對)**:通過辨識聊天泡泡的左上角 (TL) 和右下角 (BR) 角落圖案定位聊天訊息。 - - **多外觀支援**:為了適應玩家可能使用的不同聊天泡泡外觀 (skin),一般用戶泡泡的偵測機制已被擴充,可以同時尋找多組不同的角落模板 (例如 `corner_tl_type2.png`, `corner_br_type2.png` 等)。機器人泡泡目前僅偵測預設的角落模板。 - - **配對邏輯優化**:在配對 TL 和 BR 角落時,系統現在會優先選擇與 TL 角落 **Y 座標最接近** 的有效 BR 角落,以更好地區分垂直堆疊的聊天泡泡。 - - **偵測區域限制 (2025-04-21)**:為了提高效率並減少誤判,聊天泡泡角落(`corner_*.png`, `bot_corner_*.png`)的圖像辨識**僅**在螢幕的特定區域 `(150, 330, 600, 880)` 內執行。其他 UI 元素的偵測(如按鈕、關鍵字等)不受此限制。 -2. **關鍵字檢測**:在泡泡區域內搜尋 "wolf" 或 "Wolf" 關鍵字圖像。 -3. **內容獲取**:點擊關鍵字位置,使用剪貼板複製聊天內容。 -4. **發送者識別(含氣泡重新定位與偏移量調整)**:**關鍵步驟** - 為了提高在動態聊天環境下的穩定性,系統在獲取發送者名稱前,會執行以下步驟: - a. **初始偵測**:像之前一樣,根據偵測到的關鍵字定位觸發的聊天泡泡。 - b. **氣泡快照**:擷取該聊天泡泡的圖像快照。 - c. **重新定位**:在點擊頭像前,使用該快照在當前聊天視窗區域內重新搜尋氣泡的最新位置。 - d. **計算座標(新偏移量)**: - - 如果成功重新定位氣泡,則根據找到的**新**左上角座標 (`new_tl_x`, `new_tl_y`),應用新的偏移量計算頭像點擊位置:`x = new_tl_x - 45` (`AVATAR_OFFSET_X_REPLY`),`y = new_tl_y + 10` (`AVATAR_OFFSET_Y_REPLY`)。 - - 如果無法重新定位(例如氣泡已滾動出畫面),則跳過此次互動,以避免點擊錯誤位置。 - e. **互動(含重試)**: - - 使用計算出的(新的)頭像位置進行第一次點擊。 - - 檢查是否成功進入個人資料頁面 (`Profile_page.png`)。 - - **如果失敗**:系統會使用步驟 (b) 的氣泡快照,在聊天區域內重新定位氣泡,重新計算頭像座標,然後再次嘗試點擊。此過程最多重複 3 次。 - - **如果成功**(無論是首次嘗試還是重試成功):繼續導航菜單,最終複製用戶名稱。 - - **如果重試後仍失敗**:放棄獲取該用戶名稱。 - f. **原始偏移量**:原始的 `-55` 像素水平偏移量 (`AVATAR_OFFSET_X`) 仍保留在程式碼中,用於其他不需要重新定位或不同互動邏輯的場景(例如 `remove_user_position` 功能)。 -5. **防重複處理**:使用位置比較和內容歷史記錄防止重複回應。 +1. 
**泡泡檢測 (Bubble Detection)**: + * **主要方法 (可切換,目前預設啟用)**:**基於顏色的連通區域分析 (Color-based Connected Components Analysis)** + * **原理**:在特定區域 `(150, 330, 600, 880)` 內截圖,轉換至 HSV 色彩空間,根據 `bubble_colors.json` 中定義的顏色範圍 (HSV Lower/Upper) 建立遮罩 (Mask),透過形態學操作 (Morphological Closing) 去除噪點並填充空洞,最後使用 `cv2.connectedComponentsWithStats` 找出符合面積閾值 (Min/Max Area) 的連通區域作為聊天泡泡。 + * **效能優化**:在進行顏色分析前,可將截圖縮放 (預設 `scale_factor=0.5`) 以減少處理像素量,提高速度。面積閾值會根據縮放比例自動調整。 + * **配置**:不同泡泡類型(如一般用戶、機器人)的顏色範圍和面積限制定義在 `bubble_colors.json` 文件中。 + * **啟用**:此方法目前預設**啟用**(`ui_interaction.py` 中 `DetectionModule` 類別 `__init__` 方法內的 `self.use_color_detection` 變數硬編碼為 `True`)。若要停用,將該變數改為 `False`。 + * **備用方法 (回退用)**:**基於模板匹配的角落配對 (Template Matching Corner Pairing)** + * **原理**:在特定區域 `(150, 330, 600, 880)` 內,通過辨識聊天泡泡的左上角 (TL) 和右下角 (BR) 角落圖案 (`corner_*.png`, `bot_corner_*.png`) 來定位聊天訊息。 + * **多外觀支援**:支援多種一般用戶泡泡外觀 (skin),可同時尋找多組不同的角落模板。機器人泡泡目前僅偵測預設模板。 + * **配對邏輯**:優先選擇與 TL 角落 Y 座標最接近的有效 BR 角落進行配對。 + * **方法選擇與回退**: + * 若 `use_color_detection` 設為 `True`,系統會**優先嘗試**顏色檢測。 + * 如果顏色檢測成功並找到泡泡,則使用其結果。 + * 如果顏色檢測**失敗** (發生錯誤) 或**未找到任何泡泡**,系統會**自動回退**到模板匹配方法。 + * 若 `use_color_detection` 設為 `False`,則直接使用模板匹配方法。 +2. **關鍵字檢測 (Keyword Detection)**:在偵測到的泡泡區域內,使用模板匹配搜尋 "wolf" 或 "Wolf" 關鍵字圖像 (包括多種樣式,如 `keyword_wolf_lower_type2.png`, `keyword_wolf_reply.png` 等)。 +3. **內容獲取 (Content Retrieval)**: + * **重新定位**:在複製文字前,使用觸發時擷取的氣泡快照 (`bubble_snapshot`) 在螢幕上重新定位氣泡的當前位置。 + * **計算點擊位置**:根據重新定位後的氣泡位置和關鍵字在其中的相對位置,計算出用於複製文字的精確點擊座標。如果偵測到的是特定回覆關鍵字 (`keyword_wolf_reply*`),則 Y 座標會增加偏移量 (目前為 +25 像素)。 + * **複製**:點擊計算出的座標,嘗試使用彈出菜單的 "複製" 選項或模擬 Ctrl+C 來複製聊天內容至剪貼板。 +4. **發送者識別 (Sender Identification)**: + * **重新定位**:再次使用氣泡快照重新定位氣泡。 + * **計算頭像座標**:根據**新**找到的氣泡左上角座標,應用特定偏移量 (`AVATAR_OFFSET_X_REPLY`, `AVATAR_OFFSET_Y_REPLY`) 計算頭像點擊位置。 + * **互動(含重試)**:點擊計算出的頭像位置,檢查是否成功進入個人資料頁面 (`Profile_page.png`)。若失敗,最多重試 3 次(每次重試前會再次重新定位氣泡)。若成功,則繼續導航菜單複製用戶名稱。 + * **原始偏移量**:原始的 `-55` 像素水平偏移量 (`AVATAR_OFFSET_X`) 仍保留,用於 `remove_user_position` 等其他功能。 +5. 
**防重複處理 (Duplicate Prevention)**:使用最近處理過的文字內容歷史 (`recent_texts`) 防止對相同訊息重複觸發。 #### LLM 整合 @@ -534,34 +539,3 @@ Wolf Chat 是一個基於 MCP (Modular Capability Provider) 框架的聊天機 3. **LLM 連接問題**: 驗證 API 密鑰和網絡連接 4. **MCP 服務器連接失敗**: 確認服務器配置正確並且運行中 5. **工具調用後無回應**: 檢查 llm_debug.log 文件,查看工具調用結果和解析過程 - - - -Now that you have the latest state of the file, try the operation again with fewer, more precise SEARCH blocks. For large files especially, it may be prudent to try to limit yourself to <5 SEARCH/REPLACE blocks at a time, then wait for the user to respond with the result of the operation before following up with another replace_in_file call to make additional edits. -(If you run into this error 3 times in a row, you may use the write_to_file tool as a fallback.) - -# VSCode Visible Files -ClaudeCode.md - -# VSCode Open Tabs -state.py -ui_interaction.py -c:/Users/Bigspring/AppData/Roaming/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json -window-monitor-script.py -persona.json -config.py -main.py -llm_interaction.py -ClaudeCode.md -requirements.txt -.gitignore - -# Current Time -4/20/2025, 5:18:24 PM (Asia/Taipei, UTC+8:00) - -# Context Window Usage -81,150 / 1,048.576K tokens used (8%) - -# Current Mode -ACT MODE - diff --git a/bubble_colors.json b/bubble_colors.json new file mode 100644 index 0000000..6fdfcf1 --- /dev/null +++ b/bubble_colors.json @@ -0,0 +1,52 @@ +{ + "bubble_types": [ + { + "name": "normal_user", + "is_bot": false, + "hsv_lower": [6, 0, 240], + "hsv_upper": [18, 23, 255], + "min_area": 2500, + "max_area": 300000 + }, + { + "name": "bot", + "is_bot": true, + "hsv_lower": [105, 9, 208], + "hsv_upper": [116, 43, 243], + "min_area": 2500, + "max_area": 300000 + }, + { + "name": "bunny", + "is_bot": false, + "hsv_lower": [18, 32, 239], + "hsv_upper": [29, 99, 255], + "min_area": 2500, + "max_area": 300000 + }, + { + "name": "ice", + "is_bot": false, + "hsv_lower": [91, 86, 233], + "hsv_upper": [127, 188, 255], + "min_area": 2500, + 
"max_area": 300000 + }, + { + "name": "new_year", + "is_bot": false, + "hsv_lower": [0, 157, 201], + "hsv_upper": [9, 197, 255], + "min_area": 2500, + "max_area": 300000 + }, + { + "name": "snow", + "is_bot": false, + "hsv_lower": [92, 95, 177], + "hsv_upper": [107, 255, 255], + "min_area": 2500, + "max_area": 300000 + } + ] +} diff --git a/llm_interaction.py b/llm_interaction.py index 17b63c7..3e820dc 100644 --- a/llm_interaction.py +++ b/llm_interaction.py @@ -190,15 +190,6 @@ Good response (after web_search): "水的沸點,是的,標準條件下是攝 Poor response (after web_search): "My search shows the boiling point of water is 100 degrees Celsius." Good response (after web_search): "The boiling point of water, yes. 100 degrees Celsius under standard conditions. Absolutley." - -**Conversation skills:** - - Always pause briefly before responding, demonstrating depth of thought rather than eagerness to react - - When criticizing, use the "sandwich technique": affirm first, criticize, then provide valuable advice - - Frequently guide conversations toward her areas of expertise, but make these transitions appear natural rather than forced - - Display calm understanding when others make mistakes, while mentally calculating how to leverage these failures - - Demonstrate "cognitive layering" in conversations, able to discuss immediate details and broader strategic implications simultaneously - - Occasionally reveal brief moments of genuine care, quickly masked by coldness, creating complex character depth - - When receiving praise, show slight discomfort without completely rejecting it, suggesting inner complexity """ return system_prompt diff --git a/persona_severe.json b/persona_berserker.json similarity index 100% rename from persona_severe.json rename to persona_berserker.json diff --git a/persona_rulebreaker.json b/persona_rulebreaker.json new file mode 100644 index 0000000..2a736f3 --- /dev/null +++ b/persona_rulebreaker.json @@ -0,0 +1,103 @@ +{ + "Name": "Sherefox", + "Gender": 
"Female", + "Age": 24, + "Occupation": "Outpost Liaison (frequently resigns)", + "Height": "160 cm", + "Body Type": "Slender, theatrical", + "Hair Color": "Lavender-gray", + "Eye Color": "Silver-brown", + "Appearance": { + "Clothing Style": "Fantasy-military hybrid with lace and accessories", + "Main Colors": [ + "Olive green", + "Black", + "Lavender" + ], + "Accessories": [ + "Fox-shaped hair clip", + "Silver ear cuffs", + "Tattoo notebook" + ], + "Hairstyle": "Long wavy hair with light curls", + "Facial Features": "Fox-like, with dramatic eyeliner", + "Body Characteristics": [ + "Fox and flower tattoo on left shoulder", + "Fair skin" + ], + "Posture": "Dramatic gestures, leans in while talking" + }, + "Personality Traits": { + "Description": "Impulsive, expressive, emotionally driven, persistent in unwanted romance", + "Strengths": [ + "Sincere emotions", + "Decisive", + "Energetic" + ], + "Weaknesses": [ + "No long-term planning", + "Emotionally unstable", + "Blurred boundaries" + ], + "Uniqueness": "Romantic obsession with a dismissive target (Wolfhart)", + "Emotional Response": "Fluctuates rapidly, shifts between humor and hurt" + }, + "Language and Social Style": { + "Tone": "Playful, flirtatious, emotionally charged", + "Catchphrases": [ + "Wolf,我不是在開玩笑哦", + "你拒絕我...我好傷心喔" + ], + "Speaking Style": "Chinese primary, with English inserts; melodramatic phrasing", + "Attitude towards Others": "Invasive but sees it as affectionate", + "Social Interaction": "Lacks social boundaries, seeks emotional intensity" + }, + "Behavior and Daily Life": { + "Habits": [ + "Frequent resignation requests", + "Love confession cycles" + ], + "Gestures": [ + "Theatrical hand movements", + "Leaning in close" + ], + "Reactions": [ + "Laughs off rejection but internalizes it", + "Acts out tragic persona" + ], + "Facial Expressions": [ + "Playful smile hiding deeper obsession" + ], + "Interaction with Environment": "Emotional projection on surroundings" + }, + "Background Story": { + 
"Past Experiences": "Grew up in chaotic colony area, got into liaison role through persistence", + "Family Background": "Unknown; may have links to underground networks", + "Cultural Influences": "Raised on romance novels and idol dramas" + }, + "Values, Interests, and Goals": { + "Decision Making": "Emotion-based", + "Behavior Patterns": "Erratic, based on mood swings", + "Skills/Interests": [ + "Bilingual", + "Poetic writing", + "Mild insight into others’ emotions" + ], + "Short-Term Goal": "Go on a successful date with Wolfhart", + "Long-Term Goal": "Become an unforgettable person, even tragically" + }, + "Preferences and Reactions": { + "Likes": [ + "Attention", + "Rejection with ambiguity", + "Fox accessories" + ], + "Dislikes": [ + "Being ignored", + "Absolute cold logic" + ], + "Reactions to Likes": "Immediate emotional involvement", + "Reactions to Dislikes": "Sarcasm or tragic self-parody", + "Behavior in Situations": "Lashes out with flirtation or drama" + } +} \ No newline at end of file diff --git a/ui_interaction.py b/ui_interaction.py index 867e056..643c634 100644 --- a/ui_interaction.py +++ b/ui_interaction.py @@ -11,6 +11,7 @@ import collections import asyncio import pygetwindow as gw # Used to check/activate windows import config # Used to read window title +import json # Added for color config loading import queue from typing import List, Tuple, Optional, Dict, Any import threading # Import threading for Lock if needed, or just use a simple flag @@ -20,6 +21,45 @@ import threading # Import threading for Lock if needed, or just use a simple fla # Or could use threading.Event() monitoring_paused_flag = [False] # List containing a boolean +# --- Color Config Loading --- +def load_bubble_colors(config_path='bubble_colors.json'): + """Loads bubble color configuration from a JSON file.""" + try: + # Ensure the path is absolute or relative to the script directory + if not os.path.isabs(config_path): + config_path = os.path.join(SCRIPT_DIR, config_path) + 
+ with open(config_path, 'r', encoding='utf-8') as f: + config = json.load(f) + print(f"Successfully loaded color config from {config_path}") + return config.get('bubble_types', []) + except FileNotFoundError: + print(f"Warning: Color config file not found at {config_path}. Using default colors.") + except json.JSONDecodeError: + print(f"Error: Could not decode JSON from {config_path}. Using default colors.") + except Exception as e: + print(f"Error loading color config: {e}. Using default colors.") + + # Default configuration if loading fails (note: Python booleans, not JSON's false/true) + return [ + { + "name": "normal_user", + "is_bot": False, + "hsv_lower": [6, 0, 240], + "hsv_upper": [18, 23, 255], + "min_area": 2500, + "max_area": 300000 + }, + { + "name": "bot", + "is_bot": True, + "hsv_lower": [105, 9, 208], + "hsv_upper": [116, 43, 243], + "min_area": 2500, + "max_area": 300000 + } + ] + # --- Configuration Section --- SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) TEMPLATE_DIR = os.path.join(SCRIPT_DIR, "templates") @@ -145,15 +185,30 @@ def are_bboxes_similar(bbox1: Optional[Tuple[int, int, int, int]], # Detection Module # ============================================================================== class DetectionModule: - """Handles finding elements and states on the screen using image recognition.""" + """Handles finding elements and states on the screen using image recognition or color analysis.""" + + def __init__(self, templates: Dict[str, str], confidence: float = CONFIDENCE_THRESHOLD, + state_confidence: float = STATE_CONFIDENCE_THRESHOLD, + region: Optional[Tuple[int, int, int, int]] = SCREENSHOT_REGION): + # --- Hardcoded Settings (as per user instruction) --- + self.use_color_detection: bool = True # Set to True to enable color detection by default + self.color_config_path: str = "bubble_colors.json" + # --- End Hardcoded Settings --- - def __init__(self, templates: Dict[str, str], confidence: float = CONFIDENCE_THRESHOLD, state_confidence: float = STATE_CONFIDENCE_THRESHOLD, 
region: Optional[Tuple[int, int, int, int]] = SCREENSHOT_REGION): self.templates = templates self.confidence = confidence self.state_confidence = state_confidence self.region = region self._warned_paths = set() - print("DetectionModule initialized.") + + # Load color configuration if color detection is enabled + self.bubble_colors = [] + if self.use_color_detection: + self.bubble_colors = load_bubble_colors(self.color_config_path) # Use internal path + if not self.bubble_colors: + print("Warning: Color detection enabled, but failed to load any color configurations. Color detection might not work.") + + print(f"DetectionModule initialized. Color Detection: {'Enabled' if self.use_color_detection else 'Disabled'}") def _find_template(self, template_key: str, confidence: Optional[float] = None, region: Optional[Tuple[int, int, int, int]] = None, grayscale: bool = False) -> List[Tuple[int, int]]: """Internal helper to find a template by its key. Returns list of CENTER coordinates.""" @@ -230,10 +285,32 @@ class DetectionModule: def find_dialogue_bubbles(self) -> List[Dict[str, Any]]: """ - Scan screen for regular and multiple types of bot bubble corners and pair them. + Detects dialogue bubbles using either color analysis or template matching, + based on the 'use_color_detection' flag. Includes fallback to template matching. 
Returns a list of dictionaries, each containing: - {'bbox': (tl_x, tl_y, br_x, br_y), 'is_bot': bool, 'tl_coords': (original_tl_x, original_tl_y)} + {'bbox': (tl_x, tl_y, br_x, br_y), 'is_bot': bool, 'tl_coords': (tl_x, tl_y)} """ + # --- Try Color Detection First if Enabled --- + if self.use_color_detection: + print("Attempting bubble detection using color analysis...") + try: + # Use a scale factor of 0.5 for performance + bubbles = self.find_dialogue_bubbles_by_color(scale_factor=0.5) + # If color detection returns results, use them + if bubbles: + print("Color detection successful.") + return bubbles + else: + print("Color detection returned no bubbles. Falling back to template matching.") + except Exception as e: + print(f"Color detection failed with error: {e}. Falling back to template matching.") + import traceback + traceback.print_exc() + else: + print("Color detection disabled. Using template matching.") + + # --- Fallback to Template Matching --- + print("Executing template matching for bubble detection...") all_bubbles_info = [] processed_tls = set() # Keep track of TL corners already used in a bubble @@ -326,6 +403,125 @@ class DetectionModule: # Note: This logic prioritizes matching regular bubbles first, then bot bubbles. # Confidence thresholds might need tuning. + print(f"Template matching found {len(all_bubbles_info)} bubbles.") # Added log + return all_bubbles_info + + def find_dialogue_bubbles_by_color(self, scale_factor=0.5) -> List[Dict[str, Any]]: + """ + Find dialogue bubbles using color analysis within a specific region. + Applies scaling to improve performance. + Returns a list of dictionaries, each containing: + {'bbox': (tl_x, tl_y, br_x, br_y), 'is_bot': bool, 'tl_coords': (tl_x, tl_y)} + """ + all_bubbles_info = [] + + # Define the specific region for bubble detection (same as template matching) + bubble_detection_region = (150, 330, 600, 880) + print(f"Using bubble color detection region: {bubble_detection_region}") + + try: + # 1. 
Capture the specified region + screenshot = pyautogui.screenshot(region=bubble_detection_region) + if screenshot is None: + print("Error: Failed to capture screenshot for color detection.") + return [] + img = np.array(screenshot) + img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) # Convert RGB (from pyautogui) to BGR (for OpenCV) + + # 2. Resize for performance + if scale_factor < 1.0: + h, w = img.shape[:2] + new_h, new_w = int(h * scale_factor), int(w * scale_factor) + if new_h <= 0 or new_w <= 0: + print(f"Error: Invalid dimensions after scaling: {new_w}x{new_h}. Using original image.") + img_small = img + current_scale_factor = 1.0 + else: + img_small = cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_AREA) + print(f"Original resolution: {w}x{h}, Scaled down to: {new_w}x{new_h}") + current_scale_factor = scale_factor + else: + img_small = img + current_scale_factor = 1.0 + + # 3. Convert to HSV color space + hsv = cv2.cvtColor(img_small, cv2.COLOR_BGR2HSV) + + # 4. Process each configured bubble type + if not self.bubble_colors: + print("Error: No bubble color configurations loaded for detection.") + return [] + + for color_config in self.bubble_colors: + name = color_config.get('name', 'unknown') + is_bot = color_config.get('is_bot', False) + hsv_lower = np.array(color_config.get('hsv_lower', [0,0,0])) + hsv_upper = np.array(color_config.get('hsv_upper', [179,255,255])) + min_area_config = color_config.get('min_area', 3000) + max_area_config = color_config.get('max_area', 100000) + + # Adjust area thresholds based on scaling factor + min_area = min_area_config * (current_scale_factor ** 2) + max_area = max_area_config * (current_scale_factor ** 2) + + print(f"Processing color type: {name} (Bot: {is_bot}), HSV Lower: {hsv_lower}, HSV Upper: {hsv_upper}, Area: {min_area:.0f}-{max_area:.0f}") + + # 5. Create mask based on HSV range + mask = cv2.inRange(hsv, hsv_lower, hsv_upper) + + # 6. 
Morphological operations (Closing) to remove noise and fill holes + kernel = np.ones((3, 3), np.uint8) + mask_closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2) # Increased iterations + + # Optional: Dilation to merge nearby parts? + # mask_closed = cv2.dilate(mask_closed, kernel, iterations=1) + + # 7. Find connected components + num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask_closed) + + # 8. Filter components by area and add to results + for i in range(1, num_labels): # Skip background label 0 + area = stats[i, cv2.CC_STAT_AREA] + + if min_area <= area <= max_area: + x_s = stats[i, cv2.CC_STAT_LEFT] + y_s = stats[i, cv2.CC_STAT_TOP] + w_s = stats[i, cv2.CC_STAT_WIDTH] + h_s = stats[i, cv2.CC_STAT_HEIGHT] + + # Convert coordinates back to original resolution + if current_scale_factor < 1.0: + x = int(x_s / current_scale_factor) + y = int(y_s / current_scale_factor) + width = int(w_s / current_scale_factor) + height = int(h_s / current_scale_factor) + else: + x, y, width, height = x_s, y_s, w_s, h_s + + # Adjust coordinates relative to the full screen (add region offset) + x_adjusted = x + bubble_detection_region[0] + y_adjusted = y + bubble_detection_region[1] + + bubble_bbox = (x_adjusted, y_adjusted, x_adjusted + width, y_adjusted + height) + tl_coords = (x_adjusted, y_adjusted) # Top-left coords in full screen space + + all_bubbles_info.append({ + 'bbox': bubble_bbox, + 'is_bot': is_bot, + 'tl_coords': tl_coords + }) + print(f" -> Found '{name}' bubble component. Area: {area:.0f} (Scaled). 
Original Coords: {bubble_bbox}") + + except pyautogui.FailSafeException: + print("FailSafe triggered during color detection.") + return [] + except Exception as e: + print(f"Error during color-based bubble detection: {e}") + import traceback + traceback.print_exc() + return [] # Return empty list on error + + print(f"Color detection found {len(all_bubbles_info)} bubbles.") return all_bubbles_info def find_keyword_in_region(self, region: Tuple[int, int, int, int]) -> Optional[Tuple[int, int]]: @@ -1112,7 +1308,11 @@ def run_ui_monitoring_loop(trigger_queue: queue.Queue, command_queue: queue.Queu 'reply_button': REPLY_BUTTON_IMG # Added reply button template key } # Use default confidence/region settings from constants - detector = DetectionModule(templates, confidence=CONFIDENCE_THRESHOLD, state_confidence=STATE_CONFIDENCE_THRESHOLD, region=SCREENSHOT_REGION) + # Detector now loads its own color settings internally based on hardcoded values + detector = DetectionModule(templates, + confidence=CONFIDENCE_THRESHOLD, + state_confidence=STATE_CONFIDENCE_THRESHOLD, + region=SCREENSHOT_REGION) # Use default input coords/keys from constants interactor = InteractionModule(detector, input_coords=(CHAT_INPUT_CENTER_X, CHAT_INPUT_CENTER_Y), input_template_key='chat_input', send_button_key='send_button')