Adding "Gesture Control" to Jarvis: From Principles to Practice, a Local Assistant with Multimodal Interaction
The Jarvis we built previously already handles "voice interaction + device control", but it still lacks that Iron Man smoothness: what if, like Tony, you could skip talking entirely and just wave a hand to make Jarvis turn on the lights, skip a track, or adjust the air conditioner? This post takes a **deep dive into the technical details of gesture control**, from how it works, through a pitfall guide, to advanced extensions, so that gesture control is not just usable but genuinely pleasant to use.
I. Why Gesture Control Is the "Next Interaction Dimension"
Before diving into the technology, let's talk about why gesture control is worth adding at all. Voice interaction is convenient, but it has situational limits: in a meeting you don't want to disturb others, in the kitchen your hands are covered in oil and talking is awkward, and while gaming you need fast, silent commands. In those moments, gestures are the more natural complement.
Beyond feeling natural, local gesture control has three core advantages:

- Privacy by design: camera frames are processed only in local memory, never stored or uploaded, so there is no "cloud eavesdropping" concern;
- Real-time, low latency: with a lightweight model such as MediaPipe, inference runs at 30 frames per second, faster than voice recognition (which needs 1-2 seconds of recording plus transcription);
- Low hardware bar: a laptop's built-in webcam or any USB camera is enough; no depth camera (e.g. Kinect) is required, so the extra cost is essentially zero.
Our goal is not just "flash an OK sign to turn on the light", but true "gesture + voice" multimodal interaction: say "Jarvis, control the air conditioner", then sign "OK" to raise the temperature or make a fist to lower it, making interaction more precise and more flexible.
II. How Gesture Recognition Works: From "Pixels" to "Commands"
Gesture recognition sounds complicated, but the core is four steps: **capture → parse → classify → execute**. Let's get the principles straight before writing code, so that later, when optimizing, you know exactly what to change.
1. Step one: real-time camera capture (frame input)
- Hardware: a computer camera (built-in or external; 720p is enough);
- Software: OpenCV reads camera frames at 30 fps (the minimum rate at which the eye stops perceiving stutter);
- Key parameters: set the resolution to 640×480 (larger hurts speed, smaller hurts accuracy) and convert frames from OpenCV's default BGR to RGB (MediaPipe expects RGB).
2. Step two: hand landmark parsing (the core technology)
This is the "soul" of gesture recognition. We use Google's open-source **MediaPipe Hands** model, which automatically detects 21 hand landmarks (one numbered point per joint), for example:
- index 0: wrist (root of the hand);
- index 4: thumb tip;
- index 8: index fingertip;
- index 12: middle fingertip;
- index 16: ring fingertip;
- index 20: pinky fingertip.
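MediaPipe reports each landmark as x/y ratios of the frame (0-1), so converting them to pixel coordinates is a one-liner. A minimal sketch (the landmark dict below is made-up sample data, not real MediaPipe output):

```python
def to_pixels(landmarks_rel, width=640, height=480):
    """Convert MediaPipe's 0-1 relative coordinates to pixel coordinates."""
    return {i: (int(x * width), int(y * height)) for i, (x, y) in landmarks_rel.items()}

# e.g. landmark 8 (index fingertip) at 50% / 25% of a 640x480 frame
print(to_pixels({8: (0.5, 0.25)}))  # → {8: (320, 120)}
```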
MediaPipe's advantages:
- Lightweight: the model is only a few MB and runs on an ordinary laptop CPU (no GPU needed);
- Robust: it handles cluttered backgrounds and varied lighting, and still works with mild hand occlusion;
- Real-time: single-frame inference takes under 30 ms, so 30 fps is no problem at all.
3. Step three: gesture classification (the logic core)
Given the coordinates of the 21 landmarks, how do we tell an "OK sign" from a "fist"? The core is **math plus rule matching**:
- Distance checks: an "OK sign" requires the thumb tip (4) and index tip (8) to be less than 30 pixels apart (pinched), while the other fingertips (12, 16, 20) stay far from the palm center (spread);
- Angle checks: a "thumbs-up" requires the thumb pointing up with the other fingers curled, which you can detect from the angles at the finger joints;
- Threshold tuning: distance and angle thresholds must be adjusted to the actual scene (e.g. loosen them in dim light to avoid misrecognition).
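The distance rule above can be sketched in a few lines of plain Python. The 30/50-pixel thresholds match the ones used in the full implementation later in this post; the coordinates in the usage example are hypothetical:

```python
import math

def is_ok_gesture(thumb_tip, index_tip, other_tips, palm_center,
                  close_thresh=30, open_thresh=50):
    """OK sign: thumb and index tips pinched together, remaining tips away from the palm."""
    pinch = math.hypot(thumb_tip[0] - index_tip[0], thumb_tip[1] - index_tip[1])
    spread = all(math.hypot(t[0] - palm_center[0], t[1] - palm_center[1]) > open_thresh
                 for t in other_tips)
    return pinch < close_thresh and spread

# Pinched thumb/index, other fingertips far from the palm → OK sign
print(is_ok_gesture((100, 100), (110, 105),
                    [(200, 40), (220, 60), (240, 80)], (150, 160)))  # → True
```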
4. Step four: command execution (hooking into Jarvis)
Once a gesture is recognized, it must map to a concrete command, e.g. "OK" → light on, "fist" → light off. This step plugs into Jarvis's existing device-control logic (such as the control_smart_home function), with voice feedback added so the user knows the command went through.
III. Hands-On: Building the Gesture Recognition Module from Scratch
We'll proceed in three parts, basic module development → troubleshooting → performance tuning, so it runs on the first attempt and you dodge the common pitfalls.
1. Environment setup: more than just installing libraries
We previously mentioned installing mediapipe and opencv-python, but in practice you may hit version-compatibility issues, so here is a **tested version combination** (avoids the usual "ImportError"):

```bash
# Uninstall potentially conflicting old versions first
pip uninstall -y mediapipe opencv-python
# Install a compatible pair (works on Python 3.8-3.11)
pip install mediapipe==0.10.9 opencv-python==4.8.0.76
```
You also need to handle **camera permissions**:
- Windows: the first run pops up "Allow this app to use your camera?"; choose Yes;
- macOS: open "System Settings → Privacy & Security → Camera" and tick your Python IDE (e.g. PyCharm);
- Linux (Ubuntu): install the v4l-utils tools: sudo apt-get install v4l-utils.
2. Core code: a reusable gesture recognition class (fully commented)
We wrap everything in a GestureRecognizer class that supports starting/stopping recognition, fetching the latest gesture, and custom gesture rules, so it can later be imported straight into the Jarvis main program:
```python
import cv2
import mediapipe as mp
import time
import math
from threading import Thread, Event
from typing import Dict, Tuple, Optional, List

# --------------------------
# 1. Constants (adjust to taste)
# --------------------------
# Gesture type enum (register new gestures here)
GESTURE_TYPES = {
    "OK": "ok_gesture",              # OK sign: turn a device on
    "FIST": "fist_gesture",          # fist: turn a device off
    "PEACE": "peace_gesture",        # peace sign: adjust a parameter (e.g. volume up)
    "THUMB_UP": "thumb_up_gesture",  # thumbs-up: adjust a parameter (e.g. volume down)
    "PALM": "palm_gesture",          # open palm: pause/play
}

# Hand landmark indices (MediaPipe's 21-point hand model;
# note: the index finger's MCP/PIP joints are 5/6 in MediaPipe's numbering)
HAND_LANDMARKS = {
    "WRIST": 0,        # wrist
    "THUMB_CMC": 1,    # thumb carpometacarpal joint
    "THUMB_MCP": 2,    # thumb metacarpophalangeal joint
    "THUMB_IP": 3,     # thumb interphalangeal joint
    "THUMB_TIP": 4,    # thumb tip
    "INDEX_MCP": 5,    # index metacarpophalangeal joint
    "INDEX_PIP": 6,    # index proximal interphalangeal joint
    "INDEX_TIP": 8,    # index fingertip
    "MIDDLE_TIP": 12,  # middle fingertip
    "RING_TIP": 16,    # ring fingertip
    "PINKY_TIP": 20,   # pinky fingertip
}


# --------------------------
# 2. Core recognizer class
# --------------------------
class GestureRecognizer:
    def __init__(self,
                 camera_index: int = 0,
                 confidence_threshold: float = 0.7,
                 frame_width: int = 640,
                 frame_height: int = 480,
                 frame_rate: int = 30):
        """Initialize the gesture recognizer.
        :param camera_index: camera index (0 = built-in, 1 = external)
        :param confidence_threshold: detection confidence (0-1; higher = more precise but more misses)
        :param frame_width/frame_height: frame size (balances speed and accuracy)
        :param frame_rate: frame rate (25-30 recommended; lower stutters, higher burns CPU)
        """
        # Initialize the MediaPipe Hands model
        self.mp_hands = mp.solutions.hands
        self.hands_model = self.mp_hands.Hands(
            static_image_mode=False,  # live-video mode (not static images)
            max_num_hands=1,          # track at most one hand (avoids cross-hand noise)
            min_detection_confidence=confidence_threshold,
            min_tracking_confidence=confidence_threshold,
        )
        # For drawing hand landmarks (debugging aid; can be turned off)
        self.mp_drawing = mp.solutions.drawing_utils
        self.draw_landmarks = True  # visualize landmarks by default

        # Camera setup
        self.camera_index = camera_index
        self.cap = cv2.VideoCapture(camera_index)
        if not self.cap.isOpened():
            raise RuntimeError(
                f"Cannot open camera (index {camera_index}); check the connection or permissions")
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)
        self.frame_width = frame_width
        self.frame_height = frame_height
        self.frame_delay = 1 / frame_rate  # per-frame sleep (caps the frame rate)

        # State management
        self.running = Event()  # thread start/stop switch (set = running)
        self.running.set()
        self.latest_gesture: Optional[str] = None  # most recently recognized gesture
        self.palm_center: Optional[Tuple[int, int]] = None  # recomputed every frame
        self.landmarks_history: List[Dict[int, Tuple[int, int]]] = []  # for smoothing
        self.history_length = 5  # frames of history (smooths results, reduces jitter)

        # Background recognition thread (daemon: dies with the main program)
        self.recognize_thread = Thread(target=self._run_recognition_loop, daemon=True)

    def start(self) -> "GestureRecognizer":
        """Start gesture recognition (chainable)."""
        if not self.recognize_thread.is_alive():
            self.recognize_thread.start()
            print(f"✅ Gesture recognition started (camera index: {self.camera_index})")
            print(f"   - frame size: {self.frame_width}×{self.frame_height}, "
                  f"frame rate: {int(1 / self.frame_delay)}")
            print("   - press 'q' to close the preview, 'd' to toggle landmark drawing")
        return self

    def stop(self) -> None:
        """Stop recognition and release resources."""
        self.running.clear()
        if self.recognize_thread.is_alive():
            self.recognize_thread.join(timeout=1)  # wait up to 1 s for the thread
        self.cap.release()
        cv2.destroyAllWindows()
        print("\n❌ Gesture recognition stopped, resources released")

    def get_latest_gesture(self, clear_after_get: bool = True) -> Optional[str]:
        """Fetch the most recently recognized gesture.
        :param clear_after_get: clear it after reading (avoids retriggering commands)
        :return: gesture type (e.g. "ok_gesture") or None if nothing was recognized
        """
        gesture = self.latest_gesture
        if clear_after_get:
            self.latest_gesture = None
        return gesture

    def toggle_landmarks_drawing(self) -> bool:
        """Toggle landmark visualization (debugging aid)."""
        self.draw_landmarks = not self.draw_landmarks
        print(f"🔄 Landmark drawing {'enabled' if self.draw_landmarks else 'disabled'}")
        return self.draw_landmarks

    # --------------------------
    # 3. Core recognition loop (background thread)
    # --------------------------
    def _run_recognition_loop(self) -> None:
        """Live loop: read frame → parse landmarks → classify gesture → publish result."""
        while self.running.is_set():
            # 1. Read a camera frame (retry instead of blocking forever)
            ret, frame = self.cap.read()
            if not ret:
                time.sleep(0.05)  # camera not ready: wait 50 ms and retry
                continue

            # 2. Preprocess (BGR→RGB, mirror flip so left/right match your movement)
            frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame_mirrored = cv2.flip(frame_rgb, 1)

            # 3. Parse hand landmarks with MediaPipe
            results = self.hands_model.process(frame_mirrored)
            current_landmarks: Optional[Dict[int, Tuple[int, int]]] = None
            if results.multi_hand_landmarks:
                # Only the first hand (avoids multi-hand interference)
                hand_landmarks = results.multi_hand_landmarks[0]
                # Relative coordinates → absolute pixel coordinates
                current_landmarks = self._convert_landmarks(hand_landmarks)
                # Palm center (used for fist / open-palm checks)
                self.palm_center = self._calculate_palm_center(current_landmarks)
                # Save history (for smoothing)
                self._update_landmarks_history(current_landmarks)

            # 4. Classify (only with landmarks and a full history window)
            if current_landmarks and len(self.landmarks_history) >= self.history_length:
                # Smoothed decision over history (suppresses single-frame jitter)
                self.latest_gesture = self._judge_gesture_smoothed()
            else:
                self.landmarks_history.clear()
                self.latest_gesture = None

            # 5. Show the preview window (with keyboard shortcuts)
            self._show_preview(frame_mirrored, current_landmarks)

            # 6. Cap the frame rate (keeps CPU usage down)
            time.sleep(self.frame_delay)

    # --------------------------
    # 4. Helpers (the technical details)
    # --------------------------
    def _convert_landmarks(self, hand_landmarks) -> Dict[int, Tuple[int, int]]:
        """Convert MediaPipe's relative landmark coordinates to absolute pixels."""
        landmarks = {}
        for idx, lm in enumerate(hand_landmarks.landmark):
            # lm.x/lm.y are 0-1 ratios of the frame; multiply by frame size for pixels
            x = int(lm.x * self.frame_width)
            y = int(lm.y * self.frame_height)
            landmarks[idx] = (x, y)
        return landmarks

    def _calculate_palm_center(self, landmarks: Dict[int, Tuple[int, int]]) -> Tuple[int, int]:
        """Palm center: average of four landmarks at the base of the hand
        (wrist, thumb CMC, index MCP, pinky MCP)."""
        palm_landmarks = [0, 1, 5, 17]
        x_coords = [landmarks[idx][0] for idx in palm_landmarks if idx in landmarks]
        y_coords = [landmarks[idx][1] for idx in palm_landmarks if idx in landmarks]
        return (sum(x_coords) // len(x_coords), sum(y_coords) // len(y_coords))

    def _update_landmarks_history(self, landmarks: Dict[int, Tuple[int, int]]) -> None:
        """Append to the landmark history, keeping a fixed length (for smoothing)."""
        self.landmarks_history.append(landmarks)
        if len(self.landmarks_history) > self.history_length:
            self.landmarks_history.pop(0)  # drop the oldest frame

    def _judge_gesture_smoothed(self) -> Optional[str]:
        """Smoothed gesture decision over history (reduces single-frame errors)."""
        # Count which gesture each historical frame votes for
        gesture_counts = {}
        for landmarks in self.landmarks_history:
            gesture = self._judge_gesture_single_frame(landmarks)
            if gesture:
                gesture_counts[gesture] = gesture_counts.get(gesture, 0) + 1
        # Return the majority gesture (needs more than half the frames to agree)
        if not gesture_counts:
            return None
        max_count = max(gesture_counts.values())
        if max_count >= self.history_length // 2 + 1:
            return max(gesture_counts, key=gesture_counts.get)
        return None

    def _judge_gesture_single_frame(self, landmarks: Dict[int, Tuple[int, int]]) -> Optional[str]:
        """Single-frame gesture classification (the core logic; extend it here)."""
        # Make sure all required landmarks are present
        required_landmarks = [0, 4, 8, 12, 16, 20]  # wrist + 5 fingertips
        if not all(idx in landmarks for idx in required_landmarks):
            return None
        palm_center = self.palm_center
        if palm_center is None:
            return None

        # Key coordinates
        wrist = landmarks[0]
        thumb_tip = landmarks[4]
        index_tip = landmarks[8]
        middle_tip = landmarks[12]
        ring_tip = landmarks[16]
        pinky_tip = landmarks[20]

        # --------------------------
        # Gesture 1: OK sign (thumb + index pinched, others spread)
        # --------------------------
        ok_distance = self._calculate_euclidean_distance(thumb_tip, index_tip)
        middle_palm_dist = self._calculate_euclidean_distance(middle_tip, palm_center)
        ring_palm_dist = self._calculate_euclidean_distance(ring_tip, palm_center)
        pinky_palm_dist = self._calculate_euclidean_distance(pinky_tip, palm_center)
        # Rule: pinch distance < 30 px, other fingertips > 50 px from the palm (spread)
        if (ok_distance < 30 and middle_palm_dist > 50
                and ring_palm_dist > 50 and pinky_palm_dist > 50):
            return GESTURE_TYPES["OK"]

        # --------------------------
        # Gesture 2: fist (all fingertips close to the palm)
        # --------------------------
        # All fingertips within 40 px of the palm center (curled)
        all_tips = [thumb_tip, index_tip, middle_tip, ring_tip, pinky_tip]
        fist_distances = [self._calculate_euclidean_distance(tip, palm_center)
                          for tip in all_tips]
        if all(dist < 40 for dist in fist_distances):
            return GESTURE_TYPES["FIST"]

        # --------------------------
        # Gesture 3: peace sign (index + middle spread, others curled)
        # --------------------------
        # Index-middle distance > 60 px (spread); other fingertips within 40 px of the palm
        peace_distance = self._calculate_euclidean_distance(index_tip, middle_tip)
        thumb_fist = self._calculate_euclidean_distance(thumb_tip, palm_center) < 40
        ring_fist = self._calculate_euclidean_distance(ring_tip, palm_center) < 40
        pinky_fist = self._calculate_euclidean_distance(pinky_tip, palm_center) < 40
        if peace_distance > 60 and thumb_fist and ring_fist and pinky_fist:
            return GESTURE_TYPES["PEACE"]

        # --------------------------
        # Gesture 4: thumbs-up (thumb extended, others curled)
        # --------------------------
        # Thumb tip > 80 px from the wrist (extended); other fingertips curled
        thumb_up_distance = self._calculate_euclidean_distance(thumb_tip, wrist)
        index_fist = self._calculate_euclidean_distance(index_tip, palm_center) < 40
        middle_fist = self._calculate_euclidean_distance(middle_tip, palm_center) < 40
        if thumb_up_distance > 80 and index_fist and middle_fist and ring_fist and pinky_fist:
            return GESTURE_TYPES["THUMB_UP"]

        # --------------------------
        # Gesture 5: open palm (all fingers spread)
        # --------------------------
        # All fingertips > 60 px from the palm center (spread)
        if (middle_palm_dist > 60 and ring_palm_dist > 60
                and pinky_palm_dist > 60 and ok_distance > 60):
            return GESTURE_TYPES["PALM"]

        # No known gesture matched
        return None

    def _calculate_euclidean_distance(self, point1: Tuple[int, int],
                                      point2: Tuple[int, int]) -> float:
        """Euclidean distance between two points (pinched vs. spread checks)."""
        return math.hypot(point1[0] - point2[0], point1[1] - point2[1])

    def _show_preview(self, frame_rgb, landmarks: Optional[Dict[int, Tuple[int, int]]]) -> None:
        """Show the camera preview with keyboard shortcuts."""
        # 1. Draw landmarks (if enabled); convert back to BGR for OpenCV display
        frame_bgr = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2BGR)
        if self.draw_landmarks and landmarks:
            # Landmarks in red, connections in green
            for idx, (x, y) in landmarks.items():
                cv2.circle(frame_bgr, (x, y), 5, (0, 0, 255), -1)  # filled red dot
            # Finger connection lines (MediaPipe's HAND_CONNECTIONS)
            for (start_idx, end_idx) in self.mp_hands.HAND_CONNECTIONS:
                if start_idx in landmarks and end_idx in landmarks:
                    cv2.line(frame_bgr, landmarks[start_idx], landmarks[end_idx],
                             (0, 255, 0), 2)

        # 2. Current gesture status (English text: cv2.putText cannot render CJK glyphs)
        gesture_text = self.latest_gesture if self.latest_gesture else "none"
        cv2.putText(frame_bgr, f"Gesture: {gesture_text}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)  # blue text

        # 3. Key hints
        cv2.putText(frame_bgr, "press 'q' to quit, 'd' to toggle landmarks",
                    (10, self.frame_height - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 1)  # yellow text

        # 4. Show the frame and handle keys
        cv2.imshow("Jarvis Gesture Recognition", frame_bgr)
        key = cv2.waitKey(1) & 0xFF
        if key == ord('q'):
            self.running.clear()             # 'q' quits
        elif key == ord('d'):
            self.toggle_landmarks_drawing()  # 'd' toggles landmark drawing


# --------------------------
# Test: run the gesture module standalone
# --------------------------
if __name__ == "__main__":
    recognizer = None
    try:
        # Initialize the recognizer (camera 0 = built-in, confidence 0.7)
        recognizer = GestureRecognizer(
            camera_index=0,
            confidence_threshold=0.7,
            frame_width=640,
            frame_height=480,
        ).start()
        # Main loop: print the latest gesture
        print("\n📌 Listening for gestures (Ctrl+C to stop)")
        while recognizer.running.is_set():
            gesture = recognizer.get_latest_gesture(clear_after_get=True)
            if gesture:
                print(f"✅ Gesture recognized: {gesture}")
            time.sleep(0.5)  # lower polling rate to save CPU
    except KeyboardInterrupt:
        print("\n🛑 Interrupted by user")
    finally:
        if recognizer:
            recognizer.stop()  # make sure resources are released
```
3. Code highlights (what they solve)
- Landmark visualization: hand landmarks and connections are drawn by default and can be toggled with d, so you can see at a glance whether detection is working;
- Smoothed recognition: a 5-frame landmark history is kept, and a gesture is only reported when more than half the frames agree, suppressing "shaky hand" misfires;
- Keyboard interaction: q closes the preview, d toggles visualization;
- Error handling: an unopenable camera or missing permission raises a clear error, making diagnosis easy;
- Extensibility: adding a gesture only means adding a rule in _judge_gesture_single_frame; nothing else changes.
IV. Integrating with the Jarvis Main Program: Closing the "Gesture → Command → Control" Loop
Jarvis already supports voice interaction and device control; now we plug in the gesture recognition module so gestures can trigger commands, while keeping voice feedback so the user knows the command was executed.
1. Gesture-to-command mapping (customizable)
First define the mapping between gestures and commands in jarvis_config.py, so it is easy to change later:
```python
# jarvis_config.py: Jarvis configuration (centralized; avoids hard-coding)
from gesture_recognizer import GESTURE_TYPES

# --------------------------
# Gesture → command mapping (the core configuration)
# --------------------------
GESTURE_COMMAND_MAP = {
    # Basic device control
    GESTURE_TYPES["OK"]: {
        "action": "control_smart_device",
        "params": {
            "device_type": "light",            # device type: light
            "action": "on",                    # action: turn on
            "device_id": "living_room_light",  # device ID (multiple devices supported)
        },
        "response_text": "OK sign recognized, turning on the living room lamp",
        "cooldown": 2,  # cooldown (seconds): prevents rapid retriggering
    },
    GESTURE_TYPES["FIST"]: {
        "action": "control_smart_device",
        "params": {
            "device_type": "light",
            "action": "off",
            "device_id": "living_room_light",
        },
        "response_text": "Fist recognized, turning off the living room lamp",
        "cooldown": 2,
    },
    # Media control
    GESTURE_TYPES["PEACE"]: {
        "action": "control_media",
        "params": {
            "media_type": "music",  # media type: music
            "action": "next",       # action: next track
        },
        "response_text": "Peace sign recognized, skipping to the next track",
        "cooldown": 1,
    },
    GESTURE_TYPES["THUMB_UP"]: {
        "action": "control_media",
        "params": {
            "media_type": "music",
            "action": "volume_up",  # action: volume up
        },
        "response_text": "Thumbs-up recognized, raising the volume",
        "cooldown": 0.5,
    },
    # Computer control
    GESTURE_TYPES["PALM"]: {
        "action": "control_computer",
        "params": {
            "action": "pause_video",  # pause the current video (browser, player, ...)
        },
        "response_text": "Open palm recognized, pausing the current video",
        "cooldown": 1,
    },
}

# --------------------------
# Device configuration (replace with your own device info)
# --------------------------
SMART_DEVICE_CONFIG = {
    "living_room_light": {               # matches the device_id above
        "type": "mi_light",              # Xiaomi desk lamp
        "ip": "192.168.31.100",          # device IP (see the Mi Home app)
        "token": "your_mi_light_token",  # device token (how to get it: see below)
        "brightness_step": 10,           # brightness step (0-100)
    },
    "bedroom_light": {                   # extend with more devices
        "type": "broadlink_socket",      # Broadlink smart plug (switches an ordinary lamp)
        "ip": "192.168.31.101",
        "mac": "AA:BB:CC:DD:EE:FF",
    },
}

# --------------------------
# Media control configuration
# --------------------------
MEDIA_CONTROL_CONFIG = {
    "music": {
        "player_path": "C:\\Program Files\\Foobar2000\\foobar2000.exe",  # music player path
        "hotkeys": {  # hotkey mapping (controlled by simulated key presses)
            "next": "media_next_track",    # next track (system media key)
            "volume_up": "volume_up",      # volume up
            "volume_down": "volume_down",  # volume down
        },
    },
    "video": {
        "hotkeys": {
            "pause": "space",     # pause/play (spacebar)
            "fullscreen": "f11",  # full screen (F11)
        },
    },
}
```
2. A device-control helper class (one unified interface)
To let Jarvis talk to different device types (Xiaomi, Broadlink, the PC itself), we wrap them in a DeviceController class with a single command-calling interface:
```python
# jarvis_device.py: device control helpers
import subprocess
import time
from typing import Dict, Optional
from pynput.keyboard import Controller as KeyboardController, Key  # simulated key presses
# NOTE: the exact class/function names below depend on your library versions;
# adjust them to match your devices.
from miio import Yeelight  # Xiaomi/Yeelight lamps (pip install python-miio)
import broadlink           # Broadlink devices (pip install broadlink)
from jarvis_config import SMART_DEVICE_CONFIG, MEDIA_CONTROL_CONFIG


class DeviceController:
    # Map config hotkey names to pynput keys. Special keys must be Key members
    # (a bare string like "space" would be typed as the letters s-p-a-c-e);
    # plain single characters pass through unchanged.
    KEY_NAME_MAP = {
        "media_next_track": Key.media_next,
        "media_prev_track": Key.media_previous,
        "volume_up": Key.media_volume_up,
        "volume_down": Key.media_volume_down,
        "space": Key.space,
        "f11": Key.f11,
    }

    def __init__(self):
        """Initialize the controller (lazy: devices are connected on first use)."""
        self.smart_devices: Dict[str, object] = {}     # smart-device instance cache
        self.keyboard = KeyboardController()           # keyboard simulator (media control)
        self.last_trigger_time: Dict[str, float] = {}  # cooldown bookkeeping

    def control_smart_device(self, device_id: str, action: str,
                             params: Optional[Dict] = None) -> bool:
        """Control a smart device (lamp, plug, ...).
        :param device_id: key in SMART_DEVICE_CONFIG
        :param action: on / off / brightness_up / ...
        :param params: extra parameters (e.g. a brightness value)
        :return: True on success, False on failure
        """
        if device_id not in SMART_DEVICE_CONFIG:
            print(f"❌ Device ID {device_id} is not configured")
            return False
        device_config = SMART_DEVICE_CONFIG[device_id]
        device_type = device_config["type"]

        # Respect the cooldown
        if not self._check_cooldown(f"smart_{device_id}_{action}"):
            return False

        try:
            # Lazy-load the device instance (connect on first use)
            if device_id not in self.smart_devices:
                if device_type == "mi_light":
                    self.smart_devices[device_id] = Yeelight(
                        device_config["ip"], device_config["token"])
                    print(f"✅ Connected Xiaomi device {device_id} "
                          f"(IP: {device_config['ip']})")
                elif device_type == "broadlink_socket":
                    dev = broadlink.hello(device_config["ip"])  # discover by IP
                    dev.auth()
                    self.smart_devices[device_id] = dev
                    print(f"✅ Connected Broadlink device {device_id} "
                          f"(IP: {device_config['ip']})")
                else:
                    print(f"❌ Unsupported device type: {device_type}")
                    return False

            # Execute the action
            device = self.smart_devices[device_id]
            if action == "on":
                if device_type == "mi_light":
                    device.on()
                elif device_type == "broadlink_socket":
                    device.send_data(device_config.get("on_code"))  # needs a learned code
            elif action == "off":
                if device_type == "mi_light":
                    device.off()
                elif device_type == "broadlink_socket":
                    device.send_data(device_config.get("off_code"))
            elif action == "brightness_up":
                if device_type == "mi_light":
                    status = device.status()
                    new_brightness = min(
                        status.brightness + device_config["brightness_step"], 100)
                    device.set_brightness(new_brightness)
            elif action == "brightness_down":
                if device_type == "mi_light":
                    status = device.status()
                    new_brightness = max(
                        status.brightness - device_config["brightness_step"], 10)
                    device.set_brightness(new_brightness)
            else:
                print(f"❌ Unsupported action: {action}")
                return False

            self._update_cooldown(f"smart_{device_id}_{action}")
            print(f"✅ Device {device_id} executed action: {action}")
            return True
        except Exception as e:
            print(f"❌ Controlling device {device_id} failed: {str(e)[:50]}")
            # Drop the stale instance so the next call reconnects
            if device_id in self.smart_devices:
                del self.smart_devices[device_id]
            return False

    def control_media(self, media_type: str, action: str) -> bool:
        """Control media (music, video).
        :param media_type: music / video
        :param action: next / volume_up / pause / ...
        :return: True on success, False on failure
        """
        if media_type not in MEDIA_CONTROL_CONFIG:
            print(f"❌ Media type {media_type} is not configured")
            return False
        media_config = MEDIA_CONTROL_CONFIG[media_type]

        if not self._check_cooldown(f"media_{media_type}_{action}"):
            return False

        try:
            # Resolve the configured hotkey name
            hotkey = media_config["hotkeys"].get(action)
            if not hotkey:
                print(f"❌ Media {media_type} does not support action: {action}")
                return False
            key = self.KEY_NAME_MAP.get(hotkey, hotkey)

            # Simulate the key press
            if hotkey in ("volume_up", "volume_down"):
                # Volume: press twice for a more noticeable step
                for _ in range(2):
                    self.keyboard.press(key)
                    self.keyboard.release(key)
                    time.sleep(0.1)
            else:
                # Media keys and ordinary keys (space, F11, ...)
                self.keyboard.press(key)
                self.keyboard.release(key)

            self._update_cooldown(f"media_{media_type}_{action}")
            print(f"✅ Media {media_type} executed action: {action} (hotkey: {hotkey})")
            return True
        except Exception as e:
            print(f"❌ Controlling media {media_type} failed: {e}")
            return False

    def control_computer(self, action: str) -> bool:
        """Control the computer itself (launch software, pause video, ...)."""
        if not self._check_cooldown(f"computer_{action}"):
            return False
        try:
            if action == "pause_video":
                # Pause the current video (simulated spacebar)
                self.keyboard.press(Key.space)
                self.keyboard.release(Key.space)
            elif action == "open_browser":
                # Launch a browser (replace with your own path)
                browser_path = "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"
                subprocess.Popen([browser_path])
            elif action == "lock_screen":
                # Lock the machine (Windows)
                subprocess.run(["rundll32.exe", "user32.dll,LockWorkStation"])
            else:
                print(f"❌ Unsupported computer action: {action}")
                return False
            self._update_cooldown(f"computer_{action}")
            print(f"✅ Computer executed action: {action}")
            return True
        except Exception as e:
            print(f"❌ Controlling the computer failed: {e}")
            return False

    def _check_cooldown(self, key: str) -> bool:
        """Is the command outside its cooldown window?"""
        current_time = time.time()
        last_time = self.last_trigger_time.get(key, 0)
        cooldown = self._get_cooldown_by_key(key)
        if current_time - last_time < cooldown:
            return False
        return True

    def _update_cooldown(self, key: str) -> None:
        """Record the command's last trigger time."""
        self.last_trigger_time[key] = time.time()

    def _get_cooldown_by_key(self, key: str) -> float:
        """Look up a command's cooldown in the gesture configuration."""
        from jarvis_config import GESTURE_COMMAND_MAP
        for gesture, cmd in GESTURE_COMMAND_MAP.items():
            p = cmd["params"]
            if cmd["action"] == "control_smart_device" and \
                    key == f"smart_{p['device_id']}_{p['action']}":
                return cmd["cooldown"]
            if cmd["action"] == "control_media" and \
                    key == f"media_{p['media_type']}_{p['action']}":
                return cmd["cooldown"]
            if cmd["action"] == "control_computer" and \
                    key == f"computer_{p['action']}":
                return cmd["cooldown"]
        return 1.0  # default cooldown: 1 second
```
3. Wiring it into the Jarvis GUI main program
Finally, plug the gesture recognition and device control modules into the Jarvis GUI, completing the flow "gesture recognition → command execution → voice feedback → log display":
```python
# jarvis_main.py: Jarvis main program (voice + gestures + device control)
import sys
import time
from PyQt5.QtWidgets import (QApplication, QMainWindow, QWidget, QVBoxLayout,
                             QHBoxLayout, QLabel, QPushButton, QTextEdit,
                             QTabWidget, QGroupBox, QStyle)
from PyQt5.QtCore import QThread, pyqtSignal, Qt
from PyQt5.QtGui import QFont, QColor, QPalette

import jarvis_core  # existing voice core (whisper + tts)
from gesture_recognizer import GestureRecognizer
from jarvis_device import DeviceController
from jarvis_config import GESTURE_COMMAND_MAP


# --------------------------
# 1. Background thread: gesture command handling
# --------------------------
class GestureCommandThread(QThread):
    """Consume recognized gestures, execute the mapped command, emit feedback."""
    gesture_signal = pyqtSignal(str, str)  # (gesture type, feedback text)

    def __init__(self, gesture_recognizer: GestureRecognizer,
                 device_controller: DeviceController, tts):
        super().__init__()
        self.recognizer = gesture_recognizer
        self.device_controller = device_controller
        self.tts = tts
        self.running = True

    def run(self):
        while self.running:
            # Fetch the latest gesture
            gesture = self.recognizer.get_latest_gesture(clear_after_get=True)
            if not gesture or gesture not in GESTURE_COMMAND_MAP:
                time.sleep(0.2)
                continue

            # Look up the command config for this gesture
            cmd_config = GESTURE_COMMAND_MAP[gesture]
            action = cmd_config["action"]
            params = cmd_config["params"]
            response_text = cmd_config["response_text"]

            # Execute the command
            success = False
            if action == "control_smart_device":
                success = self.device_controller.control_smart_device(
                    device_id=params["device_id"], action=params["action"])
            elif action == "control_media":
                success = self.device_controller.control_media(
                    media_type=params["media_type"], action=params["action"])
            elif action == "control_computer":
                success = self.device_controller.control_computer(
                    action=params["action"])

            # Feedback: speak on success, warn on failure
            if success:
                feedback_text = response_text
                # Voice feedback runs in this worker thread (doesn't block the GUI)
                self.tts.say(feedback_text)
                self.tts.runAndWait()
            else:
                feedback_text = f"Recognized {gesture}, but the command failed"

            # Signal the GUI to update the log
            self.gesture_signal.emit(gesture, feedback_text)
            time.sleep(0.5)

    def stop(self):
        self.running = False


# --------------------------
# 2. Background thread: voice interaction
# --------------------------
class VoiceInteractionThread(QThread):
    voice_signal = pyqtSignal(str, str)  # (user input, feedback text)

    def __init__(self, whisper_model, tts):
        super().__init__()
        self.whisper_model = whisper_model
        self.tts = tts
        self.running = True

    def run(self):
        while self.running:
            # Record 5 seconds (existing logic)
            audio_path = jarvis_core.record_audio(duration=5)
            # Speech recognition (spoken commands stay in Chinese, language="zh")
            result = self.whisper_model.transcribe(audio_path, language="zh")
            user_text = result["text"].strip()
            if not user_text:
                response_text = "I didn't catch that, please say it again"
                self.tts.say(response_text)
                self.tts.runAndWait()
                self.voice_signal.emit("", response_text)
                continue

            # Voice command handling (simplified; extend as needed)
            response_text = ""
            if "你好" in user_text or "唤醒" in user_text:
                response_text = "Hello, I'm Jarvis, at your service"
            elif "时间" in user_text:
                current_time = time.strftime("%H:%M:%S", time.localtime())
                response_text = f"The time is {current_time}"
            elif "打开台灯" in user_text:
                response_text = "OK, turning on the living room lamp"
                self.tts.say(response_text)
                self.tts.runAndWait()
                # Call the device controller
                success = DeviceController().control_smart_device(
                    device_id="living_room_light", action="on")
                if not success:
                    response_text = "Failed to turn on the lamp; check the device connection"
            elif "退出" in user_text or "再见" in user_text:
                response_text = "Goodbye, see you next time"
                self.voice_signal.emit(user_text, response_text)
                self.tts.say(response_text)
                self.tts.runAndWait()
                self.running = False
                return
            else:
                response_text = f"Command received: {user_text}, I'll handle it shortly"

            # Voice feedback and log update
            self.tts.say(response_text)
            self.tts.runAndWait()
            self.voice_signal.emit(user_text, response_text)

    def stop(self):
        self.running = False


# --------------------------
# 3. Jarvis GUI window
# --------------------------
class JarvisMainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Jarvis Assistant (multimodal edition)")
        self.setGeometry(100, 100, 1000, 700)
        self.setWindowIcon(self.style().standardIcon(QStyle.SP_MessageBoxInformation))
        # Initialize the core components
        self._init_core_components()
        # Build the UI
        self._init_ui()
        # Start the background threads
        self._init_threads()

    def _init_core_components(self):
        """Initialize the voice, gesture, and device-control components."""
        # 1. Voice interaction (whisper + tts)
        self.whisper_model = jarvis_core.init_whisper(model_name="base")  # base model: fast
        self.tts = jarvis_core.init_tts()
        # 2. Gesture recognition (camera 0 = built-in)
        self.gesture_recognizer = GestureRecognizer(
            camera_index=0, confidence_threshold=0.7,
            frame_width=640, frame_height=480).start()
        # 3. Device controller
        self.device_controller = DeviceController()

    def _init_ui(self):
        """Build the GUI (tabs: interaction log, device status, gesture help)."""
        central_widget = QWidget()
        self.setCentralWidget(central_widget)
        main_layout = QVBoxLayout(central_widget)
        main_layout.setContentsMargins(15, 15, 15, 15)
        main_layout.setSpacing(10)

        # Title
        title_label = QLabel("Jarvis Assistant (voice + gesture control)")
        title_label.setFont(QFont("微软雅黑", 16, QFont.Bold))
        title_label.setAlignment(Qt.AlignCenter)
        main_layout.addWidget(title_label)

        # Status label
        self.status_label = QLabel("Status: all modules running; speak or use gestures")
        self.status_label.setFont(QFont("微软雅黑", 10))
        self.status_label.setAlignment(Qt.AlignCenter)
        palette = QPalette()
        palette.setColor(QPalette.WindowText, QColor(0, 128, 0))  # green text
        self.status_label.setPalette(palette)
        main_layout.addWidget(self.status_label)

        # Tabs
        tab_widget = QTabWidget()
        tab_widget.setFont(QFont("微软雅黑", 10))
        main_layout.addWidget(tab_widget, stretch=1)  # fill remaining space

        # Tab 1: interaction log
        log_tab = QWidget()
        log_layout = QVBoxLayout(log_tab)
        self.log_edit = QTextEdit()
        self.log_edit.setReadOnly(True)
        self.log_edit.setFont(QFont("Consolas", 10))
        self.log_edit.setPlaceholderText("Interaction logs will appear here...")
        log_layout.addWidget(self.log_edit)
        tab_widget.addTab(log_tab, "Interaction Log")

        # Tab 2: device status
        device_tab = QWidget()
        device_layout = QVBoxLayout(device_tab)
        self._add_device_group(device_layout, "Smart Devices",
                               ["living_room_light", "bedroom_light"])
        self._add_device_group(device_layout, "Media Control", ["music", "video"])
        tab_widget.addTab(device_tab, "Device Status")

        # Tab 3: gesture help
        gesture_tab = QWidget()
        gesture_layout = QVBoxLayout(gesture_tab)
        gesture_info = [
            ("OK sign", "thumb + index pinched, others spread", "living room lamp on"),
            ("Fist", "all fingers curled toward the palm", "living room lamp off"),
            ("Peace sign", "index + middle spread, others curled", "next music track"),
            ("Thumbs-up", "thumb extended, others curled", "music volume up"),
            ("Open palm", "all fingers spread", "pause current video"),
        ]
        for gesture_name, desc, action in gesture_info:
            gesture_label = QLabel(f"<b>{gesture_name}</b>: {desc} → {action}")
            gesture_label.setFont(QFont("微软雅黑", 10))
            gesture_layout.addWidget(gesture_label)
        tab_widget.addTab(gesture_tab, "Gesture Help")

        # Bottom control buttons
        btn_layout = QHBoxLayout()
        # Start/stop gesture recognition
        self.gesture_btn = QPushButton("Stop Gesture Recognition")
        self.gesture_btn.setFont(QFont("微软雅黑", 10))
        self.gesture_btn.setStyleSheet("background-color: #FF6B6B; color: white;")
        self.gesture_btn.clicked.connect(self.toggle_gesture_recognition)
        btn_layout.addWidget(self.gesture_btn)
        # Quit button
        self.exit_btn = QPushButton("Quit Jarvis")
        self.exit_btn.setFont(QFont("微软雅黑", 10))
        self.exit_btn.setStyleSheet("background-color: #4ECDC4; color: white;")
        self.exit_btn.clicked.connect(self.close)
        btn_layout.addWidget(self.exit_btn)
        main_layout.addLayout(btn_layout)

    def _add_device_group(self, parent_layout, group_name, device_ids):
        """Add a device-status group box."""
        group_box = QGroupBox(group_name)
        group_box.setFont(QFont("微软雅黑", 11, QFont.Bold))
        group_layout = QVBoxLayout(group_box)
        for device_id in device_ids:
            status_label = QLabel(f"{device_id}: not connected")
            status_label.setFont(QFont("微软雅黑", 10))
            # Keep a reference so the label can be updated later
            setattr(self, f"{device_id}_status", status_label)
            group_layout.addWidget(status_label)
        parent_layout.addWidget(group_box)

    def _init_threads(self):
        """Start the background threads."""
        # 1. Gesture command thread
        self.gesture_thread = GestureCommandThread(
            self.gesture_recognizer, self.device_controller, self.tts)
        self.gesture_thread.gesture_signal.connect(self.update_gesture_log)
        self.gesture_thread.start()
        # 2. Voice interaction thread
        self.voice_thread = VoiceInteractionThread(self.whisper_model, self.tts)
        self.voice_thread.voice_signal.connect(self.update_voice_log)
        self.voice_thread.start()
        # 3. Device status thread (updates once per second)
        # NOTE: a demo shortcut; production code should update widgets via signals
        self.status_running = True
        self.status_thread = QThread()
        self.status_thread.run = self.update_device_status
        self.status_thread.start()

    def update_voice_log(self, user_text, response_text):
        """Append to the voice interaction log."""
        current_time = time.strftime("%H:%M:%S", time.localtime())
        if user_text:
            self.log_edit.append(f"[{current_time}] You: {user_text}")
        self.log_edit.append(f"[{current_time}] Jarvis: {response_text}\n")
        # Auto-scroll to the newest entry
        self.log_edit.verticalScrollBar().setValue(
            self.log_edit.verticalScrollBar().maximum())

    def update_gesture_log(self, gesture_type, feedback_text):
        """Append to the gesture command log."""
        current_time = time.strftime("%H:%M:%S", time.localtime())
        self.log_edit.append(f"[{current_time}] Gesture: {gesture_type}")
        self.log_edit.append(f"[{current_time}] Jarvis: {feedback_text}\n")
        self.log_edit.verticalScrollBar().setValue(
            self.log_edit.verticalScrollBar().maximum())

    def update_device_status(self):
        """Refresh device status (e.g. online/offline)."""
        while self.status_running:
            # Example: refresh the Xiaomi lamp's status
            if "living_room_light" in self.device_controller.smart_devices:
                try:
                    light = self.device_controller.smart_devices["living_room_light"]
                    status = light.status()
                    status_text = (f"living_room_light: {'on' if status.is_on else 'off'} "
                                   f"(brightness: {status.brightness}%)")
                    palette = QPalette()
                    palette.setColor(
                        QPalette.WindowText,
                        QColor(0, 128, 0) if status.is_on else QColor(128, 128, 128))
                    self.living_room_light_status.setText(status_text)
                    self.living_room_light_status.setPalette(palette)
                except Exception:
                    self.living_room_light_status.setText(
                        "living_room_light: connection error")
                    palette = QPalette()
                    palette.setColor(QPalette.WindowText, QColor(255, 0, 0))
                    self.living_room_light_status.setPalette(palette)
            time.sleep(1)  # once per second

    def toggle_gesture_recognition(self):
        """Toggle gesture recognition on/off."""
        if self.gesture_recognizer.running.is_set():
            # Stop it
            self.gesture_recognizer.stop()
            self.gesture_btn.setText("Start Gesture Recognition")
            self.gesture_btn.setStyleSheet("background-color: #4ECDC4; color: white;")
            self.status_label.setText("Status: gesture recognition stopped; voice only")
            palette = QPalette()
            palette.setColor(QPalette.WindowText, QColor(255, 165, 0))  # orange text
            self.status_label.setPalette(palette)
        else:
            # Restart it (a stopped recognizer released its camera, so build a new one)
            self.gesture_recognizer = GestureRecognizer(camera_index=0).start()
            self.gesture_thread.recognizer = self.gesture_recognizer
            self.gesture_btn.setText("Stop Gesture Recognition")
            self.gesture_btn.setStyleSheet("background-color: #FF6B6B; color: white;")
            self.status_label.setText("Status: all modules running; speak or use gestures")
            palette = QPalette()
            palette.setColor(QPalette.WindowText, QColor(0, 128, 0))  # green text
            self.status_label.setPalette(palette)

    def closeEvent(self, event):
        """Stop every thread and release resources on window close."""
        # Stop the threads
        self.gesture_thread.stop()
        self.voice_thread.stop()
        self.status_running = False  # lets update_device_status exit its loop
        self.status_thread.quit()
        self.status_thread.wait()
        # Release resources
        self.gesture_recognizer.stop()
        self.device_controller = None  # drop device connections
        event.accept()
        print("🛑 Jarvis has shut down; all resources released")


# --------------------------
# Entry point
# --------------------------
if __name__ == "__main__":
    try:
        app = QApplication(sys.argv)
        window = JarvisMainWindow()
        window.show()
        sys.exit(app.exec_())
    except Exception as e:
        print(f"❌ Jarvis failed to start: {e}")
        sys.exit(1)
```
V. Pitfall Guide: Fixing 90% of Real-World Problems
Plenty of small issues come up in practice; here are the high-frequency ones and their fixes, so you can skip the detours:
1. Camera won't open / permission denied
- Windows: open "Settings → Privacy & security → Camera", make sure "Let apps access your camera" is on, and that your Python IDE (e.g. PyCharm) appears in the list of allowed apps;
- macOS: open "System Settings → Privacy & Security → Camera" and tick your IDE (e.g. VS Code); if it is already ticked, untick and re-tick it;
- Linux: run ls /dev/video0 to check that the camera device exists; if not, re-seat the camera; if it exists but won't open, grant access with sudo chmod 666 /dev/video0.
2. Inaccurate or jittery recognition
- Lighting: avoid backlighting (e.g. a window behind you) and turn on a light when it's dim; MediaPipe's accuracy drops in harsh or low light;
- Hand distance: keep your hand 30-50 cm from the camera; closer and it leaves the frame, farther and the landmarks get blurry;
- Threshold tuning:
  - too many false positives: raise confidence_threshold (e.g. from 0.7 to 0.8), or tighten the distance thresholds (e.g. the OK sign's ok_distance from 30 to 25);
  - too many misses: lower confidence_threshold (e.g. from 0.7 to 0.6), or loosen the distance thresholds (e.g. ok_distance from 30 to 35);
- Smoothing: increase history_length (e.g. from 5 to 8) for steadier results, at the cost of a little extra latency.
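The smoothing described above boils down to a majority vote over the last few frames; a stripped-down sketch of that logic (standalone, not tied to the class above):

```python
from collections import Counter

def smoothed_gesture(history, min_ratio=0.5):
    """Return the most frequent gesture in `history` only if it wins
    strictly more than min_ratio of the frames; None entries are frames
    where nothing was recognized."""
    votes = Counter(g for g in history if g is not None)
    if not votes:
        return None
    gesture, count = votes.most_common(1)[0]
    return gesture if count > len(history) * min_ratio else None
```

With a 5-frame window, a gesture must appear in at least 3 frames before it is reported, which is exactly why a single shaky frame cannot trigger a command.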
3. Can't get the Xiaomi device token
Xiaomi devices need a token before code can control them. To obtain it:
- install the Mi Home app and log into your account;
- on a PC, install a Mi Home token-extraction tool (e.g. MiHomeTokenExtractor);
- put the phone and PC on the same Wi-Fi, capture the app-device traffic with the tool, and extract the token from it;
- note: a firmware upgrade can change the token, in which case you must fetch it again.
4. CPU usage too high
- Lower the resolution (e.g. from 640×480 to 480×360);
- Turn off landmark drawing (press the d key);
- Lower the frame rate (e.g. from 30 to 25);
- On a laptop, run plugged in (on battery the CPU downclocks, slowing recognition).
VI. Going Further: Making Jarvis "Smarter"
The basic gesture control already works; here is how to push it further:
1. Dynamic gestures (waving, circling)
What we recognize now are "static gestures" (the hand holds a pose); you can extend to "dynamic gestures":
- Idea: keep 10-20 frames of landmark history and analyze the hand's motion trajectory (e.g. "wave left to right", "clockwise circle");
- Example: "wave left to right" → next slide in a presentation, "wave right to left" → previous slide.
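A left-to-right wave, for example, can be detected from the x-coordinate history of the wrist alone; a minimal sketch (the travel and monotonicity thresholds are illustrative, not tuned values):

```python
def detect_swipe(x_history, min_travel=150, min_monotonic=0.8):
    """Classify a horizontal swipe from a list of wrist x-coordinates (pixels).
    Returns "swipe_right", "swipe_left", or None."""
    if len(x_history) < 2:
        return None
    steps = [b - a for a, b in zip(x_history, x_history[1:])]
    travel = x_history[-1] - x_history[0]
    # Require enough total horizontal travel...
    if abs(travel) < min_travel:
        return None
    # ...and most frame-to-frame steps moving in the same direction
    same_dir = sum(1 for s in steps if s * travel > 0) / len(steps)
    if same_dir < min_monotonic:
        return None
    return "swipe_right" if travel > 0 else "swipe_left"
```

Feed it the last 10-20 wrist positions from landmarks_history and map the result to "next slide" / "previous slide" just like the static gestures.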
2. Gesture + voice combined commands
Say "Jarvis, control the air conditioner" first, then sign "OK" to raise the temperature or a fist to lower it. This removes the ambiguity of a lone gesture (an "OK" on its own could mean "light on" or "AC on").
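The combined flow amounts to a tiny state machine: the voice command sets a "target device" context, and subsequent gestures are interpreted against it. A sketch under assumed device names and mappings (all of them illustrative):

```python
import time

class MultimodalContext:
    """Voice selects the target device; gestures then map to device-specific actions."""
    GESTURE_ACTIONS = {
        "aircon": {"ok_gesture": "temp_up", "fist_gesture": "temp_down"},
        "light":  {"ok_gesture": "on",      "fist_gesture": "off"},
    }

    def __init__(self, timeout=10.0):
        self.target = None
        self.set_at = 0.0
        self.timeout = timeout  # context expires, so a stray gesture later does nothing

    def on_voice(self, device):
        """Called when the voice command names a device, e.g. 'control the aircon'."""
        self.target = device
        self.set_at = time.time()

    def on_gesture(self, gesture):
        """Resolve a gesture against the current context; None if no context is active."""
        if self.target is None or time.time() - self.set_at > self.timeout:
            return None
        return self.GESTURE_ACTIONS.get(self.target, {}).get(gesture)
```

The timeout matters: without it, an "OK" sign made hours after saying "control the air conditioner" would still change the temperature.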
3. Multi-hand recognition
We currently track only one hand; set max_num_hands=2 to support two-hand gestures:
Example: "both hands OK" → turn on the living room and bedroom lights together; "left hand OK + right hand fist" → living room light on, bedroom light off.
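Once each hand has been classified separately, combining them is just a dictionary lookup on the (left, right) pair. A sketch with illustrative command names:

```python
# Hypothetical (left gesture, right gesture) → command table
TWO_HAND_COMMANDS = {
    ("ok_gesture", "ok_gesture"): "all_lights_on",
    ("ok_gesture", "fist_gesture"): "living_room_on_bedroom_off",
}

def combine_hands(left_gesture, right_gesture):
    """Map a (left, right) gesture pair to a single command, if one is defined."""
    return TWO_HAND_COMMANDS.get((left_gesture, right_gesture))
```

MediaPipe's results also carry handedness labels (multi_handedness), which is how you tell which classified hand is the left one before doing the lookup.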
4. Better command understanding with a local LLM
When the user says "Jarvis, make the light a bit brighter", a local large model (e.g. Llama 3) can map "a bit brighter" to "brightness +10%", then confirm via gesture or voice, making the interaction feel even more natural.
VII. Closing Thoughts: Evolving from "Tool" to "Assistant"
Adding gesture control to Jarvis is not just another control channel; it moves interaction from "passive response" toward "active understanding". Cooking in the kitchen, you wave a hand instead of wiping your fingers to find a phone; watching a movie, you flash a sign instead of hunting for the remote. That is what a smart assistant should feel like.
Crucially, all of it runs on your own machine: no data goes online, privacy is guaranteed, and it is entirely yours. From here, keep extending the device control and refining the gesture recognition until Jarvis fits your habits.
If you hit problems while extending it, or have cooler ideas, drop them in the comments. The fun of technology is in continually exploring and creating~