
Building a Simple AI Healthcare Assistant with mem0

1. Background

In healthcare scenarios, an AI assistant needs to do more than answer health questions: it should remember the user's medical history and appointment details, and use that information to personalize later interactions.
mem0 provides long-term memory storage, vector retrieval, and fact extraction, which makes it a good fit for building an intelligent health assistant.

2. Environment and Dependencies

First, install the required dependencies:

pip install langchain-community python-dotenv mem0 qdrant-client

You also need the following (the setup code for these is sketched right after this list):

  • An Alibaba Cloud DashScope API key (used by ChatTongyi and DashScopeEmbeddings)

  • A Qdrant vector database (used for memory storage)
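
The DashScope key is read from a .env file via python-dotenv, and the llm / embeder objects used in the configuration below are created from it. This mirrors the complete listing at the end of the post (the TONGYI_API_KEY variable name is the one used there):

import os
from dotenv import load_dotenv
from langchain_community.chat_models import ChatTongyi
from langchain_community.embeddings import DashScopeEmbeddings

# Load TONGYI_API_KEY from a local .env file
load_dotenv()
TONGYI_API_KEY = os.getenv("TONGYI_API_KEY")

# LLM and embedding model used throughout this post
llm = ChatTongyi(model="qwen-plus", api_key=TONGYI_API_KEY)
embeder = DashScopeEmbeddings(model="text-embedding-v2", dashscope_api_key=TONGYI_API_KEY)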

3. Define the Fact Extraction Prompt

We want the AI to extract only health-related information, so we define a custom fact extraction prompt:

custom_prompt = """
Please only extract entities containing patient health information, appointment details, and user information. 
Here are some few shot examples:

Input: I have a headache and would like to schedule an appointment.
Output: {"facts" : ["Patient reports headache", "Wants to schedule an appointment"]}

Input: My name is Jane Smith, and I need to reschedule my appointment for next Tuesday.
Output: {"facts" : ["Patient name: Jane Smith", "Wants to reschedule appointment", "Original appointment: next Tuesday"]}

Input: I have diabetes and my blood sugar is high.
Output: {"facts" : ["Patient has diabetes", "Reports high blood sugar"]}
"""

With this prompt, every time the user sends a message, mem0 automatically extracts the health-related facts from the conversation and stores them in the memory store.

4. Configure Memory

We use MemoryConfig to configure how mem0 works, including:

  • LLM (ChatTongyi)

  • Embedding model (DashScopeEmbeddings)

  • Vector store (Qdrant)

config = MemoryConfig(
    llm=LlmConfig(provider="langchain", config={"model": llm}),
    embedder=EmbedderConfig(provider="langchain", config={"model": embeder}),
    vector_store=VectorStoreConfig(
        provider="qdrant",
        config={
            "host": "localhost",
            "port": 6333,
            "collection_name": "memory_vectors",
            "embedding_model_dims": 1536,
        },
    ),
    custom_fact_extraction_prompt=custom_prompt,
)
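
As a quick sanity check, you can instantiate Memory with this config and confirm that facts are extracted and retrievable. This is a minimal sketch, assuming Qdrant is running locally and the llm / embeder objects from section 2 are defined; the example messages are purely illustrative:

from mem0 import Memory

memory = Memory(config=config)

# Store one exchange; mem0 applies custom_prompt to extract health-related facts.
memory.add(
    [
        {"role": "user", "content": "I have a headache and would like to schedule an appointment."},
        {"role": "assistant", "content": "Sorry to hear that. When would you like to come in?"},
    ],
    user_id="Harry",
)

# search() returns a dict with a "results" list; each item carries a "memory" string.
related = memory.search("appointment", user_id="Harry")
for item in related["results"]:
    print(item["memory"])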

5. Implement the AI Healthcare Assistant Class

We define an AIHealthcareSupport class that encapsulates the following capabilities:

  • ask(question): answers a question using relevant memories as context

  • add_memory(): stores a user-assistant interaction

  • get_memories(): retrieves all memories for a given user

  • search_memory(): searches for history related to the input

Core logic:

class AIHealthcareSupport:
    def __init__(self, config):
        self.memory = Memory(config=config)
        self.app_id = "app-1"
        self.model = llm

    def ask(self, question, user_id=None):
        # 1. Retrieve relevant information from memory
        memories = self.search_memory(question, user_id=user_id)
        context = self.convert_to_facts(memories['results'])

        # 2. Build the prompt with the retrieved context
        messages = [
            SystemMessage(content=f"""You are a helpful healthcare support assistant. Use the provided context to personalize your responses and remember user health information and past interactions. {context}"""),
            HumanMessage(content=question),
        ]
        response = self.model.invoke(messages)

        # 3. Store this round of interaction
        self.add_memory(question, response.content, user_id=user_id)
        return {"messages": [response.content]}
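
The helper methods referenced in the bullet list above are thin wrappers around the mem0 API; they are excerpted here from the complete listing at the end of the post (convert_to_facts, which formats search results into a <FACTS> block with date ranges, is shown only in that listing):

    def add_memory(self, question, response, user_id=None):
        messages = [
            {"role": "user", "content": question},
            {"role": "assistant", "content": response},
        ]
        self.memory.add(messages, user_id=user_id, metadata={"app_id": self.app_id})

    def get_memories(self, user_id=None):
        return self.memory.get_all(user_id=user_id)

    def search_memory(self, query, user_id=None):
        related_memories = self.memory.search(query, user_id=user_id)
        return related_memories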

6. Example Conversation

We can write a test function test1() that simulates a multi-turn interaction between the user and the assistant:

def test1():
    ai_support = AIHealthcareSupport(config)
    user_id = "Harry"
    questions = [
        "I have a family history of diabetes; how can I reduce my risk?",
        "Can you recommend any specific dietary changes?",
        "I have a headache and would like to schedule an appointment."
    ]
    for q in questions:
        response = ai_support.ask(q, user_id=user_id)
        print(f"User: {q}")
        print(f"AI: {response['messages'][0]}\n")

    # Print all memories
    memories = ai_support.get_memories(user_id=user_id)
    print("All Memories:")
    for memory in memories['results']:
        print(f"- {memory}")

After running this, the assistant remembers the user's family medical history, dietary inquiry, and appointment request, and automatically brings this information into later turns of the conversation.

The code for the conversation loop is as follows:

def chatbot():
    user_id, thread_id = "Harry", "25f570725f6a4233ad8942d9d1c6cc79"
    ai_support = AIHealthcareSupport(config)
    while True:
        try:
            user_input = input("🧑 User: ")
            if user_input.lower() in ["quit", "exit", "q"]:
                print("Goodbye!")
                break
            # print("user input", user_input)
            response = ai_support.ask(user_input, user_id)
            print(f"🤖 Assistant: {response['messages'][0]}")
        except Exception as e:
            print("An error occurred:")
            traceback.print_exc()
            break

7. Run the Interactive Chatbot

Finally, we can start the interactive conversation loop:

if __name__ == '__main__':
    chatbot()

After running it, a conversation looks like this:

🧑 User: I have diabetes.
🤖 Assistant: Thanks for sharing. I’ll keep in mind that you have diabetes. Would you like me to provide dietary or lifestyle advice?

8. Summary

With the code above, we have built an AI Healthcare assistant with the following capabilities:

  1. Extracts health information from user input and stores it;

  2. Uses those memories in later conversations to provide personalized advice;

  3. Supports extended scenarios such as medical appointments, health advice, and medical history tracking.

You can extend it further by:

  • Adding a Neo4j graph database to store a patient relationship graph (a configuration sketch follows this list);

  • Adding medical knowledge base retrieval to answer more complex questions;

  • Integrating a hospital appointment system API for automatic registration.
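
For the Neo4j extension, the complete listing at the end already contains a commented-out graph_store block; enabling it looks like the sketch below. The connection URL and credentials are the placeholder values from that listing, not production settings:

from mem0.graphs.configs import GraphStoreConfig

# Graph store configuration for Neo4j (placeholder credentials from the full listing)
graph_store = GraphStoreConfig(
    provider="neo4j",
    config={
        "url": "bolt://localhost:7687",
        "username": "neo4j",
        "password": "myhome1234",
    },
)
# Pass graph_store=graph_store as an extra argument when building MemoryConfig in section 4.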

Complete Code

import os
import traceback
from dotenv import load_dotenv
from langchain_community.chat_models import ChatTongyi
from langchain_community.embeddings import DashScopeEmbeddings
from langchain_core.messages import SystemMessage, HumanMessage
from mem0 import Memory, MemoryClient
from mem0.configs.base import MemoryConfig
from mem0.embeddings.configs import EmbedderConfig
from mem0.graphs.configs import GraphStoreConfig
from mem0.llms.configs import LlmConfig
from mem0.vector_stores.configs import VectorStoreConfig

custom_prompt = """
Please only extract entities containing patient health information, appointment details, and user information. 
Here are some few shot examples:

Input: Hi.
Output: {{"facts" : []}}

Input: The weather is nice today.
Output: {{"facts" : []}}

Input: I have a headache and would like to schedule an appointment.
Output: {{"facts" : ["Patient reports headache", "Wants to schedule an appointment"]}}

Input: My name is Jane Smith, and I need to reschedule my appointment for next Tuesday.
Output: {{"facts" : ["Patient name: Jane Smith", "Wants to reschedule appointment", "Original appointment: next Tuesday"]}}

Input: I have diabetes and my blood sugar is high.
Output: {{"facts" : ["Patient has diabetes", "Reports high blood sugar"]}}

Return the facts and patient information in a json format as shown above.
"""

load_dotenv()
TONGYI_API_KEY = os.getenv("TONGYI_API_KEY")

llm = ChatTongyi(model="qwen-plus", api_key=TONGYI_API_KEY)
embeder = DashScopeEmbeddings(model="text-embedding-v2", dashscope_api_key=TONGYI_API_KEY)

# 1. Configure Memory
config = MemoryConfig(
    llm=LlmConfig(provider="langchain", config={"model": llm}),
    embedder=EmbedderConfig(provider="langchain", config={"model": embeder}),
    vector_store=VectorStoreConfig(
        provider="qdrant",
        config={
            "host": "localhost",
            "port": 6333,
            "collection_name": "memory_vectors",
            "embedding_model_dims": 1536,
        },
    ),
    custom_fact_extraction_prompt=custom_prompt,
    # graph_store=GraphStoreConfig(
    #     provider="neo4j",
    #     config={
    #         "url": "bolt://localhost:7687",
    #         "username": "neo4j",
    #         "password": "myhome1234"
    #     }
    # ),
)


class AIHealthcareSupport:
    def __init__(self, config):
        self.memory = Memory(config=config)
        self.app_id = "app-1"
        self.model = llm

    def ask(self, question, user_id=None):
        memories = self.search_memory(question, user_id=user_id)
        context = self.convert_to_facts(memories['results'])
        print("context:", context)
        messages = [
            SystemMessage(content=f"""You are a helpful healthcare support assistant. Use the provided context to personalize your responses and remember user health information and past interactions. {context}"""),
            HumanMessage(content=question),
        ]
        response = self.model.invoke(messages)
        # Store the interaction in memory
        self.add_memory(question, response.content, user_id=user_id)
        return {"messages": [response.content]}

    def add_memory(self, question, response, user_id=None):
        messages = [
            {"role": "user", "content": question},
            {"role": "assistant", "content": response},
        ]
        self.memory.add(messages, user_id=user_id, metadata={"app_id": self.app_id})

    def get_memories(self, user_id=None):
        return self.memory.get_all(user_id=user_id)

    def search_memory(self, query, user_id=None):
        related_memories = self.memory.search(query, user_id=user_id)
        return related_memories

    def convert_to_facts(self, memories):
        if not memories:
            return ""
        output_lines = []
        output_lines.append("# These are the most relevant facts and their valid date ranges")
        output_lines.append("# format: FACT (Date range: from - to)")
        output_lines.append("<FACTS>")
        # Add each memory item
        for memory in memories:
            content = memory.get('memory', '')
            created_at = memory.get('created_at')
            if memory.get('updated_at'):
                created_at = memory.get('updated_at')
            expiration_date = "present"
            if memory.get("expiration_date"):
                expiration_date = memory.get("expiration_date")
            # Format the time range
            if created_at:
                time_range = f"({created_at} - {expiration_date})"
            else:
                time_range = "(unknown date range)"
            output_lines.append(f"  - {content} {time_range}")
        output_lines.append("</FACTS>")
        return "\n".join(output_lines)


def test1():
    # Initialize the AIHealthcareSupport bot
    ai_support = AIHealthcareSupport(config)
    # User ID for interaction
    user_id = "Harry"

    # Interacting with the bot
    print("Interacting with AI Healthcare Support:\n")

    # Example interactions
    questions = [
        "I have a family history of diabetes; how can I reduce my risk?",  # Preventive care inquiry
        "Can you recommend any specific dietary changes?",  # Focusing on diet
        "I have a headache and would like to schedule an appointment."
    ]

    # Loop through each question, ask the bot, and print responses
    for question in questions:
        response = ai_support.ask(question, user_id=user_id)
        print(f"User: {question}")
        print(f"AI: {response['messages'][0]}\n")

    # Retrieve and display memories associated with the user
    memories = ai_support.get_memories(user_id=user_id)
    print("All Memories:")
    for memory in memories['results']:
        print(f"- {memory}")


def chatbot():
    user_id, thread_id = "Harry", "25f570725f6a4233ad8942d9d1c6cc79"
    ai_support = AIHealthcareSupport(config)
    while True:
        try:
            user_input = input("🧑 User: ")
            if user_input.lower() in ["quit", "exit", "q"]:
                print("Goodbye!")
                break
            # print("user input", user_input)
            response = ai_support.ask(user_input, user_id)
            print(f"🤖 Assistant: {response['messages'][0]}")
        except Exception as e:
            print("An error occurred:")
            traceback.print_exc()
            break


if __name__ == '__main__':
    # test1()
    chatbot()

