
Deploying DeepSeek's Janus Pro Locally on Ubuntu 22.04

  • Download the code
  • Download the model files
  • Environment setup
  • Testing Janus Pro
  • Demo results
    • Image analysis test
    • Text-to-image test
  • Possible errors
    • 1. Missing model weights
      • Problem analysis
        • The `OSError` error
      • Solution
        • Install `git-lfs` and download the model weights
    • 2. RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'

Reference article (in Chinese): "Deploy DeepSeek's Janus Pro locally in half an hour for image analysis and text-to-image generation"
My base environment:

OS: Ubuntu 22.04
CUDA: 12.3
GPU: RTX 3090, 24 GB

Download the code

git clone https://github.com/deepseek-ai/Janus.git

Download the model files

Model file locations:

https://hf-mirror.com/deepseek-ai/Janus-Pro-7B
https://modelscope.cn/models/deepseek-ai/Janus-Pro-7B

You can also clone the model with git clone.

After changing into the source code directory, run the clone command below.

git clone https://hf-mirror.com/deepseek-ai/Janus-Pro-7B
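If the HTTPS clone is slow or unreliable, a possible alternative (not covered in the reference article) is to download the weights with the huggingface_hub Python package, pointing it at the hf-mirror endpoint via the HF_ENDPOINT environment variable. A minimal sketch:

import os

# The mirror endpoint must be set before huggingface_hub is imported,
# because the library reads HF_ENDPOINT at import time.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import snapshot_download

# Download into a local "Janus-Pro-7B" directory, mirroring what git clone creates.
snapshot_download(repo_id="deepseek-ai/Janus-Pro-7B", local_dir="Janus-Pro-7B")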


Environment setup

Make sure CUDA is installed beforehand.

conda create -n janus_pro python=3.10
conda activate janus_pro

First, install torch==2.2.1 separately:

pip install torch==2.2.1 torchvision==0.17.1 torchaudio

Then pin the following entries in requirements.txt to these versions:

torch==2.2.1
torchaudio==2.2.1
torchvision==0.17.1
transformers==4.40.0

Then run:

pip install -r requirements.txt 

The installation should complete without any major issues.
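As a quick sanity check (my addition, not part of the reference article), you can confirm the pinned versions and that the GPU is visible before running the demo:

import torch, torchvision, transformers

print("torch:", torch.__version__)                # expect 2.2.1
print("torchvision:", torchvision.__version__)    # expect 0.17.1
print("transformers:", transformers.__version__)  # expect 4.40.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))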

Testing Janus Pro

Modify the model path: in demo/app_januspro.py, change

model_path = "deepseek-ai/Janus-Pro-7B"

to

model_path = "/home/dell/work/python_project/Janus/Janus-Pro-7B"

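If you prefer not to hard-code an absolute path, one optional tweak (not part of the original demo script) is to read the path from an environment variable, falling back to the local clone:

import os

# Hypothetical tweak: allow overriding the model location via JANUS_MODEL_PATH,
# falling back to the local clone used in this article.
model_path = os.environ.get(
    "JANUS_MODEL_PATH",
    "/home/dell/work/python_project/Janus/Janus-Pro-7B",
)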

Run the program:

python demo/app_januspro.py


GPU memory usage is around 14 GB.
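To confirm the footprint from the same Python environment (an optional check, not in the reference article), torch.cuda.mem_get_info() reports free and total device memory:

import torch

free_bytes, total_bytes = torch.cuda.mem_get_info()
gib = 1024 ** 3
# On a 24 GB RTX 3090 with the demo loaded, expect roughly 14 GB in use.
print(f"free: {free_bytes / gib:.1f} GiB / total: {total_bytes / gib:.1f} GiB")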

Open the web page: http://127.0.0.1:7860

Then upload an image, type a question, and click Chat.


Demo results

Image analysis test

Run results: some outputs were good, while others were poor.

Text-to-image test

The text-to-image results were fairly poor, and the resolution was mediocre.

Possible errors

1. Missing model weights

Error output:

(janus_pro) dell@dell-Precision-7920-Tower:~/work/python_project/Janus$ python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torchvision/datapoints/__init__.py:12: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)
/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torchvision/transforms/v2/__init__.py:54: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)
/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py:594: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Loading checkpoint shards:   0%|                                                                                                           | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/modeling_utils.py", line 556, in load_state_dict
    return torch.load(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torch/serialization.py", line 814, in load
    raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
_pickle.UnpicklingError: Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution.Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 118

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dell/work/python_project/Janus/demo/app_januspro.py", line 19, in <module>
    vl_gpt = AutoModelForCausalLM.from_pretrained(model_path,
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/modeling_utils.py", line 262, in _wrapper
    return func(*args, **kwargs)
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4319, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4873, in _load_pretrained_model
    state_dict = load_state_dict(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/modeling_utils.py", line 566, in load_state_dict
    raise OSError(
OSError: You seem to have cloned a repository without having git-lfs installed. Please install git-lfs and run `git lfs install` followed by `git lfs pull` in the folder you cloned.

The analysis of this error and the fix are given below.

Problem analysis

The `OSError` error

This is the main error: it indicates that the repository was cloned without `git-lfs` installed. `git-lfs` (Git Large File Storage) manages large files, and the weight files of many deep learning models are large enough that they must be downloaded with `git-lfs`.

Solution

安装 git-lfs 并下载模型权重

安装 git-lfs 并下载模型权重。
步骤如下:

  1. Install `git-lfs`
    • Ubuntu/Debian:
sudo apt-get install git-lfs
  2. Initialize `git-lfs`
    In the directory ~/work/python_project/Janus, run
git lfs install
  3. Enter the model repository directory and pull the large files
    In /home/dell/work/python_project/Janus/Janus-Pro-7B, run (a quick verification sketch follows this list):
git lfs pull
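To verify that git lfs pull actually fetched the weights (a small check of my own, with an assumed shard-file naming pattern), you can look for files that are still git-lfs pointer stubs; a stub is a tiny text file beginning with "version https://git-lfs.github.com/spec/v1":

from pathlib import Path

model_dir = Path("/home/dell/work/python_project/Janus/Janus-Pro-7B")
# Weight shards may be .bin or .safetensors depending on how the repo is packaged.
shards = sorted(list(model_dir.glob("*.bin")) + list(model_dir.glob("*.safetensors")))
for f in shards:
    head = f.open("rb").read(64)
    is_stub = head.startswith(b"version https://git-lfs")
    size_gb = f.stat().st_size / 1024 ** 3
    status = "LFS pointer stub - not downloaded" if is_stub else "ok"
    print(f"{f.name}: {size_gb:.2f} GB, {status}")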

2. RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'

Error log:

$ python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torchvision/datapoints/__init__.py:12: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)
/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torchvision/transforms/v2/__init__.py:54: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)
/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py:594: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
  warnings.warn(
Loading checkpoint shards:   0%|                                                                                                                                          | 0/2 [00:00<?, ?it/s]/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:21<00:00, 10.59s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: add_special_token, num_image_tokens, ignore_id, sft_format, image_tag, mask_prompt. 
Running on local URL:  http://127.0.0.1:7860
IMPORTANT: You are using gradio version 3.48.0, however version 4.44.1 is available, please upgrade.
--------
Running on public URL: https://179c234a5c69fb53fa.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Traceback (most recent call last):
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/gradio/routes.py", line 534, in predict
    output = await route_utils.call_process_api(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/gradio/route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/gradio/blocks.py", line 1550, in process_api
    result = await self.call_function(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/gradio/utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/dell/work/python_project/Janus/demo/app_januspro.py", line 60, in multimodal_understanding
    outputs = vl_gpt.language_model.generate(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/generation/utils.py", line 2223, in generate
    result = self._sample(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/generation/utils.py", line 3211, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
    return func(*args, **kwargs)
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 842, in forward
    outputs = self.model(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 564, in forward
    causal_mask = self._update_causal_mask(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 666, in _update_causal_mask
    causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
  File "/home/dell/anaconda3/envs/janus_pro/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 732, in _prepare_4d_causal_attention_mask_with_cache_position
    causal_mask = torch.triu(causal_mask, diagonal=1)
RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'

Solution: upgrade the torch version (the default is 2.0.1) by running:

pip install torch==2.2.1 torchvision==0.17.1 torchaudio
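The failing operation in the traceback is torch.triu on a CUDA bfloat16 tensor, which is not implemented in torch 2.0.x but works in 2.2.1. A minimal check (my addition) after the upgrade:

import torch

# On torch 2.0.1 this raises:
#   RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'
# On torch 2.2.1 it returns the upper-triangular mask as expected.
mask = torch.full((4, 4), float("-inf"), dtype=torch.bfloat16, device="cuda")
print(torch.__version__)
print(torch.triu(mask, diagonal=1))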
