SAM-Assisted Dataset Segmentation with labelme (Complete Beginner Tutorial)
I. Installing labelme
1. Open a terminal (press Win+R, type cmd) and run the commands below.
2. Create a virtual environment:
conda create -n data python==3.10.0
3. Activate the virtual environment:
conda activate data
4. Install labelme:
pip install labelme==5.6.1
If the installation finishes without errors, labelme is installed successfully (you can confirm with labelme --version).
5. AI-assisted annotation
Download the ONNX files for the models from the URLs below and save them to a models folder inside the labelme package directory.
# Download URLs for the models
https://github.com/wkentaro/labelme/releases/download/sam-20230416/sam_vit_b_01ec64.quantized.encoder.onnx
https://github.com/wkentaro/labelme/releases/download/sam-20230416/sam_vit_b_01ec64.quantized.decoder.onnx
https://github.com/wkentaro/labelme/releases/download/sam-20230416/sam_vit_l_0b3195.quantized.encoder.onnx
https://github.com/wkentaro/labelme/releases/download/sam-20230416/sam_vit_l_0b3195.quantized.decoder.onnx
https://github.com/wkentaro/labelme/releases/download/sam-20230416/sam_vit_h_4b8939.quantized.encoder.onnx
https://github.com/wkentaro/labelme/releases/download/sam-20230416/sam_vit_h_4b8939.quantized.decoder.onnx
https://github.com/labelmeai/efficient-sam/releases/download/onnx-models-20231225/efficient_sam_vitt_encoder.onnx
https://github.com/labelmeai/efficient-sam/releases/download/onnx-models-20231225/efficient_sam_vitt_decoder.onnx
https://github.com/labelmeai/efficient-sam/releases/download/onnx-models-20231225/efficient_sam_vits_encoder.onnx
https://github.com/labelmeai/efficient-sam/releases/download/onnx-models-20231225/efficient_sam_vits_decoder.onnx
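Rather than clicking through each release link, the files can be fetched in one go. The sketch below is one way to do it, assuming the names MODEL_URLS, MODEL_DIR, and the helper functions are our own (not part of labelme); only the first two URLs are shown, so fill in the rest from the list above and set MODEL_DIR to wherever you want the models saved.

```python
# Sketch: batch-download the ONNX model files listed above.
import os
import urllib.request

MODEL_URLS = [
    "https://github.com/wkentaro/labelme/releases/download/sam-20230416/sam_vit_b_01ec64.quantized.encoder.onnx",
    "https://github.com/wkentaro/labelme/releases/download/sam-20230416/sam_vit_b_01ec64.quantized.decoder.onnx",
    # ... add the remaining URLs from the list above
]
MODEL_DIR = "models"  # hypothetical target folder; change to your own path


def filename_from_url(url: str) -> str:
    """Return the file-name portion of a release URL."""
    return url.rsplit("/", 1)[-1]


def download_all(urls, target_dir):
    """Download each URL into target_dir, skipping files already present."""
    os.makedirs(target_dir, exist_ok=True)
    for url in urls:
        dest = os.path.join(target_dir, filename_from_url(url))
        if os.path.exists(dest):  # already downloaded
            continue
        print(f"downloading {dest} ...")
        urllib.request.urlretrieve(url, dest)


# Usage (downloads several hundred MB in total):
# download_all(MODEL_URLS, MODEL_DIR)
```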
Under Anaconda's envs directory, open the labelme ai package (e.g. envs/data/Lib/site-packages/labelme/ai/) and edit its __init__.py file.
Replace the entire contents of __init__.py with the code below, changing MODEL_DIR to the folder where you saved the model files in the previous step.
import os.path as osp

MODEL_DIR = "D:/Anaconda3/envs/data/Lib/site-packages/labelme/models/"

from .efficient_sam import EfficientSam
from .segment_anything_model import SegmentAnythingModel
from .text_to_annotation import get_rectangles_from_texts  # NOQA: F401
from .text_to_annotation import get_shapes_from_annotations  # NOQA: F401
from .text_to_annotation import non_maximum_suppression  # NOQA: F401


class SegmentAnythingModelVitB(SegmentAnythingModel):
    name = "SegmentAnything (speed)"

    def __init__(self):
        super().__init__(
            encoder_path=osp.join(MODEL_DIR, "sam_vit_b_01ec64.quantized.encoder.onnx"),  # NOQA
            decoder_path=osp.join(MODEL_DIR, "sam_vit_b_01ec64.quantized.decoder.onnx"),  # NOQA
        )


class SegmentAnythingModelVitL(SegmentAnythingModel):
    name = "SegmentAnything (balanced)"

    def __init__(self):
        super().__init__(
            encoder_path=osp.join(MODEL_DIR, "sam_vit_l_0b3195.quantized.encoder.onnx"),  # NOQA
            decoder_path=osp.join(MODEL_DIR, "sam_vit_l_0b3195.quantized.decoder.onnx"),  # NOQA
        )


class SegmentAnythingModelVitH(SegmentAnythingModel):
    name = "SegmentAnything (accuracy)"

    def __init__(self):
        super().__init__(
            encoder_path=osp.join(MODEL_DIR, "sam_vit_h_4b8939.quantized.encoder.onnx"),  # NOQA
            decoder_path=osp.join(MODEL_DIR, "sam_vit_h_4b8939.quantized.decoder.onnx"),  # NOQA
        )


class EfficientSamVitT(EfficientSam):
    name = "EfficientSam (speed)"

    def __init__(self):
        super().__init__(
            encoder_path=osp.join(MODEL_DIR, "efficient_sam_vitt_encoder.onnx"),  # NOQA
            decoder_path=osp.join(MODEL_DIR, "efficient_sam_vitt_decoder.onnx"),  # NOQA
        )


class EfficientSamVitS(EfficientSam):
    name = "EfficientSam (accuracy)"

    def __init__(self):
        super().__init__(
            encoder_path=osp.join(MODEL_DIR, "efficient_sam_vits_encoder.onnx"),  # NOQA
            decoder_path=osp.join(MODEL_DIR, "efficient_sam_vits_decoder.onnx"),  # NOQA
        )


MODELS = [
    SegmentAnythingModelVitB,
    SegmentAnythingModelVitL,
    SegmentAnythingModelVitH,
    EfficientSamVitT,
    EfficientSamVitS,
]
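A common failure mode at this point is a file name mismatch: if any ONNX file is missing from MODEL_DIR, the corresponding model will fail to load when selected in labelme. The sketch below (EXPECTED_FILES and missing_models are our own helper names, not part of labelme) checks the folder before you launch the GUI.

```python
# Sketch: verify that every ONNX file referenced in __init__.py exists
# in MODEL_DIR before launching labelme.
import os.path as osp

EXPECTED_FILES = [
    "sam_vit_b_01ec64.quantized.encoder.onnx",
    "sam_vit_b_01ec64.quantized.decoder.onnx",
    "sam_vit_l_0b3195.quantized.encoder.onnx",
    "sam_vit_l_0b3195.quantized.decoder.onnx",
    "sam_vit_h_4b8939.quantized.encoder.onnx",
    "sam_vit_h_4b8939.quantized.decoder.onnx",
    "efficient_sam_vitt_encoder.onnx",
    "efficient_sam_vitt_decoder.onnx",
    "efficient_sam_vits_encoder.onnx",
    "efficient_sam_vits_decoder.onnx",
]


def missing_models(model_dir):
    """Return the expected ONNX files that are not present in model_dir."""
    return [f for f in EXPECTED_FILES if not osp.exists(osp.join(model_dir, f))]


# Usage: an empty list means all models are in place.
# print(missing_models("D:/Anaconda3/envs/data/Lib/site-packages/labelme/models/"))
```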
II. Using labelme
1. Create the dataset folders
Inside the dataset folder, create an images folder (for the raw images to be segmented) and a labels folder (for the resulting annotation files).
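The layout above is trivial to set up by hand, but a short script keeps it reproducible across machines; here "dataset" is just a hypothetical root folder name.

```python
# Sketch: create the dataset layout -- an images folder for raw images
# and a labels folder for labelme's annotation output.
import os


def make_dataset_dirs(root):
    """Create root/images and root/labels if they do not already exist."""
    for sub in ("images", "labels"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)


make_dataset_dirs("dataset")
```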
2. Open the labelme GUI
Run the labelme command in the terminal; the labelme window opens automatically.
3. Segment the images
Open the folder containing the images.
Choose the assisting model: in the author's tests, EfficientSam (speed) works well, being both fast and accurate.
Right-click on the image and choose "Create AI-Polygon".
Left-click to mark regions you want included, and Shift + left-click to mark regions to exclude.
Double-click to finish the annotation; enter the label class in the pop-up window, and after confirming, the label name appears in the right-hand panel.
Use "Edit Polygons" in the left panel to manually adjust the regions the AI-assisted segmentation produced.
The annotated results use two label classes: rail (railway) and obstacle.
In the File menu at the top left, click "Change Output Dir" and select the labels folder created earlier in the dataset folder. Enable "Save Automatically", then click "Next Image" and the annotations are saved automatically as you move through the images.
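labelme saves one JSON file per image in the output folder, with the annotated polygons stored under a "shapes" list whose entries carry a "label" field. The helper below (labels_in_file is our own name) reads one such file and lists the label classes it uses, which is handy for spot-checking that only the intended classes (e.g. rail and obstacle) appear.

```python
# Sketch: list the label classes used in one labelme JSON annotation file.
import json


def labels_in_file(json_path):
    """Return the sorted set of label names found in a labelme JSON file."""
    with open(json_path, encoding="utf-8") as f:
        data = json.load(f)
    return sorted({shape["label"] for shape in data.get("shapes", [])})


# Usage: labels_in_file("labels/0001.json")  # hypothetical file name
```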