How an image tokenizer works, step by step:
(1) Patch splitting: divide the input image into patches of size N×N (e.g., 16×16 pixels)
(2) Linear projection: flatten each patch and map it to an embedding via a convolution or fully connected layer
(3) Position encoding: add a position encoding to each patch embedding (a minimal sketch of these three steps follows below)
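A minimal PyTorch sketch of the three steps above (class name and hyper-parameters are illustrative, chosen to match ViT-Base/16 at 224×224 input; the real ViT additionally prepends a [CLS] token):

import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # Steps (1)+(2): a strided convolution splits the image into patches
        # and linearly projects each patch in a single operation
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        # Step (3): one learnable position encoding per patch
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))

    def forward(self, x):                 # x: (B, 3, 224, 224)
        x = self.proj(x)                  # (B, 768, 14, 14)
        x = x.flatten(2).transpose(1, 2)  # (B, 196, 768)
        return x + self.pos_embed         # add position encodings

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])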
Example call using huggingface/transformers:
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

image = Image.open("example.jpg")  # any RGB image; the path is illustrative
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTModel.from_pretrained("google/vit-base-patch16-224")
inputs = processor(images=image, return_tensors="pt")
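A forward pass then yields one embedding per token; with ViT-Base/16 at 224×224 resolution that is 197 tokens (1 [CLS] + 14×14 = 196 patches), each 768-dimensional:

import torch
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 197, 768]): [CLS] + 196 patch tokens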
Reference implementations:
github.com/google-research/vision_transformer
github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py
github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_vit.py