
[ICLR 2022]How Much Can CLIP Benefit Vision-and-Language Tasks?

Paper link: pdf

The English is typed entirely by hand as a summary and paraphrase of the original paper. Unavoidable spelling and grammar mistakes may appear; corrections in the comments are welcome. This post is closer to personal notes, so read with that in mind.

目录

1. Thoughts

2. Section-by-Section Reading

2.1. Abstract

2.2. Introduction

2.3. Background and Motivation

2.3.1. Motivation

2.4. CLIP-ViL

2.4.1. Visual Question Answering

2.4.2. Image Captioning

2.4.3. Vision-and-Language Navigation

2.5. Vision-and-Language Pre-training

2.5.1. CLIP-ViL_p

2.5.2. Experiments

2.6. Analysis

2.7. Conclusions

1. Thoughts

(1) A very simple paper; it feels like it is mainly just testing CLIP?

2. Section-by-Section Reading

2.1. Abstract

        ①Models pre-trained on large amounts of data bring better performance

        ②Two scenarios for using CLIP: plugging it directly into task-specific fine-tuning, or combining it with V&L pre-training

2.2. Introduction

        ①Bottlenecks of vision-and-language (V&L) tasks: visual representation and scarce labeled data

        ②Most V&L tasks require complex reasoning, so a generic visual model cannot be used directly

        ③They define two scenarios:

CLIP-ViL: apply CLIP in direct task-specific fine-tuning
CLIP-ViL_p: integrate CLIP with V&L pre-training on image-text pairs, then transfer to downstream tasks

        ④Tasks: Visual Question Answering, Image Captioning, and Vision-and-Language Navigation

2.3. Background and Motivation

        ①Training stage: 

visual encoder pretraining, alignment (optional), downstream task

        ②Different types of model:

region-based, network-based, and CLIP (contrastive)

2.3.1. Motivation

        ①In short: directly applying CLIP to the various complex visual tasks gives only mediocre performance, so small modifications are needed
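CLIP-ViL therefore keeps each existing task-specific model and only swaps in CLIP as the visual encoder. As a rough illustration of what this "plugging in" looks like, the sketch below pulls the spatial (grid) feature map out of CLIP-RN50 with the OpenAI clip package; the image path and the exact extraction point (the output of layer4, before attention pooling) are illustrative assumptions rather than the authors' exact setup.

```python
import torch
import clip
from PIL import Image

# Sketch: extract CLIP-RN50 grid features to feed a downstream V&L model.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

grid_features = {}

def save_grid(module, inputs, output):
    # output: [batch, 2048, H/32, W/32] spatial feature map from the ResNet trunk
    grid_features["grid"] = output

# Hook the last ResNet stage (an assumed extraction point, before attention pooling).
model.visual.layer4.register_forward_hook(save_grid)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # hypothetical image
with torch.no_grad():
    _ = model.encode_image(image)  # runs the visual backbone and triggers the hook

feats = grid_features["grid"]              # e.g. [1, 2048, 7, 7] at 224x224 input
feats = feats.flatten(2).transpose(1, 2)   # [1, 49, 2048]: one vector per grid cell,
                                           # a drop-in replacement for region features
print(feats.shape)
```

These grid vectors would then stand in for the detector-based region features in a VQA, captioning, or navigation model before fine-tuning.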

2.4. CLIP-ViL

2.4.1. Visual Question Answering

        ①Performance of models on VQA v2.0 dataset:

2.4.2. Image Captioning

        ①Image captioning comparison table on COCO dataset:

2.4.3. Vision-and-Language Navigation

        ①The model performance on Room-to-Room (R2R) dataset:

        ②Performance after replacing ResNet with CLIP:

2.5. Vision-and-Language Pre-training

2.5.1. CLIP-ViL_p

        ①For a text segment T, tokenize it into subwords \{w_{1},w_{2},...,w_{n}\}, which are further embedded as the sum of their token, position, and segment embeddings \{\textbf{w}_{1},\textbf{w}_{2},...,\textbf{w}_{n}\}

        ②Image I is embedded as \{\textbf{v}_{1},\textbf{v}_{2},...,\textbf{v}_{m}\}

        ③Concatenate the two as \{\textbf{w}_{1},\textbf{w}_{2},...,\textbf{w}_{n},\textbf{v}_{1},\textbf{v}_{2},...,\textbf{v}_{m}\}

        ④Pre-training tasks: masked language modeling with a 15% mask ratio, image-text matching where the sentence is the correct pairing 50% of the time, and visual question answering
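A minimal sketch of how the joint input in ①–③ and the three pre-training objectives in ④ could be wired together is given below; the hidden size, vocabulary size, answer-set size, and the single-stream transformer are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

HIDDEN, VOCAB, NUM_ANSWERS = 768, 30522, 3129  # assumed sizes, for illustration only

class CLIPViLpSketch(nn.Module):
    def __init__(self, num_layers=12):
        super().__init__()
        # text side: token + position + segment embeddings, summed
        self.tok_emb = nn.Embedding(VOCAB, HIDDEN)
        self.pos_emb = nn.Embedding(512, HIDDEN)
        self.seg_emb = nn.Embedding(2, HIDDEN)
        # visual side: project CLIP grid features (2048-d for RN50) into the same space
        self.vis_proj = nn.Linear(2048, HIDDEN)
        layer = nn.TransformerEncoderLayer(HIDDEN, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # pre-training heads
        self.mlm_head = nn.Linear(HIDDEN, VOCAB)        # masked language modeling (15% mask)
        self.itm_head = nn.Linear(HIDDEN, 2)            # image-text matching (50% mismatched)
        self.vqa_head = nn.Linear(HIDDEN, NUM_ANSWERS)  # visual question answering

    def forward(self, token_ids, grid_feats):
        n = token_ids.size(1)
        pos = torch.arange(n, device=token_ids.device).unsqueeze(0)
        w = self.tok_emb(token_ids) + self.pos_emb(pos) + self.seg_emb(torch.zeros_like(token_ids))
        v = self.vis_proj(grid_feats)                    # {v_1, ..., v_m}
        h = self.encoder(torch.cat([w, v], dim=1))       # joint sequence {w_1..w_n, v_1..v_m}
        # the first token is used as a [CLS]-style summary (an assumption)
        return self.mlm_head(h[:, :n]), self.itm_head(h[:, 0]), self.vqa_head(h[:, 0])

# toy usage with 100 grid features per image, matching the patch number noted in the experiments
model = CLIPViLpSketch(num_layers=2)
tokens = torch.randint(0, VOCAB, (2, 20))
grids = torch.randn(2, 100, 2048)
mlm_logits, itm_logits, vqa_logits = model(tokens, grids)
```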

2.5.2. Experiments

        ①Two variants of CLIP as the visual encoder: CLIP-Res50 and CLIP-Res50x4

        ②Pre-training datasets: MS COCO Captions, Visual Genome Captions, VQA, GQA, and VG-QA

        ③Patch number for each image: 100

        ④Pre-training epochs: 20

        ⑤The pre-trained model is fine-tuned on each downstream task for evaluation

        ⑥Downstream task datasets: VQA v2.0, SNLI-VE (visual entailment), and GQA

        ⑦Results:

2.6. Analysis

        ①Zero-shot performance of CLIP on VQA v2.0 mini-eval (see the sketch after this list):

        ②Influence of V&L pre-training:

        ③Visualization of feature localization across different models:
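As a rough illustration of the zero-shot VQA evaluation mentioned in ① above, the sketch below scores each candidate answer by CLIP image-text similarity after turning the question-answer pair into a caption-like prompt; the prompt template, image path, and answer list are hypothetical and may differ from what the paper actually uses.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

def zero_shot_vqa(image_path, question, candidate_answers):
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    # Turn each candidate answer into a caption-like prompt (hypothetical template).
    prompts = [f"question: {question} answer: {ans}" for ans in candidate_answers]
    text = clip.tokenize(prompts).to(device)
    with torch.no_grad():
        image_feat = model.encode_image(image)
        text_feat = model.encode_text(text)
        image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        sims = (image_feat @ text_feat.T).squeeze(0)  # cosine similarity per candidate
    return candidate_answers[sims.argmax().item()]

# e.g. zero_shot_vqa("example.jpg", "what color is the cat?", ["black", "white", "orange"])
```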

2.7. Conclusions

        ~

