
Paper Reading / Zero shot (36)

[Paper Reading] IPN (2021), Isometric Propagation Network for Generalized Zero-Shot Learning

Isometric Propagation Network for Generalized Zero-Shot Learning https://arxiv.org/abs/2102.02038 Zero-shot learning (ZSL) aims to classify images of an unseen class only based on a few attributes describing that class but no access to any training sample. A popular strategy is to learn a mapping between the semantic space of class..
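The mapping strategy mentioned in the abstract can be illustrated with a minimal sketch: project a visual feature into the attribute space and pick the nearest class attribute vector. The dimensions and the projection matrix below are made up for illustration and are not the propagation network the paper actually proposes.

```python
import numpy as np

# Toy setup: 5 unseen classes described by 312-dim attribute vectors,
# and a 2048-dim visual feature extracted from a test image.
rng = np.random.default_rng(0)
class_attributes = rng.normal(size=(5, 312))  # one attribute vector per class
visual_feature = rng.normal(size=2048)        # CNN feature of a test image

# W maps visual features into the attribute (semantic) space.
# In a real system W is learned on seen classes; here it is random.
W = rng.normal(size=(312, 2048))
projected = W @ visual_feature                # image embedded in semantic space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

# Classify by the nearest class attribute vector.
scores = np.array([cosine(projected, c) for c in class_attributes])
print(int(scores.argmax()))
```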

[Paper Reading] CPL (2019), Convolutional Prototype Learning for Zero-Shot Recognition

Convolutional Prototype Learning for Zero-Shot Recognition (2019) https://arxiv.org/abs/1910.09728 Zero-shot learning (ZSL) has received increasing attention in recent years, especially in areas of fine-grained object recognition, retrieval, and image captioning. The key to ZSL is to transfer knowledge from the seen to the unseen classes v..

[Paper Reading] DRN, Class-Prototype Discriminative Network for Generalized Zero-Shot Learning (2020)

Class-Prototype Discriminative Network for Generalized Zero-Shot Learning https://ieeexplore.ieee.org/abstract/document/8966463 We present a novel end-to-end deep metric learning model named Class-Prototype Discriminative Network (CPDN) for Generalized Zero-Shot Learning (GZSL). It consists of a generative network for prod..

[Paper Reading] Prototypical Matching and Open Set Rejection for Zero-Shot Semantic Segmentation (2021)

Prototypical Matching and Open Set Rejection for Zero-Shot Semantic Segmentation https://openaccess.thecvf.com/content/ICCV2021/papers/Zhang_Prototypical_Matching_and_Open_Set_Rejection_for_Zero-Shot_Semantic_Segmentation_ICCV_2021_paper After training a segmentation model that separates seen classes from unknown regions, unseen classes are predicted for the unknown regions. To make it easy to extend to new unseen classes, a prototype matching method is proposed instead of an ordinary classifier. sema..
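As a rough sketch of what per-pixel prototype matching with open-set rejection looks like (toy shapes, random features, and an arbitrary threshold; the paper's actual prototype construction and rejection model differ):

```python
import numpy as np

# Minimal sketch: assign each pixel to its nearest class prototype,
# rejecting pixels whose best similarity is below a threshold.
rng = np.random.default_rng(0)
H, W, D = 4, 4, 64                        # tiny feature map for illustration
pixel_features = rng.normal(size=(H, W, D))
prototypes = rng.normal(size=(3, D))      # one prototype per (unseen) class

# Cosine similarity between every pixel feature and every prototype.
feat = pixel_features / np.linalg.norm(pixel_features, axis=-1, keepdims=True)
proto = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
similarity = feat @ proto.T               # shape (H, W, num_classes)

labels = similarity.argmax(axis=-1)       # nearest prototype per pixel
rejected = similarity.max(axis=-1) < 0.2  # illustrative rejection threshold
labels[rejected] = -1                     # -1 marks "unknown" pixels
print(labels)
```

Extending to a new unseen class only means appending one more prototype row, which is why this scales more easily than retraining a fixed classifier head.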

[Paper Reading] Rethinking Zero-Shot Learning: A Conditional Visual Classification Perspective

Rethinking Zero-Shot Learning: A Conditional Visual Classification Perspective (2019) https://arxiv.org/abs/1909.05995 Zero-shot learning (ZSL) aims to recognize instances of unseen classes solely based on the semantic descriptions of the classes. Existing algorithms usually formulate it as a semantic-visual correspond..

[Paper Reading] TCN (2019), Transferable Contrastive Network for Generalized Zero-Shot Learning

Transferable Contrastive Network for Generalized Zero-Shot Learning https://arxiv.org/abs/1908.05832 Zero-shot learning (ZSL) is a challenging problem that aims to recognize the target categories without seen data, where semantic information is leveraged to transfer knowledge from some source classes. Although ZSL has made great progress in recent years, .. Transferable Contrastive Networ..

[Paper Reading] CE-GZSL (2021), Contrastive Embedding for Generalized Zero-Shot Learning

Contrastive Embedding for Generalized Zero-Shot Learning https://arxiv.org/abs/2103.16173 Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes, when only the labeled examples from seen classes are provided. Recent feature generation methods learn a generative model that can synthesize the missing vis.. Zero-shot learning usually uses embedding-based me..
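The feature-generation idea the abstract refers to can be sketched as follows: a generator conditioned on class attributes synthesizes visual features for unseen classes, so a classifier can cover both seen and unseen classes. The linear "generator" below is only a placeholder for a trained GAN/VAE and is not CE-GZSL's contrastive embedding itself.

```python
import numpy as np

# Sketch of feature generation for GZSL: synthesize visual features for
# unseen classes from their attribute vectors, then classify with a simple
# nearest-class-mean rule over the synthetic features.
rng = np.random.default_rng(0)
attr_dim, feat_dim = 85, 2048
unseen_attrs = rng.normal(size=(4, attr_dim))  # 4 unseen-class attribute vectors

G = rng.normal(size=(feat_dim, attr_dim))      # placeholder "generator" weights

def generate_features(attr, n=50, noise=0.1):
    """Synthesize n visual features for one class from its attribute vector."""
    base = G @ attr
    return base + noise * rng.normal(size=(n, feat_dim))

synthetic = {c: generate_features(a) for c, a in enumerate(unseen_attrs)}

# Class means of the synthetic features act as a nearest-mean classifier.
means = np.stack([f.mean(axis=0) for f in synthetic.values()])
test_feature = generate_features(unseen_attrs[2], n=1)[0]
print(int(np.argmin(np.linalg.norm(means - test_feature, axis=1))))  # prints 2
```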

[Paper Reading] ALIGN (2021), Scaling Up Visual and Vision-Language Representation Learning with Noisy Text Supervision

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision https://arxiv.org/abs/2102.05918 Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual ..

[Paper Reading] DAZLE (2020), Fine-Grained Generalized Zero-Shot Learning via Dense Attribute-Based Attention

Fine-Grained Generalized Zero-Shot Learning via Dense Attribute-Based Attention PDF, Zero-Shot Classification, Dat Huynh, Ehsan Elhamifar, CVPR 2020 Summary This paper focuses on the image regions that correspond to each attribute. That is, rather than matching a feature vector extracted from the whole image against all attributes, it finds the image regions corresponding to each attribute, weights them, and then predicts the class. The advantage is that finer-grained features can be exploited. The features extracted from image regions and the attribute word embed..
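A minimal sketch of attribute-based attention over image regions (toy shapes, random projections standing in for DAZLE's learned ones): each attribute attends to the regions most relevant to it and is scored from its attention-weighted feature.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
num_regions, feat_dim, attr_dim, num_attrs = 49, 512, 300, 10
regions = rng.normal(size=(num_regions, feat_dim))    # e.g. flattened 7x7 CNN feature map
attr_embed = rng.normal(size=(num_attrs, attr_dim))   # word embeddings of the attributes

W_att = rng.normal(size=(attr_dim, feat_dim)) * 0.01  # projection for attention scores

# Each attribute attends to the regions most relevant to it.
logits = attr_embed @ W_att @ regions.T               # (num_attrs, num_regions)
attention = softmax(logits, axis=1)

# Attribute-specific image features: attention-weighted sums of region features.
attr_features = attention @ regions                   # (num_attrs, feat_dim)

# Per-attribute evidence scores; a class score would then combine these
# with the class's attribute signature.
attr_scores = (attr_features * (attr_embed @ W_att)).sum(axis=1)
print(attr_scores.shape)  # (10,)
```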

[Paper Reading] LiT, Zero-Shot Transfer with Locked-image Text Tuning (2021)

LiT: Zero-Shot Transfer with Locked-image Text Tuning PDF, Zero-Shot Transfer, Zhai et al, arXiv 2021 Summary Before introducing the paper, I will first explain the difference between transfer learning and zero-shot transfer. Transfer learning takes a big model pre-trained on a big dataset and fine-tunes it on a downstream task. In other words, it consists of two steps: (1) pre-training, (2) fine-tuning. Through this process, a well-performing model can be obtained even on tasks with little data. Zero-shot transfer, on the other hand, fine-tunin..
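To make the contrast concrete, here is a rough sketch of zero-shot transfer in the CLIP/LiT style: class names go through a text encoder, the image through a locked image encoder, and the most similar class embedding wins, with no fine-tuning step at all. The two encoders below are random placeholders, not the actual pre-trained towers.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 256

def image_encoder(image):   # placeholder for a locked, pre-trained image tower
    return rng.normal(size=embed_dim)

def text_encoder(prompt):   # placeholder for the text tower
    return rng.normal(size=embed_dim)

# Class names become classifier weights via text prompts; nothing is fine-tuned.
class_names = ["dog", "cat", "bird"]
text_embeds = np.stack([text_encoder(f"a photo of a {c}") for c in class_names])
text_embeds /= np.linalg.norm(text_embeds, axis=1, keepdims=True)

img = image_encoder(None)
img /= np.linalg.norm(img)

scores = text_embeds @ img  # cosine similarities
print(class_names[int(scores.argmax())])
```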
