microsoft/xclip-base-patch16-zero-shot


X-CLIP (base-sized model)

X-CLIP model (base-sized, patch resolution of 16) trained on Kinetics-400. It was introduced in the paper Expanding Language-Image Pretrained Models for General Video Recognition by Ni et al. and first released in the microsoft/VideoX repository.
This model was trained using 32 frames per video, at a resolution of 224×224.
Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team.


Model description

X-CLIP is a minimal extension of CLIP for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs.

This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval.
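As a hedged sketch of what that shared embedding space enables, the snippet below (assuming the Hugging Face transformers and PyTorch packages, with random frames standing in for real video) extracts video and text embeddings and ranks videos against a text query by cosine similarity, i.e. video-text retrieval:

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

name = "microsoft/xclip-base-patch16-zero-shot"
processor = XCLIPProcessor.from_pretrained(name)
model = XCLIPModel.from_pretrained(name)

# Two dummy "videos" of 32 frames each; real inputs would be frames sampled
# from actual clips at the model's expected 224x224 resolution.
videos = [list(np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8))
          for _ in range(2)]
inputs = processor(text=["a dog catching a frisbee"], videos=videos,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    video_embeds = model.get_video_features(pixel_values=inputs["pixel_values"])
    text_embeds = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])

# L2-normalize, then rank videos against the text query by cosine similarity.
video_embeds = video_embeds / video_embeds.norm(dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
similarity = text_embeds @ video_embeds.T  # shape: (num_texts, num_videos)
print(similarity)
```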


Intended uses & limitations

You can use the raw model to determine how well a piece of text matches a given video. See the model hub to look for
fine-tuned versions on a task that interests you.


How to use

For code examples, we refer to the documentation.
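The documentation remains the authoritative reference; as a minimal, hedged sketch (again assuming the transformers and PyTorch packages, with dummy frames standing in for a real clip sampled at 32 frames), zero-shot video classification looks roughly like this:

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

name = "microsoft/xclip-base-patch16-zero-shot"
processor = XCLIPProcessor.from_pretrained(name)
model = XCLIPModel.from_pretrained(name)

# 32 frames per video at 224x224, matching the training setup.
video = list(np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8))

# Candidate labels are plain text, which is what makes the setup zero-shot.
labels = ["playing guitar", "riding a bike", "cooking pasta"]
inputs = processor(text=labels, videos=video, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Video-to-text similarity scores, normalized into label probabilities.
probs = outputs.logits_per_video.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```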


Training data

This model was trained on Kinetics-400.


Preprocessing

The exact details of preprocessing during training and validation can be found in the dataset code of the original X-CLIP repository (microsoft/VideoX).
During validation, the shorter edge of each frame is resized, after which a center crop is taken at a fixed resolution (such as 224×224). Frames are then normalized across the RGB channels using the ImageNet mean and standard deviation.
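A minimal torchvision sketch of that validation pipeline (an illustration of the steps described above, not the project's exact code; the 224-pixel values are assumptions matching this checkpoint's input size):

```python
from torchvision import transforms

# ImageNet channel statistics referenced above.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

# Applied per frame: shorter-edge resize, fixed-size center crop, normalization.
val_transform = transforms.Compose([
    transforms.Resize(224),        # resize so the shorter edge is 224 px
    transforms.CenterCrop(224),    # crop to the fixed 224x224 input resolution
    transforms.ToTensor(),         # PIL/HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
])
```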


Evaluation results

This model achieves a zero-shot top-1 accuracy of 44.6% on HMDB-51, 72.0% on UCF-101 and 65.2% on Kinetics-600.
