![](https://huggingface.co/blog/assets/30_clip_rsicd/clip-rsicd-header-image.png)

In July this year, [Hugging Face](https://huggingface.co/) organized a [Flax/JAX Community Week](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md), and invited the community to submit projects to train Hugging Face [transformers](https://github.com/huggingface/transformers) models in the areas of Natural Language Processing (NLP) and Computer Vision (CV).

Participants used Tensor Processing Units (TPUs) with [Flax](https://github.com/google/flax) and [JAX](https://github.com/google/jax). JAX is a linear algebra library (like `numpy`) that can do automatic differentiation ([Autograd](https://github.com/hips/autograd)) and compile down to [XLA](https://www.tensorflow.org/xla), and Flax is a neural network library and ecosystem for JAX. TPU compute time was provided free by [Google Cloud](https://cloud.google.com/), who co-sponsored the event.

Over the next two weeks, teams participated in lectures from Hugging Face and Google, trained one or more models using JAX/Flax, shared them with the community, and provided a [Hugging Face Spaces](https://huggingface.co/spaces) demo showcasing the capabilities of their model. Approximately 100 teams participated in the event, which resulted in 170 models and 36 demos.

Our team, like probably many others, is a distributed one, spanning 12 time zones. Our common thread is that we all belong to the [TWIML Slack Channel](https://twimlai.slack.com/), where we came together based on a shared interest in Artificial Intelligence (AI) and Machine Learning (ML) topics.

We fine-tuned the [CLIP Network from OpenAI](https://openai.com/blog/clip/) with satellite images and captions from the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal). The CLIP network learns visual concepts by being trained on image and caption pairs in a self-supervised manner, using text paired with images found across the Internet. During inference, the model can predict the most relevant image given a text description, or the most relevant text description given an image. CLIP is powerful enough to be used in a zero-shot manner on everyday images. However, we felt that satellite images were sufficiently different from everyday images that it would be useful to fine-tune CLIP with them. Our intuition turned out to be correct, as the evaluation results (described below) show. In this post, we describe details of our training and evaluation process, and our plans for future work on this project.

The goal of our project was to provide a useful service and demonstrate how to use CLIP for practical use cases. Our model can be used by applications to search through large collections of satellite images using textual queries. Such queries could describe the image in totality (for example, beach, mountain, airport, baseball field, etc.) or mention specific geographic or man-made features within these images. CLIP can be fine-tuned for other domains as well, as shown by the [medclip-demo team](https://huggingface.co/flax-community/medclip-demo) for medical images.

The ability to search through large collections of images using text queries is an immensely powerful feature, and can be used as much for social good as for malign purposes. Possible applications include national defense and anti-terrorism activities, the ability to spot and address effects of climate change before they become unmanageable, etc. Unfortunately, this power can also be misused, such as for military and police surveillance by authoritarian nation-states, so it does raise some ethical questions as well.

You can read about the project on our [project page](https://github.com/arampacha/CLIP-rsicd), download our [trained model](https://huggingface.co/flax-community/clip-rsicd-v2) to use for inference on your own data, or see it in action on our [demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo).
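
If you just want to experiment with the fine-tuned checkpoint, the snippet below is a minimal sketch (not our training or demo code) of zero-shot caption scoring with the standard Hugging Face `CLIPModel` and `CLIPProcessor` classes; the image path and candidate labels are placeholders.

```python
# Minimal zero-shot scoring sketch with the fine-tuned checkpoint.
# The image path and candidate labels are placeholders for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

image = Image.open("satellite.jpg")  # any aerial/satellite image you have locally
labels = ["residential area", "playground", "stadium", "forest", "airport"]
captions = [f"an aerial photograph of {label}" for label in labels]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the similarity of the image to each caption.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for label, p in sorted(zip(labels, probs.tolist()), key=lambda x: -x[1]):
    print(f"{label:20s} {p:.3f}")
```
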

### Training

#### Dataset

We fine-tuned the CLIP model primarily with the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal). This dataset consists of about 10,000 images collected from Google Earth, Baidu Map, MapABC, and Tianditu. It is provided freely to the research community to advance remote sensing captioning via [Exploring Models and Data for Remote Sensing Image Caption Generation](https://arxiv.org/abs/1712.07835) (Lu et al, 2017). The images are (224, 224) RGB images at various resolutions, and each image has up to 5 captions associated with it.

![](https://huggingface.co/blog/assets/30_clip_rsicd/rsicd-images-sampling.png)

_Some examples of images from the RSICD dataset_

In addition, we used the [UCM Dataset](https://mega.nz/folder/wCpSzSoS#RXzIlrv--TDt3ENZdKN8JA) and the [Sydney dataset](https://mega.nz/folder/pG4yTYYA#4c4buNFLibryZnlujsrwEQ) for training. The UCM dataset is based on the UC Merced Land Use dataset. It consists of 2100 images belonging to 21 classes (100 images per class), and each image has 5 captions. The Sydney dataset contains images of Sydney, Australia from Google Earth. It contains 613 images belonging to 7 classes. Images are (500, 500) RGB, with 5 captions for each image. We used these additional datasets because we were not sure if the RSICD dataset would be large enough to fine-tune CLIP.

#### Model

Our model is simply a fine-tuned version of the original CLIP model shown below. Inputs to the model are a batch of captions and a batch of images, passed through the CLIP text encoder and image encoder respectively. The training process uses [contrastive learning](https://towardsdatascience.com/understanding-contrastive-learning-d5b19fd96607) to learn a joint embedding representation of images and captions. In this embedding space, images and their respective captions are pushed close together, as are similar images and similar captions. Conversely, images and captions for different images, or dissimilar images and captions, are likely to be pushed further apart.

![](https://huggingface.co/blog/assets/30_clip_rsicd/clip_schematic.png)

_CLIP Training and Inference (Image Credit: [CLIP: Connecting Text and Images](https://openai.com/blog/clip/))_
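
To make the objective concrete, here is a simplified PyTorch sketch of the symmetric contrastive loss that CLIP optimizes (our actual training loop was written in Flax/JAX, and the fixed temperature value here is just illustrative):

```python
# Simplified sketch of CLIP's symmetric contrastive objective (illustrative,
# not the Flax training code we actually used).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    # Normalize both sets of embeddings to unit length.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # Cosine-similarity logits for every (image, caption) pair in the batch.
    # CLIP learns this scale during training; a fixed value is used here.
    logits = image_embeds @ text_embeds.t() / temperature

    # The matching caption for image i sits on the diagonal (index i).
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image -> text and text -> image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```
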

#### Data Augmentation

In order to regularize the training and prevent overfitting due to the small size of the dataset, we used both image and text augmentation.

Image augmentation was done inline using built-in transforms from PyTorch's [Torchvision](https://pytorch.org/vision/stable/index.html) package. The transformations used were Random Cropping, Random Resizing and Cropping, Color Jitter, and Random Horizontal and Vertical flipping.
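
A torchvision pipeline covering these transform families might look like the following (the specific parameters are placeholders, not our exact training configuration):

```python
# Illustrative torchvision pipeline with the transform types listed above.
from torchvision import transforms

train_transforms = transforms.Compose([
    # Random (resized) cropping; plain transforms.RandomCrop could be used instead.
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ToTensor(),
])
```
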

We augmented the text with backtranslation to generate captions for images with fewer than 5 unique captions per image. The [Marian MT](https://huggingface.co/transformers/model_doc/marian.html) family of models from Hugging Face was used to translate the existing captions into French, Spanish, Italian, and Portuguese and back to English to fill out the captions for these images.
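
A hedged sketch of the back-translation step with Marian MT (English to French and back; the pivot language, example caption, and generation settings are illustrative):

```python
# Back-translation sketch using Marian MT models from the Hugging Face hub.
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    # Load the translation model and tokenizer for one direction.
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

captions = ["many buildings and green trees are around a playground"]  # example caption
french = translate(captions, "Helsinki-NLP/opus-mt-en-fr")
back_translated = translate(french, "Helsinki-NLP/opus-mt-fr-en")
print(back_translated)
```
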

As shown in the loss plots below, image augmentation reduced overfitting significantly, and text and image augmentation reduced overfitting even further.

![](https://huggingface.co/blog/assets/30_clip_rsicd/image-augment-loss.png) ![](https://huggingface.co/blog/assets/30_clip_rsicd/image-text-aug-loss.png)

_Evaluation and Training loss plots comparing (top) no augmentation vs image augmentation, and (bottom) image augmentation vs text+image augmentation_

### Evaluation

#### Metrics

A subset of the RSICD test set was used for evaluation. We found 30 categories of images in this subset. The evaluation was done by comparing each image with a set of 30 caption sentences of the form `"An aerial photograph of {category}"`. The model produced a ranked list of the 30 captions, from most relevant to least relevant. Categories corresponding to captions with the top k scores (for k=1, 3, 5, and 10) were compared with the category provided via the image file name. The scores are averaged over the entire set of images used for evaluation and reported for various values of k, as shown below.
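
In code, this protocol looks roughly like the following (a sketch, not our exact evaluation script; the category list and file handling are placeholders):

```python
# Sketch of the top-k evaluation described above (not our exact script).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# model_name can be the baseline or one of our fine-tuned checkpoints.
model_name = "openai/clip-vit-base-patch32"  # or e.g. "flax-community/clip-rsicd-v2"
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(model_name)

categories = ["airport", "beach", "bridge"]  # ... 30 categories in total
prompts = [f"An aerial photograph of {c}" for c in categories]

def topk_accuracy(image_paths, true_categories, k=3):
    """Fraction of images whose true category appears among the top-k captions."""
    hits = 0
    for path, truth in zip(image_paths, true_categories):
        inputs = processor(text=prompts, images=Image.open(path),
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image.squeeze(0)
        top_idx = logits.topk(k).indices.tolist()
        hits += int(truth in [categories[i] for i in top_idx])
    return hits / len(image_paths)
```
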

The `baseline` model represents the pre-trained `openai/clip-vit-base-patch32` CLIP model. Fine-tuning this model with captions and images from the RSICD dataset resulted in a significant performance boost, as shown below.

Our best model was trained with image and text augmentation, with batch size 1024 (128 on each of the 8 TPU cores), and the Adam optimizer with learning rate 5e-6. We trained our second best model with the same hyperparameters, except that we used the Adafactor optimizer with learning rate 1e-4. You can download either model from the model repos linked in the table below; the optimizer settings are also sketched in code after the table.

| Model-name | k=1 | k=3 | k=5 | k=10 |
| --- | --- | --- | --- | --- |
| baseline | 0.572 | 0.745 | 0.837 | 0.939 |
| bs128x8-lr1e-4-augs/ckpt-2 | 0.819 | 0.950 | 0.974 | 0.994 |
| bs128x8-lr1e-4-imgaugs/ckpt-2 | 0.812 | 0.942 | 0.970 | 0.991 |
| [bs128x8-lr1e-4-imgaugs-textaugs/ckpt-4](https://huggingface.co/flax-community/clip-rsicd) ² | 0.843 | 0.958 | 0.977 | 0.993 |
| bs128x8-lr5e-5-imgaugs-textaugs/ckpt-8 | 0.831 | 0.959 | 0.977 | 0.994 |
| bs128x8-lr5e-5-imgaugs/ckpt-4 | 0.746 | 0.906 | 0.956 | 0.989 |
| bs128x8-lr5e-5-imgaugs-textaugs-2/ckpt-4 | 0.811 | 0.945 | 0.972 | 0.993 |
| bs128x8-lr5e-5-imgaugs-textaugs-3/ckpt-5 | 0.823 | 0.946 | 0.971 | 0.992 |
| bs128x8-lr5e-5-wd02/ckpt-4 | 0.820 | 0.946 | 0.965 | 0.990 |
| [bs128x8-lr5e-6-adam/ckpt-1](https://huggingface.co/flax-community/clip-rsicd-v2) ¹ | **0.883** | **0.968** | **0.982** | **0.998** |

_¹ - our best model, ² - our second best model_
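
For reference, the optimizer settings above map roughly onto the following optax calls (a sketch assuming a standard Flax/optax setup; any learning-rate schedules, gradient clipping, or weight decay we used are omitted):

```python
# Hedged sketch of the optimizer settings from the table above using optax.
import optax

# Best model: Adam with learning rate 5e-6.
adam_tx = optax.adam(learning_rate=5e-6)

# Second best model: Adafactor with learning rate 1e-4.
adafactor_tx = optax.adafactor(learning_rate=1e-4)

# Effective batch size 1024 = 128 per device on 8 TPU cores.
per_device_batch_size = 128
num_devices = 8
global_batch_size = per_device_batch_size * num_devices
```
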

#### Demo

You can access the [CLIP-RSICD Demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo) here. It uses our fine-tuned CLIP model to provide the following functionality:

- Text to Image search
- Image to Image search
- Find text feature in image

The first two functionalities use the RSICD test set as their image corpus. The images are encoded using our best fine-tuned CLIP model and stored in an [NMSLib](https://github.com/nmslib/nmslib) index, which allows Approximate Nearest Neighbor based retrieval. For text-to-image and image-to-image search respectively, the query text or image is encoded with our model and matched against the image vectors in the corpus. For the third functionality, we divide the incoming image into patches and encode them, encode the queried text feature, match the text vector with each image patch vector, and return the probability of finding the feature in each patch.
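
For the curious, text-to-image search over precomputed image embeddings can be sketched as follows (illustrative only; the embedding file name is made up, and the index parameters are not necessarily those used in the demo):

```python
# Illustrative text-to-image search with NMSLib over precomputed CLIP image embeddings.
import nmslib
import numpy as np
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

# (num_images, dim) array of unit-normalized image embeddings, computed offline
# with the fine-tuned model; the file name here is hypothetical.
image_vectors = np.load("rsicd_image_embeddings.npy")

index = nmslib.init(method="hnsw", space="cosinesimil")
index.addDataPointBatch(image_vectors)
index.createIndex({"post": 2}, print_progress=False)

def search(query_text, k=10):
    # Encode the query with the same fine-tuned CLIP text encoder.
    inputs = processor(text=[query_text], return_tensors="pt", padding=True)
    with torch.no_grad():
        query_vec = model.get_text_features(**inputs)
    query_vec = torch.nn.functional.normalize(query_vec, dim=-1).numpy()
    ids, distances = index.knnQuery(query_vec[0], k=k)
    return list(zip(ids.tolist(), distances.tolist()))
```
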

### Future Work

We are grateful that we have been given an opportunity to further refine our model. Some ideas we have for future work are as follows:

1. Construct a sequence to sequence model using a CLIP encoder and a GPT-3 decoder and train it for image captioning.
2. Fine-tune the model on more image caption pairs from other datasets and investigate if we can improve its performance.
3. Investigate how fine-tuning affects the performance of the model on non-RSICD image caption pairs.
4. Investigate the capability of the fine-tuned model to classify outside the categories it has been fine-tuned on.
5. Evaluate the model using other criteria such as image classification.