TiC-CLIP: Continual Training of CLIP Models Apple Machine Learning Research


​This paper was accepted to the workshop on Distribution Shifts in NeurIPS 2023.
Large-scale model training has become increasingly expensive. In an ever-changing world where petabytes of new data are generated every day, we want to be able to continually train models. In this paper, we create a benchmark for continual large-scale training of CLIP models where the data distribution varies only by time. In contrast to the traditional continual learning literature, there is no hard separation of tasks; instead, we assume an infinite stream of data in a canonical format arrives that exhibits…
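The continual setup described above — warm-starting each round from the previous checkpoint as new, time-ordered data arrives, rather than retraining from scratch — can be sketched as a simple loop. This is an illustrative toy, not code from the paper: the scalar "model" and the function names are placeholders standing in for a full CLIP training pipeline.

```python
# Toy sketch of continual (warm-start) training over a time-ordered
# data stream. All names are illustrative, not from the paper.

def train_step(params, batch, lr=0.5):
    """Toy 'training': nudge a scalar parameter toward the batch mean."""
    batch_mean = sum(batch) / len(batch)
    return params + lr * (batch_mean - params)

def continual_train(stream, init_params=0.0):
    """Warm-start each time step from the previous checkpoint,
    instead of retraining from init_params every time."""
    params = init_params
    checkpoints = []
    for batch in stream:            # batches arrive in temporal order
        params = train_step(params, batch)
        checkpoints.append(params)  # keep one checkpoint per time step
    return checkpoints

# A stream whose distribution drifts over time (mean shifts upward),
# mimicking the time-varying distribution the benchmark targets.
stream = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
ckpts = continual_train(stream)
```

Each checkpoint tracks the drifting distribution without revisiting old data from scratch; evaluating every checkpoint on held-out data from each time period would then measure forgetting versus forward transfer.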
