Interpreting CLIP: Insights on the Robustness to ImageNet Distribution Shifts
Apple Machine Learning Research
What distinguishes robust models from non-robust ones? For ImageNet distribution shifts, it has been shown that such differences in robustness can be traced back predominantly to differences in training data; so far, however, it is not known what that translates to in terms of what…