How Far Are We from Intelligent Visual Deductive Reasoning? | Apple Machine Learning Research

This paper was accepted at the How Far Are We from AGI? workshop at ICLR 2024.

Vision-Language Models (VLMs) such as GPT-4V have recently demonstrated incredible strides on diverse vision-language tasks. We dig into vision-based deductive reasoning, a more sophisticated but less explored realm, and find previously unexposed blindspots in the current SOTA VLMs. Specifically, we leverage Raven's Progressive Matrices (RPMs) to assess VLMs' abilities to perform multi-hop relational and deductive reasoning relying solely on visual clues. We perform comprehensive evaluations of several popular VLMs…