
Make ChatGPT See Again: This AI Approach Explores Link-Context Learning to Enable Multimodal Learning

by Ekrem Çetinkaya

Language models have revolutionized the way we communicate with computers through their ability to generate coherent and contextually relevant text. Large Language Models (LLMs) have been at the forefront of this progress, trained on massive amounts of text data to learn the patterns and nuances of human language. ChatGPT, the pioneer of the LLM revolution, is extremely popular among people across disciplines.

LLMs have made many tasks easier to tackle thanks to their remarkable capabilities. We use them to summarize texts, draft emails, automate coding tasks, explain documents, and more. Tasks that were quite time-consuming just a year ago now take only a couple of minutes to complete.

However, with the increasing demand for multimodal understanding, where models need to process and generate content across different modalities like text, images, and even videos, the need for Multimodal Large Language Models (MLLMs) has emerged. MLLMs combine the power of language models with visual understanding, enabling machines to comprehend and generate content in a more comprehensive and contextually aware manner.

Once the ChatGPT craze settled down a bit, MLLMs took the AI world by storm, enabling machines to understand and generate content across different modalities like text and images. These models have shown remarkable performance in tasks like image recognition, visual grounding, and instruction understanding. However, training these models effectively remains a challenge. The biggest challenge is when an MLLM encounters entirely novel scenarios where both the image and the label are unseen.

Moreover, MLLMs tend to get “lost in the middle” when processing longer contexts: they rely heavily on information at the beginning and end of the input while neglecting the middle, which helps explain why accuracy plateaus as the number of shots increases. In short, MLLMs struggle with longer inputs.

Time to meet link-context learning (LCL), which tackles these challenges in MLLMs.

Demo dialogue of the proposed link-context learning. Source: https://arxiv.org/abs/2308.07891

In MLLMs, there are two key training strategies: Multimodal Prompt Tuning (M-PT) and Multimodal Instruction Tuning (M-IT). M-PT involves fine-tuning only a small portion of the model’s parameters while keeping the rest frozen, achieving results close to full fine-tuning while minimizing computational cost. M-IT, on the other hand, enhances the zero-shot capability of MLLMs by fine-tuning them on datasets that include instruction descriptions, improving the model’s ability to understand and respond to new tasks without prior training. Both strategies work well, but each involves trade-offs.
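To make the parameter-freezing idea behind prompt tuning concrete, here is a minimal PyTorch-style sketch: the pretrained backbone is frozen and only a small block of learnable prompt embeddings is trained. The class name, tensor shapes, and backbone interface are illustrative assumptions, not the actual implementation from the paper.

```python
import torch
import torch.nn as nn

class PromptTunedMLLM(nn.Module):
    """Illustrative sketch: freeze a pretrained multimodal backbone and
    train only a small set of soft prompt embeddings (M-PT style)."""

    def __init__(self, backbone: nn.Module, num_prompts: int = 16, dim: int = 768):
        super().__init__()
        self.backbone = backbone
        # Freeze every backbone parameter; only the prompts receive gradients.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Learnable soft prompts prepended to the input token embeddings.
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, dim) multimodal token embeddings.
        batch = token_embeds.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompts, token_embeds], dim=1))

# Only the prompt parameters are handed to the optimizer:
# model = PromptTunedMLLM(backbone=some_pretrained_backbone)
# optimizer = torch.optim.AdamW([model.prompts], lr=1e-3)
```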

Difference between in-context learning and link-context learning. Source: https://arxiv.org/abs/2308.07891

Instead, LCL explores several training strategies: the mix strategy, the 2-way strategy, 2-way-random, and 2-way-weight. The mix strategy stands out by significantly boosting zero-shot accuracy and achieving impressive results at 6-shot, although its performance dips slightly at 16-shot. In contrast, the 2-way strategy shows a gradual increase in accuracy from 2-shot to 16-shot, indicating closer alignment with the trained pattern.

Unlike traditional in-context learning, LCL goes a step further by empowering the model to establish a mapping between the source and the target, enhancing its overall performance. By providing demonstrations with causal links, LCL enables MLLMs to discern not only analogies but also the underlying causal associations between data points, allowing them to recognize unseen images and understand novel concepts more effectively.
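As a rough illustration of the idea, the sketch below assembles a link-context prompt: a handful of image-label demonstration pairs that all carry the same source-to-target link, followed by a query image the model has never seen. The helper names, message format, and the fabricated concept name are assumptions for illustration; the paper's actual pipeline may differ.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Demo:
    image_path: str   # path to a demonstration image
    label: str        # the concept linked to that image

def build_link_context_prompt(demos: List[Demo], query_image: str) -> list:
    """Assemble an interleaved image/text prompt in which every demonstration
    pair carries the same source->target link the model should pick up."""
    messages = []
    for d in demos:
        messages.append({"type": "image", "path": d.image_path})
        messages.append({"type": "text", "content": f"This is a {d.label}."})
    # The query image comes last; the model must map it to the linked concept.
    messages.append({"type": "image", "path": query_image})
    messages.append({"type": "text", "content": "What is this?"})
    return messages

# Example: two demonstrations of a fabricated concept, then an unseen query.
# prompt = build_link_context_prompt(
#     [Demo("isekai/creature_01.png", "glimmerfox"),
#      Demo("isekai/creature_02.png", "glimmerfox")],
#     "isekai/creature_query.png",
# )
```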

To evaluate this capability, the authors also introduce the ISEKAI dataset, a novel and comprehensive benchmark designed specifically for link-context learning in MLLMs. ISEKAI comprises entirely generated images and fabricated concepts, challenging MLLMs to assimilate new concepts from an ongoing conversation and retain that knowledge for accurate question answering, making it a crucial resource for evaluating and advancing MLLM capabilities in this setting.
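A simple way to picture how such a dataset could be used is an n-shot evaluation loop: for each novel concept, show the model a few linked demonstrations and then score its answer on a held-out query. The loop below is a hypothetical sketch built on the prompt helper above, not the official ISEKAI evaluation code; `model.generate` stands in for whatever inference call your MLLM exposes.

```python
def evaluate_n_shot(model, episodes, n_shots: int = 4) -> float:
    """Score a model on episodes of (demonstrations, query_image, true_label).

    `model.generate(prompt)` is assumed to return a text answer; swap in the
    inference method of the MLLM you are testing.
    """
    correct = 0
    for demos, query_image, true_label in episodes:
        prompt = build_link_context_prompt(demos[:n_shots], query_image)
        answer = model.generate(prompt)
        correct += int(true_label.lower() in answer.lower())
    return correct / max(len(episodes), 1)
```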

In conclusion, LCL provides valuable insights into the training strategies employed for multimodal language models. The mix strategy and the 2-way strategy offer different approaches to enhancing MLLM performance, each with its own strengths and limitations. The contextual analysis also sheds light on the challenges MLLMs face when processing longer inputs, underscoring the need for further research in this area.

Check out the Paper and Code. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

If you like our work, you will love our newsletter.
