
CMU Researchers Introduce MultiModal Graph Learning (MMGL): A New Artificial Intelligence Framework for Capturing Information from Multiple Multimodal Neighbors with Relational Structures Among Them

by Mohammad Arshad

Multimodal graph learning is a multidisciplinary field that combines concepts from machine learning, graph theory, and data fusion to tackle complex problems involving diverse data sources and their interconnections. It can generate descriptive captions for images by combining visual data with textual information, and it can improve the accuracy of retrieving relevant images or text documents for a given query. Multimodal graph learning is also used in autonomous vehicles to fuse data from sensors such as cameras, LiDAR, radar, and GPS, enhancing perception and supporting informed driving decisions.

Current models generate images from text (or text from images) using pre-trained image encoders and language models (LMs), taking modality pairs with a clear one-to-one mapping as input. In the context of multimodal graph learning, modalities refer to distinct types of data or information sources; each modality represents a specific category of data and can take different forms. The problem arises when these models are applied to many-to-many mappings among the modalities.

Researchers at Carnegie Mellon University propose a general and systematic framework of multimodal graph learning (MMGL) for generative tasks. Their method captures information from multiple multimodal neighbors with relational structures among them. They represent these complex relationships as graphs, which can encode any number of modalities and relationships that flexibly vary from one sample to another.
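To make the contrast with one-to-one pairing concrete, here is a minimal sketch of how a single sample with several multimodal neighbors might be represented as a graph. The schema and field names are illustrative assumptions, not the paper's actual data format:

```python
# A single training sample represented as a small graph.
# Node IDs, field names, and contents are hypothetical.
sample = {
    "target": {"id": 0, "modality": "text", "content": "Section to be generated..."},
    "neighbors": [
        {"id": 1, "modality": "image", "content": "figure_1.png"},
        {"id": 2, "modality": "text", "content": "Caption describing figure 1."},
        {"id": 3, "modality": "text", "content": "A related section of the document."},
    ],
    # Edges also connect neighbors to each other, so the structure is
    # many-to-many rather than a fixed one-to-one modality pairing.
    "edges": [(0, 1), (0, 3), (1, 2)],
}

# The number of neighbors can differ freely from sample to sample.
print(len(sample["neighbors"]))  # 3
```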

Their model extracts neighbor encodings and combines them with the graph structure, then optimizes the model with parameter-efficient finetuning. To handle many-to-many mappings, the team studied several neighbor-encoding models: self-attention with text and embeddings, self-attention with only embeddings, and cross-attention with embeddings. To encode the graph structure, they compared Laplacian eigenvector positional encoding (LPE) and graph neural network (GNN) encoding against sequential positional encodings.
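As a rough illustration of the LPE idea, the sketch below computes positional features from the eigenvectors of a graph's normalized Laplacian using NumPy. This is the generic construction from the standard LPE literature; the paper's exact formulation may differ:

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Positional encodings from the k eigenvectors of the symmetric
    normalized Laplacian with the smallest nonzero eigenvalues."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    # L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]              # drop the trivial first eigenvector

# 4-node path graph: each node receives a k-dimensional positional feature.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(laplacian_pe(A, k=2).shape)  # (4, 2)
```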

Finetuning often requires substantial labeled data specific to the target task, but if a relevant dataset already exists or can be obtained at reasonable cost, finetuning is cost-effective compared to training a model from scratch. The researchers use prefix tuning and LoRA for self-attention with text and embeddings (SA-TE), and Flamingo-style finetuning for cross-attention with embedding models (CA-E). They find that prefix tuning with SA-TE neighbor encoding uses nearly four times fewer parameters, which decreases the cost.
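To show why such parameter-efficient methods are cheap, here is a minimal LoRA sketch in PyTorch: the pretrained weight is frozen and only a low-rank update is trained. The class and hyperparameters are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = W x + (alpha / r) * B A x; only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12288 trainable vs. 590592 for full finetuning of the layer
```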

Their work provides an in-depth analysis that lays the groundwork for future MMGL research and exploration. The researchers expect the scope of multimodal graph learning to expand significantly, driven by advances in machine learning, richer data collection, and the growing need to handle complex, multimodal data in various applications.

Check out the Paper and GitHub. All credit for this research goes to the researchers on this project. Also, don’t forget to join our 31k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.



