
This Machine Learning Research from Tel Aviv University Reveals a Significant Link between Mamba and Self-Attention Layers

by Muhammad Athar Ganaie

Recent studies have highlighted the efficacy of Selective State Space Layers, also known as Mamba models, across various domains such as language and image processing, medical imaging, and data analysis. These models offer linear complexity during training and fast inference, significantly boosting throughput and enabling efficient handling of long-range dependencies. However, understanding their information-flow dynamics, learning mechanisms, and interpretability remains challenging, which limits their applicability in sensitive domains that require explainability.

Several methods have been developed to enhance explainability in deep neural networks, particularly in NLP, computer vision, and attention-based models. Examples include Attention Rollout, which analyzes the pairwise attention paths between layers; Transformer-Attribution, which combines LRP scores with attention gradients to produce class-specific relevance; and approaches that treat output token representations as states in a Markov chain, improving attributions by treating certain operators as constants.
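
To make the class-agnostic approach concrete, here is a minimal sketch of Attention Rollout as originally formulated by Abnar and Zuidema (2020): residual connections are modeled as identity matrices, and the re-normalized per-layer attention maps (assumed here to be pre-averaged over heads) are multiplied through the layers. Function names and shapes are illustrative, not taken from the paper's code.

```python
import numpy as np

def attention_rollout(attentions):
    """Attention Rollout: propagate attention across layers by matrix
    multiplication, modeling the residual connection as an identity.

    attentions: list of (seq_len, seq_len) arrays, one per layer,
                assumed pre-averaged over attention heads.
    """
    seq_len = attentions[0].shape[0]
    rollout = np.eye(seq_len)
    for layer_attn in attentions:
        layer_attn = layer_attn + np.eye(seq_len)                    # model residual connection
        layer_attn = layer_attn / layer_attn.sum(-1, keepdims=True)  # re-normalize rows
        rollout = layer_attn @ rollout                               # accumulate across layers
    return rollout  # rollout[t, s]: total attention flow from token s to token t
```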

Tel Aviv University researchers have proposed reformulating Mamba computation as a data-controlled linear operator to address these gaps in understanding. This reformulation reveals hidden attention matrices within the Mamba layer, enabling the application of interpretability techniques from the transformer realm to Mamba models. The method sheds light on the fundamental nature of Mamba models, provides interpretability tools based on the hidden attention matrices, and enables direct comparison between Mamba models and transformers.

The researchers reformulate selective state-space (S6) layers as self-attention, allowing the extraction of attention matrices. These matrices are then leveraged to develop class-agnostic and class-specific tools for explainable AI on Mamba models. The formulation involves converting S6 layers into data-controlled linear operators and simplifying the resulting hidden matrices for interpretation. Class-agnostic tools employ Attention Rollout, while class-specific tools adapt Transformer-Attribution, modifying it to use gradients of the S6 mixer and gating mechanisms to produce better relevance maps.
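
The reformulation itself can be sketched compactly. Unrolling the selective-scan recurrence h_t = Ā_t h_{t-1} + B̄_t x_t with output y_t = C_t h_t gives y_t = Σ_{s≤t} α_{t,s} x_s, where α_{t,s} = C_t (Π_{k=s+1..t} Ā_k) B̄_s acts as a causal, input-dependent attention weight. The toy implementation below computes this hidden matrix for a single channel with a diagonal state matrix; tensor shapes and names are illustrative assumptions rather than the authors' code.

```python
import torch

def s6_hidden_attention(A_bar, B_bar, C):
    """Hidden attention matrix of a selective SSM (S6) channel.

    A_bar, B_bar, C: (L, N) tensors holding the discretized, per-timestep
    SSM parameters (L = sequence length, N = state size; diagonal A).
    Returns alpha with alpha[t, s] = C_t * (prod_{k=s+1..t} A_bar_k) * B_bar_s,
    a lower-triangular (causal) data-controlled operator.
    """
    L, N = A_bar.shape
    alpha = torch.zeros(L, L)
    for t in range(L):
        prod = torch.ones(N)            # empty product for s == t
        for s in range(t, -1, -1):
            alpha[t, s] = (C[t] * prod * B_bar[s]).sum()
            prod = prod * A_bar[s]      # extend the running product of A_bar factors
    return alpha
```

Once extracted, such per-layer matrices can be treated like transformer attention maps, for instance by feeding them into a rollout procedure like the sketch above.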

Visualizations of the extracted attention matrices show similarities between Mamba and Transformer models in how they capture dependencies. On explainability metrics, Mamba models perform comparably to Transformers in perturbation tests, demonstrating similar sensitivity to input perturbations. In segmentation tests, Mamba achieves higher pixel accuracy and mean Intersection over Union, but Transformer-Attribution consistently outperforms Mamba-Attribution, suggesting that further adjustments to Mamba-based attribution methods may enhance performance.
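
For context, the positive-perturbation protocol behind such tests can be sketched as follows: tokens are masked in decreasing order of relevance, and the area under the resulting accuracy curve is measured, with a steeper drop indicating a more faithful relevance map. The masking scheme, step count, and helper names below are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np

def mask_tokens(x, idx, mask_id=0):
    """Replace the tokens at positions `idx` with a mask id (illustrative)."""
    x = x.copy()
    x[idx] = mask_id
    return x

def positive_perturbation_auc(predict, x, relevance, label, steps=10):
    """Mask the most relevant tokens first and track prediction accuracy.

    predict: any callable mapping a token array to class scores.
    Returns the area under the accuracy curve; lower means a more
    faithful map, since masking truly relevant tokens should destroy
    the prediction quickly.
    """
    order = np.argsort(relevance)[::-1]                # most relevant first
    n = len(order)
    accs = []
    for k in range(steps + 1):
        x_pert = mask_tokens(x, order[: n * k // steps])
        accs.append(float(np.argmax(predict(x_pert)) == label))
    return np.trapz(accs, dx=1.0 / steps)              # area under the accuracy curve
```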

In conclusion, the researchers from Tel Aviv University have established a direct link between Mamba and self-attention layers, showing that Mamba layers can be reformulated as an implicit form of causal self-attention. This insight enables the development of explainability techniques for Mamba models, enhancing understanding of their inner representations. These contributions provide valuable tools for evaluating the performance, fairness, and robustness of Mamba models, and they open avenues for weakly supervised downstream tasks.

Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.


Mamba is a cool, efficient, and effective DL architecture, but what do we know about Mamba? How does it capture interactions between tokens? Can it be the attention-killer? In our work, “The Hidden Attention of Mamba Models” we provide answers to these questions! [1/4] pic.twitter.com/U78aeTtaay

— Itamar Zimerman (@ItamarZimerman) March 2, 2024
