
This Machine Learning Research from Yale and Google AI Introduces SubGen: An Efficient Key-Value Cache Compression Algorithm via Stream Clustering

by Mohammad Asjad

Large language models (LLMs) are pivotal in NLP applications and rely on the transformer architecture and its attention mechanism, so efficient and accurate token generation is crucial. During autoregressive decoding, the attention module stores the key and value vectors of all previous tokens in a key-value (KV) cache, and this cache grows linearly with context length. The resulting memory cost makes long-context generation challenging and hinders practical deployment.
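To make the memory issue concrete, here is a minimal sketch (not from the paper) of autoregressive decoding with a KV cache: each generated token appends one key and one value vector, so storage grows linearly with the number of tokens. The dimension and random vectors are illustrative stand-ins for real projections.

```python
import numpy as np

d = 64  # head dimension (illustrative)

def attend(q, K, V):
    """Single-query softmax attention over all cached keys and values."""
    scores = K @ q / np.sqrt(d)        # (n,)
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return w @ V                       # (d,)

K_cache, V_cache = [], []              # the KV cache: one entry per past token
for step in range(1000):
    k, v, q = (np.random.randn(d) for _ in range(3))  # stand-ins for projections
    K_cache.append(k)
    V_cache.append(v)
    out = attend(q, np.stack(K_cache), np.stack(V_cache))
# len(K_cache) == 1000: storage scales linearly with generated context.
```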

Recent research has focused on efficient token generation over long-context inputs. Approaches include greedy eviction, retaining tokens with high initial attention scores, adaptive compression based on attention-head structure, and simpler eviction mechanisms; one such eviction policy is sketched below. While some of these methods preserve decoding quality with only minor degradation and reduce generation latency by exploiting contextual sparsity, none achieves fully sublinear memory for the KV cache.
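The following is a hedged sketch of one eviction strategy mentioned above, in the spirit of H2O-style greedy eviction rather than the exact published algorithm: the cache holds a fixed budget of tokens and evicts the one with the lowest accumulated attention mass. Memory is capped, but still linear in the budget.

```python
import numpy as np

class GreedyEvictionCache:
    """Fixed-budget KV cache that evicts the token with the lowest
    accumulated attention mass (an H2O-like heuristic, simplified)."""

    def __init__(self, budget):
        self.budget = budget
        self.keys, self.values, self.scores = [], [], []

    def insert(self, k, v):
        self.keys.append(k)
        self.values.append(v)
        self.scores.append(0.0)
        if len(self.keys) > self.budget:
            worst = int(np.argmin(self.scores))  # least-attended token
            for lst in (self.keys, self.values, self.scores):
                lst.pop(worst)

    def attend(self, q):
        K = np.stack(self.keys)
        s = K @ q / np.sqrt(len(q))
        w = np.exp(s - s.max())
        w /= w.sum()
        for i, wi in enumerate(w):               # accumulate attention mass
            self.scores[i] += float(wi)
        return w @ np.stack(self.values)
```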

Researchers from Yale University and Google introduced SubGen, a novel approach to reducing the computational and memory bottlenecks of token generation by compressing the KV cache. Leveraging the clustering tendency of key embeddings, and combining online clustering with ℓ2 sampling, SubGen achieves both sublinear memory usage and sublinear runtime, backed by a tight error bound. Empirical tests on long-context question-answering tasks show superior performance and efficiency compared to existing methods.

SubGen aims to approximate the attention output in token generation within sublinear space. It maintains a streaming attention data structure that is updated efficiently as each new token arrives. Leveraging the clustering tendency of key embeddings, SubGen builds a data structure that approximates the softmax partition function in sublinear time. Through rigorous analysis, the authors prove that SubGen produces accurate attention outputs with significantly reduced memory and runtime complexity.
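The sketch below is illustrative only and fills in details the post does not give: keys are clustered online with a greedy distance-threshold rule, and the softmax partition function (the sum of exp(q·k) over all cached keys) is estimated per cluster from its centroid and member count, so storage scales with the number of clusters rather than the number of tokens. The paper's actual data structure, its ℓ2 sampling scheme, and its error analysis are more involved; the `radius` threshold here is a hypothetical parameter.

```python
import numpy as np

class StreamingKeyClusters:
    """Online clustering of key embeddings; space is O(#clusters),
    not O(#tokens). A simplified stand-in for SubGen's structure."""

    def __init__(self, radius):
        self.radius = radius                      # hypothetical merge threshold
        self.centroids, self.counts = [], []

    def insert(self, k):
        if self.centroids:
            dists = [np.linalg.norm(c - k) for c in self.centroids]
            j = int(np.argmin(dists))
            if dists[j] <= self.radius:           # absorb into nearest cluster
                n = self.counts[j]
                self.centroids[j] = (n * self.centroids[j] + k) / (n + 1)
                self.counts[j] += 1
                return
        self.centroids.append(k.copy())           # open a new cluster
        self.counts.append(1)

    def partition_estimate(self, q):
        # Approximate sum_i exp(q . k_i) by treating each cluster's
        # members as if they all sat at the cluster centroid.
        return sum(n * np.exp(q @ c)
                   for c, n in zip(self.centroids, self.counts))
```

If keys are well clustered, each exp(q·k) is close to exp(q·centroid), which is the intuition behind approximating the partition function from centroids alone.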

Evaluation on question-answering tasks demonstrates SubGen's advantage in memory efficiency and accuracy. By exploiting the clustering tendency of key embeddings, SubGen achieves higher accuracy on long-context line-retrieval tasks than the H2O and Attention Sink methods. Even with half the cached KV embeddings, SubGen consistently outperforms these baselines, highlighting how much language-model performance depends on the information carried by the cached embeddings.

To sum up, SubGen is a stream clustering-based KV cache compression algorithm that leverages the inherent clusterability of cached keys. By integrating recent token retention, SubGen achieves superior performance in zero-shot line retrieval tasks compared to other algorithms with identical memory budgets. The analysis demonstrates SubGen's ability to ensure a spectral error bound with sublinear time and memory complexity, underscoring its efficiency and effectiveness.

Check out the Paper. All credit for this research goes to the researchers of this project.

