
Stability AI Releases Stable Code 3B: A 3 Billion Parameter Large Language Model (LLM) that Allows Accurate and Responsive Code Completion

by Pragati Jhunjhunwala

Stability AI has recently released a new state-of-the-art model, Stable Code 3B, designed for code completion across a range of programming languages, with several additional capabilities. The model is a follow-up to Stable Code Alpha 3B. It is trained on 1.3 trillion tokens spanning both natural language data and code in 18 programming languages. Despite being 60% smaller than existing models such as CodeLLaMA 7B, Stable Code 3B maintains comparable high-level performance.
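For developers who want to try the model for plain code completion, the sketch below shows one plausible way to load it with the Hugging Face transformers library. The model ID stabilityai/stable-code-3b matches the public Hugging Face release, but the sampling settings and example prompt are illustrative assumptions; consult the official model card for the recommended usage.

```python
# Minimal code-completion sketch with Hugging Face transformers.
# Assumptions: model ID, prompt, and generation settings are
# illustrative; see the official model card for recommended usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # the model was trained in bfloat16
    device_map="auto",            # place weights on available GPU(s)
    trust_remote_code=True,       # may be needed on older transformers versions
)

# Ask the model to complete a function definition.
prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```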

Stable Code 3B is an auto-regressive language model built on a transformer decoder architecture. It supports Fill-in-the-Middle (FIM) completion and is trained on sequences up to 16,384 tokens long, enabling long-context use. Two of its key architectural features are rotary position embeddings and a tokenizer extended with special tokens for FIM. Training was performed on a variety of large-scale open-source datasets, on a robust infrastructure of 256 NVIDIA A100 40GB GPUs, using the AdamW optimizer in bfloat16 precision. The model was trained under 2D parallelism combined with ZeRO-1, incorporating optimizations such as flash-attention and the rotary embedding kernels from FlashAttention-2. Benchmarks against six existing models across multiple programming languages showcase the model's efficiency, with around 30% accuracy in C++, Rust, Python, Java, PHP, and JavaScript. Models that scored slightly higher did so either in only a single language or at roughly 2.5 times the size of Stable Code 3B.
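The FIM capability described above is typically exercised by wrapping the prompt in sentinel tokens so the model generates the missing middle span. The sketch below reuses the model and tokenizer loaded in the previous example; the <fim_prefix>/<fim_suffix>/<fim_middle> token names follow the common FIM convention used by several code models and are an assumption here, so verify the exact special tokens in the model's tokenizer before relying on them.

```python
# Hedged fill-in-the-middle (FIM) sketch, reusing `model` and
# `tokenizer` from the completion example above. The sentinel token
# names are assumed from the common FIM convention; confirm them
# against tokenizer.special_tokens_map before use.
prefix = "def average(numbers):\n    total = "
suffix = "\n    return total / len(numbers)\n"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(fim_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# The generated middle span should fill the gap, e.g. "sum(numbers)".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```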

In conclusion, Stable Code 3B is a powerful tool for developers seeking a foundational model for code completion and related natural language processing applications. However, it is crucial to note that the model comes with limitations and potential biases. As a base model, it requires careful evaluation and fine-tuning for safe and reliable performance in specific downstream applications. Developers should be aware of possible undesirable behaviors and should thoroughly assess and correct them before deployment to ensure the model aligns with ethical and safety standards.

The post Stability AI Releases Stable Code 3B: A 3 Billion Parameter Large Language Model (LLM) that Allows Accurate and Responsive Code Completion appeared first on MarkTechPost.
