Stability AI has recently released a new state-of-the-art model, Stable Code 3B, designed for code completion across a range of programming languages, with several additional capabilities. The model is a follow-up to Stable Code Alpha 3B. It is trained on 1.3 trillion tokens of both natural language data and code data spanning 18 programming languages. Compared to existing models such as CodeLLaMA 7B, stable-code-3b is 60% smaller,…
Read the full article here