Sber has unveiled a new version of its Kandinsky neural network, Kandinsky 3.1, which generates images from text descriptions in Russian and English, the company's press service reports.
Kandinsky 3.1 was trained on an expanded dataset, which has significantly improved the quality of its image generation. Access to the new model was initially limited to select categories of users: artists, designers, and bloggers. However, Sber representatives say that Kandinsky 3.1 will soon become available to the general public without restrictions.
First Deputy Chairman of the Board of Sberbank Alexander Vedyakhin noted: “Exactly a year ago we launched Kandinsky 2.1, and since then our neural network has been continuously improved. Kandinsky 3.1 is faster, more convenient, and more realistic. It is a free, feature-rich tool that allows anyone to become an artist and create unique images. Soon everyone will be able to appreciate the benefits of the new model.”
Key improvements in Kandinsky 3.1 include a tenfold speedup in image generation, support for high-resolution output up to 4K, and an integrated language model that optimizes text prompts. Users will also be able to create image variations, mix images with text, generate sticker packs, and make local edits to an image without reworking the entire scene.
In the near future, Sber plans to release Kandinsky Video 1.1, a model for generating video from text descriptions, with improved quality and double the video resolution of the previous Kandinsky Video 1.0.