Mistral AI: Unchallenged in quantized forms
Llama 2 was the talk of the town for quite some time. When OpenAI launched GPT, there were worries that it was planning to monopolise its capabilities, and its statements at the Senate hearing added fuel to that fear. Meta's decision to give away Llama 2 offered a breather, and the model performed fairly well across all parameters.
Now comes the new sensation, Mistral AI, which shifts the whole conversation from API-based models to models running directly on personal PCs. Of course, if we want live knowledge of what is going on around the world, we still have to pair the model with browsing capabilities.
Currently, Mistral AI in its quantized versions is about 4GB in size and can easily run on 16GB of RAM. This is slightly larger than the Llama 2 13B quantized version.
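As a rough sanity check on that 4GB figure, a 4-bit quantization of a ~7B-parameter model does land near 4GB. The sketch below is a back-of-envelope estimate only; the parameter count and bits-per-weight values are illustrative assumptions, not exact sizes of any particular quantized file:

```python
# Back-of-envelope estimate of a quantized model's on-disk size.
# Assumptions (illustrative): ~7.3B parameters for Mistral 7B, and
# ~4.5 bits per weight on average for a typical 4-bit quantization
# scheme (the extra half bit covers per-block scale factors).

def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size in gigabytes: params x bits, converted to bytes."""
    return n_params * bits_per_weight / 8 / 1e9

print(f"~{quantized_size_gb(7.3e9, 4.5):.1f} GB")  # roughly 4 GB
```

The same arithmetic explains why such a model fits comfortably in 16GB of RAM: the weights take ~4GB, leaving ample headroom for the KV cache and the operating system.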
Vectorised databases built for specific uses have become troublesome, as each is tied to the specific parameters of the embedding model that produced it. Interoperability between such vectorised datasets is something to watch for over the coming period.
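A minimal sketch of why these datasets are not interchangeable: similarity search only works when the query and the stored vectors come from the same embedding model, with the same dimensionality. The random vectors below are stand-ins for real embeddings, purely for illustration:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
store = rng.normal(size=(5, 384))   # dataset embedded with a 384-dim model
query_same = rng.normal(size=384)   # query from the same model: comparable
query_other = rng.normal(size=768)  # query from a 768-dim model: incompatible

# Works: all vectors share one embedding space.
best = max(cosine_similarity(query_same, v) for v in store)
print(f"best match score: {best:.3f}")

# Fails: vectors from a different model cannot even be compared.
try:
    cosine_similarity(query_other, store[0])
except ValueError:
    print("dimension mismatch: datasets are not interoperable")
```

Even when dimensions happen to match, scores across two different embedding models are meaningless, which is why cross-dataset interoperability remains an open problem.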
The euphoria of GenAI on personal computers is yet to be fully experienced; everyone is racing to make GenAI work entirely on consumer devices.