Microsoft has unveiled Phi-3, an artificial intelligence model that can run on devices, which the company is offering in several versions. These include Phi-3 Mini with about 3.8 billion parameters, Phi-3 Small with about 7 billion parameters, and Phi-3 Medium with about 14 billion parameters. Parameters are the variables within AI models and are commonly used as a measure of a model's size and capabilities.

The release succeeds Phi-2, which the company rolled out at the end of last year and which has since been outperformed by newer competing models such as the recently released Llama 3.

The smallest model, with 3.8 billion parameters, improves on the previous Phi-2 model while using far fewer resources than larger language models. According to Microsoft's own benchmarks for measuring AI model performance, Phi-3 Mini beats Meta's 8-billion-parameter Llama 3 model and OpenAI's GPT-3.5 model.

Owing to their small size, the company has aimed the Phi-3 models at low-power devices, in contrast to larger models that usually run on servers. According to Microsoft Vice President Eric Boyd, the new model can process advanced natural language directly on a smartphone, making it well suited to new applications that require artificial intelligence anywhere.

While the new Microsoft models may outperform competing models in their category, they cannot match the performance of large language models trained on the vast data available online; on the other hand, they deliver better on-device performance relative to the smaller volume of data they work with.

Source: Qatar News Agency