Tag: model specifications

  • Hacker News: Every Flop Counts: Scaling a 300B LLM Without Premium GPUs

    Source URL: https://arxiv.org/abs/2503.05139
    Source: Hacker News
    Title: Every Flop Counts: Scaling a 300B LLM Without Premium GPUs
    Feedly Summary: Comments
    AI Summary and Description: Yes
    Summary: This technical report presents advancements in training large-scale Mixture-of-Experts (MoE) language models, namely Ling-Lite and Ling-Plus, highlighting their efficiency and comparable performance to industry benchmarks while significantly reducing training…

  • Simon Willison’s Weblog: Notes on Google’s Gemma 3

    Source URL: https://simonwillison.net/2025/Mar/12/notes-on-googles-gemma-3/
    Source: Simon Willison’s Weblog
    Title: Notes on Google’s Gemma 3
    Feedly Summary: Google’s Gemma team released an impressive new model today (under their not-open-source Gemma license). Gemma 3 comes in four sizes – 1B, 4B, 12B, and 27B – and while 1B is text-only, the larger three models are all multi-modal for…
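The four Gemma 3 sizes invite a quick back-of-the-envelope check on what they cost to store and load. A minimal sketch, assuming the nominal parameter counts are exact and ignoring runtime overhead such as activations and KV cache (real weight files also carry some metadata):

```python
# Rough weight-storage estimate for the four Gemma 3 sizes at two
# common precisions. Uses decimal gigabytes (1 GB = 1e9 bytes), as
# model cards typically do.

GB = 1e9

def weight_gb(params: float, bytes_per_param: float) -> float:
    """Approximate size of the weights alone, in decimal GB."""
    return params * bytes_per_param / GB

for params in (1e9, 4e9, 12e9, 27e9):
    bf16 = weight_gb(params, 2.0)  # bfloat16: 2 bytes per parameter
    q4 = weight_gb(params, 0.5)    # 4-bit quantization: ~0.5 bytes per parameter
    print(f"{params / 1e9:>4.0f}B  bf16 ≈ {bf16:6.1f} GB   4-bit ≈ {q4:5.1f} GB")
```

By this estimate the 27B model alone needs roughly 54 GB for bf16 weights, which is why quantized builds are the usual choice for local use.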

  • Hacker News: Mistral Saba

    Source URL: https://mistral.ai/en/news/mistral-saba
    Source: Hacker News
    Title: Mistral Saba
    Feedly Summary: Comments
    AI Summary and Description: Yes
    Summary: The text discusses the launch of Mistral Saba, a specialized regional language model designed to enhance AI fluency across culturally and linguistically diverse regions, specifically in the Middle East and South Asia. It emphasizes the model’s capabilities…

  • Slashdot: DeepSeek Piles Pressure on AI Rivals With New Image Model Release

    Source URL: https://slashdot.org/story/25/01/27/190204/deepseek-piles-pressure-on-ai-rivals-with-new-image-model-release?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: DeepSeek Piles Pressure on AI Rivals With New Image Model Release
    Feedly Summary: AI Summary and Description: Yes
    Summary: DeepSeek, a Chinese AI startup, has introduced Janus Pro, a series of open-source multimodal models that reportedly outshine OpenAI’s DALL-E 3 and Stable Diffusion. These models are aimed at enhancing…

  • Slashdot: Cutting-Edge Chinese ‘Reasoning’ Model Rivals OpenAI O1

    Source URL: https://slashdot.org/story/25/01/21/2138247/cutting-edge-chinese-reasoning-model-rivals-openai-o1?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: Cutting-Edge Chinese ‘Reasoning’ Model Rivals OpenAI O1
    Feedly Summary: AI Summary and Description: Yes
    Summary: The release of DeepSeek’s R1 model family marks a significant advancement in the availability of high-performing AI models, particularly in the realms of math and coding tasks. With an open MIT license, these models…

  • Simon Willison’s Weblog: DeepSeek_V3.pdf

    Source URL: https://simonwillison.net/2024/Dec/26/deepseek-v3/#atom-everything
    Source: Simon Willison’s Weblog
    Title: DeepSeek_V3.pdf
    Feedly Summary: DeepSeek_V3.pdf The DeepSeek v3 paper (and model card) are out, after yesterday’s mysterious release of the undocumented model weights. Plenty of interesting details in here. The model pre-trained on 14.8 trillion “high-quality and diverse tokens” (not otherwise documented). Following this, we conduct post-training, including…

  • Simon Willison’s Weblog: deepseek-ai/DeepSeek-V3-Base

    Source URL: https://simonwillison.net/2024/Dec/25/deepseek-v3/#atom-everything
    Source: Simon Willison’s Weblog
    Title: deepseek-ai/DeepSeek-V3-Base
    Feedly Summary: deepseek-ai/DeepSeek-V3-Base No model card or announcement yet, but this new model release from Chinese AI lab DeepSeek (an arm of Chinese hedge fund High-Flyer) looks very significant. It’s a huge model – 685B parameters, 687.9 GB on disk (TIL how to size a git-lfs…
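Those two numbers are themselves a useful sanity check: 687.9 GB for 685B parameters works out to almost exactly one byte per parameter, which would be consistent with weights stored in an 8-bit format plus a little headroom for metadata and scales. A quick sketch of that arithmetic, assuming the disk figure follows the decimal-GB convention:

```python
# Sanity-check the reported DeepSeek-V3-Base figures: 685e9 parameters
# occupying 687.9 GB on disk implies roughly 1 byte per parameter,
# i.e. an 8-bit weight format.

PARAMS = 685e9   # reported parameter count
DISK_GB = 687.9  # reported on-disk size, decimal GB (1 GB = 1e9 bytes)

bytes_per_param = DISK_GB * 1e9 / PARAMS
print(f"{bytes_per_param:.3f} bytes per parameter")
```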

  • Slashdot: Microsoft Announces Phi-4 AI Model Optimized for Accuracy and Complex Reasoning

    Source URL: https://slashdot.org/story/24/12/16/0313207/microsoft-announces-phi-4-ai-model-optimized-for-accuracy-and-complex-reasoning?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: Microsoft Announces Phi-4 AI Model Optimized for Accuracy and Complex Reasoning
    Feedly Summary: AI Summary and Description: Yes
    **Summary:** Microsoft has introduced Phi-4, an advanced AI model optimized for complex reasoning tasks, particularly in STEM areas. With its robust architecture and safety features, Phi-4 underscores the importance of ethical…

  • Simon Willison’s Weblog: Phi-4 Technical Report

    Source URL: https://simonwillison.net/2024/Dec/15/phi-4-technical-report/
    Source: Simon Willison’s Weblog
    Title: Phi-4 Technical Report
    Feedly Summary: Phi-4 Technical Report Phi-4 is the latest LLM from Microsoft Research. It has 14B parameters and claims to be a big leap forward in the overall Phi series. From Introducing Phi-4: Microsoft’s Newest Small Language Model Specializing in Complex Reasoning: Phi-4 outperforms…

  • Simon Willison’s Weblog: Pixtral Large

    Source URL: https://simonwillison.net/2024/Nov/18/pixtral-large/
    Source: Simon Willison’s Weblog
    Title: Pixtral Large
    Feedly Summary: Pixtral Large New today from Mistral: Today we announce Pixtral Large, a 124B open-weights multimodal model built on top of Mistral Large 2. Pixtral Large is the second model in our multimodal family and demonstrates frontier-level image understanding. The weights are out on…