Tag: context length
-
Simon Willison’s Weblog: Codestral 25.01
Source URL: https://simonwillison.net/2025/Jan/13/codestral-2501/
Feedly Summary: Brand new code-focused model from Mistral. Unlike the first Codestral, this one isn't (yet) available as open weights. The model has a 256k token context, a new record for Mistral. The new model scored an impressive joint first place with…
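Since the model is (so far) API-only, a minimal sketch of calling it through Mistral's hosted chat completions endpoint follows; the model ID "codestral-latest" is an assumption based on Mistral's usual naming rather than a detail from the post, so check Mistral's docs for the exact 25.01 identifier.

    import os
    import requests

    # Sketch: query Mistral's hosted API (requires MISTRAL_API_KEY in the environment).
    resp = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": "codestral-latest",  # assumed ID; the 25.01 release may use its own name
            "messages": [
                {"role": "user", "content": "Write a Python function that reverses a string."}
            ],
        },
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])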
-
Hacker News: Phi4 Available on Ollama
Source URL: https://ollama.com/library/phi4
Feedly Summary: The text describes Phi 4, a state-of-the-art language model focused on generative AI capabilities. It highlights the model's design, enhancements for safety and accuracy, and its primary and out-of-scope use cases, along with regulatory considerations…
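Since the link points at the Ollama library entry, here is a minimal sketch of exercising the model through Ollama's local HTTP API; it assumes an Ollama server running on its default port and that the model has already been fetched with `ollama pull phi4`.

    import requests

    # Sketch: generate a completion from a locally served phi4 via Ollama's REST API.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "phi4",
            "prompt": "Summarize the idea of out-of-scope use cases for a language model.",
            "stream": False,  # return a single JSON object instead of a token stream
        },
    )
    resp.raise_for_status()
    print(resp.json()["response"])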
-
Simon Willison’s Weblog: Finally, a Replacement for BERT: Introducing ModernBERT
Source URL: https://simonwillison.net/2024/Dec/24/modernbert/
Feedly Summary: BERT was an early language model released by Google in October 2018. Unlike modern LLMs, it wasn't designed for generating text. BERT was trained for masked token prediction and was generally…
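Because BERT-family models fill in masked tokens rather than generate free text, a fill-mask call shows the usage pattern; the Hugging Face model ID answerdotai/ModernBERT-base is an assumption from the ModernBERT release announcement, and a recent transformers version may be needed to load it.

    from transformers import pipeline

    # Sketch: masked token prediction with ModernBERT (model ID assumed, see lead-in).
    fill = pipeline("fill-mask", model="answerdotai/ModernBERT-base")
    for pred in fill("Paris is the [MASK] of France."):
        print(pred["token_str"], round(pred["score"], 3))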