Tag: context length

  • Simon Willison’s Weblog: Codestral 25.01

    Source URL: https://simonwillison.net/2025/Jan/13/codestral-2501/
    Feedly Summary: Brand new code-focused model from Mistral. Unlike the first Codestral this one isn’t (yet) available as open weights. The model has a 256k token context – a new record for Mistral. The new model scored an impressive joint first place with…
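
    A 256k-token window changes what fits in a single call. As a rough feasibility check, here is a minimal sketch that estimates whether a whole codebase fits in context, using a chars/4 heuristic rather than Mistral’s actual tokenizer (the "src" path is hypothetical):

    ```python
    from pathlib import Path

    CONTEXT_TOKENS = 256_000   # Codestral 25.01's advertised window
    CHARS_PER_TOKEN = 4        # rough heuristic, not Mistral's tokenizer

    def estimated_tokens(root: str, suffix: str = ".py") -> int:
        """Estimate total tokens across all matching files under root."""
        chars = sum(len(p.read_text(errors="ignore"))
                    for p in Path(root).rglob(f"*{suffix}"))
        return chars // CHARS_PER_TOKEN

    tokens = estimated_tokens("src")   # hypothetical project directory
    print(f"~{tokens:,} tokens; fits in context: {tokens < CONTEXT_TOKENS}")
    ```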

  • Hacker News: Phi4 Available on Ollama

    Source URL: https://ollama.com/library/phi4
    Feedly Summary: The text describes Phi 4, a state-of-the-art language model focusing on generative AI capabilities. It highlights the model’s design, enhancements for safety and accuracy, and its primary and out-of-scope use cases, along with regulatory considerations…
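
    Once pulled, the model is served over Ollama’s local REST API. A minimal sketch of a non-streaming generation call against the /api/generate endpoint, assuming the default port 11434 and that phi4 has already been pulled:

    ```python
    import json
    import urllib.request

    # Ollama's local generate endpoint; "stream": False returns one JSON object.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "phi4",
            "prompt": "Explain context length in one sentence.",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])
    ```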

  • Cloud Blog: Supervised Fine Tuning for Gemini: A best practices guide

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/master-gemini-sft/
    Feedly Summary: Foundation models such as Gemini have revolutionized how we work, but sometimes they need guidance to excel at specific business tasks. Perhaps their answers are too long, or their summaries miss the mark. That’s where supervised fine-tuning…
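
    Supervised fine-tuning starts from a dataset of prompt/response pairs. A minimal sketch of writing tuning examples as JSONL in the Gemini-style role/parts shape (the example content is invented; check the guide for the exact Vertex AI tuning schema):

    ```python
    import json

    # Hypothetical examples teaching a house style: very short summaries.
    examples = [
        {"contents": [
            {"role": "user",
             "parts": [{"text": "Summarize: Q3 revenue rose 12% year over year..."}]},
            {"role": "model",
             "parts": [{"text": "Revenue up 12% in Q3."}]},
        ]},
    ]

    with open("tuning_data.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
    ```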

  • Hacker News: RWKV Language Model

    Source URL: https://www.rwkv.com/
    Feedly Summary: RWKV (an RNN with LLM capabilities) represents a significant innovation in language model design by combining the advantages of recurrent neural networks (RNNs) and transformers. Its unique features, including linear time processing and lack of attention…
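
    The practical payoff of the RNN side is cost: a recurrent model carries a fixed-size state from token to token, so processing n tokens is O(n), where full pairwise self-attention is O(n^2). A conceptual sketch of that fixed-state recurrence (a toy update, not the actual RWKV equations):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 16                                  # state size is fixed, independent of n
    W = rng.standard_normal((d, d)) * 0.1   # toy transition weights
    E = rng.standard_normal((256, d))       # toy embeddings for a 256-token vocab

    def forward(token_ids):
        state = np.zeros(d)                 # O(d) memory however long the input is
        for t in token_ids:                 # one fixed-cost update per token: O(n) total
            state = np.tanh(state @ W + E[t])
        return state

    print(forward([3, 141, 59, 26]).shape)  # (16,)
    ```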

  • Simon Willison’s Weblog: Things we learned about LLMs in 2024

    Source URL: https://simonwillison.net/2024/Dec/31/llms-in-2024/#atom-everything
    Feedly Summary: A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying…

  • Simon Willison’s Weblog: Finally, a Replacement for BERT: Introducing ModernBERT

    Source URL: https://simonwillison.net/2024/Dec/24/modernbert/
    Feedly Summary: BERT was an early language model released by Google in October 2018. Unlike modern LLMs it wasn’t designed for generating text. BERT was trained for masked token prediction and was generally…

  • Hacker News: A Replacement for Bert

    Source URL: https://huggingface.co/blog/modernbert
    Feedly Summary: The text discusses the introduction of ModernBERT, an advanced encoder-only model that surpasses older models like BERT in both performance and efficiency. Boasting an increased context length of 8192 tokens, faster processing…
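
    Masked token prediction, which both of these items describe, is easy to see in action. A minimal sketch using the Hugging Face fill-mask pipeline (assuming the answerdotai/ModernBERT-base checkpoint; any BERT-style masked LM works the same way):

    ```python
    from transformers import pipeline

    # Encoder-only models predict the token behind [MASK] from both directions,
    # rather than generating text left to right.
    fill = pipeline("fill-mask", model="answerdotai/ModernBERT-base")
    for pred in fill("The capital of France is [MASK].")[:3]:
        print(pred["token_str"], f'p={pred["score"]:.3f}')
    ```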

  • Simon Willison’s Weblog: Meta AI release Llama 3.3

    Source URL: https://simonwillison.net/2024/Dec/6/llama-33/#atom-everything
    Feedly Summary: This new Llama-3.3-70B-Instruct model from Meta AI makes some bold claims: “This model delivers similar performance to Llama 3.1 405B with cost effective inference that’s feasible to run locally on common developer workstations.” I have…

  • Simon Willison’s Weblog: First impressions of the new Amazon Nova LLMs (via a new llm-bedrock plugin)

    Source URL: https://simonwillison.net/2024/Dec/4/amazon-nova/
    Feedly Summary: Amazon released three new Large Language Models yesterday at their AWS re:Invent conference. The new model family is called Amazon Nova and comes in three sizes: Micro, Lite and Pro. I built…

  • Hacker News: 32k context length text embedding models

    Source URL: https://blog.voyageai.com/2024/09/18/voyage-3/
    Feedly Summary: The text highlights the launch of the Voyage 3 series embedding models, which provide significant advancements in retrieval quality, latency, and cost-effectiveness compared to existing models like OpenAI’s. Specifically, the Voyage 3 models…
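
    Long-context embedding models let you embed whole documents instead of small chunks; retrieval then reduces to nearest-neighbor search over the vectors. A minimal sketch of the ranking step with cosine similarity (the random vectors stand in for real embeddings from whichever API you use, e.g. a voyage-3 call):

    ```python
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Placeholder vectors standing in for real document/query embeddings.
    rng = np.random.default_rng(0)
    doc_vecs = {"spec.md": rng.standard_normal(1024),
                "notes.txt": rng.standard_normal(1024)}
    query_vec = rng.standard_normal(1024)

    # Rank documents by similarity to the query.
    for name in sorted(doc_vecs, key=lambda n: cosine(query_vec, doc_vecs[n]),
                       reverse=True):
        print(name, round(cosine(query_vec, doc_vecs[name]), 3))
    ```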