Tag: competitive performance

  • Simon Willison’s Weblog: ICPC medals for OpenAI and Gemini

    Source URL: https://simonwillison.net/2025/Sep/17/icpc/#atom-everything
    Source: Simon Willison’s Weblog
    Title: ICPC medals for OpenAI and Gemini
    Feedly Summary: In July it was the International Math Olympiad (OpenAI, Gemini); today it’s the International Collegiate Programming Contest (ICPC). Once again, both OpenAI and Gemini competed with models that achieved Gold-medal performance. OpenAI’s Mostafa Rohaninejad: We received the problems…

  • Slashdot: UAE Lab Releases Open-Source Model to Rival China’s DeepSeek

    Source URL: https://slashdot.org/story/25/09/13/1734225/uae-lab-releases-open-source-model-to-rival-chinas-deepseek
    Source: Slashdot
    Title: UAE Lab Releases Open-Source Model to Rival China’s DeepSeek
    Feedly Summary: The United Arab Emirates is making significant advancements in the AI arena, exemplified by the release of the K2 Think model from the Institute of Foundation Models. This open-source model, which reportedly…

  • Simon Willison’s Weblog: OpenAI’s new open weight (Apache 2) models are really good

    Source URL: https://simonwillison.net/2025/Aug/5/gpt-oss/
    Source: Simon Willison’s Weblog
    Title: OpenAI’s new open weight (Apache 2) models are really good
    Feedly Summary: The long promised OpenAI open weight models are here, and they are very impressive. They’re available under proper open source licenses – Apache 2.0 – and come in two sizes, 120B and 20B. OpenAI’s own…

  • Simon Willison’s Weblog: Qwen3-235B-A22B-Thinking-2507

    Source URL: https://simonwillison.net/2025/Jul/25/qwen3-235b-a22b-thinking-2507/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Qwen3-235B-A22B-Thinking-2507
    Feedly Summary: Qwen3-235B-A22B-Thinking-2507 is the third Qwen model release this week, following Qwen3-235B-A22B-Instruct-2507 on Monday 21st and Qwen3-Coder-480B-A35B-Instruct on Tuesday 22nd. Those two were both non-reasoning models – a change from the previous models in the Qwen 3 family, which combined reasoning and non-reasoning in the same model,…

  • Cloud Blog: New G4 VMs with NVIDIA RTX PRO 6000 Blackwell power AI, graphics, gaming and beyond

    Source URL: https://cloud.google.com/blog/products/compute/introducing-g4-vm-with-nvidia-rtx-pro-6000/
    Source: Cloud Blog
    Title: New G4 VMs with NVIDIA RTX PRO 6000 Blackwell power AI, graphics, gaming and beyond
    Feedly Summary: Today, we’re excited to announce the preview of our new G4 VMs based on the NVIDIA RTX PRO 6000 Blackwell Server edition — making Google Cloud the first cloud provider to offer it. This follows…

  • Simon Willison’s Weblog: Comma v0.1 1T and 2T – 7B LLMs trained on openly licensed text

    Source URL: https://simonwillison.net/2025/Jun/7/comma/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Comma v0.1 1T and 2T – 7B LLMs trained on openly licensed text
    Feedly Summary: It’s been a long time coming, but we finally have some promising LLMs to try out which are trained entirely on openly licensed text! EleutherAI released the Pile four and a half…

  • Simon Willison’s Weblog: Codestral Embed

    Source URL: https://simonwillison.net/2025/May/28/codestral-embed/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Codestral Embed
    Feedly Summary: Codestral Embed is a brand new embedding model from Mistral, specifically trained for code. Mistral claim that: Codestral Embed significantly outperforms leading code embedders in the market today: Voyage Code 3, Cohere Embed v4.0 and OpenAI’s large embedding model. The model is designed to work…
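
    A code-embedding model like Codestral Embed maps snippets to vectors so that semantically similar code lands nearby; retrieval then reduces to nearest-neighbour search over those vectors. A minimal sketch of that retrieval step, using made-up 4-dimensional vectors standing in for real model output (a real deployment would call the embedding API and use far higher-dimensional vectors):

    ```python
    import math

    def cosine(a, b):
        # Cosine similarity: dot(a, b) / (|a| * |b|)
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    # Hypothetical embeddings a code model might produce for three snippets.
    index = {
        "def add(a, b): return a + b":      [0.9, 0.1, 0.0, 0.1],
        "def fetch(url): return get(url)":  [0.1, 0.8, 0.3, 0.0],
        "def sum_list(xs): return sum(xs)": [0.8, 0.2, 0.1, 0.1],
    }

    # Pretend embedding of the query "sum two numbers".
    query = [0.85, 0.15, 0.05, 0.1]

    # Return the indexed snippet whose embedding is closest to the query.
    best = max(index, key=lambda snippet: cosine(query, index[snippet]))
    ```

    The interesting part is entirely in the embedding model: the claimed benchmark wins over Voyage Code 3 and Cohere Embed v4.0 amount to its vectors placing semantically related code closer together than competitors' do, so this same nearest-neighbour step returns better matches.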

  • Cloud Blog: From LLMs to image generation: Accelerate inference workloads with AI Hypercomputer

    Source URL: https://cloud.google.com/blog/products/compute/ai-hypercomputer-inference-updates-for-google-cloud-tpu-and-gpu/
    Source: Cloud Blog
    Title: From LLMs to image generation: Accelerate inference workloads with AI Hypercomputer
    Feedly Summary: From retail to gaming, from code generation to customer care, an increasing number of organizations are running LLM-based applications, with 78% of organizations in development or production today. As the number of generative AI applications…

  • Slashdot: Alibaba’s ZeroSearch Teaches AI To Search Without Search Engines, Cuts Training Costs By 88%

    Source URL: https://slashdot.org/story/25/05/09/0113217/alibabas-zerosearch-teaches-ai-to-search-without-search-engines-cuts-training-costs-by-88
    Source: Slashdot
    Title: Alibaba’s ZeroSearch Teaches AI To Search Without Search Engines, Cuts Training Costs By 88%
    Feedly Summary: Alibaba Group’s “ZeroSearch” technique showcases an innovative approach that enables large language models (LLMs) to develop search capabilities without relying on external search engines, demonstrating significant cost…
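
    The cost saving comes from swapping the expensive dependency at training time: instead of hitting a real search API on every rollout, an LLM is prompted to act as the search engine and fabricate plausible result documents. A conceptual toy sketch of that substitution (function names and the quality-curriculum detail are illustrative, not Alibaba's actual code):

    ```python
    def real_search(query):
        # The thing being avoided during training: a rate-limited,
        # per-call-billed external search API.
        raise RuntimeError("external search API: costly at RL-training scale")

    def simulated_search(query, quality="high"):
        # Stand-in for an LLM prompted to behave like a search engine.
        # The simulator can also be asked for deliberately noisier results
        # over time, so the policy model learns to handle imperfect retrieval.
        return [f"[simulated {quality}-quality document about: {query}]"]

    def training_step(query, search_fn=simulated_search):
        # During RL training, the policy's tool call is routed to the
        # simulator; at deployment it would be routed to real_search.
        docs = search_fn(query)
        # ...the policy model would consume docs and receive a reward here...
        return docs

    docs = training_step("capital of France")
    ```

    The reported 88% training-cost reduction is then just the price gap between a local LLM generating fake documents and the equivalent volume of real search API calls.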