Tag: Apache 2.0 license

  • The Register: OpenAI makes good on its name, launches first open weights language models since GPT-2

    Source URL: https://www.theregister.com/2025/08/05/openai_open_gpt/
    Feedly Summary: GPT-OSS now available in 120 and 20 billion parameter sizes under Apache 2.0 license. OpenAI released its first open weights language models since GPT-2 on Tuesday with the debut of GPT-OSS. …

  • Simon Willison’s Weblog: OpenAI’s new open weight (Apache 2) models are really good

    Source URL: https://simonwillison.net/2025/Aug/5/gpt-oss/
    Feedly Summary: The long promised OpenAI open weight models are here, and they are very impressive. They’re available under proper open source licenses – Apache 2.0 – and come in two sizes, 120B and 20B. OpenAI’s own…

  • OpenAI : Introducing gpt-oss

    Source URL: https://openai.com/index/introducing-gpt-oss
    Feedly Summary: We’re releasing gpt-oss-120b and gpt-oss-20b—two state-of-the-art open-weight language models that deliver strong real-world performance at low cost. Available under the flexible Apache 2.0 license, these models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment…

  • OpenAI : gpt-oss-120b & gpt-oss-20b Model Card

    Source URL: https://openai.com/index/gpt-oss-model-card
    Feedly Summary: We introduce gpt-oss-120b and gpt-oss-20b, two open-weight reasoning models available under the Apache 2.0 license and our gpt-oss usage policy.
    AI Summary: The introduction of gpt-oss-120b and gpt-oss-20b highlights the development of open-weight reasoning models, which reflects significant…
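    A minimal local-inference sketch for these gpt-oss models appears after this list.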

  • Simon Willison’s Weblog: Qwen3-Coder: Agentic Coding in the World

    Source URL: https://simonwillison.net/2025/Jul/22/qwen3-coder/
    Feedly Summary: Qwen3-Coder: Agentic Coding in the World. It turns out that as I was typing up my notes on Qwen3-235B-A22B-Instruct-2507 the Qwen team were unleashing something much bigger: Today, we’re announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder…

  • Simon Willison’s Weblog: Voxtral

    Source URL: https://simonwillison.net/2025/Jul/16/voxtral/#atom-everything
    Feedly Summary: Voxtral. Mistral released their first audio-input models yesterday: Voxtral Small and Voxtral Mini. These state‑of‑the‑art speech understanding models are available in two sizes—a 24B variant for production-scale applications and a 3B variant for local and edge deployments. Both versions are released under the Apache…

  • AWS Open Source Blog: Secure your Express application APIs in 5 minutes with Cedar

    Source URL: https://aws.amazon.com/blogs/opensource/secure-your-application-apis-in-5-minutes-with-cedar/
    Feedly Summary: Today, the open source Cedar project announced the release of authorization-for-expressjs, an open source package that simplifies using the Cedar policy language and authorization engine to verify application permissions. This release allows developers to…

  • Simon Willison’s Weblog: Shisa V2 405B: Japan’s Highest Performing LLM

    Source URL: https://simonwillison.net/2025/Jun/3/shisa-v2/
    Feedly Summary: Shisa V2 405B: Japan’s Highest Performing LLM. Leonard Lin and Adam Lensenmayer have been working on Shisa for a while. They describe their latest release as “Japan’s Highest Performing LLM”. Shisa V2 405B is the highest-performing LLM ever…

  • Simon Willison’s Weblog: Devstral

    Source URL: https://simonwillison.net/2025/May/21/devstral/#atom-everything
    Feedly Summary: Devstral. New Apache 2.0 licensed LLM release from Mistral, this time specifically trained for code. Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA models by more than 6 percentage points. When evaluated under the same test scaffold (OpenHands, provided by…
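
The two OpenAI entries above describe gpt-oss-120b and gpt-oss-20b as open-weight models released under Apache 2.0. As a rough illustration of what an open-weight release means in practice, here is a minimal sketch of loading the smaller model with the Hugging Face transformers library. The Hub id openai/gpt-oss-20b and the memory needed to run it are assumptions not stated in the entries above, so check the release notes before relying on them.

    # Minimal sketch: trying gpt-oss-20b locally with Hugging Face Transformers.
    # Assumptions (not taken from the entries above): the model id
    # "openai/gpt-oss-20b" on the Hugging Face Hub, and enough GPU/CPU memory
    # to hold the 20B weights. Adjust max_new_tokens and device settings as needed.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="openai/gpt-oss-20b",   # assumed Hub id; swap in whatever id the release uses
        torch_dtype="auto",           # let Transformers pick an appropriate dtype
        device_map="auto",            # spread layers across available devices
    )

    prompt = "Explain the difference between open-weight and open-source model releases."
    result = pipe(prompt, max_new_tokens=200)
    print(result[0]["generated_text"])

A similar pattern, with the appropriate model id, would apply to the other open-weight text models listed on this page (Devstral, Qwen3-Coder, Shisa V2); the Voxtral audio models would need an audio-capable loading path instead.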