Tag: AI development
-
The Register: How OpenAI used a new data type to cut inference costs by 75%
Source URL: https://www.theregister.com/2025/08/10/openai_mxfp4/
Feedly Summary: Decision to use MXFP4 makes models smaller, faster, and, more importantly, cheaper for everyone involved. Analysis: Whether or not OpenAI's new open-weight models are any good is still up for debate, but…
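For context on the data type: MXFP4 is the 4-bit member of the OCP Microscaling (MX) formats, where a block of 32 values shares one power-of-two scale and each element is a 4-bit E2M1 float. A minimal sketch of that block-quantization idea (a toy illustration, not OpenAI's implementation):

```python
import math

# Representable E2M1 magnitudes (4-bit: 1 sign, 2 exponent, 1 mantissa bit)
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Quantize up to 32 floats to a shared power-of-two scale + FP4 codes."""
    amax = max(abs(v) for v in block)
    if amax == 0:
        return 1.0, [0.0] * len(block)
    # Pick a power-of-two scale so the largest magnitude maps near 6.0,
    # the largest representable E2M1 value.
    scale = 2.0 ** math.floor(math.log2(amax / 6.0))
    codes = []
    for v in block:
        mag = min(abs(v) / scale, 6.0)
        q = min(FP4_GRID, key=lambda g: abs(g - mag))  # round to nearest grid point
        codes.append(math.copysign(q, v))
    return scale, codes

def dequantize_block(scale, codes):
    return [scale * q for q in codes]
```

Storage per 32-value block is 32 × 4 bits plus an 8-bit shared scale, versus 32 × 16 bits for BF16, which is roughly the ~75% size reduction the headline refers to.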
-
Slashdot: Initiative Seeks AI Lab to Build ‘American Truly Open Models’ (ATOM)
Source URL: https://news.slashdot.org/story/25/08/09/1916243/initiative-seeks-ai-lab-to-build-american-truly-open-models-atom?utm_source=rss1.0mainlinkanon&utm_medium=feed
Feedly Summary: The text discusses the launch of the ATOM Project, which aims to strengthen U.S. open-source AI competitiveness, highlighting a significant gap in American open-source AI development compared to China.…
-
The Register: How to run OpenAI’s new gpt-oss-20b LLM on your computer
Source URL: https://www.theregister.com/2025/08/07/run_openai_gpt_oss_locally/
Feedly Summary: All you need is 24GB of RAM and, unless you have a GPU with its own VRAM, quite a lot of patience. Hands On: Earlier this week, OpenAI released two popular open-weight models, both named gpt-oss.…
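A quick back-of-the-envelope check on why 24GB of RAM suffices: gpt-oss-20b has roughly 21B parameters, and MXFP4 weights cost about 4.25 bits per parameter (4-bit elements plus one 8-bit scale per 32-element block), so the weights alone land well under 24GB. A sketch of that arithmetic (the ~21B figure is approximate):

```python
def weight_gib(n_params, bits_per_param):
    """Approximate weight memory in GiB."""
    return n_params * bits_per_param / 8 / 2**30

# MXFP4: 4-bit elements plus an 8-bit shared scale per 32-element block
mxfp4_bits = 4 + 8 / 32  # 4.25 bits per parameter

print(round(weight_gib(21e9, mxfp4_bits), 1))  # ~10.4 GiB in MXFP4
print(round(weight_gib(21e9, 16), 1))          # ~39.1 GiB in BF16
```

The remaining headroom in a 24GB machine goes to activations, the KV cache, and the OS, which is why the fit is comfortable in MXFP4 but impossible in BF16.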
-
Simon Willison’s Weblog: Qwen3-4B Instruct and Thinking
Source URL: https://simonwillison.net/2025/Aug/6/qwen3-4b-instruct-and-thinking/
Feedly Summary: Yet another interesting model from Qwen: these are tiny compared to their other recent releases (just 4B parameters, 7.5GB on Hugging Face and even smaller when quantized) but with a 262,144-token context length, which Qwen suggest is essential…
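That 262,144-token window (2^18 tokens) is not free at inference time: the KV cache grows linearly with context length. A rough sketch of the estimate, using hypothetical model dimensions for illustration (36 layers, 8 KV heads, head dimension 128 are assumptions, not Qwen3-4B's published config):

```python
def kv_cache_gib(tokens, n_layers, n_kv_heads, head_dim, bytes_per_elt=2):
    """Approximate KV-cache size: keys + values, per layer, per token (BF16)."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elt * tokens / 2**30

FULL_CTX = 262_144  # = 2**18 tokens

# Hypothetical dims; real configs vary.
print(round(kv_cache_gib(FULL_CTX, 36, 8, 128), 1))
```

Under these assumed dimensions a full-length context would need tens of GiB of cache, which is why long-context inference in practice leans on quantized or paged KV caches even when the weights themselves are small.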