Tag: -4o
-
Slashdot: OpenAI Partners With California State University System
Source URL: https://news.slashdot.org/story/25/02/04/2235222/openai-partners-with-california-state-university-system?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: OpenAI Partners With California State University System
Feedly Summary: AI Summary and Description: Yes
Summary: OpenAI’s partnership with the California State University (CSU) system marks a significant step in the implementation of AI technology within higher education. With ChatGPT Edu, CSU aims to enhance student engagement and support through…
-
The Register: OpenAI unveils deep research agent for ChatGPT
Source URL: https://www.theregister.com/2025/02/03/openai_unveils_deep_research_agent/
Source: The Register
Title: OpenAI unveils deep research agent for ChatGPT
Feedly Summary: Takes a bit more time to spout a bit less nonsense. OpenAI today launched deep research in ChatGPT, a new agent that takes a little longer to perform a deeper dive into the web to come up with a…
-
Hacker News: Notes on OpenAI O3-Mini
Source URL: https://simonwillison.net/2025/Jan/31/o3-mini/
Source: Hacker News
Title: Notes on OpenAI O3-Mini
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: The announcement of OpenAI’s o3-mini model marks a significant development in the landscape of large language models (LLMs). With enhanced performance on specific benchmarks and user functionalities that include internet search capabilities, o3-mini aims to…
-
Simon Willison’s Weblog: OpenAI o3-mini, now available in LLM
Source URL: https://simonwillison.net/2025/Jan/31/o3-mini/#atom-everything
Source: Simon Willison’s Weblog
Title: OpenAI o3-mini, now available in LLM
Feedly Summary: o3-mini is out today. As with other o-series models it’s a slightly difficult one to evaluate – we now need to decide if a prompt is best run using GPT-4o, o1, o3-mini or (if we have access) o1 Pro.…
-
The Register: What better place to inject OpenAI’s o1 than Los Alamos national lab, right?
Source URL: https://www.theregister.com/2025/01/30/openai_los_alamos_national_lab/
Source: The Register
Title: What better place to inject OpenAI’s o1 than Los Alamos national lab, right?
Feedly Summary: Tackling disease, tick. High-energy physics, tick. Nuclear weapon security, also tick. OpenAI has announced another deal with Uncle Sam, this time to get its very latest models in the hands of US government…
-
The Register: DeepSeek’s not the only Chinese LLM maker OpenAI and pals have to worry about. Right, Alibaba?
Source URL: https://www.theregister.com/2025/01/30/alibaba_qwen_ai/
Source: The Register
Title: DeepSeek’s not the only Chinese LLM maker OpenAI and pals have to worry about. Right, Alibaba?
Feedly Summary: Qwen 2.5 Max tops both DS V3 and GPT-4o, cloud giant claims.
Analysis: The speed and efficiency at which DeepSeek claims to be training large language models (LLMs) competitive with…
-
Slashdot: After DeepSeek Shock, Alibaba Unveils Rival AI Model That Uses Less Computing Power
Source URL: https://slashdot.org/story/25/01/29/184223/after-deepseek-shock-alibaba-unveils-rival-ai-model-that-uses-less-computing-power?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: After DeepSeek Shock, Alibaba Unveils Rival AI Model That Uses Less Computing Power
Feedly Summary: AI Summary and Description: Yes
Summary: Alibaba’s unveiling of the Qwen2.5-Max AI model highlights advancements in AI performance achieved through a more efficient architecture. This development is particularly relevant to AI security and infrastructure…
-
Hacker News: Qwen2.5-Max: Exploring the Intelligence of Large-Scale MoE Model
Source URL: https://qwenlm.github.io/blog/qwen2.5-max/
Source: Hacker News
Title: Qwen2.5-Max: Exploring the Intelligence of Large-Scale MoE Model
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: The text discusses the development and performance evaluation of Qwen2.5-Max, a large-scale Mixture-of-Experts (MoE) model pretrained on over 20 trillion tokens. It highlights significant advancements in model intelligence achieved through scaling…