Tag: nation
-
The Register: China’s DeepSeek applying trial-and-error learning to its AI ‘reasoning’
Source URL: https://www.theregister.com/2025/09/18/chinas_deepseek_ai_reasoning_research/ Source: The Register Title: China’s DeepSeek applying trial-and-error learning to its AI ‘reasoning’ Feedly Summary: Model can also explain its answers, researchers find. Chinese AI company DeepSeek has shown it can improve the reasoning of its LLM DeepSeek-R1 through trial-and-error-based reinforcement learning, and even be made to explain its reasoning on…
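Aside (not from the article): the kind of trial-and-error training described here typically scores whole model completions with a simple rule-based reward rather than human feedback. The sketch below is a minimal, assumed illustration of such a reward function, with a small bonus for exposing the reasoning trace; the tag format and weights are hypothetical, not DeepSeek-R1 internals.

```python
# Minimal sketch (assumed, illustrative): a rule-based reward of the kind used
# when reinforcement learning is applied to reasoning traces.
import re

def reward(completion: str, reference_answer: str) -> float:
    """Score a completion: correctness plus a small format bonus for
    wrapping its reasoning in <think>...</think> tags (hypothetical format)."""
    score = 0.0
    # Format bonus: the model is rewarded for showing its reasoning.
    if re.search(r"<think>.*?</think>", completion, flags=re.DOTALL):
        score += 0.1
    # Correctness: compare the text after the reasoning block to the reference.
    answer = re.sub(r"<think>.*?</think>", "", completion, flags=re.DOTALL).strip()
    if answer == reference_answer.strip():
        score += 1.0
    return score

# Example: a completion that reasons, then answers correctly, scores 1.1.
print(reward("<think>2 + 2 is 4</think>4", "4"))
```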
-
The Register: Scale AI says ‘tanks a lot’ to Pentagon for data-classifying deal
Source URL: https://www.theregister.com/2025/09/17/dod_scale_ai_deal/ Source: The Register Title: Scale AI says ‘tanks a lot’ to Pentagon for data-classifying deal Feedly Summary: First up: $41M to use human annotators to label all that unstructured military data. What could go wrong? Data curation firm Scale AI has partnered with the Pentagon to deploy its AI on Top Secret…
-
Slashdot: Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals
Source URL: https://slashdot.org/story/25/09/17/1923220/gemini-ai-solves-coding-problem-that-stumped-139-human-teams-at-icpc-world-finals?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals Feedly Summary: AI Summary and Description: Yes Summary: Google’s generative AI model, Gemini 2.5, achieved a gold medal at the International Collegiate Programming Contest (ICPC), showcasing advancements towards artificial general intelligence. This performance highlights the…
-
Slashdot: OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance
Source URL: https://slashdot.org/story/25/09/17/1724241/openai-says-models-programmed-to-make-stuff-up-instead-of-admitting-ignorance?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance Feedly Summary: AI Summary and Description: Yes Summary: The text discusses OpenAI’s acknowledgment of the issue of “hallucinations” in AI models, specifically how these models frequently yield false outputs due to a training bias that rewards generating…
-
The Register: OpenAI says models are programmed to make stuff up instead of admitting ignorance
Source URL: https://www.theregister.com/2025/09/17/openai_hallucinations_incentives/ Source: The Register Title: OpenAI says models are programmed to make stuff up instead of admitting ignorance Feedly Summary: Even a wrong answer is right some of the time. AI models often produce false outputs, or “hallucinations.” Now OpenAI has admitted they may result from fundamental mistakes it makes when training its…
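Aside (not from the article): the incentive argument is that accuracy-only grading never penalises a confident guess relative to admitting ignorance. A minimal arithmetic sketch, with assumed illustrative numbers:

```python
# Minimal sketch, assumed numbers: why accuracy-only grading rewards guessing.
# 1 point for a correct answer, 0 for anything else, so "I don't know" can
# never beat a guess, however unlikely the guess is to be right.
p_correct_guess = 0.25          # chance a confident-sounding guess is right (assumed)

expected_guess = p_correct_guess * 1.0 + (1 - p_correct_guess) * 0.0
expected_abstain = 0.0          # abstaining earns nothing under accuracy-only scoring
print(expected_guess, expected_abstain)   # 0.25 > 0.0, so guessing wins

# A rule that penalises confident wrong answers flips the incentive.
penalty_wrong = -1.0
expected_guess_penalised = p_correct_guess * 1.0 + (1 - p_correct_guess) * penalty_wrong
print(expected_guess_penalised)           # -0.5 < 0.0, so abstaining wins
```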