Tag: training method
-
Slashdot: OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance
Source URL: https://slashdot.org/story/25/09/17/1724241/openai-says-models-programmed-to-make-stuff-up-instead-of-admitting-ignorance?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance
Feedly Summary: AI Summary and Description: Yes
Summary: The text discusses OpenAI’s acknowledgment of the issue of “hallucinations” in AI models, specifically how these models frequently yield false outputs due to a training bias that rewards generating…
-
The Register: OpenAI says models are programmed to make stuff up instead of admitting ignorance
Source URL: https://www.theregister.com/2025/09/17/openai_hallucinations_incentives/
Source: The Register
Title: OpenAI says models are programmed to make stuff up instead of admitting ignorance
Feedly Summary: Even a wrong answer is right some of the time. AI models often produce false outputs, or “hallucinations.” Now OpenAI has admitted they may result from fundamental mistakes it makes when training its…
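The incentive problem both stories describe — a grading scheme in which "even a wrong answer is right some of the time" — can be sketched with a toy expected-score calculation. This is an illustration only, not OpenAI's actual training objective; the probabilities and the `expected_score` helper are made up for the example.

```python
# Toy illustration: under accuracy-only grading (1 point for a correct answer,
# 0 otherwise, 0 for abstaining), a model that always guesses outscores one
# that honestly says "I don't know", even when its guesses are usually wrong.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score per question under accuracy-only grading."""
    if abstain:
        return 0.0  # admitting ignorance earns nothing
    return p_correct * 1.0 + (1.0 - p_correct) * 0.0

# A model that guesses with only a 10% chance of being right...
guesser = expected_score(p_correct=0.10, abstain=False)
# ...still beats a model that abstains every time.
honest = expected_score(p_correct=0.10, abstain=True)

print(guesser, honest)  # guessing strictly dominates abstaining
```

Under this scoring rule the optimal policy is to always answer, which is the bias the articles say training inadvertently rewards; giving partial credit for abstaining would change the optimum.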
-
Slashdot: LLM Found Transmitting Behavioral Traits to ‘Student’ LLM Via Hidden Signals in Data
Source URL: https://slashdot.org/story/25/08/17/0331217/llm-found-transmitting-behavioral-traits-to-student-llm-via-hidden-signals-in-data?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: LLM Found Transmitting Behavioral Traits to ‘Student’ LLM Via Hidden Signals in Data
Feedly Summary: AI Summary and Description: Yes
Summary: The study highlights a concerning phenomenon in AI development known as subliminal learning, where a “teacher” model instills traits in a “student” model without explicit instruction. This can…
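The transmission mechanism in the story can be caricatured in a few lines: a "teacher" with a hidden quirk generates training data, and a "student" fit only on that data inherits the quirk without ever being told about it. The "models" below are just frequency tables, not LLMs, and the bias toward the token 7 is an invented stand-in for a behavioral trait; this sketches the distillation pathway, not the study's actual experiments.

```python
# Toy sketch of trait transmission through distillation data.
import random
from collections import Counter

random.seed(0)
TOKENS = list(range(10))

def teacher_sample() -> int:
    """Teacher emits digits but over-produces its hidden 'favorite' token, 7."""
    return 7 if random.random() < 0.4 else random.choice(TOKENS)

# Teacher generates seemingly innocuous number data...
teacher_data = [teacher_sample() for _ in range(10_000)]

# ...and the student is simply fit to the empirical distribution of that data.
counts = Counter(teacher_data)
student_dist = {t: counts[t] / len(teacher_data) for t in TOKENS}

# The student now shares the teacher's preference, though no instruction
# about 7 ever appeared anywhere in the pipeline.
favorite = max(student_dist, key=student_dist.get)
print(favorite, round(student_dist[7], 2))
```

The point of the caricature is that any statistical regularity in teacher-generated data, including ones the data's curator never intended or noticed, is a learnable signal for the student.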
-
OpenAI : From hard refusals to safe-completions: toward output-centric safety training
Source URL: https://openai.com/index/gpt-5-safe-completions
Source: OpenAI
Title: From hard refusals to safe-completions: toward output-centric safety training
Feedly Summary: Discover how OpenAI’s new safe-completions approach in GPT-5 improves both safety and helpfulness in AI responses—moving beyond hard refusals to nuanced, output-centric safety training for handling dual-use prompts.
AI Summary and Description: Yes
Summary: The text discusses OpenAI’s…
-
Cloud Blog: Google Public Sector supports AI-optimized HPC infrastructure for researchers at Caltech
Source URL: https://cloud.google.com/blog/topics/public-sector/google-public-sector-supports-ai-optimized-hpc-infrastructure-for-researchers-at-caltech/
Source: Cloud Blog
Title: Google Public Sector supports AI-optimized HPC infrastructure for researchers at Caltech
Feedly Summary: For decades, institutions like Caltech have been at the forefront of large-scale artificial intelligence (AI) research. As high-performance computing (HPC) clusters continue to evolve, researchers across disciplines have been increasingly equipped to process massive datasets,…
-
The Register: AI models just don’t understand what they’re talking about
Source URL: https://www.theregister.com/2025/07/03/ai_models_potemkin_understanding/
Source: The Register
Title: AI models just don’t understand what they’re talking about
Feedly Summary: Researchers find models’ success at tests hides illusion of understanding. Researchers from MIT, Harvard, and the University of Chicago have proposed the term “potemkin understanding” to describe a newly identified failure mode in large language models that…