Tag: training method

  • The Register: AI training license will allow LLM builders to pay for content they consume

    Source URL: https://www.theregister.com/2025/04/24/uk_publishing_body_launches_ai/
    Source: The Register
    Title: AI training license will allow LLM builders to pay for content they consume
    Feedly Summary: UK org backing it promises ‘legal certainty’ for devs, money for creators… but is it too late? A UK non-profit is planning to introduce a new licensing model which will allow developers of…

  • Simon Willison’s Weblog: Quoting Andriy Burkov

    Source URL: https://simonwillison.net/2025/Apr/6/andriy-burkov/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Quoting Andriy Burkov
    Feedly Summary: […] The disappointing releases of both GPT-4.5 and Llama 4 have shown that if you don’t train a model to reason with reinforcement learning, increasing its size no longer provides benefits. Reinforcement learning is limited only to domains where a reward can…
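
    The truncated point, that RL only applies where a reward can be computed, is easiest to see with the kind of verifiable reward used for math and code tasks. A minimal sketch in Python; the "Answer:" output convention and the example values are illustrative assumptions, not anything from the quoted post.

    ```python
    import re

    def verifiable_reward(completion: str, reference: str) -> float:
        """Score 1.0 if the completion's final answer matches the reference.

        Assumes the model was prompted to end with 'Answer: <value>';
        that convention is an illustrative assumption, not from the post.
        """
        match = re.search(r"Answer:\s*(.+?)\s*$", completion)
        if match is None:
            return 0.0
        return 1.0 if match.group(1) == reference.strip() else 0.0

    # Math has a checkable answer, so a reward function exists:
    print(verifiable_reward("12 * 7 = 84, so Answer: 84", "84"))  # 1.0
    # Open-ended tasks ("write a moving essay") admit no such check,
    # which is exactly the limitation the quote points at.
    ```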

  • Slashdot: OpenAI’s Motion to Dismiss Copyright Claims Rejected by Judge

    Source URL: https://news.slashdot.org/story/25/04/05/0323213/openais-motion-to-dismiss-copyright-claims-rejected-by-judge?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: OpenAI’s Motion to Dismiss Copyright Claims Rejected by Judge
    Feedly Summary: The ongoing lawsuit filed by The New York Times against OpenAI raises significant issues regarding copyright infringement related to AI training datasets. The case underscores the complex intersection of AI technology, copyright…

  • Hacker News: Tao: Using test-time compute to train efficient LLMs without labeled data

    Source URL: https://www.databricks.com/blog/tao-using-test-time-compute-train-efficient-llms-without-labeled-data
    Source: Hacker News
    Title: Tao: Using test-time compute to train efficient LLMs without labeled data
    Feedly Summary: The text introduces a new model tuning method for large language models (LLMs) called Test-time Adaptive Optimization (TAO) that enhances model quality without requiring large amounts of labeled…
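
    The summary gestures at a concrete loop: spend test-time compute sampling several candidate responses per unlabeled prompt, score them with a reward model, and tune on the winners. Below is a minimal sketch of that shape; the generate and score callables are placeholder interfaces, and the Databricks post applies reinforcement learning over the scored candidates rather than this plain best-of-n selection.

    ```python
    from typing import Callable, List, Tuple

    def tao_style_round(
        generate: Callable[[str], str],      # samples one response per call
        score: Callable[[str, str], float],  # reward model: (prompt, response) -> score
        prompts: List[str],                  # unlabeled: no reference answers needed
        n_samples: int = 8,
    ) -> List[Tuple[str, str]]:
        """One round: keep each prompt's highest-scoring sample as a
        tuning target. A sketch of the general idea, not Databricks' code."""
        pairs = []
        for prompt in prompts:
            candidates = [generate(prompt) for _ in range(n_samples)]
            best = max(candidates, key=lambda c: score(prompt, c))
            pairs.append((prompt, best))
        return pairs  # feed to a fine-tuning step of your choice
    ```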

  • Hacker News: IETF setting standards for AI preferences

    Source URL: https://www.ietf.org/blog/aipref-wg/
    Source: Hacker News
    Title: IETF setting standards for AI preferences
    Feedly Summary: The text discusses the formation of the AI Preferences (AIPREF) Working Group, aimed at standardizing how content preferences are expressed for AI model training, amid concerns from content publishers about unauthorized use. This…
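
    The AIPREF vocabulary itself is still being drafted, so the nearest runnable analogue today is the robots.txt mechanism such signals extend. A minimal sketch; the "AITrainingBot" user agent is hypothetical and not part of any IETF draft.

    ```python
    from urllib import robotparser

    # Check a publisher's robots.txt before using a page for training.
    # AIPREF aims to standardize a training-specific signal on top of
    # conventions like this one.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    url = "https://example.com/articles/some-post"
    if rp.can_fetch("AITrainingBot", url):   # hypothetical crawler name
        print("publisher has not opted this agent out")
    else:
        print("publisher disallows this agent; skip for training")
    ```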

  • Hacker News: Understanding R1-Zero-Like Training: A Critical Perspective

    Source URL: https://github.com/sail-sg/understand-r1-zero
    Source: Hacker News
    Title: Understanding R1-Zero-Like Training: A Critical Perspective
    Feedly Summary: The text presents a novel approach to LLM training called R1-Zero-like training, emphasizing a new reinforcement learning method termed Dr. GRPO that enhances reasoning capabilities. It highlights significant improvements in model performance through…
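
    The repo's central fix is small enough to show inline. A minimal sketch of the group-relative advantage, assuming scalar rewards for one prompt's group of sampled completions; the full Dr. GRPO loss also drops a per-response length normalization, which is only noted in a comment here.

    ```python
    import numpy as np

    # Rewards for one prompt's group of sampled completions,
    # e.g. verifiable 0/1 correctness scores.
    rewards = np.array([1.0, 0.0, 0.0, 1.0, 1.0])

    # GRPO: center by the group mean, then divide by the group std.
    grpo_adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # Dr. GRPO: keep only the mean-centering. The repo argues the std
    # division (plus a length normalization in the loss, omitted here)
    # biases optimization, e.g. pushing incorrect responses to grow longer.
    dr_grpo_adv = rewards - rewards.mean()

    print(grpo_adv)     # std-normalized advantages
    print(dr_grpo_adv)  # unnormalized, as Dr. GRPO uses
    ```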

  • Slashdot: Nvidia Says ‘the Age of Generalist Robotics Is Here’

    Source URL: https://hardware.slashdot.org/story/25/03/18/2312229/nvidia-says-the-age-of-generalist-robotics-is-here?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: Nvidia Says ‘the Age of Generalist Robotics Is Here’
    Feedly Summary: Nvidia announced the Isaac GR00T N1, an open-source, customizable foundation model aimed at revolutionizing humanoid robotics. The model features a dual-system architecture that enhances robot learning and behavior, facilitating more advanced robot…

  • Simon Willison’s Weblog: Quoting Ai2

    Source URL: https://simonwillison.net/2025/Mar/13/ai2/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Quoting Ai2
    Feedly Summary: Today we release OLMo 2 32B, the most capable and largest model in the OLMo 2 family, scaling up the OLMo 2 training recipe used for our 7B and 13B models released in November. It is trained up to 6T tokens and post-trained…

  • Hacker News: Meta must defend claim it stripped copyright info from Llama’s training fodder

    Source URL: https://www.theregister.com/2025/03/11/meta_dmca_copyright_removal_case/
    Source: Hacker News
    Title: Meta must defend claim it stripped copyright info from Llama’s training fodder
    Feedly Summary: A federal judge has ruled that Meta must face claims of copyright infringement related to the removal of copyright management information (CMI) from materials used to train…