Tag: math

  • Rekt: zkLend – Rekt

    Source URL: https://www.rekt.news/
    Source: Rekt
    Feedly Summary: A rounding error exploit bled $9.57M from zkLend vaults on Starknet. After Railgun showed them the door, the attacker ignored their Valentine’s Day bounty deadline, letting the stolen funds sit idle. Same operator behind EraLend’s 2023 hack? On-chain evidence suggests yes. AI Summary and…
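
The summary doesn’t detail the bug, but rounding-error exploits against lending vaults typically abuse floor division in share accounting. A minimal, hypothetical sketch (illustrative only; the `Vault` class and its numbers are assumptions, not zkLend’s actual code):

```python
# Hypothetical share-accounting rounding flaw in a lending vault.
# Not zkLend's code -- a generic sketch of the exploit class.

class Vault:
    def __init__(self):
        self.total_assets = 0
        self.total_shares = 0

    def deposit(self, amount):
        # First depositor mints shares 1:1; later deposits use floor
        # division, which an attacker can skew by donating assets.
        if self.total_shares == 0:
            shares = amount
        else:
            shares = amount * self.total_shares // self.total_assets
        self.total_assets += amount
        self.total_shares += shares
        return shares

    def donate(self, amount):
        # Direct transfer: inflates assets without minting shares.
        self.total_assets += amount

    def withdraw(self, shares):
        # Floor division again; rounding crumbs accrue to holders.
        amount = shares * self.total_assets // self.total_shares
        self.total_assets -= amount
        self.total_shares -= shares
        return amount

vault = Vault()
vault.deposit(1)          # attacker mints 1 share for 1 wei
vault.donate(10**18)      # inflates the share price
victim_shares = vault.deposit(10**17)
print(victim_shares)      # 0 -- the deposit rounds down to zero shares
```

The victim’s entire deposit is then claimable by the attacker’s single share, which is why real vault implementations mint “dead shares” or use virtual offsets to blunt this rounding.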

  • Hacker News: Why cryptography is not based on NP-complete problems

    Source URL: https://blintzbase.com/posts/cryptography-is-not-based-on-np-hard-problems/
    Source: Hacker News
    AI Summary: The text explores the intrinsic reasons why cryptography does not rely on NP-complete problems, highlighting the critical distinction between ‘worst-case’ and ‘average-case’ hardness in cryptographic contexts. This is significant for professionals…
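
The worst-case/average-case gap can be made concrete: SUBSET-SUM is NP-complete, yet random instances with small numbers fall to a pseudo-polynomial dynamic program, so a cryptosystem keyed on such instances would be broken on average even though the problem is hard in the worst case. A hedged sketch (the planted-instance experiment is my own illustration, not from the article):

```python
# NP-complete in the worst case, easy on average: random small-number
# SUBSET-SUM instances are solved exactly by a simple dynamic program.

import random

def subset_sum(nums, target):
    """Return True iff some subset of nums sums to target (DP over sums)."""
    reachable = {0}
    for n in nums:
        reachable |= {s + n for s in reachable if s + n <= target}
    return target in reachable

random.seed(0)
solved = 0
for _ in range(20):
    # Plant a solution: the target is the sum of 8 random elements.
    nums = [random.randint(1, 500) for _ in range(20)]
    target = sum(random.sample(nums, 8))
    solved += subset_sum(nums, target)
print(solved)  # 20 -- the DP cracks every planted instance
```

Cryptography instead needs problems (e.g. factoring, lattice problems) believed hard over the very distribution the key generator samples from, not merely hard on some pathological instance.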

  • Hacker News: LIMO: Less Is More for Reasoning

    Source URL: https://arxiv.org/abs/2502.03387
    Source: Hacker News
    AI Summary: The paper titled “LIMO: Less Is More for Reasoning” presents groundbreaking insights into how complex reasoning can be achieved with fewer training examples in large language models. This challenges traditional beliefs about data…

  • Hacker News: Bolt: Bootstrap Long Chain-of-Thought in LLMs Without Distillation [pdf]

    Source URL: https://arxiv.org/abs/2502.03860
    Source: Hacker News
    AI Summary: The paper introduces BOLT, a method designed to enhance the reasoning capabilities of large language models (LLMs) by generating long chains of thought (LongCoT) without relying on knowledge distillation. The…

  • Hacker News: Experience the DeepSeek R1 Distilled ‘Reasoning’ Models on Ryzen AI and Radeon

    Source URL: https://community.amd.com/t5/ai/experience-the-deepseek-r1-distilled-reasoning-models-on-amd/ba-p/740593
    Source: Hacker News
    AI Summary: The text discusses the DeepSeek R1 model, a newly developed reasoning model in the realm of large language models (LLMs). It highlights its unique ability to perform…

  • Rekt: Ionic Money – Rekt

    Source URL: https://www.rekt.news/ionic-money-rekt
    Source: Rekt
    Feedly Summary: Fake LBTC, real losses. Social engineering artists convinced Ionic Money on Mode Network to accept counterfeit collateral, walked away with $6.9M, and left sister protocols holding toxic bags. Previously exploited twice as Midas – third time rekt’s the charm. AI Summary and Description:…

  • Hacker News: Understanding Reasoning LLMs

    Source URL: https://magazine.sebastianraschka.com/p/understanding-reasoning-llms
    Source: Hacker News
    AI Summary: The text explores advancements in reasoning models associated with large language models (LLMs), focusing particularly on the development of DeepSeek’s reasoning model and various approaches to enhance LLM capabilities through structured training methodologies. This examination is…

  • Slashdot: Researchers Created an Open Rival To OpenAI’s o1 ‘Reasoning’ Model for Under $50

    Source URL: https://slashdot.org/story/25/02/06/1445231/researchers-created-an-open-rival-to-openais-o1-reasoning-model-for-under-50
    Source: Slashdot
    AI Summary: The research collaboration between Stanford and the University of Washington is notable for developing an AI reasoning model called s1 for less than $50 in cloud compute credits.…