Tag: in-context learning

  • Cloud Blog: The secret to document intelligence: Box builds Enhanced Extract Agents using Google’s Agent-2-Agent framework

    Source URL: https://cloud.google.com/blog/topics/customers/box-ai-agents-with-googles-agent-2-agent-protocol/
    Summary: Box is one of the original information sharing and collaboration platforms of the digital era. They’ve helped define how we work, and have continued to evolve those practices alongside successive waves of…

  • Hacker News: BOLT: Bootstrap Long Chain-of-Thought in LLMs Without Distillation [pdf]

    Source URL: https://arxiv.org/abs/2502.03860
    Summary: The paper introduces BOLT, a method designed to enhance the reasoning capabilities of large language models (LLMs) by generating long chains of thought (LongCoT) without relying on knowledge distillation. The…

  • Hacker News: AI Engineer Reading List

    Source URL: https://www.latent.space/p/2025-papers
    Summary: The text focuses on providing a curated reading list for AI engineers, particularly emphasizing recent advancements in large language models (LLMs) and related AI technologies. It is a practical guide designed to enhance the knowledge…

  • Hacker News: BERTs Are Generative In-Context Learners

    Source URL: https://arxiv.org/abs/2406.04823
    Summary: The paper titled “BERTs are Generative In-Context Learners” explores the capabilities of masked language models, specifically DeBERTa, in performing generative tasks akin to those of causal language models like GPT. This demonstrates a significant…
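    The common thread of these entries is in-context learning: steering a model with examples placed directly in the prompt, with no weight updates. A minimal sketch of the few-shot prompt format this relies on (the sentiment task, labels, and layout here are illustrative assumptions, not taken from any of the linked papers):

    ```python
    # Few-shot in-context learning, sketched as prompt construction only:
    # (input, label) demonstration pairs are concatenated ahead of the new
    # query, and the model is expected to continue after the final label cue.

    def build_few_shot_prompt(examples, query):
        """Format demonstration pairs plus a new query as one prompt string."""
        blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
        blocks.append(f"Review: {query}\nSentiment:")  # model completes this line
        return "\n\n".join(blocks)

    examples = [
        ("The plot dragged and the acting was wooden.", "negative"),
        ("A delightful surprise from start to finish.", "positive"),
    ]
    prompt = build_few_shot_prompt(examples, "I could not put it down.")
    print(prompt)
    ```

    The same prompt string can be sent to a causal LM, or, per the BERTs paper above, to a masked LM adapted for generation; only the demonstrations, not the weights, define the task.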

  • Cloud Blog: When to use supervised fine-tuning for Gemini

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/supervised-fine-tuning-for-gemini-llm/
    Summary: Have you ever wished you could get a foundation model to respond in a particular style, exhibit domain-specific expertise, or excel at a specific task? While foundation models like Gemini demonstrate remarkable capabilities out-of-the-box, there can be a gap…