Tag: Inference

  • Simon Willison’s Weblog: Quoting François Chollet

    Source URL: https://simonwillison.net/2024/Dec/20/francois-chollet/#atom-everything
    Summary: OpenAI’s new o3 system – trained on the ARC-AGI-1 Public Training set – has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%. This is a surprising…

  • Cloud Blog: The Year in Google Cloud – 2024

    Source URL: https://cloud.google.com/blog/products/gcp/top-google-cloud-blogs/
    Summary: If you’re a regular reader of this blog, you know that 2024 was a busy year for Google Cloud. From AI to Zero Trust, and everything in between, here’s a chronological recap of our top blogs of 2024, according…

  • Simon Willison’s Weblog: December in LLMs has been a lot

    Source URL: https://simonwillison.net/2024/Dec/20/december-in-llms-has-been-a-lot/#atom-everything
    Summary: I had big plans for December: for one thing, I was hoping to get to an actual RC of Datasette 1.0, in preparation for a full release in January. Instead, I’ve found myself distracted by a constant barrage…

  • Simon Willison’s Weblog: Gemini 2.0 Flash "Thinking mode"

    Source URL: https://simonwillison.net/2024/Dec/19/gemini-thinking-mode/#atom-everything
    Summary: Those new model releases just keep on flowing. Today it’s Google’s snappily named gemini-2.0-flash-thinking-exp, their first entrant into the o1-style inference scaling class of models. I posted about a great essay about the significance of these just this morning. From…
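
    A minimal sketch of trying this model through the google-generativeai Python SDK is shown below. The model id is taken from the post, but its availability through this SDK and the placeholder prompt are assumptions rather than something the post demonstrates.

    # Minimal sketch (assumption: gemini-2.0-flash-thinking-exp is reachable
    # through the google-generativeai SDK with a standard API key).
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

    # The "thinking" variant emits reasoning tokens before answering; the
    # prompt here is just a placeholder.
    response = model.generate_content("How many Rs are in the word strawberry?")
    print(response.text)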

  • Simon Willison’s Weblog: Is AI progress slowing down?

    Source URL: https://simonwillison.net/2024/Dec/19/is-ai-progress-slowing-down/#atom-everything
    Summary: Is AI progress slowing down? This piece by Arvind Narayanan and Sayash Kapoor is the single most insightful essay about AI and LLMs I’ve seen in a long time. It’s long and worth reading every inch of it – it…

  • Hacker News: A Replacement for Bert

    Source URL: https://huggingface.co/blog/modernbert
    Summary: The text discusses the introduction of ModernBERT, an advanced encoder-only model that surpasses older models like BERT in both performance and efficiency. Boasting an increased context length of 8192 tokens, faster processing…
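
    Since ModernBERT is an encoder-only model, the natural quick test is masked-token prediction. Below is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the checkpoint id answerdotai/ModernBERT-base is an assumption based on the release announcement, and a transformers version recent enough to include the architecture is assumed.

    # Minimal fill-mask sketch for ModernBERT. Assumptions: the checkpoint id
    # below matches the released model, and the installed transformers version
    # supports the ModernBERT architecture.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="answerdotai/ModernBERT-base")

    # Print the top predicted tokens for the masked position.
    for candidate in fill_mask("The capital of France is [MASK]."):
        print(f"{candidate['token_str']!r}: {candidate['score']:.3f}")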

  • Hacker News: Apple collaborates with Nvidia to research faster LLM performance

    Source URL: https://9to5mac.com/2024/12/18/apple-collaborates-with-nvidia-to-research-faster-llm-performance/
    Summary: Apple has announced a collaboration with NVIDIA to enhance the performance of large language models (LLMs) through a new technique called Recurrent Drafter (ReDrafter). This approach significantly accelerates text generation,…
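
    ReDrafter is described as a drafter-based speedup for text generation. As a generic illustration of the draft-and-verify idea behind such techniques (not Apple's or NVIDIA's actual implementation), the toy loop below has a cheap drafter propose a few tokens and the target model keep the longest prefix it agrees with; draft_propose and target_next_token are hypothetical stand-ins.

    # Toy illustration of draft-and-verify speculative decoding. This is NOT
    # the ReDrafter algorithm itself; real systems verify all drafted tokens
    # in a single batched forward pass of the target model, which is where the
    # speedup comes from. draft_propose / target_next_token are hypothetical.
    from typing import Callable, List

    def speculative_generate(
        prompt: List[int],
        draft_propose: Callable[[List[int], int], List[int]],
        target_next_token: Callable[[List[int]], int],
        max_new_tokens: int = 64,
        draft_len: int = 4,
    ) -> List[int]:
        tokens = list(prompt)
        while len(tokens) - len(prompt) < max_new_tokens:
            # 1. A small, cheap draft model proposes a short continuation.
            draft = draft_propose(tokens, draft_len)
            # 2. The target model checks the draft and keeps the longest
            #    prefix it agrees with (written token by token for clarity).
            for proposed in draft:
                expected = target_next_token(tokens)
                if proposed != expected:
                    tokens.append(expected)  # fall back to the target's token
                    break
                tokens.append(proposed)
        return tokens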

  • Hacker News: On-silicon real-time AI compute governance from Nvidia, Intel, EQTY Labs

    Source URL: https://www.eqtylab.io/blog/verifiable-compute-press-release
    Summary: The text discusses the launch of the Verifiable Compute AI framework by EQTY Lab in collaboration with Intel and NVIDIA, representing a notable advancement in AI security and governance.…

  • The Register: Boffins trick AI model into giving up its secrets

    Source URL: https://www.theregister.com/2024/12/18/ai_model_reveal_itself/
    Summary: All it took to make a Google Edge TPU give up model hyperparameters was specific hardware, a novel attack technique … and several days. Computer scientists from North Carolina State University have devised a way to copy…

  • Hacker News: New LLM optimization technique slashes memory costs up to 75%

    Source URL: https://venturebeat.com/ai/new-llm-optimization-technique-slashes-memory-costs-up-to-75/
    Summary: Researchers at Sakana AI have developed a novel technique called “universal transformer memory” that enhances the efficiency of large language models (LLMs) by optimizing their memory usage. This innovation…
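
    The article frames this as reducing an LLM's memory footprint during inference, which in practice usually means shrinking the KV cache. The sketch below is a generic illustration of that idea only (keep the cache entries that received the most attention); it is not Sakana AI's "universal transformer memory", which reportedly learns the keep/evict decision rather than applying a fixed heuristic.

    # Generic KV-cache pruning sketch: keep only the most-attended positions.
    # This is an illustrative heuristic, NOT Sakana AI's learned method.
    import numpy as np

    def prune_kv_cache(keys: np.ndarray, values: np.ndarray,
                       attn_weights: np.ndarray, keep_ratio: float = 0.25):
        """keys/values: (seq_len, d); attn_weights: (num_queries, seq_len)."""
        # Score each cached position by the total attention it has received.
        scores = attn_weights.sum(axis=0)
        keep = max(1, int(len(scores) * keep_ratio))
        kept = np.sort(np.argsort(scores)[-keep:])  # preserve sequence order
        return keys[kept], values[kept], kept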