Tag: Inference
-
Simon Willison’s Weblog: Quoting François Chollet
Source URL: https://simonwillison.net/2024/Dec/20/francois-chollet/#atom-everything
Source: Simon Willison’s Weblog
Title: Quoting François Chollet
Feedly Summary: OpenAI’s new o3 system – trained on the ARC-AGI-1 Public Training set – has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%. This is a surprising…
-
Cloud Blog: The Year in Google Cloud – 2024
Source URL: https://cloud.google.com/blog/products/gcp/top-google-cloud-blogs/
Source: Cloud Blog
Title: The Year in Google Cloud – 2024
Feedly Summary: If you’re a regular reader of this blog, you know that 2024 was a busy year for Google Cloud. From AI to Zero Trust, and everything in between, here’s a chronological recap of our top blogs of 2024, according…
-
Simon Willison’s Weblog: Gemini 2.0 Flash "Thinking mode"
Source URL: https://simonwillison.net/2024/Dec/19/gemini-thinking-mode/#atom-everything
Source: Simon Willison’s Weblog
Title: Gemini 2.0 Flash "Thinking mode"
Feedly Summary: Those new model releases just keep on flowing. Today it’s Google’s snappily named gemini-2.0-flash-thinking-exp, their first entrant into the o1-style inference scaling class of models. I posted about a great essay about the significance of these just this morning. From…
-
Hacker News: Apple collaborates with Nvidia to research faster LLM performance
Source URL: https://9to5mac.com/2024/12/18/apple-collaborates-with-nvidia-to-research-faster-llm-performance/
Source: Hacker News
Title: Apple collaborates with Nvidia to research faster LLM performance
AI Summary and Description: Yes
Summary: Apple has announced a collaboration with NVIDIA to enhance the performance of large language models (LLMs) through a new technique called Recurrent Drafter (ReDrafter). This approach significantly accelerates text generation,…
-
Hacker News: On-silicon real-time AI compute governance from Nvidia, Intel, EQTY Labs
Source URL: https://www.eqtylab.io/blog/verifiable-compute-press-release
Source: Hacker News
Title: On-silicon real-time AI compute governance from Nvidia, Intel, EQTY Labs
AI Summary and Description: Yes
Summary: The text discusses the launch of the Verifiable Compute AI framework by EQTY Lab in collaboration with Intel and NVIDIA, representing a notable advancement in AI security and governance.…
-
The Register: Boffins trick AI model into giving up its secrets
Source URL: https://www.theregister.com/2024/12/18/ai_model_reveal_itself/
Source: The Register
Title: Boffins trick AI model into giving up its secrets
Feedly Summary: All it took to make a Google Edge TPU give up model hyperparameters was specific hardware, a novel attack technique … and several days. Computer scientists from North Carolina State University have devised a way to copy…
-
Hacker News: New LLM optimization technique slashes memory costs up to 75%
Source URL: https://venturebeat.com/ai/new-llm-optimization-technique-slashes-memory-costs-up-to-75/
Source: Hacker News
Title: New LLM optimization technique slashes memory costs up to 75%
AI Summary and Description: Yes
Summary: Researchers at Sakana AI have developed a novel technique called “universal transformer memory” that enhances the efficiency of large language models (LLMs) by optimizing their memory usage. This innovation…