Tag: transformer

  • Simon Willison’s Weblog: AbsenceBench: Language Models Can’t Tell What’s Missing

    Source URL: https://simonwillison.net/2025/Jun/20/absencebench/#atom-everything
    Feedly Summary: Here’s another interesting result to file under the “jagged frontier” of LLMs, where their strengths and weaknesses are often unintuitive. Long context models have been getting increasingly good at passing “Needle…
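    The benchmark inverts the needle-in-a-haystack test: instead of finding inserted text, the model must spot deleted text. Below is a minimal Python sketch of that task shape; the benchmark’s real data, deletion strategy, and prompt wording will differ.

      # Toy illustration of an AbsenceBench-style probe (not the benchmark's
      # actual format): delete lines from a document, then ask a model to
      # name what is missing given both versions.
      import random

      def make_absence_task(lines: list[str], n_remove: int = 2, seed: int = 0):
          rng = random.Random(seed)
          removed = set(rng.sample(range(len(lines)), n_remove))
          modified = [line for i, line in enumerate(lines) if i not in removed]
          prompt = (
              "Original document:\n" + "\n".join(lines) + "\n\n"
              "Modified document:\n" + "\n".join(modified) + "\n\n"
              "List every line present in the original but absent from the copy."
          )
          answer = [lines[i] for i in sorted(removed)]
          return prompt, answer

      prompt, answer = make_absence_task([f"line {i}" for i in range(10)])
      print(prompt)
      print("Expected:", answer)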

  • Cloud Blog: New G4 VMs with NVIDIA RTX PRO 6000 Blackwell power AI, graphics, gaming and beyond

    Source URL: https://cloud.google.com/blog/products/compute/introducing-g4-vm-with-nvidia-rtx-pro-6000/
    Feedly Summary: Today, we’re excited to announce the preview of our new G4 VMs based on NVIDIA RTX PRO 6000 Blackwell Server edition — the first cloud provider to do so. This follows…

  • Simon Willison’s Weblog: Qwen3 Embedding

    Source URL: https://simonwillison.net/2025/Jun/8/qwen3-embedding/#atom-everything
    Feedly Summary: New family of embedding models from Qwen, in three sizes: 0.6B, 4B, 8B – and two categories: Text Embedding and Text Reranking. The full collection can be browsed on Hugging Face. The smallest available model is the 0.6B Q8 one, which…
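    As a rough sketch of how one of these models might be used for text embedding via the sentence-transformers library — the model ID and the similarity call are assumptions on my part, not taken from the post:

      # Assumes sentence-transformers >= 3.0 and the Hugging Face model ID
      # "Qwen/Qwen3-Embedding-0.6B"; adjust if the hosted name differs.
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
      docs = [
          "Transformers process all tokens of a sequence in parallel.",
          "Diffusion models refine a noisy sample over many steps.",
      ]
      embeddings = model.encode(docs)                 # one vector per document
      scores = model.similarity(embeddings, embeddings)  # pairwise similarity
      print(scores)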

  • Transformer Circuits Thread: Circuits Updates

    Source URL: https://transformer-circuits.pub/2025/april-update/index.html
    Feedly Summary: The text discusses emerging research and methodologies in the field of machine learning interpretability, specifically focusing on large language models (LLMs). It examines the mechanisms by which these models respond to harmful requests (like making bomb instructions)…

  • CSA: Exploiting Trusted AI: GPTs in Cyberattacks

    Source URL: https://abnormal.ai/blog/how-attackers-exploit-trusted-ai-tools
    Feedly Summary: The text discusses the emergence of malicious AI, particularly focusing on how generative pre-trained transformers (GPTs) are being exploited by cybercriminals. It highlights the potential risks posed by these technologies, including sophisticated fraud tactics and…

  • Simon Willison’s Weblog: Gemini Diffusion

    Source URL: https://simonwillison.net/2025/May/21/gemini-diffusion/
    Feedly Summary: Another of the announcements from Google I/O yesterday was Gemini Diffusion, Google’s first LLM to use diffusion (similar to image models like Imagen and Stable Diffusion) in place of transformers. Google describe it like this: Traditional autoregressive language models generate text…
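    A toy sketch of the structural difference the post is pointing at, with a stub standing in for a real model — purely illustrative, not how Gemini Diffusion works internally:

      # Autoregressive: emit one token at a time, left to right.
      # Diffusion-style: start from an all-noise sequence and refine it
      # over a fixed number of steps.
      import random

      class ToyModel:
          """Stub in place of a real language model."""
          vocab = ["the", "cat", "sat", "on", "mat"]

          def next_token(self, tokens):
              return random.choice(self.vocab)

          def denoise(self, tokens, step):
              # Fill one masked slot per step; a real text-diffusion model
              # refines every position simultaneously at each step.
              masked = [i for i, t in enumerate(tokens) if t == "[MASK]"]
              if masked:
                  tokens[random.choice(masked)] = random.choice(self.vocab)
              return tokens

      def autoregressive_generate(model, prompt, n_tokens):
          tokens = list(prompt)
          for _ in range(n_tokens):      # strictly sequential
              tokens.append(model.next_token(tokens))
          return tokens

      def diffusion_generate(model, length, n_steps):
          tokens = ["[MASK]"] * length   # begin from pure "noise"
          for step in range(n_steps):    # iteratively refine the whole sequence
              tokens = model.denoise(tokens, step)
          return tokens

      m = ToyModel()
      print(autoregressive_generate(m, ["the"], 4))
      print(diffusion_generate(m, 5, 5))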