Tag: transformer architecture

  • Simon Willison’s Weblog: Quoting François Chollet

    Source URL: https://simonwillison.net/2024/Oct/16/francois-chollet/
    Summary: A common misconception about Transformers is to believe that they’re a sequence-processing architecture. They’re not. They’re a set-processing architecture. Transformers are 100% order-agnostic (which was the big innovation compared to RNNs, back in late 2016 — you compute the full matrix of…
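
    The claim is easy to check directly: self-attention with no positional encodings is permutation-equivariant, so shuffling the input tokens merely shuffles the output rows. A minimal numpy sketch (all names hypothetical):

      import numpy as np

      def self_attention(X, Wq, Wk, Wv):
          # Plain (unmasked) self-attention: the full matrix of pairwise
          # token interactions, computed with no notion of token order.
          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          scores = Q @ K.T / np.sqrt(K.shape[-1])
          w = np.exp(scores - scores.max(axis=-1, keepdims=True))
          w /= w.sum(axis=-1, keepdims=True)
          return w @ V

      rng = np.random.default_rng(0)
      X = rng.normal(size=(5, 8))                      # 5 tokens, dim 8
      Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
      perm = rng.permutation(5)

      # Permuting the input permutes the output rows and changes nothing else.
      assert np.allclose(self_attention(X, Wq, Wk, Wv)[perm],
                         self_attention(X[perm], Wq, Wk, Wv))

    Order only enters a Transformer through the positional encodings added to the token embeddings, which is the point the quote is driving at.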

  • The Register: Nobel Chemistry Prize goes to AlphaFold, Rosetta creators – another win for AI

    Source URL: https://www.theregister.com/2024/10/09/alphafold_rosetta_nobel_chemistry_prize/
    Summary: Let’s just hope they don’t give the literature award to a bot, too. This year’s Nobel Prizes are shaping up to be a triumph for AI. After awarding the physics prize to early…

  • Slashdot: Researchers Claim New Technique Slashes AI Energy Use By 95%

    Source URL: https://science.slashdot.org/story/24/10/08/2035247/researchers-claim-new-technique-slashes-ai-energy-use-by-95
    Summary: Researchers at BitEnergy AI, Inc. have introduced Linear-Complexity Multiplication (L-Mul), a novel technique that reduces AI model power consumption by up to 95% by replacing floating-point multiplications with integer additions. This…
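
    The exact L-Mul recipe is in the paper; the sketch below only illustrates the much older idea it builds on (Mitchell-style logarithmic multiplication), in which a single integer addition on the raw bit patterns of two positive float32 values approximates their product:

      import struct

      BIAS = 0x3F800000  # float32 bit pattern of 1.0; cancels the doubled exponent bias

      def f2i(x: float) -> int:
          # Raw float32 bit pattern as an unsigned 32-bit int.
          return struct.unpack("<I", struct.pack("<f", x))[0]

      def i2f(n: int) -> float:
          # Unsigned 32-bit int back to the float32 it encodes.
          return struct.unpack("<f", struct.pack("<I", n & 0xFFFFFFFF))[0]

      def approx_mul(a: float, b: float) -> float:
          # For positive normal floats, adding the bit patterns adds the
          # exponents exactly and the mantissas approximately (an add in
          # the log domain), so one integer addition replaces a multiply.
          return i2f(f2i(a) + f2i(b) - BIAS)

      print(approx_mul(3.0, 5.0))  # 14.0 vs the exact 15.0 (worst-case error ~11%)

    The energy argument is hardware-level: an integer addition costs far less in silicon than a floating-point multiplication, and transformer inference is dominated by multiplies.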

  • Hacker News: Trap – Transformers in APL

    Source URL: https://github.com/BobMcDear/trap
    Summary: The text discusses an implementation of autoregressive transformers in APL, specifically GPT-2, highlighting APL’s distinctive approach to performance and simplicity in deep learning. It offers insights that are particularly relevant to…
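
    The repository itself is APL; as a readability aid, here is a rough numpy sketch (hypothetical names) of the mechanism that makes a GPT-2-style transformer autoregressive, namely causally masked self-attention:

      import numpy as np

      def causal_self_attention(X, Wq, Wk, Wv):
          # Decoder-style attention as in GPT-2: a lower-triangular mask
          # stops each position from attending to later tokens, so row t
          # of the output depends only on tokens 0..t.
          T = X.shape[0]
          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          scores = Q @ K.T / np.sqrt(K.shape[-1])
          scores = np.where(np.tril(np.ones((T, T), dtype=bool)), scores, -np.inf)
          w = np.exp(scores - scores.max(axis=-1, keepdims=True))
          w /= w.sum(axis=-1, keepdims=True)
          return w @ V

    APL’s array primitives make each of these steps a near one-liner, which is presumably the simplicity the summary refers to.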

  • Hacker News: Transfusion: Predict the Next Token and Diffuse Images with One Multimodal Model

    Source URL: https://www.arxiv.org/abs/2408.11039
    Summary: The text introduces “Transfusion,” a novel multi-modal model that integrates language modeling and image diffusion within a unified framework. It emphasizes superior scaling properties and efficiency in…
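
    In training terms, a unified model of this kind plausibly means a single transformer trunk whose hidden states feed two heads, with a language-modeling loss at text positions and a diffusion (noise-prediction) loss at image-patch positions. A hypothetical PyTorch sketch, with equal loss weights for simplicity:

      import torch
      import torch.nn.functional as F

      B, T, D, V = 2, 16, 64, 1000            # batch, sequence, hidden dim, vocab
      hidden = torch.randn(B, T, D)           # output of the shared transformer trunk
      text_mask = torch.zeros(B, T, dtype=torch.bool)
      text_mask[:, :8] = True                 # first half of each sequence is text
      image_mask = ~text_mask                 # second half is image patches

      lm_head = torch.nn.Linear(D, V)         # next-token prediction head
      noise_head = torch.nn.Linear(D, D)      # diffusion noise-prediction head

      text_targets = torch.randint(V, (B, T))
      true_noise = torch.randn(B, T, D)       # noise that was added to the patches

      # Cross-entropy on text positions, mean-squared error on image positions;
      # the combined objective trains both capabilities in one model.
      lm_loss = F.cross_entropy(lm_head(hidden)[text_mask], text_targets[text_mask])
      diff_loss = F.mse_loss(noise_head(hidden)[image_mask], true_noise[image_mask])
      loss = lm_loss + diff_loss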