Tag: large language model

  • Simon Willison’s Weblog: New Pleias 1.0 LLMs trained exclusively on openly licensed data

    Source URL: https://simonwillison.net/2024/Dec/5/pleias-llms/#atom-everything
    Source: Simon Willison’s Weblog
    Summary: I wrote about the Common Corpus public domain dataset back in March. Now Pleias, the team behind Common Corpus, have released the first family of…

  • Simon Willison’s Weblog: Claude 3.5 Haiku price drops by 20%

    Source URL: https://simonwillison.net/2024/Dec/5/claude-35-haiku-price-drops-by-20/#atom-everything
    Source: Simon Willison’s Weblog
    Summary: Buried in this otherwise quite dry post about Anthropic’s ongoing partnership with AWS: To make this model even more accessible for a wide range of use cases, we’re lowering the price…

  • The Register: Wish there was a benchmark for ML safety? Allow us to AILuminate you…

    Source URL: https://www.theregister.com/2024/12/05/mlcommons_ai_safety_benchmark/
    Source: The Register
    Summary: Very much a 1.0 – but it’s a solid start. MLCommons, an industry-led AI consortium, on Wednesday introduced AILuminate, a benchmark for assessing the safety of large language models in products.…

  • Hacker News: Bringing K/V context quantisation to Ollama

    Source URL: https://smcleod.net/2024/12/bringing-k/v-context-quantisation-to-ollama/
    Source: Hacker News
    Summary: The text discusses K/V context cache quantisation in the Ollama platform, a significant enhancement that allows for the use of larger AI models with reduced VRAM requirements. This innovation is valuable for professionals…
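    A minimal sketch of what this looks like in practice, assuming the OLLAMA_FLASH_ATTENTION and OLLAMA_KV_CACHE_TYPE environment variables described in the post and Ollama’s standard /api/generate HTTP endpoint; the model name and prompt are illustrative:

      # Query an Ollama server started with K/V cache quantisation enabled,
      # e.g. (per the linked post; verify the flag names for your version):
      #   OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint

      payload = {
          "model": "llama3.1",  # illustrative model name
          "prompt": "In one sentence, why does quantising the K/V cache save VRAM?",
          "stream": False,
      }

      req = urllib.request.Request(
          OLLAMA_URL,
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])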

  • Simon Willison’s Weblog: Quoting Steve Yegge

    Source URL: https://simonwillison.net/2024/Dec/4/steve-yegge/
    Source: Simon Willison’s Weblog
    Summary: In the past, these decisions were so consequential, they were basically one-way doors, in Amazon language. That’s why we call them ‘architectural decisions!’ You basically have to live with your choice of database, authentication, JavaScript UI framework, almost forever. But that’s changing…

  • Hacker News: AI hallucinations: Why LLMs make things up (and how to fix it)

    Source URL: https://www.kapa.ai/blog/ai-hallucination
    Source: Hacker News
    Summary: The text addresses a critical issue in AI, particularly with Large Language Models (LLMs), known as “AI hallucination.” This phenomenon presents significant challenges in maintaining the reliability…
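    The excerpt doesn’t show the post’s specific fixes, but the standard mitigation in this space is to ground the model in retrieved source text and tell it to abstain when that text doesn’t cover the question. A hedged sketch, with illustrative function names and prompt wording:

      # Context grounding to reduce hallucinations: the model only sees
      # retrieved passages and is told to refuse when they don't answer the
      # question. Retrieval and the model call are left as stubs.
      from typing import Callable, Sequence

      def grounded_prompt(question: str, passages: Sequence[str]) -> str:
          """Build a prompt that restricts the model to the retrieved passages."""
          context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
          return (
              "Answer the question using only the numbered passages below. "
              "If they do not contain the answer, reply exactly: I don't know.\n\n"
              f"{context}\n\nQuestion: {question}\nAnswer:"
          )

      def answer(question: str,
                 retrieve: Callable[[str], Sequence[str]],
                 llm: Callable[[str], str]) -> str:
          passages = retrieve(question)  # e.g. a vector-store lookup
          return llm(grounded_prompt(question, passages))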

  • Wired: OpenAI Is Working With Anduril to Supply the US Military With AI

    Source URL: https://www.wired.com/story/openai-anduril-defense/
    Source: Wired
    Summary: The ChatGPT maker is the latest AI giant to reveal it’s working with the defense industry, following similar announcements by Meta and Anthropic. OpenAI’s partnership with defense startup Anduril…

  • Hacker News: Test Driven Development (TDD) for your LLMs? Yes please, more of that please

    Source URL: https://blog.helix.ml/p/building-reliable-genai-applications
    Source: Hacker News
    Summary: The text discusses the challenges and solutions associated with testing LLM-based applications in software development, emphasizing the novel approach of utilizing an AI model for automated…
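    The post’s own tooling isn’t shown in the excerpt; as a rough illustration of the idea, a pytest-style test can pin down deterministic properties of an LLM-backed function (length, required terms) while a judge model scores the fuzzier ones. The summarise function here is a hypothetical stand-in for the real model call:

      # Test-driven development for an LLM feature, sketched in pytest style.
      import pytest

      def summarise(text: str) -> str:
          """Hypothetical LLM-backed function under test."""
          raise NotImplementedError  # replace with a real model call

      @pytest.mark.parametrize("text", [
          "Ollama now supports K/V context cache quantisation, cutting VRAM use.",
      ])
      def test_summary_is_short_and_on_topic(text: str) -> None:
          summary = summarise(text)
          assert len(summary.split()) <= 30          # hard, deterministic check
          assert "quantisation" in summary.lower()   # crude topical check; an
                                                     # LLM judge could score this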

  • Wired: AI-Powered Robots Can Be Tricked Into Acts of Violence

    Source URL: https://www.wired.com/story/researchers-llm-ai-robot-violence/
    Source: Wired
    Summary: Researchers hacked several robots infused with large language models, getting them to behave dangerously—and pointing to a bigger problem ahead. The text delves into the vulnerabilities associated with large language models (LLMs)…