Tag: llm
-
Simon Willison’s Weblog: Quoting Ethan Mollick
Source URL: https://simonwillison.net/2024/Dec/7/ethan-mollick/#atom-everything Feedly Summary: A test of how seriously your firm is taking AI: when o1 (& the new Gemini) came out this week, were there assigned folks who immediately ran the model through internal, validated, firm-specific benchmarks to see how useful it was? Did you…
-
Simon Willison’s Weblog: New Gemini model: gemini-exp-1206
Source URL: https://simonwillison.net/2024/Dec/6/gemini-exp-1206/#atom-everything Feedly Summary: Google’s Jeff Dean: Today’s the one year anniversary of our first Gemini model releases! And it’s never looked better. Check out our newest release, Gemini-exp-1206, in Google AI Studio and the Gemini API! I upgraded my llm-gemini…
-
Embrace The Red: Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
Source URL: https://embracethered.com/blog/posts/2024/terminal-dillmas-prompt-injection-ansi-sequences/ Feedly Summary: Last week Leon Derczynski described how LLMs can output ANSI escape codes. These codes, also known as control characters, are interpreted by terminal emulators and modify behavior. This discovery resonates with areas I had…
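Apps that print model output straight to a terminal can defend against this by stripping escape sequences first. A minimal sketch, assuming untrusted LLM text is filtered before display; the regex covers a common subset of CSI and OSC sequences, not all of ECMA-48:

```python
import re

# Matches CSI sequences (ESC [ params intermediates final) and
# OSC sequences (ESC ] ... terminated by BEL or ESC \).
ANSI_ESCAPE = re.compile(
    r"\x1b\[[0-9;?]*[ -/]*[@-~]"      # CSI, e.g. colors, cursor moves
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"  # OSC, e.g. title changes
)

def strip_ansi(text: str) -> str:
    """Remove terminal escape sequences from untrusted LLM output."""
    return ANSI_ESCAPE.sub("", text)

print(strip_ansi("\x1b[31mhello\x1b[0m"))  # → hello
```

Whitelisting printable characters is an even stricter alternative when the output never legitimately needs escape codes.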
-
Hacker News: Show HN: Prompt Engine – Auto pick LLMs based on your prompts
Source URL: https://jigsawstack.com/blog/jigsawstack-mixture-of-agents-moa-outperform-any-single-llm-and-reduce-cost-with-prompt-engine Feedly Summary: The JigsawStack Mixture-of-Agents (MoA) offers a novel framework for leveraging multiple Large Language Models (LLMs) in applications, effectively addressing challenges in prompt management, cost…
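The "auto pick" idea can be sketched as a prompt classifier that dispatches each request to a suitable model. The rules and model names below are illustrative assumptions, not JigsawStack's actual Prompt Engine logic:

```python
def route(prompt: str) -> str:
    """Pick a model name based on simple features of the prompt."""
    if "```" in prompt or "def " in prompt:
        # Code-heavy prompts go to a code-specialized model.
        return "code-model"
    if len(prompt) > 2000:
        # Very long prompts need a long-context model.
        return "long-context-model"
    # Everything else goes to a cheap, fast general model.
    return "fast-general-model"

print(route("def add(a, b): return a + b"))  # → code-model
```

A production router would typically use a small classifier model or learned embeddings rather than hand-written rules, but the dispatch shape is the same.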
-
Simon Willison’s Weblog: Roaming RAG – make the model find the answers
Source URL: https://simonwillison.net/2024/Dec/6/roaming-rag/#atom-everything Feedly Summary: Neat new RAG technique (with a snappy name) from John Berryman: The big idea of Roaming RAG is to craft a simple LLM application so that the…
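A minimal sketch of the "make the model find the answers" idea: show the model a collapsed outline of a document and let it request that individual sections be expanded. The function names and rendering here are illustrative assumptions, not John Berryman's implementation:

```python
def collapse(sections: dict[str, str]) -> str:
    """Render only the headings, hiding every section body."""
    return "\n".join(f"## {title} (collapsed)" for title in sections)

def expand(sections: dict[str, str], title: str) -> str:
    """Reveal one section's body when the model asks for it."""
    return f"## {title}\n{sections[title]}"

sections = {
    "Install": "pip install example",
    "Usage": "run example --help",
}
print(collapse(sections))          # the model sees just the outline
print(expand(sections, "Usage"))   # ...then drills into one section
```

The appeal over classic RAG is that no embedding index is needed: the document's own structure is the retrieval mechanism.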
-
Simon Willison’s Weblog: datasette-enrichments-llm
Source URL: https://simonwillison.net/2024/Dec/5/datasette-enrichments-llm/#atom-everything Feedly Summary: Today’s new alpha release is datasette-enrichments-llm, a plugin for Datasette 1.0a+ that provides an enrichment that lets you run prompts against data from one or more columns and store the result in another column. So far it’s a light re-implementation of the existing…
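The general pattern the plugin implements can be sketched in plain SQLite: fill a prompt template from each row's columns, call a model, and write the answer into a new column. This is a hypothetical sketch, not the plugin's actual API; `call_llm` is a stand-in for a real model call, and column names are assumed trusted for brevity:

```python
import sqlite3

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return prompt.upper()

def enrich(conn, table, template, source_cols, dest_col):
    """Run a prompt per row over source_cols, store results in dest_col."""
    conn.execute(f'ALTER TABLE "{table}" ADD COLUMN "{dest_col}" TEXT')
    rows = conn.execute(
        f'SELECT rowid, {", ".join(source_cols)} FROM "{table}"'
    ).fetchall()
    for rowid, *values in rows:
        prompt = template.format(**dict(zip(source_cols, values)))
        conn.execute(
            f'UPDATE "{table}" SET "{dest_col}" = ? WHERE rowid = ?',
            (call_llm(prompt), rowid),
        )
    conn.commit()
```

Usage would look like `enrich(conn, "docs", "Summarize: {title}", ["title"], "summary")`, after which every row carries a model-generated `summary` value.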