Tag: llm

  • Docker: You are Doing MCP Wrong: 3 Big Misconceptions

    Source URL: https://www.docker.com/blog/mcp-misconceptions-tools-agents-not-api/ Source: Docker Title: You are Doing MCP Wrong: 3 Big Misconceptions Feedly Summary: MCP is not an API. Tools are not agents. MCP is more than tools. Here’s what this means in practice. Most developers misread the Model Context Protocol because they map it onto familiar API mental models. That mistake breaks…

  • Slashdot: FreeBSD Project Isn’t Ready To Let AI Commit Code Just Yet

    Source URL: https://developers.slashdot.org/story/25/09/03/1649201/freebsd-project-isnt-ready-to-let-ai-commit-code-just-yet?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: FreeBSD Project Isn’t Ready To Let AI Commit Code Just Yet Feedly Summary: AI Summary and Description: Yes Summary: The FreeBSD Project’s recent status report reveals a cautious approach towards the use of code generated by LLMs (Large Language Models) due to licensing concerns. They are working on establishing…

  • Schneier on Security: Indirect Prompt Injection Attacks Against LLM Assistants

    Source URL: https://www.schneier.com/blog/archives/2025/09/indirect-prompt-injection-attacks-against-llm-assistants.html Source: Schneier on Security Title: Indirect Prompt Injection Attacks Against LLM Assistants Feedly Summary: Really good research on practical attacks against LLM agents. “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous” Abstract: The growing integration of LLMs into applications has introduced new security risks,…

  • The Register: In the rush to adopt hot new tech, security is often forgotten. AI is no exception

    Source URL: https://www.theregister.com/2025/09/02/exposed_ollama_servers_insecure_research/ Source: The Register Title: In the rush to adopt hot new tech, security is often forgotten. AI is no exception Feedly Summary: Cisco finds hundreds of Ollama servers open to unauthorized access, creating various nasty risks Cisco’s Talos security research team has found over 1,100 Ollama servers exposed to the public internet,…

  • Simon Willison’s Weblog: Introducing gpt-realtime

    Source URL: https://simonwillison.net/2025/Sep/1/introducing-gpt-realtime/#atom-everything Source: Simon Willison’s Weblog Title: Introducing gpt-realtime Feedly Summary: Introducing gpt-realtime Released a few days ago (August 28th), gpt-realtime is OpenAI’s new “most advanced speech-to-speech model". It looks like this is a replacement for the older gpt-4o-realtime-preview model that was released last October. This is a slightly confusing release. The previous realtime…

  • Simon Willison’s Weblog: Cloudflare Radar: AI Insights

    Source URL: https://simonwillison.net/2025/Sep/1/cloudflare-radar-ai-insights/ Source: Simon Willison’s Weblog Title: Cloudflare Radar: AI Insights Feedly Summary: Cloudflare Radar: AI Insights Cloudflare launched this dashboard back in February, incorporating traffic analysis from Cloudflare’s network along with insights from their popular 1.1.1.1 DNS service. I found this chart particularly interesting, showing which documented AI crawlers are most active collecting…

  • Cisco Security Blog: Detecting Exposed LLM Servers: A Shodan Case Study on Ollama

    Source URL: https://feedpress.me/link/23535/17131153/detecting-exposed-llm-servers-shodan-case-study-on-ollama Source: Cisco Security Blog Title: Detecting Exposed LLM Servers: A Shodan Case Study on Ollama Feedly Summary: We uncovered 1,100+ exposed Ollama LLM servers—20% with open models—revealing critical security gaps and the need for better LLM threat monitoring. AI Summary and Description: Yes Summary: The text highlights the discovery of over 1,100…

  • The Register: LegalPwn: Tricking LLMs by burying badness in lawyerly fine print

    Source URL: https://www.theregister.com/2025/09/01/legalpwn_ai_jailbreak/ Source: The Register Title: LegalPwn: Tricking LLMs by burying badness in lawyerly fine print Feedly Summary: Trust and believe – AI models trained to see ‘legal’ doc as super legit Researchers at security firm Pangea have discovered yet another way to trivially trick large language models (LLMs) into ignoring their guardrails. Stick…

  • Simon Willison’s Weblog: Claude Opus 4.1 and Opus 4 degraded quality

    Source URL: https://simonwillison.net/2025/Aug/30/claude-degraded-quality/#atom-everything Source: Simon Willison’s Weblog Title: Claude Opus 4.1 and Opus 4 degraded quality Feedly Summary: Claude Opus 4.1 and Opus 4 degraded quality Notable because often when people complain of degraded model quality it turns out to be unfounded – Anthropic in the past have emphasized that they don’t change the model…