Tag: release notes

  • Simon Willison’s Weblog: llm-openrouter 0.3

    Source URL: https://simonwillison.net/2024/Dec/8/llm-openrouter-03/#atom-everything
    Source: Simon Willison’s Weblog
    Title: llm-openrouter 0.3
    Feedly Summary: llm-openrouter 0.3 New release of my llm-openrouter plugin, which allows LLM to access models hosted by OpenRouter. Quoting the release notes: Enable image attachments for models that support images. Thanks, Adam Montgomery. #12 Provide async model access. #15 Fix documentation to list correct…
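
    A rough sketch of how the two new features (image attachments and async model access) might be combined, assuming the llm-openrouter plugin is installed and an OpenRouter key has been configured; the model ID and the image path are placeholders, not taken from the release notes:

    ```python
    import asyncio
    import llm

    async def main():
        # Assumes `llm install llm-openrouter` and `llm keys set openrouter` have been run.
        # The model ID below is illustrative -- substitute any vision-capable OpenRouter model.
        model = llm.get_async_model("openrouter/google/gemini-flash-1.5")
        response = model.prompt(
            "Describe this image in one sentence.",
            attachments=[llm.Attachment(path="photo.jpg")],  # placeholder local image
        )
        print(await response.text())

    asyncio.run(main())
    ```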

  • Simon Willison’s Weblog: Meta AI release Llama 3.3

    Source URL: https://simonwillison.net/2024/Dec/6/llama-33/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Meta AI release Llama 3.3
    Feedly Summary: Meta AI release Llama 3.3 This new Llama-3.3-70B-Instruct model from Meta AI makes some bold claims: This model delivers similar performance to Llama 3.1 405B with cost effective inference that’s feasible to run locally on common developer workstations. I have…
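
    One plausible way to try the model locally from LLM, assuming Ollama is running with the weights already pulled (e.g. `ollama pull llama3.3`) and the llm-ollama plugin is installed; the model ID shown is a guess at what that plugin registers, so verify with `llm models`:

    ```python
    import llm

    # Assumes the llm-ollama plugin is installed and Ollama already has the llama3.3 weights.
    # The model ID is an assumption -- check `llm models` for the one exposed on your machine.
    model = llm.get_model("llama3.3:latest")
    print(model.prompt("Write a haiku about running a 70B model locally").text())
    ```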

  • Simon Willison’s Weblog: LLM 0.19

    Source URL: https://simonwillison.net/2024/Dec/1/llm-019/
    Source: Simon Willison’s Weblog
    Title: LLM 0.19
    Feedly Summary: LLM 0.19 I just released version 0.19 of LLM, my Python library and CLI utility for working with Large Language Models. I released 0.18 a couple of weeks ago adding support for calling models from Python asyncio code. 0.19 improves on that, and…
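
    The asyncio support introduced in 0.18 looks roughly like this (a minimal sketch; the model ID is only an example, and streaming with `async for` is one of the supported patterns):

    ```python
    import asyncio
    import llm

    async def main():
        # Any async-capable model registered with LLM works; gpt-4o-mini is just an example.
        model = llm.get_async_model("gpt-4o-mini")
        # Stream the response chunk by chunk as it arrives.
        async for chunk in model.prompt("Describe a cheesecake in three words"):
            print(chunk, end="", flush=True)
        print()

    asyncio.run(main())
    ```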

  • Cloud Blog: Boost your Continuous Delivery pipeline with Generative AI

    Source URL: https://cloud.google.com/blog/topics/developers-practitioners/boost-your-continuous-delivery-pipeline-with-generative-ai/
    Source: Cloud Blog
    Title: Boost your Continuous Delivery pipeline with Generative AI
    Feedly Summary: In the domain of software development, AI-driven assistance is emerging as a transformative force that enhances developer experience and productivity and ultimately optimizes overall software delivery performance. Many organizations have started to leverage AI-based assistants, such as Gemini Code…