Tag: Model Access

  • Docker: Introducing Docker Model Runner: A Better Way to Build and Run GenAI Models Locally

    Source URL: https://www.docker.com/blog/introducing-docker-model-runner/
    Source: Docker
    Title: Introducing Docker Model Runner: A Better Way to Build and Run GenAI Models Locally
    Feedly Summary: Docker Model Runner is a faster, simpler way to run and test AI models locally, right from your existing workflow.
    AI Summary and Description: Yes
    Summary: The text discusses the launch of Docker…
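    A minimal sketch of what "run and test AI models locally from your existing workflow" can look like from Python, assuming Model Runner's OpenAI-compatible local endpoint; the base_url and model id are placeholders to check against `docker model list` and the Docker docs, not values taken from the post.

    ```python
    # Minimal sketch: query a locally served model through an OpenAI-compatible
    # endpoint such as the one Docker Model Runner exposes. The base_url and
    # model id below are assumptions; adjust them for your setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:12434/engines/v1",  # assumed host-side endpoint
        api_key="not-needed-locally",                  # local runners typically ignore the key
    )

    response = client.chat.completions.create(
        model="ai/smollm2",  # assumed model id from Docker's model namespace
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(response.choices[0].message.content)
    ```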

  • Simon Willison’s Weblog: deepseek-ai/DeepSeek-V3-0324

    Source URL: https://simonwillison.net/2025/Mar/24/deepseek/
    Source: Simon Willison’s Weblog
    Title: deepseek-ai/DeepSeek-V3-0324
    Feedly Summary: deepseek-ai/DeepSeek-V3-0324 Chinese AI lab DeepSeek just released the latest version of their enormous DeepSeek v3 model, baking the release date into the name DeepSeek-V3-0324. The license is MIT, the README is empty and the release adds up to a total of 641 GB…
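    At 641 GB of weights the release is out of reach for most local hardware, so in practice access usually means a hosted, OpenAI-compatible endpoint. A minimal sketch assuming DeepSeek's own API, where the deepseek-chat model id resolves to the current V3 line; the endpoint, model id, and environment variable are assumptions, not details from the post.

    ```python
    # Minimal sketch, not from the post: calling DeepSeek-V3 through a hosted
    # OpenAI-compatible API. The base_url, model id, and env var are assumptions;
    # any provider serving DeepSeek-V3-0324 can be substituted the same way.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.deepseek.com",     # assumed DeepSeek endpoint
        api_key=os.environ["DEEPSEEK_API_KEY"],  # your provider key
    )

    response = client.chat.completions.create(
        model="deepseek-chat",  # assumed id pointing at the current V3 line
        messages=[{"role": "user", "content": "What changed in DeepSeek-V3-0324?"}],
    )
    print(response.choices[0].message.content)
    ```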

  • Simon Willison’s Weblog: Mistral Small 3.1

    Source URL: https://simonwillison.net/2025/Mar/17/mistral-small-31/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Mistral Small 3.1
    Feedly Summary: Mistral Small 3.1 Mistral Small 3 came out in January and was a notable, genuinely excellent local model that used an Apache 2.0 license. Mistral Small 3.1 offers a significant improvement: it’s multi-modal (images) and has an increased 128,000 token context length,…
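    Since the headline change is image input on top of the larger context window, here is a minimal sketch of exercising it from Python with the llm library and the llm-mistral plugin; the mistral/mistral-small-latest model id and the plugin's support for image attachments are assumptions to verify against the post and the plugin docs.

    ```python
    # Minimal sketch, assuming `llm` and the llm-mistral plugin are installed
    # (`llm install llm-mistral`) and a Mistral API key is configured.
    # The model id and image-attachment support are assumptions to verify.
    import llm

    model = llm.get_model("mistral/mistral-small-latest")  # assumed plugin model id
    response = model.prompt(
        "Describe what is in this photo.",
        attachments=[llm.Attachment(path="photo.jpg")],  # image input (multi-modal)
    )
    print(response.text())
    ```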

  • Simon Willison’s Weblog: LLM 0.22, the annotated release notes

    Source URL: https://simonwillison.net/2025/Feb/17/llm/#atom-everything
    Source: Simon Willison’s Weblog
    Title: LLM 0.22, the annotated release notes
    Feedly Summary: I released LLM 0.22 this evening. Here are the annotated release notes: model.prompt(…, key=) for API keys; chatgpt-4o-latest; llm logs -s/--short; llm models -q gemini -q exp; llm embed-multi --prepend X; Everything else. model.prompt(…, key=) for API keys Plugins…
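    The headline item, model.prompt(…, key=), lets a script pass an API key per call instead of relying on keys stored with llm keys set. A minimal sketch of using it from the Python API; the model id and environment variable name are illustrative choices, not taken from the release notes.

    ```python
    # Minimal sketch of LLM 0.22's `key=` parameter: pass an API key directly
    # to prompt() rather than depending on a stored key. The model id and the
    # environment variable name are illustrative.
    import os
    import llm

    model = llm.get_model("chatgpt-4o-latest")  # model added in this release
    response = model.prompt(
        "Three short names for a pet pelican",
        key=os.environ["OPENAI_API_KEY"],  # explicit per-call API key (new in 0.22)
    )
    print(response.text())
    ```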