Tag: GPT-4o

  • Simon Willison’s Weblog: Introducing 4o Image Generation

    Source URL: https://simonwillison.net/2025/Mar/25/introducing-4o-image-generation/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Introducing 4o Image Generation
    Feedly Summary: When OpenAI first announced GPT-4o back in May 2024 one of the most exciting features was true multi-modality in that it could both input and output audio and images. The “o” stood for “omni”, and the image…

  • OpenAI : Introducing 4o Image Generation

    Source URL: https://openai.com/index/introducing-4o-image-generation
    Source: OpenAI
    Title: Introducing 4o Image Generation
    Feedly Summary: At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. The result—image generation that is not only beautiful, but useful.
    AI Summary and Description:…

  • OpenAI : Addendum to GPT-4o System Card: 4o image generation

    Source URL: https://openai.com/index/gpt-4o-image-generation-system-card-addendum
    Source: OpenAI
    Title: Addendum to GPT-4o System Card: 4o image generation
    Feedly Summary: 4o image generation is a new, significantly more capable image generation approach than our earlier DALL·E 3 series of models. It can create photorealistic output. It can take images as inputs and transform them.
    AI Summary and Description: Yes…

  • Simon Willison’s Weblog: Qwen2.5-VL-32B: Smarter and Lighter

    Source URL: https://simonwillison.net/2025/Mar/24/qwen25-vl-32b/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Qwen2.5-VL-32B: Smarter and Lighter
    Feedly Summary: The second big open weight LLM release from China today – the first being DeepSeek v3-0324. Qwen’s previous vision model was Qwen2.5 VL, released in January in 3B, 7B and 72B sizes. Today’s release is a 32B…

  • Simon Willison’s Weblog: New audio models from OpenAI, but how much can we rely on them?

    Source URL: https://simonwillison.net/2025/Mar/20/new-openai-audio-models/#atom-everything
    Source: Simon Willison’s Weblog
    Title: New audio models from OpenAI, but how much can we rely on them?
    Feedly Summary: OpenAI announced several new audio-related API features today, for both text-to-speech and speech-to-text. They’re very promising new models, but they appear to suffer from the ever-present risk of accidental (or malicious) instruction…

  • Simon Willison’s Weblog: OpenAI platform: o1-pro

    Source URL: https://simonwillison.net/2025/Mar/19/o1-pro/
    Source: Simon Willison’s Weblog
    Title: OpenAI platform: o1-pro
    Feedly Summary: OpenAI have a new most-expensive model: o1-pro can now be accessed through their API at a hefty $150/million tokens for input and $600/million tokens for output. That’s 10x the price of their o1 and o1-preview models and a full…

  • CSA: Is GPT-4o a Privacy Risk for Businesses?

    Source URL: https://cloudsecurityalliance.org/articles/privacy-concerns-and-corporate-caution-the-double-edged-sword-of-generative-ai
    Source: CSA
    Title: Is GPT-4o a Privacy Risk for Businesses?
    Feedly Summary: AI Summary and Description: Yes
    Summary: The text discusses the evolving generative AI technologies, particularly OpenAI’s GPT-4o, emphasizing the potential risks to data privacy associated with their use in business settings. It underscores the concerns surrounding data collection, corporate restrictions,…

  • The Register: Show top LLMs buggy code and they’ll finish off the mistakes rather than fix them

    Source URL: https://www.theregister.com/2025/03/19/llms_buggy_code/
    Source: The Register
    Title: Show top LLMs buggy code and they’ll finish off the mistakes rather than fix them
    Feedly Summary: One more time, with feeling … Garbage in, garbage out, in training and inference. Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing…

  • Hacker News: Mlx-community/OLMo-2-0325-32B-Instruct-4bit

    Source URL: https://simonwillison.net/2025/Mar/16/olmo2/
    Source: Hacker News
    Title: Mlx-community/OLMo-2-0325-32B-Instruct-4bit
    Feedly Summary: Comments
    AI Summary and Description: Yes
    Summary: The text discusses the OLMo 2 model, which claims to be a superior, fully open alternative to GPT-3.5 Turbo and GPT-4o mini. It provides installation instructions for running this model on a Mac, highlighting its ease of access…

  • Simon Willison’s Weblog: Mistral Small 3.1

    Source URL: https://simonwillison.net/2025/Mar/17/mistral-small-31/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Mistral Small 3.1
    Feedly Summary: Mistral Small 3 came out in January and was a notable, genuinely excellent local model that used an Apache 2.0 license. Mistral Small 3.1 offers a significant improvement: it’s multi-modal (images) and has an increased 128,000 token context length,…