Simon Willison’s Weblog: o3-pro

Source URL: https://simonwillison.net/2025/Jun/10/o3-pro/
Source: Simon Willison’s Weblog
Title: o3-pro

Feedly Summary: o3-pro
OpenAI released o3-pro today, which they describe as a “version of o3 with more compute for better responses”.
It’s only available via the newer Responses API. I’ve added it to my llm-openai-plugin plugin which uses that new API, so you can try it out like this:
llm install -U llm-openai-plugin
llm -m openai/o3-pro "Generate an SVG of a pelican riding a bicycle"

It’s slow – generating this pelican took 124 seconds! OpenAI suggest using their background mode for o3 prompts, which I haven’t tried myself yet.
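The polling side of background mode can be sketched generically. This is an untested assumption based on OpenAI's description of the feature: the request is submitted with a `background=True` flag, and the client then retrieves the response by ID until its status leaves the pending states. The `fetch` callable and the status strings here are placeholders, not a verified SDK surface:

```python
import time

def wait_for_background_response(fetch, response_id, poll_seconds=2.0):
    """Poll a background-mode response until it finishes.

    `fetch` is whatever retrieves a response by ID - with the OpenAI SDK
    it would be something like lambda rid: client.responses.retrieve(rid).
    The "queued"/"in_progress" status names are an assumption here.
    """
    while True:
        response = fetch(response_id)
        if response["status"] not in ("queued", "in_progress"):
            return response  # completed, failed, or cancelled
        time.sleep(poll_seconds)
```

The point of the pattern is that a 124-second generation doesn't tie up an HTTP connection: you submit, get an ID back immediately, and poll (or use a webhook) at your leisure.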
o3-pro is priced at $20/million input tokens and $80/million output tokens – 10x the price of regular o3 after its 80% price drop this morning.
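Those rates translate into per-request costs like this. A quick back-of-the-envelope sketch; the token counts below are made up for illustration:

```python
def o3_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated o3-pro cost in USD at $20/M input and $80/M output tokens."""
    return input_tokens / 1_000_000 * 20 + output_tokens / 1_000_000 * 80

# A hypothetical prompt: 5,000 input tokens, 2,000 output tokens.
print(f"${o3_pro_cost(5_000, 2_000):.2f}")  # $0.26
```

The same call against regular o3 at its new $2/$8 pricing would cost a tenth of that.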
Ben Hylak had early access and published his notes so far in God is hungry for Context: First thoughts on o3 pro. It sounds like this model needs to be applied very thoughtfully. In comparison to o3:

It’s smarter. much smarter.
But in order to see that, you need to give it a lot more context. and I’m running out of context. […]
My co-founder Alexis and I took the time to assemble a history of all of our past planning meetings at Raindrop, all of our goals, even record voice memos: and then asked o3-pro to come up with a plan.
We were blown away; it spit out the exact kind of concrete plan and analysis I’ve always wanted an LLM to create — complete with target metrics, timelines, what to prioritize, and strict instructions on what to absolutely cut.
The plan o3 gave us was plausible, reasonable; but the plan o3 Pro gave us was specific and rooted enough that it actually changed how we are thinking about our future.
This is hard to capture in an eval.

It sounds to me like o3-pro works best when combined with tools. I don’t have tool support in llm-openai-plugin yet; here’s the relevant issue.
Tags: llm, openai, llm-reasoning, llm-pricing, o3, ai, llms, llm-release, generative-ai, pelican-riding-a-bicycle

AI Summary and Description: Yes

Summary: OpenAI’s release of o3-pro, a more powerful variant of its LLM, showcases advancements in generative AI capabilities. Key features include enhanced intelligence, substantial pricing adjustments, and tailored user experiences, highlighting practical applications in planning and decision-making.

Detailed Description: OpenAI’s recent launch of o3-pro signifies a notable enhancement within the realm of generative AI, particularly for users relying on large language models (LLMs). The model is described as having significantly more computational power, enabling it to provide better, contextually richer responses. This release is directly relevant to categories such as AI Security and Generative AI Security, given the implications for secure and effective use of AI technologies in professional environments.

Key Points:
– **Enhanced Capability**:
– o3-pro is characterized as “much smarter” than its predecessor, o3, though it requires more context to fully utilize its enhanced intelligence.
– **Performance Insights**:
– The model is slow: a single generation (an SVG of a pelican riding a bicycle) took 124 seconds.
– OpenAI suggests using background mode for long-running o3 prompts rather than waiting on a synchronous response.
– **Pricing Model**:
– The pricing for o3-pro is set at $20 per million input tokens and $80 per million output tokens, ten times the price of regular o3 following o3’s recent 80% price drop.
– **User Experience**:
– Users, including Raindrop co-founders, found that o3-pro provides concrete and actionable planning outputs that can greatly influence future strategies.
– The model’s results included specific target metrics, timelines, and prioritized actions, indicating a higher level of practical application for professional use cases.

This development emphasizes the ongoing evolution and increasing sophistication of generative AI tools, necessitating that security and compliance professionals keep apprised of new capabilities, potential security vulnerabilities, and compliance considerations surrounding the use of such advanced AI systems in organizational settings. Moreover, understanding how these AI models interact with data and require careful context handling is crucial to ensure secure implementation.