Simon Willison’s Weblog: microsoft/phi-4

Source URL: https://simonwillison.net/2025/Jan/8/phi-4/
Source: Simon Willison’s Weblog
Title: microsoft/phi-4

Feedly Summary: microsoft/phi-4
Here’s the official release of Microsoft’s Phi-4 LLM, now officially under an MIT license.
A few weeks ago I covered the earlier unofficial versions, where I talked about how the model used synthetic training data in some really interesting ways.
It benchmarks favorably compared to GPT-4o, suggesting this is yet another example of a GPT-4 class model that can run on a good laptop.
The model already has several available community quantizations. I ran the mlx-community/phi-4-4bit one (a 7.7GB download) using mlx-lm like this:
```
uv run --with 'numpy<2' --with mlx-lm python -c '
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/phi-4-4bit")

prompt = "Generate an SVG of a pelican riding a bicycle"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt,
                    verbose=True, max_tokens=2048)
print(response)'
```

Here's what I got back.

Tags: phi, generative-ai, ai, microsoft, llms, uv, pelican-riding-a-bicycle

AI Summary and Description: Yes

**Summary:** The text discusses the release of Microsoft's Phi-4 language model (LLM) under an MIT license, highlighting its favorable benchmark performance relative to GPT-4o and its ability to run on a standard laptop. The discussion includes practical implementation details and showcases the model's capability to generate creative outputs.

**Detailed Description:** The release of Microsoft's Phi-4 LLM marks a significant development in the landscape of generative AI and language models, particularly in terms of accessibility and performance. The following key points summarize the significance of this release:

- **Official Release:** Microsoft has officially released the Phi-4 LLM under an MIT license, making it available for wider use and integration.
- **Performance Benchmarking:** Phi-4 benchmarks favorably against GPT-4o, indicating that it is a strong competitor within the same class of models.
- **Synthetic Training Data:** The earlier unofficial versions of the model used synthetic training data in interesting ways, an approach that could influence future model development.
- **Runs on Standard Hardware:** Phi-4 can run efficiently on a good laptop, broadening access to advanced AI for developers and researchers without high-end infrastructure.
- **Community Support:** Several community quantizations of Phi-4 are already available, which enhances its usability and encourages collaborative development.
- **Implementation Example:** The code snippet above shows how to load a community-quantized version of Phi-4 and generate creative output, exemplifying user-friendly interaction with the model.

This release opens up new possibilities in generative AI, especially for those working in AI security and infrastructure, as it demonstrates advancements in model efficiency and use-case applications. Additionally, MIT licensing fosters collaboration and innovation in AI methodologies, contributing to the overall evolution of the sector.

- **Potential Implications for Security and Compliance:**
  - The accessibility of powerful LLMs like Phi-4 raises questions about the ethical use of such technologies and the need for governance and compliance frameworks in organizations that adopt them.
  - Considerations around data handling, privacy, and the regulatory landscape for generative AI will become increasingly important as these technologies proliferate.
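The prompt-handling pattern in the mlx-lm snippet above (wrap the user prompt in a chat-style messages list only when the tokenizer ships a chat template, otherwise pass the raw string through) can be sketched in isolation. This is a minimal illustration, not part of the original post: `apply_template` is a stand-in for `tokenizer.apply_chat_template`, and the toy template format below is invented for demonstration, not Phi-4's actual chat format.

```python
def prepare_prompt(raw_prompt, chat_template=None, apply_template=None):
    """Mirror the snippet's branch: use the chat template when one exists,
    otherwise fall back to the raw prompt string."""
    if chat_template is not None:
        messages = [{"role": "user", "content": raw_prompt}]
        return apply_template(messages)
    return raw_prompt


def toy_template(messages):
    # Invented stand-in for tokenizer.apply_chat_template; the real
    # method renders the model's own chat markup.
    return "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in messages)


print(prepare_prompt("hello"))  # no template: raw prompt unchanged
print(prepare_prompt("hello", chat_template="toy", apply_template=toy_template))
```

The point of the branch is that base models expect plain text while instruct-tuned checkpoints expect their chat markup, so the same script works for both.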