At DevDay 2025, people expected huge announcements like GPT‑6 with memory and personality, a voice assistant that can make calls, and a full ChatGPT “operating system.” Most of that didn’t happen, but OpenAI still revealed something big: ChatGPT is now a platform, turning from a text assistant into a full-fledged ecosystem of interactive applications. It signals a clear shift from consumer products toward a reliable, scalable platform for developers and the corporate segment.
Here are the key announcements and their potential impact on the industry.
Apps in ChatGPT
Users will now be able to interact with services such as Booking, Expedia, Zillow, Figma, Canva, or Spotify directly in the ChatGPT window. These are no longer just plugins; they are true "mini-apps" with their own interface, logic, and capabilities that run inside ChatGPT. For example, you can write “Figma, turn this sketch into a diagram” and the system will automatically open the interactive Figma interface. Or write “Coursera, teach me machine learning” and the assistant will show training courses without leaving the chat. In the Zillow demo, a user asked Zillow to find apartments in their area at a certain price. ChatGPT not only generated a response but also showed an interactive map with the results, and it kept the conversation going so the user could refine the parameters or get details about each property.
ChatGPT will now also be able to suggest apps on its own when it finds them useful. For example, if a user asks for a party playlist, ChatGPT can automatically call an app like Spotify. Integration with other popular services such as DoorDash, TripAdvisor, Instacart, Uber, and AllTrails is expected in the future.
The new system runs on the Model Context Protocol, which makes ChatGPT much more interactive. Apps can perform actions like placing orders or opening files, show dynamic interfaces right in the chat, and even embed videos or other media that users can interact with through the conversation. Subscribers to certain services can log in directly and access premium features without extra steps. On top of that, OpenAI is introducing monetization options for developers, including an Instant Checkout feature that lets users pay for products or services right in the chat.
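To give a sense of what powering an app through the Model Context Protocol involves, here is a minimal sketch of a tool exposed by an MCP server, using the open-source Python SDK (the `mcp` package). The service name and playlist tool are invented for illustration; a real ChatGPT app layers UI, auth, and checkout on top of this.

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# "playlist-demo" and make_playlist are invented for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("playlist-demo")

@mcp.tool()
def make_playlist(mood: str, length: int = 10) -> list[str]:
    """Return a list of track names matching the requested mood."""
    # A real app would call the streaming service's own API here.
    return [f"{mood} track {i + 1}" for i in range(length)]

if __name__ == "__main__":
    # ChatGPT (or any MCP client) connects to this server and can then
    # discover and call make_playlist during a conversation.
    mcp.run()
```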
OpenAI has tried to expand ChatGPT’s capabilities before, most notably with the GPT Store. But where users previously had to visit a separate app store, all integrations are now available directly in the chat window: the user can call a third-party service with an ordinary command in the ChatGPT dialog.
At the same time, the company is launching a preview of the Apps SDK, a toolkit that lets developers build fully-fledged interactive apps that run directly within ChatGPT. The SDK covers everything from the frontend to the backend, so developers can create, connect, and execute entire workflows in one place. It can connect to external APIs and databases, so a chat app can fetch or send real-time data (for example, checking flight availability or pulling product specs). It lets the app perform real actions rather than just display information, such as booking a ticket, creating a report, or updating a document right through the chat. It incorporates interactive and dynamic UI components, like buttons, forms, and tables, as part of the chat experience instead of plain text only. It even has a built-in payment protocol, called the "Agentic Commerce Protocol," to enable secure purchases or transactions through the chat.
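The exact Apps SDK interfaces were not shown in detail, so the sketch below only illustrates the backend half of the idea: an MCP-style tool that pulls live data from an external API and returns structured results a chat widget could render. The flight-search endpoint, its response fields, and the tool name are all hypothetical.

```python
# Hypothetical sketch: an Apps-SDK-style backend tool that fetches live data.
# The endpoint URL and response shape are invented; the real SDK layers
# interactive UI components, auth, and payments on top of tools like this.
import json
import urllib.request

from mcp.server.fastmcp import FastMCP

app = FastMCP("flights-demo")

@app.tool()
def check_availability(origin: str, destination: str, date: str) -> dict:
    """Fetch availability from a (hypothetical) flight-search API."""
    url = (
        "https://api.example-flights.test/search"
        f"?from={origin}&to={destination}&date={date}"
    )
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Returning structured data lets the chat render a table or card
    # instead of a plain-text answer.
    return {"flights": data.get("results", [])[:5]}

if __name__ == "__main__":
    app.run()
```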
The developer community has mixed reactions to these announcements. On the one hand, many are calling it a "brilliant move" that turns ChatGPT into something akin to an operating system. On the other, there are concerns about the "death of startups," especially those building wrapper apps on top of the OpenAI API.
For the enterprise segment, this opens up opportunities to create custom solutions without having to develop their own frontend. For independent developers, it offers the chance to monetize their skills through the upcoming app marketplace.
AgentKit
Alongside the launch of apps in ChatGPT, OpenAI introduced AgentKit, its answer to the growing demand for automation through AI agents. It is a set of tools that lets developers and companies quickly create, test, and launch their own autonomous AI agents without writing any code. Previously, this kind of work required weeks of preparation, configuration, and frontend development; now it can be done in a few hours in a single interface.
According to OpenAI, AgentKit is already being used by companies to automate customer support, internal processes, and data management. The platform is designed to make building AI assistants accessible even to small teams, and it is another step toward personalized agents within the ChatGPT ecosystem. AgentKit includes:
Agent Builder: a visual builder for assembling agent workflows from ready-made blocks. Developers drag and drop elements onto the canvas, configure logic, add safeguards, and immediately test how the agent performs. Agents can reason, make decisions, and carry out multi-step tasks autonomously. It resembles a combination of Zapier and n8n, but with large language models built in.
ChatKit: a set of ready-made components for creating conversational interfaces with agents. It can be easily embedded into a website or application and styled to fit your brand. Ideal for creating specialized chatbots with minimal effort.
Connector Registry: an integration management center. It allows you to consolidate all connections, from Google Drive and Dropbox to Microsoft Teams, into a common panel, which is convenient for corporate users.
Built-in Evals: an agent evaluation and testing system that allows measuring their performance and reliability, which is critical for enterprise use.
Speaking of security, AgentKit supports the Guardrails system, which limits unwanted
actions by agents, filters confidential information, and prevents the execution of risky commands. Some developers note that the functionality is "not particularly revolutionary" compared to existing tools, but for businesses without their own development teams, it could be an entry point into the world of AI automation.
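AgentKit itself is aimed at no-code use, but for comparison, the kind of agent it assembles can also be written directly against OpenAI's open-source Agents SDK. The following is a rough sketch only, assuming the `openai-agents` Python package and an invented support-bot example; in AgentKit the same logic would be built visually in Agent Builder, with Guardrails, Evals, and connectors configured alongside it.

```python
# Rough sketch, not AgentKit itself: a comparable agent written with
# OpenAI's open-source Agents SDK (pip install openai-agents).
# The lookup_order tool is stubbed and invented for illustration.
from agents import Agent, Runner, function_tool

@function_tool
def lookup_order(order_id: str) -> str:
    """Return the status of an order (stubbed for the example)."""
    return f"Order {order_id} shipped yesterday and arrives tomorrow."

support_agent = Agent(
    name="Support assistant",
    instructions="Answer customer questions about their orders politely.",
    tools=[lookup_order],
)

if __name__ == "__main__":
    result = Runner.run_sync(support_agent, "Where is my order 4821?")
    print(result.final_output)
```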
Codex
OpenAI has also unveiled an updated version of its AI programming assistant, Codex, which has left research preview and is now generally available. The coding agent generates code faster and more accurately, ships with a built-in widget gallery, and supports MCP. It integrates with Slack, where the agent reads chats and understands context, and comes with the Codex SDK, which makes it possible to embed the agent almost anywhere, even in an IoT lightbulb.
At the conference, OpenAI demonstrated Codex tackling the task of building camera control software; the agent succeeded on the first try. Developers who have tested the new version report a noticeable improvement in the quality of generated code and more granular control over the result. Codex is positioned as a tool for accelerating development in high-performance teams.
The New Models in the API
One of the final announcements was that OpenAI has significantly expanded the range of models available via the API.
GPT-5 Pro: The flagship model for tasks requiring deep reasoning and analysis. Its main applications are complex data analysis, strategic planning, scientific research, and enterprise applications with demanding quality requirements. Pricing is $15.00/1 million input tokens and $120.00/1 million output tokens.
Sora 2: The second generation of the video creation model has received significant improvements. It can remix existing videos, changing resolution and format, and offers personalization (adding products, logos, even pets). Pricing is $0.10/second for Sora-2 and $0.30-0.50/second for Sora-2-pro. A demo with Mattel, in which concept sketches were transformed into full-fledged presentation videos, attracted special attention and hints at new possibilities for content creation in e-commerce and marketing.
gpt-realtime-mini: An optimized model for real-time voice applications, 70% cheaper than the full-size model. Its main applications are chatbots, voice assistants, and online interaction systems. Pricing is $0.60/1 million input tokens and $2.40/1 million output tokens, making it a much more affordable option compared to the full-size GPT-4o model.
gpt-image-1-mini: A budget model for image generation, which is 80% cheaper than previous full-size image generation models from OpenAI or similar AI providers. Its main applications are mass creation of visual content, illustrations, and design prototyping. Pricing is $2.00 per 1 million text input tokens, $2.50 per 1 million image input tokens, and $8.00 per 1 million image output tokens, making it a cost-effective option for generating or editing images compared to larger models.
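For orientation, here is a rough sketch of calling two of the new models from the official OpenAI Python SDK. The model identifiers are taken from the announcement and may differ slightly in the live API; the prompts are invented.

```python
# Rough sketch using the official OpenAI Python SDK (pip install openai).
# Model ids ("gpt-5-pro", "gpt-image-1-mini") follow the DevDay announcement
# and may differ in the live API; prompts are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# GPT-5 Pro via the Responses API for a reasoning-heavy task.
analysis = client.responses.create(
    model="gpt-5-pro",
    input="Summarize the main risks in this quarterly report: ...",
)
print(analysis.output_text)

# gpt-image-1-mini for low-cost image generation.
image = client.images.generate(
    model="gpt-image-1-mini",
    prompt="A flat-style illustration of a chat window running mini-apps",
)
print(image.data[0].b64_json is not None)  # base64-encoded image payload
```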
Bottom Line
DevDay 2025 demonstrates a clear OpenAI strategy: building a self-sufficient ecosystem in which ChatGPT becomes not just a chatbot but a platform for developing and distributing applications. The Apps SDK and tools such as AgentKit bring a new level of accessibility to AI: developers will be able to build apps they can sell, businesses can use AI without coding, and cheaper models for text, voice, and images will lead to more AI-powered apps for everyone.

It also sends a strong signal to the wider software ecosystem: if your SaaS app can’t integrate with ChatGPT via an SDK, you risk losing users and revenue as the platform becomes the central hub for interaction. Even automation tools like Zapier and Make face competition, and browsers without AI integration may be reduced to simple page renderers. In short, ChatGPT is evolving into the core platform for apps, automation, and web access, and companies must adapt or risk being left behind. OpenAI is clearly investing in a future where AI functions as an integrated ecosystem rather than a set of standalone models, and DevDay 2025 showed how committed the company is to pursuing that vision.
FAQ
What is the big takeaway from DevDay 2025?
ChatGPT is no longer just a chatbot—it’s now a full platform for apps, automation, and AI agents, signaling a shift toward a scalable ecosystem for developers and businesses.
What was the biggest announcement at OpenAI’s DevDay 2025?
The biggest announcement was the launch of the Apps SDK, which transforms ChatGPT into a platform for running fully interactive applications. Developers can now create apps with their own interfaces and logic that operate directly within ChatGPT, making it a hub for third-party services like Figma, Spotify, and Zillow.
What is the Apps SDK?
The Apps SDK is a tool for developers to build fully functional apps that run directly in ChatGPT. Unlike simple API integrations, these apps have their own interfaces, logic, and capabilities, operating as "mini-apps" in the ChatGPT chat window. It covers both frontend and backend, supports dynamic UI components, and even includes a payment protocol.
What is AgentKit?
AgentKit is a no-code tool to create, test, and deploy autonomous AI agents. It’s designed for businesses and small teams to automate tasks like customer support, workflows, and data management.
What improvements came to Codex?
Codex is OpenAI’s AI programming assistant. The new version generates code faster and more accurately, integrates with Slack, offers a widget gallery, and can be embedded into external tools or devices.
What new models have become available in the API?
Four new models have become available in the API: GPT-5 Pro for advanced reasoning and analysis, Sora 2 for video generation and personalization, gpt-realtime-mini for real-time voice applications, and gpt-image-1-mini for affordable image generation.
What does OpenAI’s platform shift mean for the software industry?
OpenAI’s focus on turning ChatGPT into a platform for apps and automation signals a future where AI ecosystems dominate. SaaS companies, automation tools like Zapier, and even browsers risk losing relevance if they don’t integrate with ChatGPT, as it aims to become the central hub for user interactions.