Xcode dives into agentic coding with deeper OpenAI and Anthropic integrations


Apple is bringing agentic coding to Xcode. On Tuesday, the company announced Xcode 26.3, which lets developers use agentic tools, including Anthropic’s Claude Agent and OpenAI’s Codex, directly inside Apple’s official app development environment.

The Xcode 26.3 release candidate is available to all Apple developers today from the developer website and will arrive in the App Store later.

This latest update comes after the release of Xcode 26 last year, which introduced support for ChatGPT and Claude within Apple’s integrated development environment (IDE) used by those building apps for iPhone, iPad, Mac, Apple Watch, and other Apple hardware platforms.

The agentic coding integration allows AI models to tap into other Xcode features to carry out their tasks and handle complex automation.

Models will also have access to Apple’s current developer documentation to ensure they are using the latest APIs and following best practices as they build.

At launch, agents can help developers understand a project’s structure and metadata, build the project, run its tests, and fix any errors that turn up.
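The build–test–fix cycle described above can be sketched as a simple loop. This is a hypothetical illustration, not Apple’s implementation: the `build_project`, `run_tests`, and `propose_fix` helpers are invented stand-ins for whatever tool calls the agent actually makes through the IDE.

```python
# Hypothetical sketch of an agentic build/test/fix loop.
# None of these helpers are real Xcode APIs; they stand in for
# the tool calls an agent might make through the IDE.

def build_project(source):
    """Pretend build: 'compiles' only if the source has no BUG marker."""
    return "BUG" not in source

def run_tests(source):
    """Pretend test run: returns a list of failures (empty = green)."""
    return [] if "BUG" not in source else ["test_feature failed"]

def propose_fix(source, failures):
    """Stand-in for the model proposing an edit for each failure."""
    return source.replace("BUG", "fixed")

def agent_loop(source, max_iterations=3):
    """Build, test, and patch until the project is green or we give up."""
    for _ in range(max_iterations):
        if build_project(source):
            failures = run_tests(source)
            if not failures:
                return source, True   # project builds and tests pass
            source = propose_fix(source, failures)
        else:
            source = propose_fix(source, ["build failed"])
    return source, False

patched, ok = agent_loop("let x = BUG")
```

The key design point, which the article attributes to Xcode’s agents as well, is that the loop re-runs the build and tests after every proposed fix rather than trusting a single edit.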

Image credits: Apple

In preparation for this launch, Apple said it worked closely with Anthropic and OpenAI to design the new experience. Specifically, the company said it did significant work around tokenization and tool calling so that agents work well within Xcode.

Xcode uses MCP (Model Context Protocol) to expose its capabilities to agents and connect them to its tools. That means Xcode can now work with any external MCP-compatible agent for things like project discovery, code changes, file management, previews, and access to recent documents.
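MCP is a JSON-RPC 2.0 protocol, so an external agent invokes a server’s tools by sending a `tools/call` request. The sketch below builds such a message; the tool name `build_project` and its `scheme` argument are invented for illustration, since Apple has not published the tool names Xcode exposes.

```python
import json

# Hypothetical sketch of the JSON-RPC 2.0 message an external
# MCP-compatible agent could send to an MCP server. The tool name
# "build_project" and its arguments are invented for illustration.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # standard MCP method for invoking a tool
    "params": {
        "name": "build_project",     # hypothetical tool exposed by the server
        "arguments": {"scheme": "MyApp"},
    },
}

# Serialize to the wire format the server would receive.
wire_message = json.dumps(request)
```

Because the protocol is model-agnostic, any agent that can emit messages of this shape can, in principle, drive the tools Xcode exposes.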

Developers who want to try the agentic coding features should first download the agents they want to use from Xcode’s settings. They can also connect their accounts with AI providers by signing in or adding their API key. A drop-down menu within the application lets developers select which version of a model they want to use (e.g., GPT-5.2-Codex vs. GPT-5.1 mini).

In the command box on the left side of the screen, developers can use natural-language commands to tell the agent what kind of project they want to build or what changes they want made to their code. For example, they can direct Xcode to add a feature to their application that uses one of Apple’s frameworks, and describe how it should look and behave.

Image credits: Apple

As the agent runs, it breaks tasks down into smaller steps, making it easy to see what’s happening and how the code is changing. It will also check the documentation it needs before it starts coding. Changes are highlighted visually within the code, and a running transcript on the side of the screen lets developers see what’s going on under the hood.

This transparency can be especially helpful for new developers learning to code, Apple believes. To that end, the company is hosting a code-along workshop Thursday on its developer site, where users can watch and learn how to use the agentic coding tools while coding along in real time with their own copy of Xcode.

At the end of its process, the AI agent verifies that the code it created works as expected. Armed with the results of that assessment, the agent can iterate on the project if errors or other problems need correcting. (Apple notes that asking the agent to think through its plans before writing code can sometimes improve the process, as it forces the agent to do some pre-planning.)

And if developers aren’t happy with the results, they can easily revert their code to an earlier state at any point, as Xcode creates a checkpoint every time the agent makes a change.
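The checkpoint-and-revert behavior described above amounts to snapshotting the code before every agent edit. The sketch below illustrates the idea only; Xcode’s actual checkpoint mechanism is not public, and the `CheckpointedFile` class is invented for this example.

```python
# Hypothetical sketch of per-change checkpoints with revert.
# The class and method names are invented; Xcode's real
# checkpoint mechanism is not documented here.

class CheckpointedFile:
    def __init__(self, contents):
        self.contents = contents
        self.history = []                    # snapshots, oldest first

    def apply_agent_edit(self, new_contents):
        self.history.append(self.contents)   # checkpoint before the change
        self.contents = new_contents

    def revert(self, steps=1):
        """Roll back the last `steps` agent edits."""
        for _ in range(min(steps, len(self.history))):
            self.contents = self.history.pop()

f = CheckpointedFile("v1")
f.apply_agent_edit("v2")
f.apply_agent_edit("v3")
f.revert()   # back to "v2"
```

Snapshotting before each edit, rather than after, is what guarantees the very first agent change is also reversible.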
