
At launch, the agents can explore a developer’s project, interpret its structure and metadata, build it, run tests, and help identify and fix errors. Apple says the models will also have access to current developer documentation, helping ensure code is written against the latest APIs and best practices.
To support the integration, Apple worked closely with Anthropic and OpenAI to optimize token usage and tool calling so the agents run efficiently within Xcode. The system is built on the Model Context Protocol (MCP), which means Xcode can connect not only to Claude and Codex but also to other MCP-compatible agents for tasks like file management, previews, snippets, and documentation lookup.
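Because the integration is built on MCP, third-party agents plug in the way MCP servers typically do: a client launches a server process and exchanges tool calls with it over a standard transport. The sketch below shows the common `mcpServers` configuration shape used by many MCP clients; the server name, command, and environment variable are hypothetical, and the source does not specify the exact file or format Xcode itself uses:

```json
{
  "mcpServers": {
    "docs-lookup": {
      "command": "npx",
      "args": ["-y", "example-mcp-docs-server"],
      "env": { "DOCS_API_KEY": "<your-key>" }
    }
  }
}
```

In this pattern, the client spawns the listed command as a subprocess and speaks the protocol over stdio, which is what lets one IDE connect interchangeably to agents for file management, previews, snippets, or documentation lookup.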
Developers can enable the feature by downloading agents through Xcode’s settings and connecting accounts via sign-in or API keys. A drop-down menu allows users to select different model versions, and a prompt box within the IDE lets developers describe changes or new features in natural language.

Apple is also emphasizing transparency in how the agents work. Tasks are broken into visible steps, code changes are highlighted, and a project transcript shows what the agent is doing as it progresses. Xcode also creates milestones as changes are made, making it easy to revert if developers don’t like the results.
Apple believes the feature could be particularly helpful for newer developers learning how projects are structured and built. The company is hosting a “code-along” workshop Thursday on its developer site to demonstrate how to use agentic tools in real time.
With Xcode 26.3, Apple is positioning AI agents as a deeper part of the app development workflow — bringing autonomous coding, testing, and iteration directly into the tools used to build software across the Apple ecosystem.
This analysis is based on reporting from TechCrunch.
Images courtesy of Apple.
This article was generated with AI assistance and reviewed for accuracy and quality.