Explore the key takeaways from Apple’s WWDC 2025, including the new Liquid Glass design system, AI and ChatGPT integration in Xcode, major updates to Swift, iPadOS, VisionOS, and widgets. A deep dive into the tools, trends, and technologies shaping the future of iOS development.
The main keynote of Apple’s WWDC 2025 has just concluded, the primary thematic sessions have been announced, and it’s time to discuss the main trends in iOS development that await us: what we will learn, which practices we will master, and which sessions are worth watching.
The sessions are available on the Apple Developer portal. Note that the site’s design has changed, and the sessions are now grouped by topic. It’s unclear whether this is only part of the catalog or whether Apple has decided to publish everything at once, as Google does, but the company has clearly moved away from its previous approach.
So, what new features were introduced?
The most significant announcement was not the AI enhancements, but the Liquid Glass design system. The new “glass effect” has been applied to the UI of all devices in the Apple ecosystem. For many, the translucent, glowing icons, along with special animations and movement mechanics, are reminiscent of both Windows Vista and Material Design. Information on how to work with the new design system is included in almost all the videos for the themed week. Relatively little time during the keynote was dedicated to other new developments in APIs and Xcode. As is tradition, all the detailed information will be in the sessions.
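Still, adopting the new material in code looks lightweight. Below is a minimal sketch assuming the glassEffect modifier demonstrated in the sessions; the exact modifier name and its defaults may differ in the shipping SDK, and GlassBadge is a hypothetical view.

```swift
import SwiftUI

// Hypothetical view adopting the new Liquid Glass material.
struct GlassBadge: View {
    var body: some View {
        Label("Now Playing", systemImage: "music.note")
            .padding()
            // Assumed API from the WWDC sessions: renders the label on a Liquid Glass surface.
            .glassEffect()
    }
}
```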
Let’s discuss some important technological innovations in more detail, specifically from a development perspective.
We’ll start with what’s new in the Swift language. Apple engineers have done significant work on performance, memory optimization and management, and concurrency and multithreading. Structured concurrency is evolving into “Approachable Concurrency,” an effort to make thread safety and protection against data races and other conflicts easier to adopt.
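As a refresher, here is a minimal sketch of structured concurrency in today’s Swift: the child tasks run in parallel, and the compiler’s Sendable checking is what guards against data races. The loadSales and loadVisits functions are hypothetical stand-ins for real asynchronous work.

```swift
import Foundation

// Hypothetical data type; Sendable lets the compiler verify it can cross task boundaries safely.
struct Report: Sendable {
    let sales: Int
    let visits: Int
}

// Stand-ins for real asynchronous work (network calls, database reads, etc.).
func loadSales() async -> Int { 42 }
func loadVisits() async -> Int { 1_000 }

func buildReport() async -> Report {
    // `async let` starts both child tasks concurrently within a structured scope.
    async let sales = loadSales()
    async let visits = loadVisits()
    // Awaiting here guarantees both child tasks finish before buildReport returns.
    return await Report(sales: sales, visits: visits)
}
```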
Improvements have also been made to macros and their development. A library for working with subprocesses, that is, for launching external processes from within an application, was also introduced. A great deal of attention is being paid to tools for testing and debugging applications, and the NotificationCenter mechanism for notifying about value changes is being updated and optimized.
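On the testing side, the Swift Testing framework with its @Test and #expect macros is the current direction; here is a minimal example, where greeting(for:) is a hypothetical function under test.

```swift
import Testing

// Hypothetical function under test.
func greeting(for name: String) -> String {
    "Hello, \(name)!"
}

@Test func greetingIncludesName() {
    // #expect records a failure, with the evaluated values, if the condition is false.
    #expect(greeting(for: "WWDC") == "Hello, WWDC!")
}
```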
Apple engineers are also placing a special emphasis on Swift’s availability in other IDEs (like VS Code integration), on other operating systems (Linux, FreeBSD), and on interoperability with other languages—not just C/C++ but also Java. This is a big step towards direct interoperability with Kotlin. For more details, see the “Explore Swift and Java interoperability” and “Safely mix C, C++, and Swift” sessions.
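Direct C interoperability has been a Swift staple for years, and it is the baseline the C++ and Java efforts build on. As a tiny reminder of how it looks today, the snippet below calls the C standard library’s strlen with no wrapper code.

```swift
import Darwin  // exposes the C standard library to Swift on Apple platforms

// Swift bridges the String literal to a C string automatically,
// so the plain C function strlen can be called directly.
let length = strlen("WWDC 2025")
print(length)  // 9
```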
Last year, Apple announced a major effort to integrate AI into macOS and its entire ecosystem. In reality, it delivered only a few features (emoji generation, an assistant for mail and notifications, text analysis), and these were not available in all regions. The AI integration in Xcode was far more modest than Google’s Gemini assistant and was limited to autocomplete. Many were disappointed but continued to wait for Apple’s response to its competitors. A year later, the wait is over.
Apple introduced ChatGPT integration in a number of system applications and features. For example, there is a very convenient (at first glance) screen search feature that allows searching for content within installed applications by image. Apple’s contextual recommendations, a feature that has confused users for several years, are now powered by newly patented technology.
We are much more interested in what “smart” features have appeared in the development tools. Apple has opened access to its on-device AI models, enabling the creation of new tools and applications. Large language models such as ChatGPT are integrated into Xcode 26. However, there’s a catch: besides GPT not being available everywhere, there is also a limit on the number of requests you can make through Xcode. To get around this, you can connect your own account with your own API key, configure integration with another provider’s model (such as Anthropic), or connect local models, which immediately solves the availability problem for the “smart” functionality. Support for LM Studio also lets you plug in any of your own models, not just Llama but also, for example, DeepSeek, and a dedicated menu lets you switch between models. This is very reminiscent not only of what Google offers for Android Studio but also of custom solutions from enthusiasts.
A new simplified menu allows changes to be applied to the selected code automatically. The model can analyze the context of the entire project, which lets it understand references and make appropriate changes. Also very useful: Xcode stores the history of code changes as snapshots, so you can revert to a previous version of the code.
AI for development is not limited to Xcode. Apple introduced functionality for adding “smart” features to your own application and integrating them via intents, shortcuts, and Siri. The Foundation Models framework for working with AI on the device deserves special attention: you can use it for powerful content analysis and generation in your applications, driven by your own prompts and templates, as well as in your own Xcode extensions.
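Here is a minimal sketch of what calling the on-device model might look like, based on the Foundation Models API shown in the sessions; the type and method names (LanguageModelSession, respond(to:)) may differ in the shipping SDK, and summarize is a hypothetical helper.

```swift
import FoundationModels

// Hypothetical helper: asks the on-device model to summarize a piece of text.
func summarize(_ text: String) async throws -> String {
    // Instructions steer the model for every request made in this session.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```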
However, what we should definitely thank Apple engineers for is MLX LM. With this tool, we will be able to work with powerful language models and train and fine-tune them on a Mac. Training LLMs normally demands enormous resources and the NVIDIA/CUDA stack that most tooling targets, which Macs don’t support, so a native, Apple-silicon-optimized path matters. For more details, see “Explore large language models on Apple Silicon with MLX,” “Get started with MLX for Apple Silicon,” “Discover machine learning & AI frameworks on Apple platforms,” and others. I also recommend checking out the tutorial on generative AI from Apple.
Widgets are another response to Google and its Live Updates. Widgets and Live Activities will now be supported on almost all devices in the Apple ecosystem and will sync with the iPhone through mirroring. A great deal of attention is being paid to the development, performance, and UX of widgets and Live Activities on watches and in visionOS.
A dark theme is being introduced, as well as a glass effect in the style of the new Liquid Glass design system. The changes concern not only appearance but also how widgets are refreshed and how they perform.
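As context for those improvements, here is a minimal, conventional WidgetKit widget; the names (StatusWidget, StatusProvider) are hypothetical, and the new Liquid Glass treatment is applied by the system rather than by this code.

```swift
import WidgetKit
import SwiftUI

struct StatusEntry: TimelineEntry {
    let date: Date
    let message: String
}

struct StatusProvider: TimelineProvider {
    func placeholder(in context: Context) -> StatusEntry {
        StatusEntry(date: .now, message: "Loading…")
    }

    func getSnapshot(in context: Context, completion: @escaping (StatusEntry) -> Void) {
        completion(StatusEntry(date: .now, message: "Snapshot"))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<StatusEntry>) -> Void) {
        let entry = StatusEntry(date: .now, message: "Hello from the widget")
        // Ask for a refresh in about an hour; the system decides the exact moment.
        completion(Timeline(entries: [entry], policy: .after(.now.addingTimeInterval(3600))))
    }
}

@main
struct StatusWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "StatusWidget", provider: StatusProvider()) { entry in
            Text(entry.message)
                .containerBackground(.fill.tertiary, for: .widget)
        }
        .configurationDisplayName("Status")
    }
}
```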
Traditionally, large and powerful features require special attention to performance, power consumption, memory use, and network requests. For the first time in several years, Apple has introduced a new session on working with background tasks, their APIs, and state changes, called “Finish tasks in the background.” Attention is also being given to working with background resources. The Network framework has also received improvements, including built-in mechanisms for secure connections, support for TLS and QUIC, improved connection stability, and new methods for device discovery and connection setup with NetworkConnections, including Wi-Fi connections.
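For orientation, here is a minimal sketch of the existing BGTaskScheduler flow that the “Finish tasks in the background” session builds on; the identifier "com.example.refresh" is a placeholder and must also be declared in the app’s Info.plist.

```swift
import BackgroundTasks

// Placeholder identifier; it must be listed under
// "Permitted background task scheduler identifiers" in Info.plist.
let refreshTaskID = "com.example.refresh"

// Call once at launch, e.g. from application(_:didFinishLaunchingWithOptions:).
func registerBackgroundRefresh() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshTaskID, using: nil) { task in
        handleRefresh(task: task as! BGAppRefreshTask)
    }
}

// Schedule the next refresh; the system picks the actual run time.
func scheduleBackgroundRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshTaskID)
    request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)
    try? BGTaskScheduler.shared.submit(request)
}

func handleRefresh(task: BGAppRefreshTask) {
    scheduleBackgroundRefresh() // keep future refreshes coming

    let work = Task {
        // ... fetch and cache fresh content here ...
        task.setTaskCompleted(success: true)
    }
    task.expirationHandler = {
        // The system is reclaiming the time budget; stop work promptly.
        work.cancel()
    }
}
```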
Let’s conclude today’s overview with SwiftUI and its usage. Apple is not abandoning UIKit and continues to develop it. Both frameworks receive support for the Liquid Glass design system, as well as enhancements for development across different devices. For example, iPadOS now supports multi-window applications and a menu bar, just like macOS.
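A minimal sketch of what that can look like in SwiftUI, assuming the existing WindowGroup and CommandMenu APIs carry over to the iPadOS menu bar; NotesApp and the menu items are hypothetical, and multi-window support also requires the “Supports multiple windows” setting in the app’s Info.plist.

```swift
import SwiftUI

@main
struct NotesApp: App {
    var body: some Scene {
        // A WindowGroup scene lets the system open multiple windows of the app.
        WindowGroup {
            ContentView()
        }
        .commands {
            // CommandMenu items appear in the menu bar on macOS and,
            // assumed here, in the new iPadOS menu bar as well.
            CommandMenu("Notes") {
                Button("New Note") {
                    // hypothetical action
                }
                .keyboardShortcut("n")
            }
        }
    }
}

struct ContentView: View {
    var body: some View {
        Text("Hello, iPad")
            .padding()
    }
}
```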
The new design system and effects require improvements to the rendering engine and memory optimization. How this is implemented can be learned from the videos “What’s new in UIKit” and “What’s new in SwiftUI.” In the SwiftUI section, you can also find tutorials on the new features of SwiftUI and their use in widgets, performance improvements, working with watchOS, spatial computing (visionOS), and much more.
Author: Anna Zharkova, Head of the mobile practice at Usetech