Swift Concurrency Model

Jan 25, 2022 · ios · swift · concurrency

Swift 5.5 is a massive release that includes newly introduced language capabilities for concurrent programming, such as async/await, structured concurrency, tasks, actors, and async sequences.

This concurrency model intends to provide a safe programming model that statically detects data races and other common concurrency bugs. It is tightly integrated with the language syntax and built on top of threads.

Concurrent Programming

The concept of concurrent programming is frequently confused with the related but distinct concept of parallel programming. Many developers still get confused when they come across the following concepts:

  • Parallelism: Refers to techniques that make programs faster by performing several computations at the same time. It requires hardware with multiple processing units to do more at once.
  • Concurrency: The ability of different parts of a program to be executed out of order or in partial order without affecting the final outcome. Concurrency is about structure and handling (asynchronous, nondeterministic) events; it is not necessarily parallel.
  • Synchronous: When you execute something synchronously, you wait for it to finish before moving on to another task.
  • Asynchronous: When you execute something asynchronously, you can move on to another task before it finishes.
  • Concurrency model: Specifies how a system implements concurrency, responsible for executing the code, collecting and processing events, and executing queued tasks.

It’s important to note that parallelism requires concurrency, but concurrency does not guarantee parallelism. Basically, concurrency is about structure while parallelism is about execution.

In iOS, a process or application consists of one or more threads. The operating system scheduler manages the threads independently of each other. Each thread can execute concurrently, but it’s up to the system to decide if this happens, when this happens, and how it happens.

Single-core devices achieve concurrency through a method called time-slicing. They run one thread, perform a context switch, then run another thread.

Multi-core devices, on the other hand, execute multiple threads at the same time via parallelism.

Using Swift’s new language-level support for concurrency in code that needs to be concurrent means Swift can help you catch problems at compile time. Swift uses the term concurrency to refer to the common combination of asynchronous and parallel code.

Although it’s possible to write concurrent code using callback closures, more complex code with deep nesting can quickly become unwieldy.

listPhotos(inGallery: "Summer Vacation") { photoNames in
  let sortedNames = photoNames.sorted()
  let name = sortedNames[0]
  downloadPhoto(named: name) { photo in
    show(photo)
  }
}

Grand Central Dispatch (GCD)

For years, writing powerful and safe concurrent apps with Swift could easily turn into a daunting task, full of race conditions and unexplained crashes hidden in a massive nesting of callback closures.

You used to rely on GCD to run asynchronous code via dispatch queues — an abstraction over threads. You also used thread wrapper APIs like Operation and Thread, or even interacted with the C-based pthread library directly.

DispatchQueue.global(qos: .userInitiated).async { [weak self] in
  guard let self = self else { return }
  let overlayImage = self.faceOverlayImageFrom(self.image)
  DispatchQueue.main.async { [weak self] in
    self?.fadeInNewImage(overlayImage)
  }
}

Those APIs all use the same foundation: POSIX threads, a standardized execution model that doesn’t rely on any given programming language. Each execution flow is a thread, and multiple threads might overlap and run at the same time.

Thread wrappers like Operation and Thread require you to manually manage execution. In other words, you’re responsible for creating and destroying threads, deciding the order of execution for concurrent jobs and synchronizing shared data across threads. This is error-prone and tedious work.

GCD’s queue-based model worked well. However, it would often cause issues, like:

  • Thread explosion: Creating too many concurrent threads requires constantly switching between active threads. This ultimately slows down your app.
  • Priority inversion: When arbitrary, low-priority tasks block the execution of high-priority tasks waiting in the same queue.
  • Lack of execution hierarchy: Asynchronous code blocks lacked an execution hierarchy, meaning each task was managed independently. This made it difficult to cancel or access running tasks. It also made it complicated for a task to return a result to its caller.

The new Swift concurrency model transparently manages a pool of threads to ensure it doesn’t exceed the number of available CPU cores. This way, the runtime doesn’t need to create and destroy threads or constantly perform expensive thread switching. Instead, your code can suspend and, later on, resume very quickly on any of the available threads in the pool.


Swift’s new async/await syntax lets the compiler and the runtime know that a piece of code might suspend and resume execution one or more times in the future. The runtime handles this for you seamlessly, so you don’t have to worry about threads and cores.

As a wonderful bonus, the new language syntax often removes the need to weakly or strongly capture self or other variables because you don’t need to use escaping closures as callbacks.

The async keyword defines a function as asynchronous. await lets you wait in a non-blocking fashion for the result of the asynchronous function.

func chopVegetables() async throws -> [Vegetable] {}
func marinateMeat() async -> Meat {}
func preheatOven(temperature: Double) async throws -> Oven {}

func makeDinner() async throws -> Meal {
  let veggies = try await chopVegetables()
  let meat = await marinateMeat()
  let oven = try await preheatOven(temperature: 350)
  let dish = Dish(ingredients: [veggies, meat])
  return try await oven.cook(dish, duration: .hours(3))
}

It’s important to remember that each of these await-annotated partial tasks might run on a different thread at the system’s discretion. Not only can the thread change, but you shouldn’t make assumptions about the app’s state after an await; although two lines of code appear one after another, they might execute some time apart. Awaiting takes an arbitrary amount of time, and the app state might change significantly in the meantime.
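To make this concrete, here is a small sketch using hypothetical async variants of the listPhotos, downloadPhoto, and show helpers from the earlier callback example:

```swift
func showFirstPhoto() async {
    let photoNames = await listPhotos(inGallery: "Summer Vacation")
    // Execution may resume here on a different thread, an arbitrary
    // amount of time later. Any state read before the await (the
    // selected gallery, the visible screen, ...) may have changed
    // and should be re-checked rather than assumed.
    let photo = await downloadPhoto(named: photoNames.sorted()[0])
    show(photo)
}
```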

This syntax lets the compiler guide you in writing safe and solid code, while the runtime optimizes for a well-coordinated use of shared system resources.

Structured Concurrency

Any concurrency system must offer certain basic tools. There must be some way to create a new thread that will run concurrently with existing threads. There must also be some way to make a thread wait until another thread signals it to continue. These are powerful tools, and you can write very sophisticated systems with them. But they’re also very primitive tools: they make very few assumptions, but in return they give you very little support.

Structured concurrency asks programmers to organize their use of concurrency into high-level tasks and their child component tasks. These tasks become the primary units of concurrency, rather than lower-level concepts like threads. Structuring concurrency this way allows information to naturally flow up and down the hierarchy of tasks which would otherwise require carefully-written support at every level of abstraction and on every thread transition. This in turn permits many high-level problems to be addressed with relative ease.

Structured concurrency enables concurrent execution of asynchronous code with a model that is ergonomic, predictable, and admits efficient implementation.


Swift’s new Task type enables us to encapsulate, observe, and control a unit of asynchronous work — which in turn lets us call async-marked APIs, and perform background work, even within code that’s otherwise completely synchronous. That way, we can gradually introduce async functions and the rest of Swift’s new concurrency system, even within applications that weren’t designed with those new features in mind.

class ProfileViewController: UIViewController {
  private let userID: User.ID
  private let loader: UserLoader
  private var user: User?

  override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    Task {
      do {
        let user = try await loader.loadUser(withID: userID)
        self.user = user
      } catch {
        // Handle the error, e.g. by showing an alert
      }
    }
  }
}
Tasks run in a strict hierarchy, so the runtime knows which task is the parent of another and which features new tasks should inherit.

A Task runs on the actor from which it was created. To create the same task without it being part of that actor, use Task.detached(priority:operation:). When your code creates a Task from the main thread, that task will run on the main thread, too. Therefore, you know you can update the app’s UI safely.

Remember, you learned that every use of await is a suspension point, and your code might resume on a different thread. The first piece of your code runs on the main thread because the task initially runs on the main actor. But after the first await, your code can execute on any thread. You need to explicitly route any UI-driving code back to the main thread.
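One explicit way to hop back to the main actor is MainActor.run. In this sketch, fetchImage and imageView are hypothetical names, not part of any real API:

```swift
Task {
    // This may suspend and resume on any thread in the pool.
    let image = try await fetchImage()

    // Hop back to the main actor before touching UI state.
    await MainActor.run {
        imageView.image = image
    }
}
```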

Task Groups

A task group defines a scope in which one can create new child tasks programmatically. As with all child tasks, the child tasks within the task group scope must complete when the scope exits, and will be implicitly cancelled first if the scope exits with a thrown error.

func makeDinner() async throws -> Meal {
  async let veggies = chopVegetables()
  async let meat = marinateMeat()
  async let oven = preheatOven(temperature: 350)

  let dish = Dish(ingredients: await [try veggies, meat])
  return try await oven.cook(dish, duration: .hours(3))
}
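The async let bindings above create child tasks implicitly. A task group makes child-task creation explicit and dynamic. Here is a sketch using withThrowingTaskGroup, assuming a hypothetical chop(_:) helper:

```swift
func chopVegetables(_ veggies: [Vegetable]) async throws -> [Vegetable] {
    try await withThrowingTaskGroup(of: Vegetable.self) { group in
        // Spawn one child task per vegetable; they run concurrently.
        for vegetable in veggies {
            group.addTask { try await chop(vegetable) }
        }

        // Collect results as the child tasks finish. If any child
        // throws, the remaining children are implicitly cancelled
        // before the scope exits.
        var chopped: [Vegetable] = []
        for try await vegetable in group {
            chopped.append(vegetable)
        }
        return chopped
    }
}
```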


Actors

Swift includes classes, which provide a mechanism for declaring mutable state that is shared across the program. Classes, however, are notoriously difficult to use correctly within concurrent programs, requiring error-prone manual synchronization to avoid data races.

The actor model defines entities called actors that are perfect for this task. Actors allow you as a programmer to declare that a bag of state is held within a concurrency domain and then define multiple operations that act upon it.

Each actor protects its own data through data isolation, ensuring that only a single thread will access that data at a given time, even when many clients are concurrently making requests of the actor.

As part of the Swift Concurrency Model, actors provide the same race and memory safety properties as structured concurrency, but provide the familiar abstraction and reuse features that other explicitly declared types in Swift enjoy.

An actor is a reference type that protects access to its mutable state, and is introduced with the keyword actor:

actor BankAccount {
  let accountNumber: Int
  var balance: Double

  init(accountNumber: Int, initialDeposit: Double) {
    self.accountNumber = accountNumber
    self.balance = initialDeposit
  }
}

Like other Swift types, actors can have initializers, methods, properties, and subscripts. They can be extended and conform to protocols, be generic, and be used with generics.

The primary difference is that actors protect their state from data races. This is enforced statically by the Swift compiler through a set of limitations on the way in which actors and their instance members can be used, collectively called actor isolation.

Actor isolation is how actors protect their mutable state. For actors, the primary mechanism for this protection is by only allowing their stored instance properties to be accessed directly on self.
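As a sketch built on the BankAccount actor above: methods access state on self synchronously, while callers outside the actor must await.

```swift
extension BankAccount {
    func deposit(amount: Double) {
        // Inside the actor, stored properties on self are accessed
        // synchronously; the actor serializes these operations.
        balance += amount
    }
}

// From outside the actor, every access is potentially asynchronous:
// let account = BankAccount(accountNumber: 42, initialDeposit: 100)
// await account.deposit(amount: 50)
// let current = await account.balance
```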

Global Actor

This actor represents a globally-unique actor that can be used to isolate various declarations anywhere in the program.

A type that conforms to the GlobalActor protocol and is marked with the @globalActor attribute can be used as a custom attribute.

@globalActor
struct MediaActor {
  actor ActorType {}
  static let shared: ActorType = ActorType()
}

struct Videogame {
  let id = UUID()
  let name: String
  let releaseYear: Int
  let developer: String
}

@MediaActor var videogames: [Videogame] = []

When using such a declaration from another actor (or from nonisolated code), synchronization is performed through the shared actor instance to ensure mutually-exclusive access to the declaration.
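As a sketch of this, using the MediaActor example above (addVideogame is a hypothetical helper):

```swift
@MediaActor
func addVideogame(_ game: Videogame) {
    // This function is isolated to MediaActor.shared, so mutating the
    // @MediaActor-isolated array here is safe and synchronous.
    videogames.append(game)
}

// From nonisolated code, the call must be awaited:
// await addVideogame(Videogame(name: "Example", releaseYear: 2022,
//                              developer: "Example Studio"))
```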

Main Actor

Before the modern concurrency system, we could simply call DispatchQueue.main.async and pass in a completion block. This block would run on main, making it safe to update our UI from there.

Because the new concurrency system may jump between different threads, suspending tasks and resuming them (possibly on other threads), we need another mechanism for getting back to the main thread.

The main actor, written as @MainActor, is a singleton actor whose executor is equivalent to the main dispatch queue.

class IconViewController: NSViewController {
  @MainActor @objc private dynamic var icons: [[String: Any]] = []

  @MainActor var url: URL? {
    didSet {
      // Asynchronously perform an update
      Task.detached { [url] in // not isolated to any actor
        guard let url = url else { return }
        let newIcons = self.gatherContents(url)
        await self.updateIcons(newIcons) // 'await' required so we can hop over to the main actor
      }
    }
  }

  @MainActor private func updateIcons(_ iconArray: [[String: Any]]) {
    icons = iconArray
  }
}

Asynchronous Sequences

This feature creates an intuitive, built-in way to write and use functions that return many values over time. You can naturally loop over an asynchronous sequence by using for await (or, for throwing sequences, for try await) loop syntax.

for await i in Counter(howHigh: 10) {
    print(i, terminator: " ")
}

AsyncSequence is a protocol describing a sequence that can produce elements asynchronously. Its surface API is identical to the Swift standard library’s Sequence, with one difference: you need to await the next element, since it might not be immediately available, as it would be in a regular Sequence.

An AsyncSequence doesn’t generate or contain the values; it just defines how you access them. Along with defining the element type as an associated type, it provides a makeAsyncIterator() method that returns an instance of its AsyncIterator associated type.
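A minimal sketch of what the Counter type used in the earlier loop might look like (Counter is not a standard library type; this mirrors the pattern from Apple’s AsyncSequence documentation):

```swift
struct Counter: AsyncSequence {
    typealias Element = Int
    let howHigh: Int

    struct AsyncIterator: AsyncIteratorProtocol {
        let howHigh: Int
        var current = 1

        mutating func next() async -> Int? {
            // A real implementation might await asynchronous work here;
            // this one simply counts up until the limit is reached.
            guard current <= howHigh else { return nil }
            defer { current += 1 }
            return current
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(howHigh: howHigh)
    }
}
```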