Swift Internals

First Edition · iOS 26 · Swift 6.2 · Xcode 26

4. Embracing Structured Concurrency
Written by Aaqib Hussain


Ever since Apple introduced async/await and actors, writing concurrent code has changed fundamentally. Structured concurrency offers a level of simplicity and safety that was missing in older APIs such as DispatchQueue and OperationQueue. If writing asynchronous code with DispatchQueue was often a matter of “Somehow, I manage”, then writing it with structured concurrency is confidently “Of course, I manage.”

This modern system enables you to write thread-safe code that is less prone to race conditions from the start, as the compiler actively guides you away from potential issues.

This chapter examines Apple’s entire async ecosystem. You will go beyond the basics to master Task hierarchies, ensure UI safety with the Main Actor, and process asynchronous data streams. Brace yourself: an adventure is coming…

Mastering Structured Concurrency

If you often write asynchronous code, you’re likely aware of how it can create a chaotic web of completion blocks and disconnected queues. This makes it difficult to track the lifecycle of work items or to handle cancellations properly.
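To make that chaos concrete, here’s a sketch of the older callback style (the API names are hypothetical, invented purely for illustration). Each nesting level must hand-roll its own error handling, and nothing ties the work items together: there’s no shared cancellation and no lifecycle the compiler can see.

```swift
import Foundation

// A hypothetical callback-based API, sketched to illustrate the "chaotic
// web" of completion blocks that structured concurrency replaces.
func fetchUser(completion: @escaping (String?, Error?) -> Void) {
  DispatchQueue.global().async { completion("Michael", nil) }
}

func fetchPosts(for user: String, completion: @escaping ([String]?, Error?) -> Void) {
  DispatchQueue.global().async { completion(["Post 1", "Post 2"], nil) }
}

// Nested callbacks: every level repeats its own guard-and-bail error
// handling, and cancelling the whole chain would require extra plumbing.
let done = DispatchSemaphore(value: 0)
fetchUser { user, _ in
  guard let user else { done.signal(); return }
  fetchPosts(for: user) { posts, _ in
    print("Loaded \(posts?.count ?? 0) posts for \(user)")
    done.signal()
  }
}
done.wait() // Keep the script alive until the callbacks finish
```

With only two levels this is tolerable; add retries, cancellation, and a third request, and the pyramid grows quickly.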

On the other hand, while async/await introduces clean syntax, its real power lies in the structure it provides. It not only enforces a formal hierarchy but also provides a clear and predictable order to that chaos. Additionally, it provides compile-time safety and a runtime system that automatically manages complex scenarios, such as parallel execution and cancellation, which helps prevent common bugs and resource leaks.

The Task Hierarchy: More Than Just a Closure

In Swift, a Task is not just a closure that runs on a background thread; it’s a unit of concurrent work that the system actively manages. Each Task has a priority, can be cancelled, and exists within a hierarchy known as the Task Tree.

struct UserProfile {
  var name: String
  var handle: String
}

struct ActivityItem {
  var description: String
}

func fetchProfile() async throws -> UserProfile {
  print("Child 1 (Profile): Fetching...")
  try await Task.sleep(for: .seconds(1)) // Simulate work
  print("Child 1 (Profile): Finished.")
  return UserProfile(name: "Michael Scott", handle: "@michaelscott")
}

func fetchFeed() async throws -> [ActivityItem] {
  print("Child 2 (Feed): Starting loop...")
  for i in 0..<100 {
    // This sleep is a cancellation point
    try await Task.sleep(for: .milliseconds(500))
    // This line will not be printed after cancellation
    print("Child 2 (Feed): Completed iteration \(i)")
  }
  return []
}

func loadUserProfileAndActivityFeed() async {
  print("Parent: Starting to fetch data.")
  // 1
  async let profileTask = fetchProfile()
  async let feedTask = fetchFeed()
  
  do {
    let (profile, feed) = try await (profileTask, feedTask) // 2
    print("Parent: Successfully loaded profile for \(profile) and \(feed.count) activity items.")
  } catch {
    print("Parent: One of the child tasks was cancelled or threw an error.")
  }
}
let mainTask = Task {
  await loadUserProfileAndActivityFeed()
}
Parent: Starting to fetch data.
Child 1 (Profile): Fetching...
Child 2 (Feed): Starting loop...
Child 2 (Feed): Completed iteration 0
Child 2 (Feed): Completed iteration 1
Child 2 (Feed): Completed iteration 2
Child 1 (Profile): Finished.
Child 2 (Feed): Completed iteration 3
Child 2 (Feed): Completed iteration 4
Child 2 (Feed): Completed iteration 5
...
mainTask.cancel()
Parent: One of the child tasks was cancelled or threw an error.
func loadUserProfileAndActivityFeed() async {
  print("Parent: Starting to fetch data.")
  
  do {
    let profileTask = try await fetchProfile() // 1
    let feedTask = try await fetchFeed() // 2
    
    let (profile, feed) = (profileTask, feedTask)
    print("Parent: Successfully loaded profile for \(profile) and \(feed.count) activity items.")
  } catch {
    print("Parent: One of the child tasks was cancelled or threw an error.")
  }
}
enum FetchResult {
  case profile(UserProfile)
  case feed([ActivityItem])
}

func loadUserProfileAndActivityFeed() async {
  print("Parent: Starting to fetch data.")
  
  do {
    // Create variables to hold the results from the group
    var profile: UserProfile?
    var feed: [ActivityItem]?
    
    try await withThrowingTaskGroup(of: FetchResult.self) { group in // 1
      // Add child tasks to the group. They run in parallel.
      group.addTask {
        return .profile(try await fetchProfile()) // 2
      }
      
      group.addTask {
        return .feed(try await fetchFeed()) // 3
      }
      
      // Collect the results as they complete
      for try await result in group { // 4
        switch result {
        case let .profile(fetchedProfile):
          profile = fetchedProfile
        case let .feed(fetchedFeed):
          feed = fetchedFeed
        }
      }
    }
    
    // The group has finished, and you can now use the results.
    print("Parent: Successfully loaded profile for \(profile?.name ?? "unknown") and \(feed?.count ?? 0) activity items.")
  } catch {
    print("Parent: One of the child tasks was cancelled or threw an error.")
  }
}

Understanding Task Priority

Every task you create has a priority, which indicates how important its work is to the system. The system uses this priority to decide which task to schedule on an available thread, especially when there are more tasks ready to run than CPU cores available.

Task(priority: .background) {
  // Perform cleanup work here...
  print("Cleaning up old files on priority: \(Task.currentPriority)")
}

Task Cancellation

Some asynchronous operations take longer than expected. For example, a user might decide to cancel a lengthy download of a large image or PDF. To support this, each task should check for cancellation. There are two ways to do so: checking Task.isCancelled or calling try Task.checkCancellation(). Here’s how you do it:

func fetchFeed() async throws -> [ActivityItem] {
  print("Child 2 (Feed): Starting loop...")
  for i in 0..<100 {
    // Cancellation check
    if Task.isCancelled { // 1
      throw CancellationError()
    }
    
    // This sleep is a cancellation point
    try await Task.sleep(for: .milliseconds(500))
    
    // This line will not be printed after cancellation
    print("Child 2 (Feed): Completed iteration \(i)")
  }
  return []
}
func loadUserProfileAndActivityFeed() async {
  print("Parent: Starting to fetch data.")
  
  do {
    // ...
    
    try await withThrowingTaskGroup(of: FetchResult.self) { group in // 1
      // Add child tasks to the group. They run in parallel.
      group.addTask {
        return .profile(try await fetchProfile())
      }
      
      group.addTaskUnlessCancelled {
        return .feed(try await fetchFeed())
      }
      
      // ...
    }
  } catch {
    print("Parent: One of the child tasks was cancelled or threw an error.")
  }
}

Cooperative Cancellation with Task.yield()

In concurrent systems, it’s important for long-running tasks to be considerate. A task that performs heavy CPU-based computation without taking any breaks can monopolize a thread, blocking other tasks from executing. To address this, Swift offers Task.yield().

let taskA = Task {
  print("Task A: Starting a long loop.")
  for i in 0..<10 {
    print("Task A: Now on iteration \(i)")
  }
  print("Task A: Finished")
}

let taskB = Task {
  print("Task B: Starting a long loop.")
  for i in 0..<10 {
    print("Task B: Now on iteration \(i)")
  }
  print("Task B: Finished")
}
Task A: Starting a long loop.
Task A: Now on iteration 0
...
Task A: Finished
Task B: Starting a long loop.
Task B: Now on iteration 0
...
Task B: Finished
let taskA = Task {
  print("Task A: Starting a long loop.")
  for i in 0..<5 {
    await Task.yield()
    print("Task A: Now on iteration \(i)")
  }
  print("Task A: Finished")
}

let taskB = Task {
  print("Task B: Starting a long loop.")
  for i in 0..<5 {
    await Task.yield()
    print("Task B: Now on iteration \(i)")
  }
  print("Task B: Finished")
}

Task A: Starting a long loop.
Task B: Starting a long loop.
Task A: Now on iteration 0
Task B: Now on iteration 0
Task A: Now on iteration 1
Task B: Now on iteration 1
...

Tasks: Breaking the Structure

Structured concurrency gives Swift robustness and control through the parent-child hierarchy you learned about previously. Swift also supports unstructured concurrency. Unlike a child task, an unstructured task is independent: it doesn’t belong to a parent, which gives you complete flexibility to manage it however you need. It still inherits the surrounding context; for example, a task created in a @MainActor scope inherits that isolation, along with the current priority and task-local values. You can use a @TaskLocal static var to create a task-scoped value that’s visible to child tasks.

struct RequestInfo {
  @TaskLocal static var requestID: UUID?
}

func handleTaskRequest() async {
  await RequestInfo.$requestID.withValue(UUID()) {
    if let id = RequestInfo.requestID {
      print("Processing order with ID: \(id)") // 1
    }
    
    // Create a child task
    let childTask = Task {
      // The child task "gets a copy" of the parent's task-local values
      if let id = RequestInfo.requestID {
        print("Child task logging for ID: \(id)") // 2
      }
    }
    await childTask.value
  }
}
func handleDetachedRequest() async {
  await RequestInfo.$requestID.withValue(UUID()) {
    
    if let id = RequestInfo.requestID {
      print("Processing order with ID: \(id)") // 1
    }
    
    let detachedTask = Task.detached { // 2
      print("Detached Task: Starting...")
      
      if let id = RequestInfo.requestID { // 3
        print("Detached Task: Inherited request ID \(id)")
      } else {
        print("Detached Task: I have no request ID. I am independent.")
      }
    }
    
    await detachedTask.value
  }
}

Data Isolation

Because an app often handles many concurrent tasks, two (or more) tasks can try to update shared state at the same time, leading to a data race. To prevent this, Swift enforces data isolation, ensuring your data is always in a consistent state when accessed and that no other thread modifies it concurrently. There are three ways to isolate data.
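Actors, covered next, are one of those ways. As a minimal sketch of what isolation buys you (the Counter type is illustrative), the compiler guarantees that the actor’s state can only be touched from inside its serialized context:

```swift
// Actor-based isolation: `count` is only reachable through the actor's
// serialized context, so no data race on it is possible.
actor Counter {
  private var count = 0

  func increment() -> Int {
    count += 1
    return count
  }
}

let counter = Counter()

// One hundred concurrent tasks, yet every increment runs one at a time.
await withTaskGroup(of: Void.self) { group in
  for _ in 0..<100 {
    group.addTask { _ = await counter.increment() }
  }
}
let final = await counter.increment()
print("Final count: \(final)") // 101: all 100 prior increments landed safely
```

Try the same with a plain class and a `var count`, and the compiler’s strict concurrency checking flags the unsafe shared mutation.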

Advanced Actors and Data Safety

Actors are fundamental to modern Swift concurrency. They offer a robust, compiler-verified way to prevent data races. By isolating state and enforcing serialized access, they address many traditional issues in multithreaded programming. However, actors are not a perfect solution: they introduce their own challenges and complex behaviors that you must understand to use them effectively. Below, you’ll learn about some of these challenges, along with advanced techniques for controlling actor execution and understanding actors’ place in the broader ecosystem of thread-safety patterns.

The Reentrancy Problem Explained

An actor’s defining feature is that it executes its methods one at a time, preventing multiple threads from accessing its state simultaneously. However, there is an exception known as actor reentrancy.

actor ProgressTracker {
  var loadedValues: [String] = []
  
  func load(_ value: String) async {
    // 1
    let expectedCount = loadedValues.count + 1
    print("Starting load for '\(value)'. Expecting count to be \(expectedCount).")
    
    loadedValues.append(value)
    
    // 2
    try? await Task.sleep(for: .seconds(1))
    
    // 4
    print("Finished load for '\(value)'. Expected \(expectedCount), but actual count is now: \(loadedValues.count)")
  }
}

let tracker = ProgressTracker()
Task { await tracker.load("A") }
Task { await tracker.load("B") } // 3
Starting load for 'A'. Expecting count to be 1.
Starting load for 'B'. Expecting count to be 2.
Finished load for 'A'. Expected 1, but actual count is now: 2
Finished load for 'B'. Expected 2, but actual count is now: 2

Preventing Reentrancy

You can eliminate this problem using the following rules:
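One widely used pattern, sketched here with illustrative names (not necessarily the chapter’s exact rules), is to deduplicate in-flight work: store the running Task in the actor so a reentrant call awaits the existing operation instead of interleaving a second one.

```swift
// Deduplicating in-flight work: a reentrant caller joins the Task that's
// already running rather than starting a second, interleaving load.
actor AvatarLoader {
  private var inFlight: Task<String, Never>?

  func load() async -> String {
    if let existing = inFlight {
      // Reentrant call: await the work already in progress.
      return await existing.value
    }
    let task = Task {
      try? await Task.sleep(for: .milliseconds(100)) // Simulate a download
      return "avatar-data"
    }
    inFlight = task // No suspension between creating and storing the task
    let result = await task.value
    inFlight = nil
    return result
  }
}

let loader = AvatarLoader()
await withTaskGroup(of: String.self) { group in
  group.addTask { await loader.load() }
  group.addTask { await loader.load() } // Likely reentrant: shares the first load
  for await result in group {
    print(result) // "avatar-data", twice
  }
}
```

The key detail is that `inFlight` is set before any `await`, so there’s no suspension window in which a second caller could start a duplicate load.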

Customizing Execution with SerialExecutor

By default, an actor’s code runs on a shared global concurrency thread pool managed by the Swift runtime. At any given time, the system determines the most efficient execution strategy. While this generally works well, in certain cases, you might want the actor’s code to execute on a particular thread or a serial queue. This can be achieved with a custom executor.

final class BackgroundQueueExecutor: SerialExecutor {
  // A shared instance for all actors that might use it
  static let shared = BackgroundQueueExecutor()
  
  // The specific queue you want your actor's code to run on
  private let backgroundQueue = DispatchQueue(label: "com.kodeco.background-executor", qos: .background)
  
  func enqueue(_ job: UnownedJob) { // 1
    backgroundQueue.async {
      job.runSynchronously(on: self.asUnownedSerialExecutor())
    }
  }
}
actor LegacyAPIBridge {
  private let _unownedExecutor: UnownedSerialExecutor
  init(unownedExecutor: UnownedSerialExecutor = BackgroundQueueExecutor.shared.asUnownedSerialExecutor()) {
    _unownedExecutor = unownedExecutor
  }
  
  nonisolated var unownedExecutor: UnownedSerialExecutor {
    _unownedExecutor
  }
  
  func performUnsafeWork() {
    // Thanks to our custom executor, this code is now guaranteed
    // to run on `BackgroundQueueExecutor.shared.backgroundQueue`.
    print("Performing work on a specific queue...")
  }
}

Bridging Concurrency Realms

Swift Concurrency did not emerge in isolation. For years, Combine served as Apple’s modern, declarative framework for managing asynchronous events. It brought a powerful functional approach to handling streams of values over time. As a result, many mature and reliable codebases have a significant investment in Combine publishers, subscribers, and operators.

From Combine to AsyncSequence

The most common situation you might encounter is using an existing Combine publisher from a ViewModel or an API layer in new async/await code. Swift makes this process quite straightforward. Every publisher provided by Combine has a property called values that is inherently an AsyncSequence. Much like the standard Sequence protocol allows you to iterate over a collection with a for...in loop, the AsyncSequence protocol lets you iterate over the values emitted by the publisher with a for await...in loop.

import Combine

enum UserActionEvent: String {
  case loginButtonTapped
  case dismissButtonTapped
  case logoutButtonTapped
}

let subject = PassthroughSubject<UserActionEvent, Never>()

// This task will run indefinitely, waiting for new values from the publisher.
let combineListenerTask = Task {
  print("Listener: Waiting for values from Combine...")
  for await value in subject.values {
    print("Listener: Received '\(value)' from the publisher.")
  }
  print("Listener: Finished.")
}

// In another part of your code, you can send values through the subject.
try await Task.sleep(for: .seconds(1))
subject.send(.loginButtonTapped)
try await Task.sleep(for: .seconds(1))
subject.send(.dismissButtonTapped)
try await Task.sleep(for: .seconds(1))
subject.send(.logoutButtonTapped)
combineListenerTask.cancel()

From async/await to Combine

The reverse case is also possible, where you have the latest code written with async/await, and you need to provide compatibility with an older part of the code that is built with Combine and expects a publisher. The standard approach here is to wrap the async call in a Future publisher.

enum FetchError: Error { case networkError }

func fetchUserName(id: Int) async throws -> String {
  try await Task.sleep(for: .seconds(1))
  if id == 123 {
    return "Ray Wenderlich"
  } else {
    throw FetchError.networkError
  }
}
func userNamePublisher(for id: Int) -> Future<String, Error> {
  return Future { promise in
    Task {
      do {
        let username = try await fetchUserName(id: id)
        promise(.success(username))
      } catch {
        promise(.failure(error))
      }
    }
  }
}
var cancellable: Set<AnyCancellable> = []
userNamePublisher(for: 123)
  .sink(
    receiveCompletion: { completion in
      switch completion {
      case .finished:
        print("Finished successfully")
      case let .failure(error):
        print("Failed with error: \(error)")
      }
    },
    receiveValue: { username in
      print("Received username: \(username)")
    }
  ).store(in: &cancellable)
func userNamePublisher(for id: Int) -> AnyPublisher<String, Error> {
  return Deferred {
    Future<String, Error> { promise in
      Task {
        do {
          let username = try await fetchUserName(id: id)
          promise(.success(username))
        } catch {
          promise(.failure(error))
        }
      }
    }
  }.eraseToAnyPublisher()
}

Strategic Migration: When to Bridge and When to Rewrite

With these bridging tools, you face a decision when working with a mixed codebase: should you continue bridging the two realms or rewrite older Combine code to async/await?

Best Practices & Testability

The async/await syntax makes writing concurrent code much easier. While the keywords eliminate the complexity of callback hell, they don’t automatically ensure a solid architecture in your implementation. Writing production-quality concurrent code requires following best practices to keep it clean, maintainable, efficient, and performant.

Best Practice 1: Focused async/await Methods

An async method should have a single, clear purpose. It’s often easy to write an async function that handles a long chain of unrelated tasks, which can make the code hard to read, debug, and test.

func setupDashboard() async {
  // 1
  guard let user = try? await APIClient.shared.fetchUser() else { return }
  
  // 2
  let friends = try? await APIClient.shared.fetchFriends(for: user)
  
  // 3
  var userImages: [UIImage] = []
  if let photoURLs = try? await APIClient.shared.fetchPhotoURLs(for: user) {
    for url in photoURLs {
      if let data = try? await APIClient.shared.downloadImage(url: url) {
        // 4
        let processedImage = await processImage(data)
        userImages.append(processedImage)
      }
    }
  }
  // ... update UI with all this data ...
}
func fetchUser() async throws -> User { /* ... */ }
func fetchFriends(for user: User) async throws -> [Friend] { /* ... */ }
func fetchAllImages(for user: User) async -> [UIImage] { /* ... */ }

func setupDashboard() async {
  do {
    let user = try await fetchUser()
    // Run remaining fetches in parallel for performance
    async let friends = fetchFriends(for: user)
    async let images = fetchAllImages(for: user)
    
    let (userFriends, userImages) = try await (friends, images)
    // ... update UI ...
  } catch {
    // ... handle error ...
  }
}

Best Practice 2: Re-read State After await

This is the most important rule for writing correct code inside an actor. As mentioned earlier, any await is a suspension point where the actor can be re-entered by another task, which may change its state. Never assume that the state you read before an await will stay the same after it resumes. If your logic depends on the most up-to-date state, you must re-read it from the actor’s properties after the await finishes.
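Here’s a sketch of the rule in practice (the Account actor is illustrative). The balance check is repeated after the suspension point, because another task may have re-entered the actor while this one was suspended:

```swift
actor Account {
  private var balance = 100

  func withdraw(_ amount: Int) async -> Bool {
    guard balance >= amount else { return false }

    // Suspension point: the actor is free to run other withdrawals here.
    try? await Task.sleep(for: .milliseconds(10))

    // Re-read the state: the pre-await check may no longer hold.
    guard balance >= amount else { return false }
    balance -= amount
    return true
  }

  func currentBalance() -> Int { balance }
}

let account = Account()
async let first = account.withdraw(100)
async let second = account.withdraw(100)
let (r1, r2) = await (first, second)
// Exactly one withdrawal succeeds. Without the second guard, both could
// pass the initial check and drive the balance negative.
print(r1, r2, await account.currentBalance())
```

After the `await`, the check-and-deduct runs with no further suspension, so it behaves atomically within the actor; that’s what makes re-reading sufficient.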

Best Practice 3: Be Deliberate with @MainActor

You can annotate entire classes or view models with @MainActor to address UI update issues. While sometimes effective, it can also cause performance problems by forcing non-UI tasks (like data processing or file I/O) onto the main thread, making your app less responsive and more likely to hang. Be precise and only isolate the specific properties or methods that genuinely need to interact with the UI.
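For example, rather than annotating an entire view model, you can isolate just the UI-facing pieces. A sketch (FeedViewModel and its methods are illustrative):

```swift
// Precise isolation: only the UI-facing state and its mutation are tied to
// the main actor; the CPU-heavy parsing can run on any thread.
final class FeedViewModel {
  @MainActor private(set) var items: [String] = []

  // Not isolated to the main actor: safe for heavy work off the main thread.
  func parse(_ raw: String) -> [String] {
    raw.split(separator: ",").map(String.init)
  }

  // Only this brief mutation hops to the main actor.
  @MainActor
  func update(with parsed: [String]) {
    items = parsed
  }

  func refresh(from raw: String) async {
    let parsed = parse(raw)      // heavy work, off the main actor
    await update(with: parsed)   // quick hop for the UI update
  }
}

let viewModel = FeedViewModel()
await viewModel.refresh(from: "alpha,beta,gamma")
let items = await MainActor.run { viewModel.items }
print(items) // ["alpha", "beta", "gamma"]
```

Had the whole class been @MainActor, `parse` would have hogged the main thread for every refresh; here it contributes nothing to main-thread load.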

Best Practice 4: Make Methods async to Control Execution

Perhaps the biggest challenge async/await introduces is testability. When a function only launches its work inside a Task, it’s hard to test because you can only observe the side effects it creates: you have no control over when the task starts or exactly when it finishes. This often makes such tests flaky. To see this in action, consider a UserProfileViewModel that calls fetchUserProfile().

struct UserProfile {
  // ...
}

protocol UserProfileRepository {
  func fetchUserProfile() async -> UserProfile
}

class UserProfileViewModel {
  private let repository: UserProfileRepository
  init(repository: UserProfileRepository) {
    self.repository = repository
  }
  
  func fetchUserProfile() {
    Task {
      let userProfile = await repository.fetchUserProfile()
      // ...
      // display the profile
    }
  }
}
class UserProfileRepositoryMock: UserProfileRepository {
  var fetchUserProfileCallsCount = 0
  // ...
  func fetchUserProfile() async -> UserProfile {
    fetchUserProfileCallsCount += 1
    return UserProfile()
  }
}

func testFetchProfile() throws {
  let repository = UserProfileRepositoryMock()
  let viewModel = UserProfileViewModel(repository: repository)
  viewModel.fetchUserProfile()
  XCTAssertEqual(repository.fetchUserProfileCallsCount, 1)
}
func fetchUserProfile() async {
  let userProfile = await repository.fetchUserProfile()
  // ...
  // display the profile
}
func testFetchProfile() async throws {
  let repository = UserProfileRepositoryMock()
  let viewModel = UserProfileViewModel(repository: repository)
  await viewModel.fetchUserProfile()
  XCTAssertEqual(repository.fetchUserProfileCallsCount, 1)
}

Key Points

Where to Go From Here?

You’re no longer just using async/await; you’re equipped with the architectural mindset to build robust concurrent features. The real victory lies in applying these tools in practical scenarios. Consider how you can prevent actor reentrancy, develop systems free of data races, and leverage the power of Task Trees.

Have a technical question? Want to report a bug? You can ask questions and report bugs to the book authors in our official book forum.
© 2026 Kodeco Inc.
