
Category: Algorithms

Delayed Preemptive Dispatch

If you’re looking to run a task closure concurrently using Grand Central Dispatch after some delay … with the option to delay it again, replace it with another task, or cancel it altogether, the concept of delayed preemptive dispatch presented here may be of interest to you. For a quick peek, skip to Implementation; for use cases, read on.

Problem Scenarios

Consider these problem scenarios, all taken from some of my recent work. How would you implement the described functionality?

  1. You want to allow a user to make three incorrect passcode entries within a minute of each other before locking them out. If more than a minute elapses between entries, the try count resets to zero.
  2. You are using a third-party framework that presents a panning view. Its API provides the means for notification as each pan motion stops, both when the pan comes to rest and when the user interrupts one pan motion to start another. However, you need to know when all panning has stopped.
  3. You are interfacing with a third-party library that concurrently retrieves an unknown quantity of online resources on your behalf. It only notifies you as each resource is downloaded. However, you need to take some action once all resources have loaded.

My approach to these problems was a common one: use a system I call delayed preemptive dispatch (the term “dispatch” coming from Apple’s Grand Central Dispatch (GCD) libdispatch library, available on all their operating systems from the Watch to the TV).

The gist of this system: I want to perform a task after a brief delay, but only if I don’t preempt it with another task before the prior task’s timer expires (in effect canceling the prior one). It is at the conclusion of the last submitted delay that the last submitted task is executed.

Here’s how I apply this to the above three problems, identified by the event I wish to be notified of:

  1. Invalid passcode expired. When a passcode has been wrongly entered, I start a one-minute timer that, upon expiration, runs a task to reset the entry count. However, I don’t want that closure to run if, in the meantime, the user has re-entered the passcode. Instead, I want to preempt the first closure, replacing it with a second one-minute timer that will itself reset the entry count. Importantly, once the user enters the passcode correctly, I want to cancel the task altogether.
  2. Panning stopped. Here I start a timer upon receipt of a pan-stopped notification that, upon expiration, declares all-stop on panning. However, I don’t want that to occur if a subsequent pan-stop event occurs in the meantime. Rather, I want to preempt the preceding closure with the next and, yes, initiate a new delayed notification closure.
  3. All resources loaded. Finally, in this case I start a brief timer after each resource loads that, upon expiration, signals all resources loaded. However, I don’t want that signal to fire if a subsequent resource loads before the timer expires. Instead, I want to preempt the preceding notification closure and replace it with another one that, if not itself preempted, will be the one to notify all resources have been loaded.
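If all you need is the replace-on-each-event behavior, a few lines of plain GCD get you most of the way there. Here is a minimal sketch of mine using DispatchWorkItem (the Debouncer name and shape are hypothetical, not the repo’s implementation):

import Foundation

/// A minimal sketch of delayed preemptive dispatch: each call to
/// schedule(after:task:) cancels any pending task and replaces it.
final class Debouncer {

  private var pending: DispatchWorkItem?

  func schedule(after seconds: TimeInterval, task: @escaping () -> Void) {
    pending?.cancel()                    // preempt the prior task, if any
    let item = DispatchWorkItem(block: task)
    pending = item
    DispatchQueue.main.asyncAfter(deadline: .now() + seconds, execute: item)
  }

  func cancel() {
    pending?.cancel()                    // cancel the task altogether
    pending = nil
  }
}

One semantic difference from what follows: this sketch always restarts the clock at the latest delay, whereas the class below fires only after all pending delay timers have expired.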

Implementation

At this GitHub repo you will find an implementation of what I call a DelayedPreemptiveDispatchQueue, written in Swift 3, which includes unit tests and a UI demo.

With this class you can delay execution of a task until expiration of a timer, with the option to later reset the timer, preempt the task with another, or cancel the task altogether. It makes use of the Swift 3 GCD DispatchGroup to manage concurrent timers and fire off the delayed task once all the timers have expired.

Here’s a peek at the delay(_:task:) method in that class (see the repo for the full source):

/// Executes `task` no earlier than after the given delay. 
///
/// Subsequent calls to this method will preempt execution of this `task`. The task may also be provided via the class initializer. Cancel execution of a task via `cancel()`.
///
/// - parameter seconds: The number of seconds before this or the initializer-given task may be executed. The delay will be longer if there is a pending delay that expires after this period.
/// - parameter task: The closure to run after _all_ pending delays have expired. If `nil` (the default), the task to be run is that given at initialization. If no task was given at initialization or in any call to this method, this method takes no action.
func delay(_ seconds: TimeInterval, task: (() -> Void)? = nil) {

  // Make sure we have a defined task
  if task != nil {
    self.task = task
  }
  guard self.task != nil else { return }

  // Newly submitted task preempts prior cancellation
  cancelled = false

  // Add delay timer to the dispatch group
  group.enter()
  self.queue.asyncAfter(deadline: DispatchTime.now() + seconds) {
    self.group.leave()
  }

  // Only assign GCD notify action when first timer added to group.
  // The notify action, which performs the task, runs once all timers have left the group.
  if empty {
    self.empty = false
    group.notify(queue: .main) {
      if !self.cancelled {
        self.task?()
      }
      self.empty = true
    }
  }
}

The task may be provided via init(label:task:) (not shown), delay(_:task:), or a combination of both. Neither the delay times nor tasks need be the same with each call. Only the last task submitted will be called once all concurrent delay timers have expired. While you cannot cancel the timers themselves, you can cancel execution of the task via cancel() (also not shown). Once all timers have expired, you can reuse the DelayedPreemptiveDispatchQueue with or without specifying a new task.
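As a quick usage sketch, here is how the passcode scenario (problem 1 above) might look against this API; the calls follow the init(label:task:), delay(_:task:), and cancel() signatures described here, while the surrounding passcodeEntered(valid:) context is hypothetical:

var tryCount = 0

// Reset the entry count one minute after the last incorrect entry.
let lockout = DelayedPreemptiveDispatchQueue(label: "passcode") {
  tryCount = 0
}

func passcodeEntered(valid: Bool) {
  if valid {
    lockout.cancel()       // correct entry: cancel the pending reset task
  } else {
    tryCount += 1
    lockout.delay(60)      // wrong entry: preempt and restart the one-minute timer
  }
}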

While this class makes use of a DispatchGroup to manage a group of timers, the functionality given by DelayedPreemptiveDispatchQueue is not inherent to DispatchGroup itself. For example, you cannot provide a task to a DispatchGroup at initialization, only via its instance notify() method. Also, all tasks submitted to a dispatch group execute; there is no concept of “preemption” as there is in this class.
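To see that contrast concretely, consider this small sketch (mine, not from the repo): every closure registered with a group’s notify(queue:execute:) is retained and run once the group empties; a later registration does not replace an earlier one.

let group = DispatchGroup()
group.enter()

// Both closures will run when the group empties;
// the second registration does not preempt the first.
group.notify(queue: .main) { print("first notify") }
group.notify(queue: .main) { print("second notify") }

group.leave()   // the group is now empty: both closures are dispatched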

For documentation on the Swift 3 implementation of Grand Central Dispatch, reference the Swift evolution proposal SE-0088, “Modernize libdispatch for Swift 3 naming conventions.”


extension Queue: SequenceType

Previously, I presented a linked-list-based Queue class in the context of a discussion around Automatic Reference Counting (ARC). Here I extend that class to conform to the SequenceType protocol, and then add some bonus initializers that mirror those given for Swift’s collections.

In simplest terms, it is the SequenceType protocol that allows us to iterate over the elements in the Queue:

for element in queue {
  // do something with element
}

But that’s just the beginning. Taking a look at the Apple documentation and this excellent post by Nate Cook on the Swift Collection Protocols, we find that because of this iterability, we gain significant functionality for free, a subset of which includes the likes of enumerate, map–filter–reduce, reverse, sort, and so on. Good stuff!
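For instance, once the conformance below is in place, all of this works on the Queue from the last post with no further code (a sketch; the inline results assume the elements 3, 7, 2, 9):

let queue = Queue<Int>()
[3, 7, 2, 9].forEach { queue.enqueue($0) }

for (index, element) in queue.enumerate() {
  print("\(index): \(element)")             // 0: 3, 1: 7, 2: 2, 3: 9
}

let doubled = queue.map { $0 * 2 }          // [6, 14, 4, 18]
let evens   = queue.filter { $0 % 2 == 0 }  // [2]
let sum     = queue.reduce(0, combine: +)   // 21
let sorted  = queue.sort()                  // [2, 3, 7, 9]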

Now, a SequenceType makes no guarantees as to whether the sequence is consumed, so if we were interested only in iteration and didn’t care about that (or perhaps wanted it to be consumed), we could destructively iterate like so:

while let element = queue.dequeue() {
  // do something with element
}

However, our Queue is constructed in such a way that it need not be consumed on iteration.

Generator

So let’s give Queue an iterator (generator in Swift vernacular) and declare its conformance to the SequenceType protocol. There’s not much to it, at least as far as Queue is concerned:

extension Queue: SequenceType {

  /**
   Returns a `SequenceType` generator for iterating over the queue.
   */
  func generate() -> QueueGenerator<T> {
    return QueueGenerator(queue: self)
  }
}

Beyond that, we need only define the QueueGenerator returned by generate(). It takes our Queue object and, with its front property and each node’s next property, non-destructively iterates over the elements in the queue:

/**
 The generator by which `Queue` conforms to `SequenceType`.
 */
struct QueueGenerator<T>: GeneratorType {
  
  /// The queue being iterated over.
  var queue: Queue<T>
  
  /// The current node.
  var node: QueueNode<T>?
  
  /**
   Initializes iteration at the front of the given `queue`.
   */
  init(queue: Queue<T>) {
    self.queue = queue
    node = queue.front
  }
  
  /**
   Returns the next element in the queue, or `nil` if the queue is empty or all nodes have been iterated.
   */
  mutating func next() -> T? {
    let element = node?.element
    node = node?.next
    return element
  }
}

Notice that like the previously defined dequeue() method, the next() method is not returning a queue node, but rather the element of type T that is wrapped by the node.

And that’s all there is to SequenceType conformance!

Initializers

But before we move on, let’s go back to our original Queue definition and drop in some initializers that allow Queue construction to work like Swift’s own Array and Set types:

class Queue<T>: SequenceType, ArrayLiteralConvertible { // continued

  /**
   Construct an empty queue.
   */
  init() { }
  
  /**
   Construct an instance containing `elements`.
   */
  required init(arrayLiteral elements: T...) {
    for element in elements {
      enqueue(element)
    }
  }
  
  /**
   Construct an instance from an arbitrary sequence with elements of type `Element`.
   */
  init<S: SequenceType where S.Generator.Element == T>(_ sequence: S) {
    for element in sequence {
      enqueue(element)
    }
  }
}

The first initializer was implied by the previous omission of any initializers, but now that we’ve added the other two, we must define it explicitly. Because Queue’s stored properties are either optionals (automatically initialized to nil) or have default values, its body has nothing to do.

The second initializer uses the Swift ellipsis syntax for a variadic parameter, which represents a comma-separated list of literal values that Swift gathers into an Array. This is all that’s needed for ArrayLiteralConvertible protocol conformance, so I add that to the class. This in turn necessitates the required modifier on the initializer, as the initializer is required by the protocol.

Finally, the third initializer allows us to build a queue from any SequenceType, such as an Array, Set, or Dictionary. The syntax used here is that of the generic parameter clause, which constrains the initializer’s generic parameter S: it must conform to the SequenceType protocol, and its associated type S.Generator.Element must be T. In other words, we can pass in any sequence of type T elements.

Here’s what those initializers look like in application:

let array = [3, 7, 2, 9]
let set = Set(arrayLiteral: 3, 7, 2, 9)
let dict = ["A": 3, "B": 7, "C": 2, "D": 9]
let queue1 = Queue<Int>()
let queue2 = Queue(arrayLiteral: 3, 7, 2, 9)
let queue3 = Queue(array)
let queue4 = Queue(set)
let queue5 = Queue(dict.values)
let queue6 = Queue(queue2)

Some points of note:

  • We’re able to drop the <Int> type specification on any but the first initializer, thanks to Swift’s type inference.
  • We can now initialize one Queue with another.
  • Only the Queue and Array types maintain their order as initialized, so the arrangement of the elements for queue4 and queue5 is not necessarily 3, 7, 2, 9, as the snippet below shows.
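For example (the unordered results shown are just one possibility):

print(Array(queue2))   // [3, 7, 2, 9]: Queue preserves insertion order
print(Array(queue4))   // e.g. [2, 9, 7, 3]: Set order is unspecified
print(Array(queue5))   // e.g. [7, 2, 9, 3]: Dictionary value order is unspecified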

A Generic Swift Queue and ARC

In my current project, I set off to implement a generic queue in Swift, and settled on the use of a linked list for doing so, where my queue is a collection of singly linked nodes. As it turned out, this ended up being fertile ground for illustrating Swift’s use of Automatic Reference Counting (ARC) for memory management, something I take on in this post.

For the visual learners, here is an illustration to assist the discussion, courtesy of Paper by 53 and my iPad Pro (with Apple Pencil, of course):

[Diagram: three rows of queue nodes, each annotated with its reference count]

The top row represents three people waiting in line at Walmart customer service (who hasn’t been there?). The second row shows the first person has been helped (dequeued), leaving two. And the third row shows two new customers have arrived (enqueued), eager to make their returns, before anyone else has been helped. The orange numbers correspond to reference counts (more on that in a moment).

Queue Node

So to begin, let’s take a look at the building block of the queue, the node (represented by the circles in the diagram):

/**
 The component of a queue that provides the queue's content and sequence.
 */
class QueueNode<T> {

  /// The data element stored in the queue.
  var element: T

  /// The node that is "next in line", or `nil` if there is no subsequent node.
  var next: QueueNode?

  /**
   Wraps the given `element` into a node, leaving `next` set to `nil`.
   */
  init(element: T) {
    self.element = element
  }
}

Being part of a linked list, it is necessary for the node to be a class, as linked lists rely on reference pointers, and only Swift classes—as reference types—can be “pointed” to. (In contrast, any assignment of a structure or enumeration stores a copy of the object in the receiving variable, property, or parameter.)
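A quick sketch of mine illustrates the distinction (ValueNode is a hypothetical struct counterpart, not part of the queue):

let node = QueueNode(element: 1)
let alias = node            // no copy: both constants reference the same object
alias.element = 42
print(node.element)         // prints "42": the change is visible through both references

struct ValueNode { var element: Int }
var value = ValueNode(element: 1)
var copy = value            // a struct assignment stores an independent copy
copy.element = 42
print(value.element)        // prints "1": the original is untouched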

Queue

Next, let’s begin taking a look at the class that takes the QueueNode building block and with it forms a fully functional Queue data structure:

/**
 A generic collection that provides standard enqueueing and dequeuing functionality.
 As elements are enqueued, they are conceptually added to "the back of the line;" 
 as they are dequeued, they are removed from "the front of the line." 
 */
class Queue<T> {

  /// The first enqueued node or that following the last dequeued node.
  private var front: QueueNode<T>?

  /// The last enqueued node.
  private var back: QueueNode<T>?

  /// The number of elements in the queue.
  private(set) var count = 0
  
  /// True if there are no elements in the queue, false otherwise.
  var isEmpty: Bool {
    return count == 0
  }
 

I want to interrupt here to point out that the class properties front and back defined above are effectively pointers, as is the local variable node in the enqueue method below (reference the diagram).

  /**
   Adds the given `element` to the back of the queue.
    - complexity: O(1)
   */
  func enqueue(element: T) {
    
    let node = QueueNode<T>(element: element)
    
    // First node
    if front == nil {
      front = node
      back = node
    }
      
    // Subsequent nodes
    else {
      back!.next = node
      back = node
    }

    count += 1
  }
 

It is at the let node = QueueNode<T>(element: element) line that this class allocates memory for, creates, initializes, and assigns a reference to a new QueueNode object (all done behind the scenes by Swift). In accordance with ARC, as long as this object has a property, variable, or parameter pointing to it, it will remain allocated. This node is short-lived, though, so the only potential direct references held by Queue are front and back. It is the nodes themselves that maintain the next references keeping the chain of nodes alive (again, the diagram).

  /**
   Removes and returns the element at the front of the queue.
   - complexity: O(1)
   */
  func dequeue() -> T? {
     
    // Queue is empty
    if front == nil { return nil }
     
    // Remove first node/element
    let element = front!.element
    front = front!.next
     
    // Deallocate dequeued back node
    if front == nil {
      back = nil
    }
 
    count -= 1
     
    return element
  }
 

Consider the front = front!.next assignment in dequeue(), where front is reassigned to the object pointed to by its own next property. This leaves the node formerly known as first with nothing referring to it, triggering implicit and immediate deallocation; there’s no need to explicitly deallocate it, such as with a nil assignment.

In contrast, if we have dequeued the last node, we need to explicitly release it by reassigning back to nil, otherwise back will retain that node until the queue itself is deallocated (at which time it is automatically set to nil).

Automatic Queue Deallocation

In the context of queue deallocation, such as when the reference count for a Queue object drops to zero, front is assigned to nil. When that happens the associated node’s reference count also drops to zero, triggering its deallocation. This causes the setting of the node’s next property to nil, which then decrements its adjacent node’s reference count to zero, thereby triggering that node’s deallocation, and so on. This chain reaction continues until the last node in the linked list is reached.

As discussed in the context of dequeue(), this last node is also pointed to by the queue’s back property, thus preventing its reference count from dropping to zero, and thereby keeping the object alive. It is not until back is set to nil (as in dequeue() above) that the reference count drops to zero and the last node is deallocated.

The process of deallocating the nodes between front and back happens automatically courtesy of ARC memory management built into Swift and Objective-C. You can see it in action by using this script and addition to QueueNode:

class QueueNode<T> { // continued
  deinit {
    print("\(#function)")
  }
}

func testQueue() {
  let n = 10
  let queue = Queue<Int>()
  for i in 1...n {
    queue.enqueue(i)
  }
}
testQueue()
print("done")

Once testQueue() exits, you’ll see its queue variable is automatically deallocated as described, which in this case would result in 10 debugger console output lines that read “deinit” before “done” appears.

Large Queues

Now you might find that for large queues, trouble ensues in the form of an EXC_BAD_ACCESS runtime error: the chain-reaction deallocation described above is effectively recursive, and a long enough chain of nodes can exhaust the stack. I found that taking the reins on deallocating the individual QueueNodes ensures the deallocation of a queue of any size works flawlessly.

Returning to our Queue definition, this is a simple matter of adding a deinit method that calls upon a removeAll() method familiar to users of Swift’s collection types:

  /**
   Explicitly deinitializes all enqueued nodes.
   - complexity: O(n)
   */
  deinit {
    removeAll()
  }

  /**
   Removes all elements from the queue.
   - complexity: O(n)
   */
  func removeAll() {
    while front != nil {
      front = front!.next
    }
    back = nil
    count = 0
  }
}

Note we haven’t completely given up on ARC. As in dequeue() previously, the same implicit deallocation occurs here: as soon as the front = front!.next assignment is made, the node that was the front node is deallocated. Where we short-circuit the automatic chain-reaction deallocation discussed above is in that loop’s node-by-node reassignment of front. Were we to replace the loop with a simple front = nil, we would be back to the default behavior we had without a deinit method.

And yes, for you lines-of-code hounds out there, I could have written removeAll() as a simple one-liner with the loop while dequeue() != nil { }. I contend, though, the given approach is superior for performance reasons, however negligible. There’s a bit more overhead in calling dequeue() 100 million times, let’s say, than in dereferencing that many nodes directly (in a single test, 65 seconds versus 45, a roughly 45% increase).

More to Come

That does it for now. In my next post I conform the Queue to SequenceType so it can be iterated with the likes of:

for element in queue {
  // do something with element
}