How to implement a Thread-Safe HashTable (PhoneBook) Data Structure in Swift?

I am trying to implement a Thread-Safe PhoneBook object. The phone book should be able to add a person, and look up a person based on their name and phoneNumber. From an implementation perspective this simply involves two hash tables, one associating name -> Person and another associating phone# -> Person.
The caveat is I want this object to be thread-safe. This means I would like to be able to support concurrent lookups in the PhoneBook while ensuring only one thread can add a Person to the PhoneBook at a time. This is the basic readers-writers problem, and I am trying to solve it using Grand Central Dispatch and dispatch barriers, but I am running into issues. Below is my Swift playground code:
//: Playground - noun: a place where people can play
import UIKit
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
public class Person: CustomStringConvertible {
public var description: String {
get {
return "Person: \(name), \(phoneNumber)"
}
}
public var name: String
public var phoneNumber: String
private var readLock = ReaderWriterLock()
public init(name: String, phoneNumber: String) {
self.name = name
self.phoneNumber = phoneNumber
}
public func uniquePerson() -> Person {
let randomID = UUID().uuidString
return Person(name: randomID, phoneNumber: randomID)
}
}
public enum Qos {
case threadSafe, none
}
public class PhoneBook {
private var qualityOfService: Qos = .none
public var nameToPersonMap = [String: Person]()
public var phoneNumberToPersonMap = [String: Person]()
private var readWriteLock = ReaderWriterLock()
public init(_ qos: Qos) {
self.qualityOfService = qos
}
public func personByName(_ name: String) -> Person? {
var person: Person? = nil
if qualityOfService == .threadSafe {
readWriteLock.concurrentlyRead { [weak self] in
guard let strongSelf = self else { return }
person = strongSelf.nameToPersonMap[name]
}
} else {
person = nameToPersonMap[name]
}
return person
}
public func personByPhoneNumber( _ phoneNumber: String) -> Person? {
var person: Person? = nil
if qualityOfService == .threadSafe {
readWriteLock.concurrentlyRead { [weak self] in
guard let strongSelf = self else { return }
person = strongSelf.phoneNumberToPersonMap[phoneNumber]
}
} else {
person = phoneNumberToPersonMap[phoneNumber]
}
return person
}
public func addPerson(_ person: Person) {
if qualityOfService == .threadSafe {
readWriteLock.exclusivelyWrite { [weak self] in
guard let strongSelf = self else { return }
strongSelf.nameToPersonMap[person.name] = person
strongSelf.phoneNumberToPersonMap[person.phoneNumber] = person
}
} else {
nameToPersonMap[person.name] = person
phoneNumberToPersonMap[person.phoneNumber] = person
}
}
}
// A ReaderWriterLock implemented using GCD and OS Barriers.
public class ReaderWriterLock {
private let concurrentQueue = DispatchQueue(label: "com.ReaderWriterLock.Queue", attributes: DispatchQueue.Attributes.concurrent)
private var writeClosure: (() -> Void)!
public func concurrentlyRead(_ readClosure: (() -> Void)) {
concurrentQueue.sync {
readClosure()
}
}
public func exclusivelyWrite(_ writeClosure: @escaping (() -> Void)) {
self.writeClosure = writeClosure
concurrentQueue.async(flags: .barrier) { [weak self] in
guard let strongSelf = self else { return }
strongSelf.writeClosure()
}
}
}
// MARK: Testing the synchronization and thread-safety
for _ in 0..<5 {
let iterations = 1000
let phoneBook = PhoneBook(.none)
let concurrentTestQueue = DispatchQueue(label: "com.PhoneBookTest.Queue", attributes: DispatchQueue.Attributes.concurrent)
for _ in 0..<iterations {
let person = Person(name: "", phoneNumber: "").uniquePerson()
concurrentTestQueue.async {
phoneBook.addPerson(person)
}
}
sleep(10)
print(phoneBook.nameToPersonMap.count)
}
To test my code I run 1000 concurrent threads that simply add a new Person to the PhoneBook. Each Person is unique, so after the 1000 threads complete I am expecting the PhoneBook to contain a count of 1000. Every time I perform a write I perform a dispatch_barrier call, update the hash tables, and return. To my knowledge this is all we need to do; however, after repeated runs of the 1000 threads the number of entries in the PhoneBook is inconsistent and all over the place:
Phone Book Entries: 856
Phone Book Entries: 901
Phone Book Entries: 876
Phone Book Entries: 902
Phone Book Entries: 912
Can anyone please help me figure out what is going on? Is there something wrong with my locking code or, even worse, with how my test is constructed? I am very new to this multi-threaded problem space, thanks!

The problem is your ReaderWriterLock. You are saving the writeClosure as a property, and then asynchronously dispatching a closure that calls that saved property. But if another exclusivelyWrite came in during the intervening period of time, your writeClosure property would be replaced with the new closure.
In this case, it means that you can end up adding the same Person multiple times. And because you're using a dictionary, those duplicates have the same key, which is why you don't see all 1000 entries.
You can actually simplify ReaderWriterLock, completely eliminating that property. I'd also make concurrentlyRead generic, returning the value (just like sync does), and rethrowing errors (if any).
public class ReaderWriterLock {
private let queue = DispatchQueue(label: "com.domain.app.rwLock", attributes: .concurrent)
public func concurrentlyRead<T>(_ block: (() throws -> T)) rethrows -> T {
return try queue.sync {
try block()
}
}
public func exclusivelyWrite(_ block: @escaping (() -> Void)) {
queue.async(flags: .barrier) {
block()
}
}
}
A couple of other, unrelated observations:
By the way, this simplified ReaderWriterLock happens to solve another concern. That writeClosure property, which we've now removed, could have easily introduced a strong reference cycle.
Yes, you were scrupulous about using [weak self], so there wasn't any strong reference cycle, but it was possible. I would advise that wherever you employ a closure property, you set that closure property to nil when you're done with it, so any strong references that closure may have accidentally entailed will be resolved. That way a persistent strong reference cycle is never possible. (Plus, the closure itself and any local variables or other external references it has will be released.)
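For example, here is an illustrative sketch of that practice (not code from the question; the type name is made up):
// Clearing a stored closure after it runs, so anything it captured
// (and any accidental cycle through it) is released promptly.
final class Downloader {
    var completion: (() -> Void)?

    func finish() {
        completion?()
        completion = nil   // break any retain cycle the closure may have introduced
    }
}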
You're sleeping for 10 seconds. That should be more than enough, but I'd advise against adding random sleep calls (because you can never be 100% sure). Fortunately, you have a concurrent queue, so you can use that:
concurrentTestQueue.async(flags: .barrier) {
print(phoneBook.count)
}
Because of that barrier, it will wait until everything else you put on that queue is done.
Note, I did not just print nameToPersonMap.count. This dictionary has been carefully synchronized within PhoneBook, so you can't just let random, external classes access it directly without synchronization.
Whenever you have some property which you're synchronizing internally, it should be private and then create a thread-safe function/variable to retrieve whatever you need:
public class PhoneBook {
private var nameToPersonMap = [String: Person]()
private var phoneNumberToPersonMap = [String: Person]()
...
var count: Int {
return readWriteLock.concurrentlyRead {
nameToPersonMap.count
}
}
}
You say you're testing thread safety, but then created the PhoneBook with the .none option (achieving no thread safety). In that scenario, I'd expect problems. You have to create your PhoneBook with the .threadSafe option.
You have a number of strongSelf patterns. That's rather unswifty. It is generally not needed in Swift as you can use [weak self] and then just do optional chaining.
Pulling all of this together, here is my final playground:
PlaygroundPage.current.needsIndefiniteExecution = true
public class Person {
public let name: String
public let phoneNumber: String
public init(name: String, phoneNumber: String) {
self.name = name
self.phoneNumber = phoneNumber
}
public static func uniquePerson() -> Person {
let randomID = UUID().uuidString
return Person(name: randomID, phoneNumber: randomID)
}
}
extension Person: CustomStringConvertible {
public var description: String {
return "Person: \(name), \(phoneNumber)"
}
}
public enum ThreadSafety { // Changed the name from Qos, because this has nothing to do with quality of service, but is just a question of thread safety
case threadSafe, none
}
public class PhoneBook {
private var threadSafety: ThreadSafety
private var nameToPersonMap = [String: Person]() // if you're synchronizing these, you really shouldn't expose them to the public
private var phoneNumberToPersonMap = [String: Person]() // if you're synchronizing these, you really shouldn't expose them to the public
private var readWriteLock = ReaderWriterLock()
public init(_ threadSafety: ThreadSafety) {
self.threadSafety = threadSafety
}
public func personByName(_ name: String) -> Person? {
if threadSafety == .threadSafe {
return readWriteLock.concurrentlyRead { [weak self] in
self?.nameToPersonMap[name]
}
} else {
return nameToPersonMap[name]
}
}
public func personByPhoneNumber(_ phoneNumber: String) -> Person? {
if threadSafety == .threadSafe {
return readWriteLock.concurrentlyRead { [weak self] in
self?.phoneNumberToPersonMap[phoneNumber]
}
} else {
return phoneNumberToPersonMap[phoneNumber]
}
}
public func addPerson(_ person: Person) {
if threadSafety == .threadSafe {
readWriteLock.exclusivelyWrite { [weak self] in
self?.nameToPersonMap[person.name] = person
self?.phoneNumberToPersonMap[person.phoneNumber] = person
}
} else {
nameToPersonMap[person.name] = person
phoneNumberToPersonMap[person.phoneNumber] = person
}
}
var count: Int {
return readWriteLock.concurrentlyRead {
nameToPersonMap.count
}
}
}
// A ReaderWriterLock implemented using GCD concurrent queue and barriers.
public class ReaderWriterLock {
private let queue = DispatchQueue(label: "com.domain.app.rwLock", attributes: .concurrent)
public func concurrentlyRead<T>(_ block: (() throws -> T)) rethrows -> T {
return try queue.sync {
try block()
}
}
public func exclusivelyWrite(_ block: @escaping (() -> Void)) {
queue.async(flags: .barrier) {
block()
}
}
}
for _ in 0 ..< 5 {
let iterations = 1000
let phoneBook = PhoneBook(.threadSafe)
let concurrentTestQueue = DispatchQueue(label: "com.PhoneBookTest.Queue", attributes: .concurrent)
for _ in 0..<iterations {
let person = Person.uniquePerson()
concurrentTestQueue.async {
phoneBook.addPerson(person)
}
}
concurrentTestQueue.async(flags: .barrier) {
print(phoneBook.count)
}
}
Personally, I'd be inclined to take it a step further and
move the synchronization into a generic class; and
change the model to be an array of Person objects, so that:
The model supports multiple people with the same name or phone number; and
You can use value types if you want.
For example:
public struct Person {
public let name: String
public let phoneNumber: String
public static func uniquePerson() -> Person {
return Person(name: UUID().uuidString, phoneNumber: UUID().uuidString)
}
}
public struct PhoneBook {
private var synchronizedPeople = Synchronized([Person]())
public func people(name: String? = nil, phone: String? = nil) -> [Person]? {
return synchronizedPeople.value.filter {
(name == nil || $0.name == name) && (phone == nil || $0.phoneNumber == phone)
}
}
public func append(_ person: Person) {
synchronizedPeople.writer { people in
people.append(person)
}
}
public var count: Int {
return synchronizedPeople.reader { $0.count }
}
}
/// A structure to provide thread-safe access to some underlying object using reader-writer pattern.
public class Synchronized<T> {
/// Private value. Use `public` `value` computed property (or `reader` and `writer` methods)
/// for safe, thread-safe access to this underlying value.
private var _value: T
/// Private reader-write synchronization queue
private let queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".synchronized", qos: .default, attributes: .concurrent)
/// Create `Synchronized` object
///
/// - Parameter value: The initial value to be synchronized.
public init(_ value: T) {
_value = value
}
/// A threadsafe variable to set and get the underlying object, as a convenience when higher level synchronization is not needed
public var value: T {
get { reader { $0 } }
set { writer { $0 = newValue } }
}
/// A "reader" method to allow thread-safe, read-only concurrent access to the underlying object.
///
/// - Warning: If the underlying object is a reference type, you are responsible for making sure you
/// do not mutating anything. If you stick with value types (`struct` or primitive types),
/// this will be enforced for you.
public func reader<U>(_ block: (T) throws -> U) rethrows -> U {
return try queue.sync { try block(_value) }
}
/// A "writer" method to allow thread-safe write with barrier to the underlying object
func writer(_ block: @escaping (inout T) -> Void) {
queue.async(flags: .barrier) {
block(&self._value)
}
}
}

In some cases you might use the NSCache class. The documentation claims that it's thread-safe:
You can add, remove, and query items in the cache from different threads without having to lock the cache yourself.
Here is an article that describes quite useful tricks related to NSCache
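For illustration, a minimal sketch of caching the question's Person objects in an NSCache (keys and values must be classes, so the String key is bridged to NSString; the sample values are made up):
import Foundation

let cache = NSCache<NSString, Person>()

let person = Person(name: "Ada", phoneNumber: "555-0100")
cache.setObject(person, forKey: person.name as NSString)  // safe from any thread
let match = cache.object(forKey: "Ada" as NSString)       // concurrent reads are fine
Keep in mind that NSCache may evict entries under memory pressure, so it is a cache, not a general-purpose dictionary.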

I don’t think you are using it wrong :).
The original (on macos) generates:
0 swift 0x000000010c9c536a PrintStackTraceSignalHandler(void*) + 42
1 swift 0x000000010c9c47a6 SignalHandler(int) + 662
2 libsystem_platform.dylib 0x00007fffbbdadb3a _sigtramp + 26
3 libsystem_platform.dylib 000000000000000000 _sigtramp + 1143284960
4 libswiftCore.dylib 0x0000000112696944 _T0SSwcp + 36
5 libswiftCore.dylib 0x000000011245fa92 _T0s24_VariantDictionaryBufferO018ensureUniqueNativeC0Sb11reallocated_Sb15capacityChangedtSiF + 1634
6 libswiftCore.dylib 0x0000000112461fd2 _T0s24_VariantDictionaryBufferO17nativeUpdateValueq_Sgq__x6forKeytF + 1074
If you remove the .concurrent from your ReaderWriter queue, "the problem disappears".
If you restore the .concurrent, but change the async invocation in the writer side to be sync:
swift(10504,0x70000896f000) malloc: *** error for object 0x7fcaa440cee8: incorrect checksum for freed object - object was probably modified after being freed.
Which would be a bit astonishing if it weren’t swift?
I dug in, replaced your ‘string’ based array with an Int one by interposing a hash function, replaced the sleep(10) with a barrier dispatch to flush any laggardly blocks through, and that made it more reproducibly crash with the somewhat more helpful:
x(10534,0x700000f01000) malloc: *** error for object 0x7f8c9ee00008: incorrect checksum for freed object - object was probably modified after being freed.
But since a search of the source revealed no malloc or free, perhaps the stack dump is more useful.
Anyway, the best way to solve your problem: use Go instead; it actually makes sense.

Related

PropertyWrapper subscript is not called. WHY?

I am implementing my own AtomicDictionary property wrapper as follows:
@propertyWrapper
public class AtomicDictionary<Key: Hashable, Value>: CustomDebugStringConvertible {
public var wrappedValue = [Key: Value]()
private let queue = DispatchQueue(label: "atomicDictionary.\(UUID().uuidString)",
attributes: .concurrent)
public init() {}
public subscript(key: Key) -> Value? {
get {
queue.sync {
wrappedValue[key]
}
}
set {
queue.async(flags: .barrier) { [weak self] in
self?.wrappedValue[key] = newValue
}
}
}
public var debugDescription: String {
return wrappedValue.debugDescription
}
}
now, when I use it as follows:
class ViewController: UIViewController {
@AtomicDictionary var a: [String: Int]
override func viewDidLoad() {
super.viewDidLoad()
self.a["key"] = 5
}
}
The subscript function of the AtomicDictionary is not called!!
Does anybody have any explanation as to why that is?
Property wrappers merely provide an interface for the basic accessor methods, but that's it. They are not going to intercept subscripts or other methods.
The original property wrapper proposal SE-0258 shows us what is going on behind the scenes. It contemplates a hypothetical property wrapper, Lazy, in which:
The property declaration
@Lazy var foo = 1738
translates to:
private var _foo: Lazy<Int> = Lazy<Int>(wrappedValue: 1738)
var foo: Int {
get { return _foo.wrappedValue }
set { _foo.wrappedValue = newValue }
}
Note that foo is just an Int computed property. The _foo is the Lazy<Int>.
So, in your a["key"] = 5 example, it will not use your property wrapper’s subscript operator. It will get the value associated with a, use the dictionary’s own subscript operator to update that value (not the property wrapper’s subscript operator), and then it will set the value associated with a.
That’s all the property wrapper is doing, providing the get and set accessors. E.g., the declaration:
@AtomicDictionary var a: [String: Int]
translates to:
private var _a: AtomicDictionary<String, Int> = AtomicDictionary<String, Int>(wrappedValue: [:])
var a: [String: Int] {
get { return _a.wrappedValue }
set { _a.wrappedValue = newValue }
}
Any other methods you define are only accessible through _a in this example, not a (which is just a computed property that gets and sets the wrappedValue of _a).
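To make that concrete, here is roughly what self.a["key"] = 5 desugars to in terms of the synthesized _a backing property (a sketch for illustration only):
// Roughly what `self.a["key"] = 5` amounts to once the wrapper is expanded:
var copy = _a.wrappedValue   // AtomicDictionary's wrappedValue getter
copy["key"] = 5              // Dictionary's own subscript, on a local copy
_a.wrappedValue = copy       // AtomicDictionary's wrappedValue setter
// The wrapper's subscript(key:) is never called.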
So, you’re better off just defining a proper type for your “atomic dictionary”:
public class AtomicDictionary<Key: Hashable, Value> {
private var wrappedValue: [Key: Value]
private let queue = DispatchQueue(label: "atomicDictionary.\(UUID().uuidString)", attributes: .concurrent)
init(_ wrappedValue: [Key: Value] = [:]) {
self.wrappedValue = wrappedValue
}
public subscript(key: Key) -> Value? {
get {
queue.sync {
wrappedValue[key]
}
}
set {
queue.async(flags: .barrier) {
self.wrappedValue[key] = newValue
}
}
}
}
And
let a = AtomicDictionary<String, Int>()
That gives you the behavior you want.
And if you are going to supply CustomDebugStringConvertible conformance, make sure to use your synchronization mechanism there, too:
extension AtomicDictionary: CustomDebugStringConvertible {
public var debugDescription: String {
queue.sync { wrappedValue.debugDescription }
}
}
All interaction with the wrapped value must be synchronized.
Obviously you can use this general pattern with whatever synchronization mechanism you want, e.g., the above reader-writer pattern, GCD serial queue, locks, actors, etc. (The reader-writer pattern has a natural appeal, but, in practice, there are generally better mechanisms.)
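For example, a minimal NSLock-based variant of the same idea (a sketch, not part of the answer above; the type name is illustrative):
import Foundation

// Same subscript-level atomicity as AtomicDictionary, but guarded by a
// simple lock instead of a concurrent queue with barriers.
public final class LockedDictionary<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let lock = NSLock()

    public subscript(key: Key) -> Value? {
        get {
            lock.lock(); defer { lock.unlock() }
            return storage[key]
        }
        set {
            lock.lock(); defer { lock.unlock() }
            storage[key] = newValue
        }
    }
}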
Needless to say, the above presumes that subscript-level atomicity is sufficient. One should always be wary of general-purpose thread-safe collections, as the correctness of our code often relies on a higher level of synchronization.
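As a concrete (hypothetical) example of that caveat, using the AtomicDictionary above:
// Each subscript call is atomic on its own, but this read-modify-write is
// two separate operations; two threads can interleave between the read and
// the write and lose an update.
let counts = AtomicDictionary<String, Int>()
counts["visits"] = (counts["visits"] ?? 0) + 1   // not atomic as a whole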

Performance issues on Realm List

I'm having some memory performance issues when doing operations on Realm List's. I have two objects similar to this one:
final class Contact: Object {
let phones = List<Phone>()
let emails = List<Email>()
}
Now I'm trying to find possible similarities between two objects of the same type (e.g. at least one element in common) that could potentially have duplicate emails or phones. In order to do so I was using Set operations.
func possibleDuplicateOf(contact: Contact) -> Bool {
return !Set(emails).isDisjoint(with: Set(contact.emails)) || !Set(phones).isDisjoint(with: Set(contact.phones))
}
This is a function inside the Contact object. I know there is a performance hit when transforming a Realm List into a Set or an Array, and I'm feeling this heavily when I have a large number of Contacts (10k or more): memory consumption jumps to more than 1 GB.
So I tried replacing the above function with this one:
func possibleDuplicateOf(contact: Contact) -> Bool {
let emailsInCommon = emails.contains(where: contact.emails.contains)
let phonesInCommon = phones.contains(where: contact.phones.contains)
return emailsInCommon || phonesInCommon
}
This has the same performance as using the sets.
The isEqual method on the Emails and Phones is a simple string comparison:
extension Email {
static func ==(lhs: Email, rhs: Email) -> Bool {
return (lhs.email == rhs.email)
}
override func isEqual(_ object: Any?) -> Bool {
guard let object = object as? Email else { return false }
return object == self
}
override var hash: Int {
return email.hashValue
}
}
Email.swift
final class Email: Object {
enum Attribute: String { case primary, secondary }
@objc dynamic var email: String = ""
#objc dynamic var label: String = ""
/* Cloud Properties */
@objc dynamic var attribute_raw: String = ""
var attribute: Attribute {
get {
guard let attributeEnum = Attribute(rawValue: attribute_raw) else { return .primary }
return attributeEnum
}
set { attribute_raw = newValue.rawValue }
}
override static func ignoredProperties() -> [String] {
return ["attribute"]
}
convenience init(email: String, label: String = "email", attribute: Attribute) {
self.init()
self.email = email
self.label = label
self.attribute = attribute
}
}
I'm a bit out of options here; I've spent the entire day trying to come up with a different approach to this problem, but without any luck. If anyone has a better idea, I would love to hear it :)
Thank you
Whenever something like this happens, a good start is using Instruments to find out where CPU cycles and memory are consumed. Here's a good tutorial: Using Time Profiler in Instruments
You omitted the code making the actual comparison, but I suspect it might be nested for loops or something along those lines. Realm doesn't know your use case and isn't caching appropriately for something like that.
Using Instruments, it's fairly easy to find the bottlenecks. In your case, this should work:
final class Contact: Object
{
let emails = List<Email>()
lazy var emailsForDuplicateCheck:Set<Email> = Set(emails)
func possibleDuplicateOf(other: Contact) -> Bool {
return !emailsForDuplicateCheck.isDisjoint(with: other.emailsForDuplicateCheck)
}
override static func ignoredProperties() -> [String] {
return ["emailsForDuplicateCheck"]
}
}
And for the comparison:
// create an array of the contacts to be compared to cache them
let contacts = Array(realm.objects(Contact.self))
for contact in contacts {
for other in contacts {
if contact.possibleDuplicateOf(other: other) {
print("Possible duplicate found!")
}
}
}
This implementation ensures that the Contact objects are fetched only once and the Set of Email is only created once for each Contact.
The problem you have might be solved more optimally by restructuring the data a bit. Pulling everything into memory and converting it to sets (building a set is an expensive operation) is far from optimal :(. I suggest this solution.
Consider that Contact is this object (I've added an id property). I didn't add the phone objects for brevity, but exactly the same approach may be used for phones.
class Contact: Object {
@objc dynamic var id = UUID().uuidString
var emails = List<Email>()
override public static func primaryKey() -> String? {
return "id"
}
}
And the Email class is this one, with a relation to the contact added.
class Email: Object {
@objc dynamic var email: String = ""
@objc dynamic var contact: Contact?
}
Having these "connected" tables in Realm, you can create a query to find duplicated objects:
func hasDups(contact: Contact) -> Bool {
let realm = try! Realm()
let emails: [String] = contact.emails.map { $0.email }
let sameObjects = realm.objects(Email.self)
.filter("email in %# AND contact.id != %#", emails, contact.id)
// sameObject will contain emails which has duplicates with current contact
return !sameObjects.isEmpty
}
This works super fast; I've tested it on 100,000+ objects and it executes immediately.
Hope this helps!

Swift - How to manage turn order?

I'm working on a local game which relies on turn order.
Rules;
There are a number of phases in the game (ie: Buy, Sell)
During each phase, a player takes a single turn
Each phase is not considered complete until every player (in turn order) has completed their turn.
I'm not sure how to manage this data. There are a number of things to track.
The phase we are in
The current player on turn
When all players have completed their turns
When the end of the turn order has been reached, so we can move to the next phase.
Resetting all turn completions when all phases are complete
I'm thinking that a subscription model is the best approach to this, but I'm not used to such a pattern.
Currently I'm using a system similar to a to-do list, where the phase itself can be marked complete or incomplete.
This is the way I'm currently handling turn orders and phases in a Swift playground:
// Turn order management
class TurnOrderManager: NSObject
{
static var instance = TurnOrderManager()
var turnOrder: [Player] = [Player]()
private var turnOrderIndex = 0
var current: Player {
return turnOrder[turnOrderIndex]
}
func next() {
if (self.turnOrderIndex < (self.turnOrder.count-1)) {
turnOrderIndex += 1
}
else {
print("Reached end of line")
}
}
}
class Player: NSObject {
var name: String = ""
override var description: String {
return self.name
}
init(name: String) {
super.init()
self.name = name
}
}
let p1:Player = Player.init(name: "Bob")
let p2:Player = Player.init(name: "Alex")
TurnOrderManager.instance.turnOrder = [p1,p2]
print (TurnOrderManager.instance.current)
TurnOrderManager.instance.next()
print (TurnOrderManager.instance.current)
TurnOrderManager.instance.next()
print (TurnOrderManager.instance.current)
// ---------------------------------
// Phase management
enum PhaseType: Int {
case buying = 1
case selling
}
struct Phase {
var id: PhaseType
var title: String
var completed: Bool = false {
didSet {
// Notify subscribers of completion
guard completed else { return }
handlers.forEach { $0(self) }
}
}
var description:String {
return "Phase: \(self.title), completed: \(completed)"
}
// Task queue
var handlers = [(Phase) -> Void]()
init(id: PhaseType, title: String, initialSubscription: @escaping (Phase) -> Void =
{_ in})
{
self.id = id
self.title = title
subscribe(completion: initialSubscription)
}
mutating func subscribe(completion: @escaping (Phase) -> Void) {
handlers.append(completion)
}
}
class MyParentController {
lazy var phase1: Phase = {
return Phase(id: .buying, title: "Phase #1") {
print("Do something with phase: \($0.title)")
}
}()
}
let controller = MyParentController()
controller.phase1.completed = true
Question:
I'm wanting to notify:
Turn is complete
All turns are complete (so that it can move to next phase)
How do I make my TurnOrderManager alert the PhaseManager that the current turn is complete?
How do I make my PhaseManager know to move to the next phase when all turns are complete?
I apologize for the verboseness of my query.
Many thanks
You're going to want to define a delegate relationship between your PhaseManager and your TurnOrderManager.
Here is the Apple docs on protocols.
Here is a great article on delegation.
First you'll need to define a protocol. Something like this:
protocol TurnManagerDelegate: AnyObject {
func complete(turn: objectType)
func allTurnsComplete()
}
Next you'll have to conform your PhaseManager to the protocol:
class PhaseManager: TurnManagerDelegate {
...
func complete(turn: objectType) {
// implement
}
func allTurnsComplete() {
// implement
}
...
}
Last you'll have to add a property on your TurnOrderManager with the delegate:
class TurnOrderManager {
...
weak var delegate: TurnManagerDelegate?
...
}
and call the functions whenever needed in your TurnOrderManager:
...
delegate?.allTurnsComplete() //example
...
You'll also have to set your PhaseManager as the delegate before your TurnOrderManager tries to call any of the delegate methods.
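Pulling those pieces together, here is a minimal sketch of how the wiring might look (the method names beyond the question's types are illustrative, not prescribed):
protocol TurnManagerDelegate: AnyObject {
    func turnDidComplete(for player: Player)
    func allTurnsComplete()
}

class PhaseManager: TurnManagerDelegate {
    func turnDidComplete(for player: Player) {
        print("\(player.name) finished their turn")
    }
    func allTurnsComplete() {
        print("all turns done, advance to the next phase")
    }
}

class TurnOrderManager {
    weak var delegate: TurnManagerDelegate?   // weak to avoid a retain cycle
    var turnOrder: [Player] = []
    private var turnOrderIndex = 0

    func completeCurrentTurn() {
        delegate?.turnDidComplete(for: turnOrder[turnOrderIndex])
        if turnOrderIndex == turnOrder.count - 1 {
            delegate?.allTurnsComplete()   // everyone has gone; the phase can end
        } else {
            turnOrderIndex += 1
        }
    }
}

// Wiring it up: set the delegate before any turns complete.
let phaseManager = PhaseManager()
let turnManager = TurnOrderManager()
turnManager.delegate = phaseManager
turnManager.turnOrder = [Player(name: "Bob"), Player(name: "Alex")]
turnManager.completeCurrentTurn()
turnManager.completeCurrentTurn()   // last player, so allTurnsComplete() fires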

How to get the current queue name in swift 3

We have a function like this in Swift 2.2 for printing a log message with the current running thread:
func MyLog(_ message: String) {
if Thread.isMainThread {
print("[MyLog]", message)
} else {
let queuename = String(UTF8String: dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL))! // Error: Cannot convert value of type '()' to expected argument type 'DispatchQueue?'
print("[MyLog] [\(queuename)]", message)
}
}
This code no longer compiles in Swift 3.0. How do we obtain the queue name now?
As Brent Royal-Gordon mentioned in his message on lists.swift.org, it's a hole in the current design, but you can use this horrible workaround.
func currentQueueName() -> String? {
let name = __dispatch_queue_get_label(nil)
return String(cString: name, encoding: .utf8)
}
If you don't like unsafe pointers and c-strings, there is another, safe solution:
if let currentQueueLabel = OperationQueue.current?.underlyingQueue?.label {
print(currentQueueLabel)
// Do something...
}
I don't know any cases when the currentQueueLabel will be nil.
Now DispatchQueue has a label property.
The label you assigned to the dispatch queue at creation time.
var label: String { get }
It seems to have existed from the beginning, but perhaps was not exposed via a public API until now.
macOS 10.10+
And please use this only to obtain human-readable labels, not to identify each GCD queue.
If you want to check whether your code is running on a certain GCD queue, you can use the dispatchPrecondition(...) function.
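For example:
import Dispatch

// Traps if this code is not running on the main queue.
dispatchPrecondition(condition: .onQueue(.main))

// Or check against one of your own queues:
let workQueue = DispatchQueue(label: "com.example.work")
workQueue.sync {
    dispatchPrecondition(condition: .onQueue(workQueue))
}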
This method will work for both OperationQueue and DispatchQueue.
func printCurrentQueueName()
{
print(Thread.current.name!)
}
Here's a wrapper class that offers some safety (revised from here):
import Foundation
/// DispatchQueue wrapper that acts as a reentrant to a synchronous queue;
/// so callers to the `sync` function will check if they are on the current
/// queue and avoid deadlocking the queue (e.g. by executing another queue
/// dispatch call). Instead, it will just execute the given code in place.
public final class SafeSyncQueue {
public init(label: String, attributes: DispatchQueue.Attributes) {
self.queue = DispatchQueue(label: label, attributes: attributes)
self.queueKey = DispatchSpecificKey<QueueIdentity>()
self.queue.setSpecific(key: self.queueKey, value: QueueIdentity(label: self.queue.label))
}
// MARK: - API
/// Note: this will execute without the specified flags if it's on the current queue already
public func sync<T>(flags: DispatchWorkItemFlags? = nil, execute work: () throws -> T) rethrows -> T {
if self.currentQueueIdentity?.label == self.queue.label {
return try work()
} else if let flags = flags {
return try self.queue.sync(flags: flags, execute: work)
} else {
return try self.queue.sync(execute: work)
}
}
// MARK: - Private Structs
private struct QueueIdentity {
let label: String
}
// MARK: - Private Properties
private let queue: DispatchQueue
private let queueKey: DispatchSpecificKey<QueueIdentity>
private var currentQueueIdentity: QueueIdentity? {
return DispatchQueue.getSpecific(key: self.queueKey)
}
}
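Hypothetical usage of the wrapper above, showing the reentrant behavior:
let safeQueue = SafeSyncQueue(label: "com.example.safe", attributes: [])

let value: Int = safeQueue.sync {
    // A nested call on the same instance executes in place rather than
    // re-dispatching (plain sync-within-sync on a serial queue would deadlock).
    safeQueue.sync { 42 }
}
print(value)   // 42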
This works best for me:
/// The name/description of the current queue (Operation or Dispatch), if that can be found. Else, the name/description of the thread.
public func queueName() -> String {
if let currentOperationQueue = OperationQueue.current {
if let currentDispatchQueue = currentOperationQueue.underlyingQueue {
return "dispatch queue: \(currentDispatchQueue.label.nonEmpty ?? currentDispatchQueue.description)"
}
else {
return "operation queue: \(currentOperationQueue.name?.nonEmpty ?? currentOperationQueue.description)"
}
}
else {
let currentThread = Thread.current
return "UNKNOWN QUEUE on thread: \(currentThread.name?.nonEmpty ?? currentThread.description)"
}
}
public extension String {
/// Returns this string if it is not empty, else `nil`.
public var nonEmpty: String? {
if self.isEmpty {
return nil
}
else {
return self
}
}
}

dispatch_once after the Swift 3 GCD API changes

What is the new syntax for dispatch_once in Swift after the changes made in language version 3? The old version was as follows.
var token: dispatch_once_t = 0
func test() {
dispatch_once(&token) {
}
}
These are the changes to libdispatch that were made.
While using lazy initialized globals can make sense for some one-time initialization, it doesn't make sense for other cases. It makes a lot of sense to use lazy initialized globals for things like singletons, but it doesn't make a lot of sense for things like guarding a swizzle setup.
Here is a Swift 3 style implementation of dispatch_once:
public extension DispatchQueue {
private static var _onceTracker = [String]()
/**
Executes a block of code, associated with a unique token, only once. The code is thread safe and will
only execute the code once even in the presence of multithreaded calls.
- parameter token: A unique reverse DNS style name such as com.vectorform.<name> or a GUID
- parameter block: Block to execute once
*/
public class func once(token: String, block: () -> Void) {
objc_sync_enter(self); defer { objc_sync_exit(self) }
if _onceTracker.contains(token) {
return
}
_onceTracker.append(token)
block()
}
}
Here is an example usage:
DispatchQueue.once(token: "com.vectorform.test") {
print( "Do This Once!" )
}
or using a UUID
private let _onceToken = NSUUID().uuidString
DispatchQueue.once(token: _onceToken) {
print( "Do This Once!" )
}
As we are currently in a time of transition from Swift 2 to 3, here is an example Swift 2 implementation:
public class Dispatch
{
private static var _onceTokenTracker = [String]()
/**
Executes a block of code, associated with a unique token, only once. The code is thread safe and will
only execute the code once even in the presence of multithreaded calls.
- parameter token: A unique reverse DNS style name such as com.vectorform.<name> or a GUID
- parameter block: Block to execute once
*/
public class func once(token token: String, @noescape block: dispatch_block_t) {
objc_sync_enter(self); defer { objc_sync_exit(self) }
if _onceTokenTracker.contains(token) {
return
}
_onceTokenTracker.append(token)
block()
}
}
From the doc:
Dispatch
The free function dispatch_once is no longer available in
Swift. In Swift, you can use lazily initialized globals or static
properties and get the same thread-safety and called-once guarantees
as dispatch_once provided. Example:
let myGlobal: () = { … global contains initialization in a call to a closure … }()
_ = myGlobal // using myGlobal will invoke the initialization code only the first time it is used.
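For example, the same called-once guarantee applied to a one-time setup (a sketch of the pattern the documentation describes; the names are illustrative):
// The closure runs exactly once, the first time the static property is touched,
// with the same thread-safety guarantee dispatch_once used to provide.
enum SwizzleSetup {
    static let runOnce: Void = {
        print("one-time setup (e.g. a swizzle) happens here")
    }()
}

// Anywhere that needs the setup to have happened:
_ = SwizzleSetup.runOnce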
Expanding on Tod Cunningham's answer above, I've added another method which builds the token automatically from the file, function, and line.
public extension DispatchQueue {
private static var _onceTracker = [String]()
public class func once(
file: String = #file,
function: String = #function,
line: Int = #line,
block: () -> Void
) {
let token = "\(file):\(function):\(line)"
once(token: token, block: block)
}
/**
Executes a block of code, associated with a unique token, only once. The code is thread safe and will
only execute the code once even in the presence of multithreaded calls.
- parameter token: A unique reverse DNS style name such as com.vectorform.<name> or a GUID
- parameter block: Block to execute once
*/
public class func once(
token: String,
block: () -> Void
) {
objc_sync_enter(self)
defer { objc_sync_exit(self) }
guard !_onceTracker.contains(token) else { return }
_onceTracker.append(token)
block()
}
}
So it can be simpler to call:
DispatchQueue.once {
setupUI()
}
and you can still specify a token if you wish:
DispatchQueue.once(token: "com.hostname.project") {
setupUI()
}
I suppose you could get a collision if you have the same file in two modules. Too bad there isn't #module
Edit
Per @Frizlab's answer, this solution is not guaranteed to be thread-safe. An alternative should be used if this is crucial.
A simple solution is:
lazy var dispatchOnce : Void = { // or anyName I choose
self.title = "Hello Lazy Guy"
return
}()
used like
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
_ = dispatchOnce
}
You can declare a top-level closure variable like this:
private var doOnce: ()->() = {
/* do some work only once per instance */
return {}
}()
then call this anywhere:
doOnce()
You can still use it if you add a bridging header:
typedef dispatch_once_t mxcl_dispatch_once_t;
void mxcl_dispatch_once(mxcl_dispatch_once_t *predicate, dispatch_block_t block);
Then in a .m somewhere:
void mxcl_dispatch_once(mxcl_dispatch_once_t *predicate, dispatch_block_t block) {
dispatch_once(predicate, block);
}
You should now be able to use mxcl_dispatch_once from Swift.
Mostly you should use what Apple suggests instead, but I had some legitimate uses where I needed dispatch_once with a single token in two functions, and that is not covered by what Apple provides.
Swift 3: For those who like reusable classes (or structures):
public final class /* struct */ DispatchOnce {
private var lock: OSSpinLock = OS_SPINLOCK_INIT
private var isInitialized = false
public /* mutating */ func perform(block: (Void) -> Void) {
OSSpinLockLock(&lock)
if !isInitialized {
block()
isInitialized = true
}
OSSpinLockUnlock(&lock)
}
}
Usage:
class MyViewController: UIViewController {
private let /* var */ setUpOnce = DispatchOnce()
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
setUpOnce.perform {
// Do some work here
// ...
}
}
}
Update (28 April 2017): OSSpinLock replaced with os_unfair_lock due to deprecation warnings in macOS SDK 10.12.
public final class /* struct */ DispatchOnce {
private var lock = os_unfair_lock()
private var isInitialized = false
public /* mutating */ func perform(block: (Void) -> Void) {
os_unfair_lock_lock(&lock)
if !isInitialized {
block()
isInitialized = true
}
os_unfair_lock_unlock(&lock)
}
}
I improved on the above answers and got this result:
import Foundation
extension DispatchQueue {
private static var _onceTracker = [AnyHashable]()
/// only execute once for the same file && func && line
public class func onceInLocation(file: String = #file,
function: String = #function,
line: Int = #line,
block: () -> Void) {
let token = "\(file):\(function):\(line)"
once(token: token, block: block)
}
/// only execute once for the same variable
public class func onceInVariable(variable:NSObject, block: () -> Void){
once(token: variable.rawPointer, block: block)
}
/**
Executes a block of code, associated with a unique token, only once. The code is thread safe and will
only execute the code once even in the presence of multithreaded calls.
- parameter token: A unique reverse DNS style name such as com.vectorform.<name> or a GUID
- parameter block: Block to execute once
*/
public class func once(token: AnyHashable,block: () -> Void) {
objc_sync_enter(self)
defer { objc_sync_exit(self) }
guard !_onceTracker.contains(token) else { return }
_onceTracker.append(token)
block()
}
}
extension NSObject {
public var rawPointer:UnsafeMutableRawPointer? {
get {
Unmanaged.passUnretained(self).toOpaque()
}
}
}
import UIKit
// dispatch once
class StaticOnceTest {
static let test2 = {
print("Test " + $0 + " \($1)")
}("mediaHSL", 5)
lazy var closure: () = {
test(entryPoint: $0, videos: $1)
}("see all" , 4)
private func test(entryPoint: String, videos: Int) {
print("Test " + entryPoint + " \(videos)")
}
}
print("Test-1")
let a = StaticOnceTest()
a.closure
a.closure
a.closure
a.closure
StaticOnceTest.test2
StaticOnceTest.test2
StaticOnceTest.test2
StaticOnceTest.test2
OUTPUT:
Test-1
Test see all 4
Test mediaHSL 5
You can use a lazy var closure and execute it immediately with (arguments if needed) so that it is called only one time. You can call any instance function inside the closure [advantage].
You can pass multiple arguments as needed. You can capture those arguments when the class has been initialised and use them.
Another option: You can use a static let closure and it will execute only one time, but you cannot call any instance func inside that static let closure. [disadvantage]
thanks!
Swift 5
dispatch_once is still available as an exported symbol in the system libraries linked into every Swift app, so you can look up the symbol dynamically, get the function pointer, cast it, and call it:
import Darwin
typealias DispatchOnce = @convention(c) (
_ predicate: UnsafePointer<UInt>?,
_ block: () -> Void
) -> Void
func dispatchOnce(_ predicate: UnsafePointer<UInt>?, _ block: () -> Void) {
let RTLD_DEFAULT = UnsafeMutableRawPointer(bitPattern: -2)
if let sym = dlsym(RTLD_DEFAULT, "dispatch_once") {
let f = unsafeBitCast(sym, to: DispatchOnce.self)
f(predicate, block)
}
else {
fatalError("Symbol not found")
}
}
Example:
var token: UInt = 0
for i in 0...10 {
print("iteration: \(i)")
dispatchOnce(&token) {
print("This is printed only on the first call")
}
}
Outputs:
iteration: 0
This is printed only on the first call
iteration: 1
iteration: 2
iteration: 3
iteration: 4
iteration: 5
iteration: 6
iteration: 7
iteration: 8
iteration: 9
iteration: 10
Use the class constant approach if you are using Swift 1.2 or above and the nested struct approach if you need to support earlier versions.
An exploration of the Singleton pattern in Swift. All approaches below support lazy initialization and thread safety.
The dispatch_once approach does not work in Swift 3.0.
Approach A: Class constant
class SingletonA {
static let sharedInstance = SingletonA()
init() {
println("AAA");
}
}
Approach B: Nested struct
class SingletonB {
class var sharedInstance: SingletonB {
struct Static {
static let instance: SingletonB = SingletonB()
}
return Static.instance
}
}
Approach C: dispatch_once
class SingletonC {
class var sharedInstance: SingletonC {
struct Static {
static var onceToken: dispatch_once_t = 0
static var instance: SingletonC? = nil
}
dispatch_once(&Static.onceToken) {
Static.instance = SingletonC()
}
return Static.instance!
}
}