How should I interpret this GCD threading issue message?

The Xcode TSan thread analyzer (Thread Sanitizer) is reporting a threading issue:
Data race in generic specialization <Foundation.UUID> of Swift._NativeSet.insertNew(_: __owned τ_0_0, at: Swift._HashTable.Bucket, isUnique: Swift.Bool) -> () at 0x10a16b300
This only occurs in a release build, and it's the "Data race in generic specialization" part that has my attention.
It pinpoints the addID function, but I cannot see the issue. Here is the relevant code snippet:
final class IDBox {
    let syncQueue = DispatchQueue(label: "IDBox\(UUID().uuidString)", attributes: .concurrent)
    private var _box: Set<UUID>

    init() {
        self._box = []
    }

    var box: Set<UUID> {
        get { syncQueue.sync { self._box } }
        set {
            syncQueue.async(flags: .barrier) {
                self._box = newValue
            }
        }
    }

    func addID(_ id: UUID) {
        syncQueue.async(flags: .barrier) {
            self._box.insert(id)
        }
    }
}

I found the error. The reader-writer implementation was correct, but I found a call to idBox.box.insert(id) rather than idBox.addID(id) elsewhere in the code base.
As an aside, I can see why using an actor would prevent this kind of bug, although actors come with other constraints, for example not being able to conform to Codable. But that's for another post.
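For illustration, a minimal actor-based sketch of the same box (the type name and call site are hypothetical, and adopting it requires async call sites); the compiler then forbids the kind of outside mutation that caused the race:
import Foundation

actor IDBoxActor {
    private var box: Set<UUID> = []

    func addID(_ id: UUID) {
        box.insert(id)
    }

    func contains(_ id: UUID) -> Bool {
        box.contains(id)
    }
}

// Callers must go through the actor, so an outside `box.insert(id)` is no
// longer expressible; e.g. `await idBox.addID(id)`.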

Related

How to test a method that contains Task Async/await in swift

Given the following method that contains a Task.
self.interactor is mocked.
func submitButtonPressed() {
    Task {
        await self.interactor?.fetchSections()
    }
}
How can I write a test to verify that fetchSections() was called from that method?
My first thought was to use expectations and wait until it is fulfilled (in mock's code).
But is there any better way with the new async/await?
Ideally, as you imply, your interactor would be declared using a protocol so that you can substitute a mock for test purposes. You then consult the mock object to confirm that the desired method was called. In this way you properly confine the scope of the system under test to answer only the question "was this method called?"
As for the structure of the test method itself, yes, this is still asynchronous code and, as such, requires asynchronous testing. So using an expectation and waiting for it is correct. The fact that your app uses async/await to express asynchrony does not magically change that! (You can decrease the verbosity of this by writing a utility method that creates a boolean NSPredicate expectation and waits for it.)
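For example, a sketch of such a utility (the helper name and the mock's flag are illustrative, not part of the original answer). Note that XCTNSPredicateExpectation polls roughly once per second, so keep the timeout comfortably above one second:
import XCTest

extension XCTestCase {
    /// Waits until `condition()` returns true or the timeout expires.
    func waitUntil(timeout: TimeInterval = 2, _ condition: @escaping () -> Bool) {
        let predicate = NSPredicate { _, _ in condition() }
        let expectation = XCTNSPredicateExpectation(predicate: predicate, object: nil)
        wait(for: [expectation], timeout: timeout)
    }
}

// Usage, assuming the mock sets a `fetchSectionsWasCalled` flag:
// sut.submitButtonPressed()
// waitUntil { mock.fetchSectionsWasCalled }
// XCTAssertTrue(mock.fetchSectionsWasCalled)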
I don't know if you already found a solution to your question, but here is my contribution for other developers facing the same problem.
I was in the same situation as you, and I solved the problem by using Combine to notify the tested class that the method was called.
Let's say that we have this method to test:
func submitButtonPressed() {
    Task {
        await self.interactor?.fetchSections()
    }
}
We should start by mocking the interactor:
import Combine
final class MockedInteractor: ObservableObject, SomeInteractorProtocol {
#Published private(set) var fetchSectionsIsCalled = false
func fetchSection async {
fetchSectionsIsCalled = true
// Do some other mocking if needed
}
}
Now that we have our mocked interactor, we can start writing the unit test:
import XCTest
import Combine
@testable import YOUR_TARGET

class MyClassTest: XCTestCase {

    var mockedInteractor: MockedInteractor!
    var myClass: MyClass!
    private var cancellable = Set<AnyCancellable>()

    override func setUpWithError() throws {
        mockedInteractor = .init()
        // the interactor should be injected
        myClass = .init(interactor: mockedInteractor)
    }

    override func tearDownWithError() throws {
        mockedInteractor = nil
        myClass = nil
    }

    func test_submitButtonPressed_should_callFetchSections_when_Always() {
        // arrange
        let methodCallExpectation = XCTestExpectation()
        mockedInteractor.$fetchSectionsIsCalled
            .sink { isCalled in
                if isCalled {
                    methodCallExpectation.fulfill()
                }
            }
            .store(in: &cancellable)

        // act
        myClass.submitButtonPressed()
        wait(for: [methodCallExpectation], timeout: 1)

        // assert
        XCTAssertTrue(mockedInteractor.fetchSectionsIsCalled)
    }
}
One solution suggested here (#andy) involves injecting the Task: have the function that performs the Task return it, so a test can await its value.
(I'm not crazy about changing a testable class to suit the test (returning the Task), but it allows testing async code without an NSPredicate or an arbitrary expectation timeout (which just smells).)
@discardableResult
func submitButtonPressed() -> Task<Void, Error> {
    Task { // `return` can be omitted here; this expression returns the Task
        await self.interactor?.fetchSections()
    }
}
// Test
func testSubmitButtonPressed() async throws {
    let interactor = MockInteractor()
    // `manager` is assumed to be the system under test, created elsewhere with this mock injected
    let task = manager.submitButtonPressed()
    try await task.value
    XCTAssertEqual(interactor.sections.count, 4)
}
I answered a similar question in this post: https://stackoverflow.com/a/73091753/2077405
Basically, given code defined like this:
class Owner {
    let dataManager: DataManagerProtocol
    var data: String? = nil

    init(dataManager: DataManagerProtocol = DataManager()) {
        self.dataManager = dataManager
    }

    func refresh() {
        Task {
            self.data = await dataManager.fetchData()
        }
    }
}
and the DataManagerProtocol is defined as:
protocol DataManagerProtocol {
    func fetchData() async -> String
}
a mock/fake implementation can be defined:
class MockDataManager: DataManagerProtocol {
    func fetchData() async -> String {
        "testData"
    }
}
Implementing the unit test should go like this:
...
func testRefreshFunctionFetchesDataAndPopulatesFields() {
    let expectation = XCTestExpectation(
        description: "Owner fetches data and updates properties."
    )
    let owner = Owner(dataManager: MockDataManager())

    // Verify initial state
    XCTAssertNil(owner.data)

    owner.refresh()

    let asyncWaitDuration = 0.5 // could be even less than 0.5 seconds
    DispatchQueue.main.asyncAfter(deadline: .now() + asyncWaitDuration) {
        // Verify state after
        XCTAssertEqual(owner.data, "testData")
        expectation.fulfill()
    }
    wait(for: [expectation], timeout: asyncWaitDuration + 0.5)
}
...
Hope this makes sense?

What is a good design pattern approach for a somewhat dynamic dependency injection in Swift

Let's say there are three components and their respective dynamic dependencies:
struct Component1 {
let dependency1: Dependency1
func convertOwnDependenciesToDependency2() -> Dependency2
}
struct Component2 {
let dependency2: Dependency2
let dependency3: Dependency3
func convertOwnDependenciesToDependency4() -> Dependency4
}
struct Component3 {
let dependency2: Dependency2
let dependency4: Dependency4
func convertOwnDependenciesToDependency5() -> Dependency5
}
Each of those components can generate results which can then be used as dependencies of other components. I want to type-safely inject the generated dependencies into the components which rely on them.
I have several approaches which I already worked out but I feel like I am missing something obvious which would make this whole task way easier.
The naive approach:
let component1 = Component1(dependency1: Dependency1())
let dependency2 = component1.convertOwnDependenciesToDependency2()
let component2 = Component2(dependency2: dependency2, dependency3: Dependency3())
let dependency4 = component2.convertOwnDependenciesToDependency4()
let component3 = Component3(dependency2: dependency2, dependency4: dependency4)
let result = component3.convertOwnDependenciesToDependency5()
Now I know that you could just imperatively call each of the functions and simply use the constructor of each component to inject its dependencies. However, this approach does not scale very well: in a real scenario there would be up to ten of those components and a lot of call sites using this approach, so it would be very cumbersome to update each call site if, for instance, Component3 required another dependency.
The "SwiftUI" approach:
protocol EnvironmentKey {
associatedtype Value
}
struct EnvironmentValues {
private var storage: [ObjectIdentifier: Any] = [:]
subscript<Key>(_ type: Key.Type) -> Key.Value where Key: EnvironmentKey {
get { return storage[ObjectIdentifier(type)] as! Key.Value }
set { storage[ObjectIdentifier(type)] = newValue as Any }
}
}
struct Component1 {
func convertOwnDependenciesToDependency2(values: inout EnvironmentValues) {
let dependency1 = values[Dependency1Key.self]
// do some stuff with dependency1
values[Dependency2Key.self] = newDependency
}
}
struct Component2 {
func convertOwnDependenciesToDependency4(values: inout EnvironmentValues) {
let dependency2 = values[Dependency2Key.self]
let dependency3 = values[Dependency3Key.self]
// do some stuff with dependency2 and dependency3
values[Dependency4Key.self] = newDependency
}
}
struct Component3 {
func convertOwnDependenciesToDependency5(values: inout EnvironmentValues) {
let dependency2 = values[Dependency2Key.self]
let dependency4 = values[Dependency4Key.self]
// do some stuff with dependency2 and dependency4
values[Dependency5Key.self] = newDependency
}
}
But what I dislike about this approach is that, first of all, you have no type safety: you either have to optionally unwrap the dependency and return an optional (which feels odd, since what should a component do if the dependency is nil?), or force-unwrap the dependencies like I did. The next point is that there is no guarantee whatsoever that Dependency3 is already in the environment at the call site of convertOwnDependenciesToDependency4. This approach therefore weakens the contract between the components and could lead to unnecessary bugs.
Now I know SwiftUI has a defaultValue in its EnvironmentKey protocol, but in my scenario this does not make sense since, for instance, Dependency4 has no way to instantiate itself without data from Dependency2 or Dependency3 and therefore has no default value.
The event bus approach:
enum Event {
case dependency1(Dependency1)
case dependency2(Dependency2)
case dependency3(Dependency3)
case dependency4(Dependency4)
case dependency5(Dependency5)
}
protocol EventHandler {
func handleEvent(event: Event)
}
struct EventBus {
func subscribe(handler: EventHandler)
func send(event: Event)
}
struct Component1: EventHandler {
let bus: EventBus
var dependency1: Dependency1?
init(bus: EventBus) {
self.bus = bus
self.bus.subscribe(handler: self)
}
func handleEvent(event: Event) {
switch event {
case let .dependency1(dependency1): self.dependency1 = dependency1
}
if hasAllReceivedAllDependencies { generateDependency2() }
}
func generateDependency2() {
bus.send(newDependency)
}
}
struct Component2: EventHandler {
let bus: EventBus
var dependency2: Dependency2?
var dependency3: Dependency3?
init(bus: EventBus) {
self.bus = bus
self.bus.subscribe(handler: self)
}
func handleEvent(event: Event) {
switch event {
case let .dependency2(dependency2): self.dependency2 = dependency2
case let .dependency3(dependency3): self.dependency3 = dependency3
}
if hasAllReceivedAllDependencies { generateDependency4() }
}
func generateDependency4() {
bus.send(newDependency)
}
}
struct Component3: EventHandler {
let bus: EventBus
var dependency2: Dependency2?
var dependency4: Dependency4?
init(bus: EventBus) {
self.bus = bus
self.bus.subscribe(handler: self)
}
func handleEvent(event: Event) {
switch event {
case let .dependency2(dependency2): self.dependency2 = dependency2
case let .dependency4(dependency4): self.dependency4 = dependency4
}
if hasAllReceivedAllDependencies { generateDependency5() }
}
func generateDependency5() {
bus.send(newDependency)
}
}
I think in terms of type-safety and "dynamism" this approach would be a good fit. However, checking whether all dependencies have been received before starting the internal processes feels like a hack, as if I am misusing the pattern in some way. Furthermore, this approach can "deadlock" if some dependency event is never sent, and it is then hard to debug where things got stuck. And again I would have to force-unwrap the optionals in generateDependencyX, although since that function only gets called once all optionals have a value, that seems safe to me.
I also took a look at some other design patterns (like chain-of-responsibility), but I couldn't really figure out how to apply them to my use case.
My dream would be to somehow model a given design pattern as a result builder in the end so it would look something like:
FinalComponent {
Component1()
Component2()
Component3()
}
In my opinion result builders would be possible with the "SwiftUI" and the event bus approaches, but those have the drawbacks already described. Again, maybe I am missing an obvious design pattern tailored to this situation, or I am simply modeling the problem the wrong way. Maybe someone has a suggestion.

Swift Error: Realm accessed from incorrect thread

I am attempting to use the Realm library to persist data within my application. However, I keep running into the same error: "Realm accessed from incorrect thread". I attempted to resolve this by creating a Realm-specific dispatch queue and wrapping all of my Realm calls in it.
Here is what my "RealmManager" class looks like right now:
import Foundation
import RealmSwift
class RealmManager {
fileprivate static let Instance : RealmManager = RealmManager()
fileprivate var _realmDB : Realm!
fileprivate var _realmQueue : DispatchQueue!
class func RealmQueue() -> DispatchQueue {
return Instance._realmQueue
}
class func Setup() {
Instance._realmQueue = DispatchQueue(label: "realm")
Instance._realmQueue.async {
do {
Instance._realmDB = try Realm()
} catch {
print("Error connecting to Realm DB")
}
}
}
class func saveObjectArray(_ objects: [Object]) {
Instance._realmQueue.async {
do {
try Instance._realmDB.write {
for obj in objects {
Instance._realmDB.add(obj, update: .all)
}
}
} catch {
print("Error Saving Objects")
}
}
}
class func fetch(_ type: Int) -> [Object] {
if let realm = Instance._realmDB {
let results = realm.objects(Squeak.self).filter("type = \(type)")
var returnArray : [Object] = []
for r in results {
returnArray.append(r)
}
return returnArray
}
return []
}
}
I am calling Setup() inside of didFinishLaunchingWithOptions to instantiate the Realm queue and Realm Db instance.
I am getting the error code inside of saveObjectArray at:
try Instance._realmDB.write { }
This seems to simply be a matter of my misunderstanding of the threading requirements of Realm. I would appreciate any insight into the matter, or a direction to go in from here.
The issue is that you fetch your Realm data on a different thread than the one you save it on.
To fix the error, the code within fetch will also need to run on the Realm thread that you have created.
I think this article does a good job of explaining multi-threading in Realm and particularly recommend paying attention to the three rules outlined in the article.
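As a rough sketch (not from the linked article), fetch could run its query on that same queue and hand back frozen copies; freeze() requires RealmSwift 5 or later, and frozen objects are immutable snapshots that are safe to pass across threads:
class func fetch(_ type: Int) -> [Object] {
    var returnArray: [Object] = []
    // Run the query on the same queue that opened the Realm so the
    // thread-confinement check is satisfied.
    Instance._realmQueue.sync {
        guard let realm = Instance._realmDB else { return }
        let results = realm.objects(Squeak.self).filter("type = \(type)")
        // freeze() produces immutable copies that can safely cross threads.
        returnArray = results.map { $0.freeze() as Object }
    }
    return returnArray
}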

How to implement a Thread Safe HashTable (PhoneBook) Data Structure in Swift?

I am trying to implement a Thread-Safe PhoneBook object. The phone book should be able to add a person, and look up a person based on their name and phoneNumber. From an implementation perspective this simply involves two hash tables, one associating name -> Person and another associating phone# -> Person.
The caveat is I want this object to be thread-safe. This means I would like to support concurrent lookups in the PhoneBook while ensuring only one thread can add a Person to the PhoneBook at a time. This is the basic readers-writers problem, and I am trying to solve it using Grand Central Dispatch and dispatch barriers. I am struggling, though, as I am running into issues. Below is my Swift playground code:
//: Playground - noun: a place where people can play
import UIKit
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
public class Person: CustomStringConvertible {
public var description: String {
get {
return "Person: \(name), \(phoneNumber)"
}
}
public var name: String
public var phoneNumber: String
private var readLock = ReaderWriterLock()
public init(name: String, phoneNumber: String) {
self.name = name
self.phoneNumber = phoneNumber
}
public func uniquePerson() -> Person {
let randomID = UUID().uuidString
return Person(name: randomID, phoneNumber: randomID)
}
}
public enum Qos {
case threadSafe, none
}
public class PhoneBook {
private var qualityOfService: Qos = .none
public var nameToPersonMap = [String: Person]()
public var phoneNumberToPersonMap = [String: Person]()
private var readWriteLock = ReaderWriterLock()
public init(_ qos: Qos) {
self.qualityOfService = qos
}
public func personByName(_ name: String) -> Person? {
var person: Person? = nil
if qualityOfService == .threadSafe {
readWriteLock.concurrentlyRead { [weak self] in
guard let strongSelf = self else { return }
person = strongSelf.nameToPersonMap[name]
}
} else {
person = nameToPersonMap[name]
}
return person
}
public func personByPhoneNumber( _ phoneNumber: String) -> Person? {
var person: Person? = nil
if qualityOfService == .threadSafe {
readWriteLock.concurrentlyRead { [weak self] in
guard let strongSelf = self else { return }
person = strongSelf.phoneNumberToPersonMap[phoneNumber]
}
} else {
person = phoneNumberToPersonMap[phoneNumber]
}
return person
}
public func addPerson(_ person: Person) {
if qualityOfService == .threadSafe {
readWriteLock.exclusivelyWrite { [weak self] in
guard let strongSelf = self else { return }
strongSelf.nameToPersonMap[person.name] = person
strongSelf.phoneNumberToPersonMap[person.phoneNumber] = person
}
} else {
nameToPersonMap[person.name] = person
phoneNumberToPersonMap[person.phoneNumber] = person
}
}
}
// A ReaderWriterLock implemented using GCD and OS Barriers.
public class ReaderWriterLock {
private let concurrentQueue = DispatchQueue(label: "com.ReaderWriterLock.Queue", attributes: DispatchQueue.Attributes.concurrent)
private var writeClosure: (() -> Void)!
public func concurrentlyRead(_ readClosure: (() -> Void)) {
concurrentQueue.sync {
readClosure()
}
}
public func exclusivelyWrite(_ writeClosure: @escaping (() -> Void)) {
self.writeClosure = writeClosure
concurrentQueue.async(flags: .barrier) { [weak self] in
guard let strongSelf = self else { return }
strongSelf.writeClosure()
}
}
}
// MARK: Testing the synchronization and thread-safety
for _ in 0..<5 {
let iterations = 1000
let phoneBook = PhoneBook(.none)
let concurrentTestQueue = DispatchQueue(label: "com.PhoneBookTest.Queue", attributes: DispatchQueue.Attributes.concurrent)
for _ in 0..<iterations {
let person = Person(name: "", phoneNumber: "").uniquePerson()
concurrentTestQueue.async {
phoneBook.addPerson(person)
}
}
sleep(10)
print(phoneBook.nameToPersonMap.count)
}
To test my code I dispatch 1000 concurrent blocks that simply add a new Person to the PhoneBook. Each Person is unique, so after the 1000 blocks complete I expect the PhoneBook to contain 1000 entries. Every time I perform a write I use a barrier dispatch, update the hash tables, and return. To my knowledge this is all we need to do; however, after repeated runs the number of entries in the PhoneBook is inconsistent and all over the place:
Phone Book Entries: 856
Phone Book Entries: 901
Phone Book Entries: 876
Phone Book Entries: 902
Phone Book Entries: 912
Can anyone please help me figure out what is going on? Is there something wrong with my locking code or even worse something wrong with how my test is constructed? I am very new to this multi-threaded problem space, thanks!
The problem is your ReaderWriterLock. You are saving the writeClosure as a property, and then asynchronously dispatching a closure that calls that saved property. But if another exclusivelyWrite came in during the intervening period of time, your writeClosure property would be replaced with the new closure.
In this case, it means that you can be adding the same Person multiple times. And because you're using a dictionary, those duplicates share the same key, so you don't end up seeing all 1000 entries.
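A rough illustration of the failure mode (the timeline is hypothetical, using the original ReaderWriterLock above):
let lock = ReaderWriterLock()

// Both calls store their closure into the single `writeClosure` property.
lock.exclusivelyWrite { print("write A") }
lock.exclusivelyWrite { print("write B") } // can overwrite `writeClosure` before
                                           // the first barrier block has run

// Each barrier block then reads whatever `writeClosure` currently holds, so this
// can print "write B" twice and never run "write A" — which is how the same
// Person ends up being inserted more than once.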
You can actually simplify ReaderWriterLock, completely eliminating that property. I'd also make concurrentlyRead generic, returning the value (just like sync does), and rethrowing any errors (if any).
public class ReaderWriterLock {
private let queue = DispatchQueue(label: "com.domain.app.rwLock", attributes: .concurrent)
public func concurrentlyRead<T>(_ block: (() throws -> T)) rethrows -> T {
return try queue.sync {
try block()
}
}
public func exclusivelyWrite(_ block: @escaping (() -> Void)) {
queue.async(flags: .barrier) {
block()
}
}
}
A couple of other, unrelated observations:
By the way, this simplified ReaderWriterLock happens to solve another concern. That writeClosure property, which we've now removed, could have easily introduced a strong reference cycle.
Yes, you were scrupulous about using [weak self], so there wasn't any strong reference cycle, but it was possible. I would advise that wherever you employ a closure property, you set that property to nil when you're done with it, so that any strong references the closure may have accidentally captured are released. That way a persistent strong reference cycle is never possible. (Plus, the closure itself and any local variables or other external references it holds are released, too.)
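A minimal sketch of that advice (the Worker type is illustrative, not from the question): the closure property is nil-ed out as soon as it has been called, so anything it captured is released promptly.
import Foundation

final class Worker {
    private var completionHandler: (() -> Void)?

    func start(completion: @escaping () -> Void) {
        completionHandler = completion
        DispatchQueue.global().async { [weak self] in
            // ... perform the work ...
            self?.completionHandler?()
            self?.completionHandler = nil // release whatever the closure captured
        }
    }
}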
You're sleeping for 10 seconds. That should be more than enough, but I'd advise against just adding random sleep calls (because you never can be 100% sure). Fortunately, you have a concurrent queue, so you can use that:
concurrentTestQueue.async(flags: .barrier) {
print(phoneBook.count)
}
Because of that barrier, it will wait until everything else you put on that queue is done.
Note, I did not just print nameToPersonMap.count. That dictionary has been carefully synchronized within PhoneBook, so you can't just let random, external classes access it directly without synchronization.
Whenever you have some property which you're synchronizing internally, it should be private and then create a thread-safe function/variable to retrieve whatever you need:
public class PhoneBook {
private var nameToPersonMap = [String: Person]()
private var phoneNumberToPersonMap = [String: Person]()
...
var count: Int {
return readWriteLock.concurrentlyRead {
nameToPersonMap.count
}
}
}
You say you're testing thread safety, but then created PhoneBook with .none option (achieving no thread-safety). In that scenario, I'd expect problems. You have to create your PhoneBook with the .threadSafe option.
You have a number of strongSelf patterns. That's rather unswifty. It is generally not needed in Swift as you can use [weak self] and then just do optional chaining.
Pulling all of this together, here is my final playground:
PlaygroundPage.current.needsIndefiniteExecution = true
public class Person {
public let name: String
public let phoneNumber: String
public init(name: String, phoneNumber: String) {
self.name = name
self.phoneNumber = phoneNumber
}
public static func uniquePerson() -> Person {
let randomID = UUID().uuidString
return Person(name: randomID, phoneNumber: randomID)
}
}
extension Person: CustomStringConvertible {
public var description: String {
return "Person: \(name), \(phoneNumber)"
}
}
public enum ThreadSafety { // Changed the name from Qos, because this has nothing to do with quality of service, but is just a question of thread safety
case threadSafe, none
}
public class PhoneBook {
private var threadSafety: ThreadSafety
private var nameToPersonMap = [String: Person]() // if you're synchronizing these, you really shouldn't expose them to the public
private var phoneNumberToPersonMap = [String: Person]() // if you're synchronizing these, you really shouldn't expose them to the public
private var readWriteLock = ReaderWriterLock()
public init(_ threadSafety: ThreadSafety) {
self.threadSafety = threadSafety
}
public func personByName(_ name: String) -> Person? {
if threadSafety == .threadSafe {
return readWriteLock.concurrentlyRead { [weak self] in
self?.nameToPersonMap[name]
}
} else {
return nameToPersonMap[name]
}
}
public func personByPhoneNumber(_ phoneNumber: String) -> Person? {
if threadSafety == .threadSafe {
return readWriteLock.concurrentlyRead { [weak self] in
self?.phoneNumberToPersonMap[phoneNumber]
}
} else {
return phoneNumberToPersonMap[phoneNumber]
}
}
public func addPerson(_ person: Person) {
if threadSafety == .threadSafe {
readWriteLock.exclusivelyWrite { [weak self] in
self?.nameToPersonMap[person.name] = person
self?.phoneNumberToPersonMap[person.phoneNumber] = person
}
} else {
nameToPersonMap[person.name] = person
phoneNumberToPersonMap[person.phoneNumber] = person
}
}
var count: Int {
return readWriteLock.concurrentlyRead {
nameToPersonMap.count
}
}
}
// A ReaderWriterLock implemented using GCD concurrent queue and barriers.
public class ReaderWriterLock {
private let queue = DispatchQueue(label: "com.domain.app.rwLock", attributes: .concurrent)
public func concurrentlyRead<T>(_ block: (() throws -> T)) rethrows -> T {
return try queue.sync {
try block()
}
}
public func exclusivelyWrite(_ block: @escaping (() -> Void)) {
queue.async(flags: .barrier) {
block()
}
}
}
for _ in 0 ..< 5 {
let iterations = 1000
let phoneBook = PhoneBook(.threadSafe)
let concurrentTestQueue = DispatchQueue(label: "com.PhoneBookTest.Queue", attributes: .concurrent)
for _ in 0..<iterations {
let person = Person.uniquePerson()
concurrentTestQueue.async {
phoneBook.addPerson(person)
}
}
concurrentTestQueue.async(flags: .barrier) {
print(phoneBook.count)
}
}
Personally, I'd be inclined to take it a step further and:
move the synchronization into a generic class; and
change the model to be an array of Person objects, so that:
the model supports multiple people with the same name or phone number; and
you can use value types if you want.
For example:
public struct Person {
public let name: String
public let phoneNumber: String
public static func uniquePerson() -> Person {
return Person(name: UUID().uuidString, phoneNumber: UUID().uuidString)
}
}
public struct PhoneBook {
private var synchronizedPeople = Synchronized([Person]())
public func people(name: String? = nil, phone: String? = nil) -> [Person]? {
return synchronizedPeople.value.filter {
(name == nil || $0.name == name) && (phone == nil || $0.phoneNumber == phone)
}
}
public func append(_ person: Person) {
synchronizedPeople.writer { people in
people.append(person)
}
}
public var count: Int {
return synchronizedPeople.reader { $0.count }
}
}
/// A structure to provide thread-safe access to some underlying object using reader-writer pattern.
public class Synchronized<T> {
/// Private value. Use `public` `value` computed property (or `reader` and `writer` methods)
/// for safe, thread-safe access to this underlying value.
private var _value: T
/// Private reader-write synchronization queue
private let queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".synchronized", qos: .default, attributes: .concurrent)
/// Create `Synchronized` object
///
/// - Parameter value: The initial value to be synchronized.
public init(_ value: T) {
_value = value
}
/// A threadsafe variable to set and get the underlying object, as a convenience when higher level synchronization is not needed
public var value: T {
get { reader { $0 } }
set { writer { $0 = newValue } }
}
/// A "reader" method to allow thread-safe, read-only concurrent access to the underlying object.
///
/// - Warning: If the underlying object is a reference type, you are responsible for making sure you
/// do not mutating anything. If you stick with value types (`struct` or primitive types),
/// this will be enforced for you.
public func reader<U>(_ block: (T) throws -> U) rethrows -> U {
return try queue.sync { try block(_value) }
}
/// A "writer" method to allow thread-safe write with barrier to the underlying object
func writer(_ block: @escaping (inout T) -> Void) {
queue.async(flags: .barrier) {
block(&self._value)
}
}
}
In some cases you might use the NSCache class. The documentation claims that it's thread-safe:
You can add, remove, and query items in the cache from different threads without having to lock the cache yourself.
Here is an article that describes quite useful tricks related to NSCache
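A minimal sketch of that idea (names are illustrative; note that NSCache may evict entries under memory pressure, which may not be what you want for a phone book):
import Foundation

// NSCache requires class types for keys and values, so names are bridged
// to NSString and Person stays a class (as in the question).
let peopleByName = NSCache<NSString, Person>()

func addPerson(_ person: Person) {
    // NSCache is documented to be thread-safe, so no explicit locking is needed.
    peopleByName.setObject(person, forKey: person.name as NSString)
}

func personByName(_ name: String) -> Person? {
    peopleByName.object(forKey: name as NSString)
}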
I don’t think you are using it wrong :).
The original (on macOS) generates:
0 swift 0x000000010c9c536a PrintStackTraceSignalHandler(void*) + 42
1 swift 0x000000010c9c47a6 SignalHandler(int) + 662
2 libsystem_platform.dylib 0x00007fffbbdadb3a _sigtramp + 26
3 libsystem_platform.dylib 000000000000000000 _sigtramp + 1143284960
4 libswiftCore.dylib 0x0000000112696944 _T0SSwcp + 36
5 libswiftCore.dylib 0x000000011245fa92 _T0s24_VariantDictionaryBufferO018ensureUniqueNativeC0Sb11reallocated_Sb15capacityChangedtSiF + 1634
6 libswiftCore.dylib 0x0000000112461fd2 _T0s24_VariantDictionaryBufferO17nativeUpdateValueq_Sgq__x6forKeytF + 1074
If you remove the ‘.concurrent’ from your ReaderWriter queue, "the problem disappears”.©
If you restore the .concurrent, but change the async invocation in the writer side to be sync:
swift(10504,0x70000896f000) malloc: *** error for object 0x7fcaa440cee8: incorrect checksum for freed object - object was probably modified after being freed.
Which would be a bit astonishing if it weren't Swift?
I dug in, replaced your ‘string’ based array with an Int one by interposing a hash function, replaced the sleep(10) with a barrier dispatch to flush any laggardly blocks through, and that made it more reproducibly crash with the somewhat more helpful:
x(10534,0x700000f01000) malloc: *** error for object 0x7f8c9ee00008: incorrect checksum for freed object - object was probably modified after being freed.
But since a search of the source revealed no malloc or free, perhaps the stack dump is more useful.
Anyways, best way to solve your problem: use Go instead; it actually makes sense.

How to get the current queue name in swift 3

We have a function like this in Swift 2.2 for printing a log message along with the current queue's label:
func MyLog(_ message: String) {
if Thread.isMainThread {
print("[MyLog]", message)
} else {
let queuename = String(UTF8String: dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL))! // Error: Cannot convert value of type '()' to expected argument type 'DispatchQueue?'
print("[MyLog] [\(queuename)]", message)
}
}
This code no longer compiles in Swift 3.0. How do we obtain the queue name now?
As Brent Royal-Gordon mentioned in his message on lists.swift.org, it's a hole in the current design, but you can use this horrible workaround.
func currentQueueName() -> String? {
let name = __dispatch_queue_get_label(nil)
return String(cString: name, encoding: .utf8)
}
If you don't like unsafe pointers and c-strings, there is another, safe solution:
if let currentQueueLabel = OperationQueue.current?.underlyingQueue?.label {
print(currentQueueLabel)
// Do something...
}
I don't know of any cases in which currentQueueLabel will be nil.
Now DispatchQueue has a label property.
The label you assigned to the dispatch queue at creation time.
var label: String { get }
It seems to have existed from the beginning, but perhaps was not exposed via a public API.
macOS 10.10+
And please use this only to obtain human-readable labels, not to identify each GCD queue.
If you want to check whether your code is running on a certain GCD queue, you can use the dispatchPrecondition(...) function.
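For example (a minimal sketch; the queue name is illustrative):
import Dispatch

let processingQueue = DispatchQueue(label: "com.example.processing")

func processItem() {
    // Traps if this is not currently running on processingQueue.
    dispatchPrecondition(condition: .onQueue(processingQueue))
    // ... work that must happen on processingQueue ...
}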
This method will work for both OperationQueue and DispatchQueue.
func printCurrentQueueName() {
    print(Thread.current.name ?? "unnamed thread")
}
Here's a wrapper class that offers some safety (revised from here):
import Foundation
/// DispatchQueue wrapper that acts as a reentrant to a synchronous queue;
/// so callers to the `sync` function will check if they are on the current
/// queue and avoid deadlocking the queue (e.g. by executing another queue
/// dispatch call). Instead, it will just execute the given code in place.
public final class SafeSyncQueue {
public init(label: String, attributes: DispatchQueue.Attributes) {
self.queue = DispatchQueue(label: label, attributes: attributes)
self.queueKey = DispatchSpecificKey<QueueIdentity>()
self.queue.setSpecific(key: self.queueKey, value: QueueIdentity(label: self.queue.label))
}
// MARK: - API
/// Note: this will execute without the specified flags if it's on the current queue already
public func sync<T>(flags: DispatchWorkItemFlags? = nil, execute work: () throws -> T) rethrows -> T {
if self.currentQueueIdentity?.label == self.queue.label {
return try work()
} else if let flags = flags {
return try self.queue.sync(flags: flags, execute: work)
} else {
return try self.queue.sync(execute: work)
}
}
// MARK: - Private Structs
private struct QueueIdentity {
let label: String
}
// MARK: - Private Properties
private let queue: DispatchQueue
private let queueKey: DispatchSpecificKey<QueueIdentity>
private var currentQueueIdentity: QueueIdentity? {
return DispatchQueue.getSpecific(key: self.queueKey)
}
}
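A usage sketch (names are illustrative): because sync checks the queue-specific key first, calling it from code that is already running on the same queue executes the closure in place instead of deadlocking.
let stateQueue = SafeSyncQueue(label: "com.example.state", attributes: .concurrent)
var counter = 0

func increment() {
    stateQueue.sync(flags: .barrier) {
        counter += 1
    }
}

func read() -> Int {
    stateQueue.sync {
        // Even if read() is called from inside another stateQueue.sync block,
        // this runs in place rather than deadlocking.
        counter
    }
}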
This works best for me:
/// The name/description of the current queue (Operation or Dispatch), if that can be found. Else, the name/description of the thread.
public func queueName() -> String {
if let currentOperationQueue = OperationQueue.current {
if let currentDispatchQueue = currentOperationQueue.underlyingQueue {
return "dispatch queue: \(currentDispatchQueue.label.nonEmpty ?? currentDispatchQueue.description)"
}
else {
return "operation queue: \(currentOperationQueue.name?.nonEmpty ?? currentOperationQueue.description)"
}
}
else {
let currentThread = Thread.current
return "UNKNOWN QUEUE on thread: \(currentThread.name?.nonEmpty ?? currentThread.description)"
}
}
public extension String {
/// Returns this string if it is not empty, else `nil`.
public var nonEmpty: String? {
if self.isEmpty {
return nil
}
else {
return self
}
}
}