Skip new tasks if the queue is not empty (Swift)

I have a piece of code that spams long-running tasks 5-6 times per second. Each task takes some time to finish. I want to ignore all other tasks while one is being executed; after it finishes, a fresh one should take its place.
There are a bunch of tools for concurrency in Swift 4.2. Which would work best?

You can solve this with either GCD or Operation. For the case you describe, I would use Operation: it gives you somewhat more user-friendly control over the operations that are executing (suspending, cancelling, and so on).
Small example:
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1
queue.addOperation { print("🤠") }
queue.addOperation { print("🤓") }
queue.addOperation { print("👺") }
In this case operations are executed one by one.
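The serial queue alone doesn't skip anything, though; it only queues. To get the "skip new tasks while one is running" behavior from the question, you can check operationCount before enqueuing. A minimal sketch, where submitIfIdle is a hypothetical helper and the work closure stands in for the long-running task:

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1

func submitIfIdle(_ work: @escaping () -> Void) {
    // Ignore the new task if one is still running or queued.
    guard queue.operationCount == 0 else { return }
    queue.addOperation(work)
}

Note that the check-then-add is not atomic; for a single caller submitting 5-6 times per second it's adequate, but concurrent submitters would need a serial queue (or a lock) around the check.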

Related

GoogleWebRTC hangs (freezes) the main thread in swift native app (OpenVidu)

We have a hanging problem (the app freezes due to a main-thread lock) with our iOS (Swift) native app with the OpenVidu implementation (which uses GoogleWebRTC under the hood). The specific conditions required: you need to join an existing room with at least 8 participants already streaming. With 6 participants it happens less often, and almost never with fewer than 6. It doesn't hang if participants join one by one, only if you join a room where all the other participants are already streaming. This points to the concurrent nature of the issue.
The GoogleWebRTC hangs on setRemoteDescription call:
func setRemoteDescription(sdpAnswer: String) {
    let sessionDescription: RTCSessionDescription = RTCSessionDescription(type: RTCSdpType.answer, sdp: sdpAnswer)
    self.peerConnection!.setRemoteDescription(sessionDescription, completionHandler: { (error) in
        print("Local Peer Remote Description set: " + error.debugDescription)
    })
}
As you can see in the screenshot above, the main thread hangs on __psynch_cvwait. No other threads seem to be locked. The lock is never released, leaving the app completely frozen.
In an attempt to solve it, I tried the following:
I moved the OpenVidu signaling-server processing (RPC protocol) from the main thread into separate threads. This only caused the lock to occur in one of the separate threads I created. It no longer blocks the UI, but it blocks OV signaling. The problem persists.
I added a lock to process each signaling event (participant-join event, publish video, etc.) synchronously, one by one. This didn't help either (it actually made the situation worse).
Instead of using GoogleWebRTC v1.1.31999 from CocoaPods, I downloaded the latest GoogleWebRTC sources, built them in the release configuration, and included them in my project. This didn't solve the issue either.
Any suggestions/comments would be appreciated.
Thanks!
EDIT 1:
The signaling_thread and worker_thread are both waiting for something on the same kind of lock. Neither of them is executing any of my code at the moment of the lock.
I also tried running a DEBUG build of GoogleWebRTC; in that case no locks happen, but everything runs much slower (which is fine for debugging, but we can't use it in production).
EDIT 2:
I tried wrapping the offer and setLocalDescription callbacks in an additional DispatchQueue, but this changes nothing. The problem is still reliably reproducible (almost 100% of the time if I have 8 participants with streams):
self.peerConnection!.offer(for: constrains) { (sdp, error) in
    DispatchQueue.global(qos: .background).async {
        guard let sdp = sdp else {
            return
        }
        self.peerConnection!.setLocalDescription(sdp, completionHandler: { (error) in
            DispatchQueue.global(qos: .background).async {
                completion(sdp)
            }
        })
    }
}
The WebRTC Obj-C API can be called from any thread, but most method calls are passed on to WebRTC's internal thread, called the signaling thread.
Also, callbacks/observers like SetLocalDescriptionObserverInterface or RTCSetSessionDescriptionCompletionHandler are invoked by WebRTC on the signaling thread.
Looking at the screenshots, it seems that the signaling thread is currently blocked and can no longer service WebRTC API calls.
So, to avoid deadlocks, it's a good idea to create your own thread / dispatch_queue and handle callbacks there.
See
https://webrtc.googlesource.com/src/+/0a52ede821ba12ee6fff6260d69cddcca5b86a4e/api/g3doc/index.md and
https://webrtc.googlesource.com/src/+/0a52ede821ba12ee6fff6260d69cddcca5b86a4e/api/g3doc/threading_design.md
for details.
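For example, here is a minimal sketch of that advice applied to the setRemoteDescription call from the question: return from the signaling thread immediately and continue on your own serial queue (the queue label and the follow-up work are placeholders):

let webRTCCallbackQueue = DispatchQueue(label: "com.example.webrtc-callbacks")

func setRemoteDescription(sdpAnswer: String) {
    let sessionDescription = RTCSessionDescription(type: RTCSdpType.answer, sdp: sdpAnswer)
    self.peerConnection!.setRemoteDescription(sessionDescription, completionHandler: { (error) in
        // This closure runs on WebRTC's signaling thread; hop off it quickly.
        webRTCCallbackQueue.async {
            print("Local Peer Remote Description set: " + error.debugDescription)
            // ...any follow-up signaling work goes here, off the signaling thread...
        }
    })
}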
After a comment from the OpenVidu team, the problem was solved by adding a 100 ms delay between adding participants who are already in the room. I'd consider this more of a hack than a real solution, but I can confirm that it works in both the test and production environments:
DispatchQueue.global(qos: .background).async {
    for info in dict.values {
        let remoteParticipant = self.newRemoteParticipant(info: info)
        if let streamId = info.streamId {
            remoteParticipant.createOffer(completion: { (sdp) in
                self.receiveVideoFrom(sdp: sdp, remoteParticipant: remoteParticipant, streamId: streamId)
            })
        } else {
            print("No streamId")
        }
        Thread.sleep(forTimeInterval: 0.1)
    }
}

Generic Grand Central Dispatch

I have code set up like below. It is my understanding that queue1 should finish all its operations and THEN move to queue2. However, as soon as my async operation starts, queue2 begins. This defeats the purpose of GCD for me... what am I doing wrong? This outputs:
did this finish
queue2
then some time later, prints success from image download
I want to make it clear that if I put other code in queue1, such as print("test") or a loop over 0..10 printing i, all those operations complete before moving to queue2. It seems the async download is messing with it; how can I fix this? There is no documentation anywhere; I used this very helpful guide from AppCoda: http://www.appcoda.com/grand-central-dispatch/
let queue1 = DispatchQueue(label: "com.matt.myqueue1")
let queue2 = DispatchQueue(label: "com.matt.myqueue2")
let group1 = DispatchGroup()
let group2 = DispatchGroup()

let item = DispatchWorkItem {
    // async stuff happening like downloading an image
    // print success if image downloads
}

queue1.sync(execute: item)

item.notify(queue: queue1, execute: {
    print("did this finish?")
})

queue2.sync {
    print("queue2")
}
let item = DispatchWorkItem {
    // async stuff happening like downloading an image
    // print success if image downloads
}
OK, defines it, but nothing runs yet.
queue1.sync(execute: item)
Execute item and kick off its async events, then return immediately. Nothing here says "wait for those unrelated asynchronous events to complete." The system doesn't even have a way to know that there are additional async calls inside the functions you call. How would it know whether object.doit() includes async calls or not (and whether those are async calls you meant to wait for)? It just knows that when item returns, it should continue.
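A tiny sketch of that point, reusing queue1 from above (the sleep is a stand-in for a slow download): sync returns as soon as the work item's body returns, not when the inner async work finishes.

let item = DispatchWorkItem {
    DispatchQueue.global().async {
        Thread.sleep(forTimeInterval: 2)
        print("inner async done")    // prints about two seconds later
    }
    print("item body done")          // prints immediately
}
queue1.sync(execute: item)
print("sync returned")               // prints right after "item body done"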
This is what group1 is supposed to be used for (you don't seem to use it for anything). Somewhere down inside that "async stuff happening" you're supposed to tell the system that it finished, by leaving the group. (I have no idea what group2 is for. It's never used either.)
item.notify(queue: queue1, execute: {
    print("did this finish?")
})
item already finished. We know it has to have finished already, because it was run with sync, and that doesn't return until its item has. So this block will be immediately scheduled on queue1.
queue2.sync {
    print("queue2")
}
Completely unrelated to any of the above, we schedule a block on queue2; it could run before or after the "did this finish" code.
What you probably meant was:
let queue1 = DispatchQueue(label: "com.matt.myqueue1")
let group1 = DispatchGroup()

group1.enter()

// Kick off async stuff.
// These usually return quickly, so there's no need for your own queue.
// At some point, when you want to say this is "done", often in some
// completion handler, you call group1.leave(), for example:
... completionHandler: { group1.leave() }

// When all that finishes, print:
group1.notify(queue: queue1) { print("did this finish?") }
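Filled in end to end, a minimal runnable sketch of that pattern, using URLSession as a stand-in for the image download (imageURL is a hypothetical URL):

import Foundation

let queue1 = DispatchQueue(label: "com.matt.myqueue1")
let group1 = DispatchGroup()
let imageURL = URL(string: "https://example.com/image.png")!   // hypothetical

group1.enter()
URLSession.shared.dataTask(with: imageURL) { data, _, _ in
    print(data != nil ? "success" : "download failed")
    group1.leave()   // the async work is now actually finished
}.resume()

group1.notify(queue: queue1) { print("did this finish?") }   // runs after leave()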
EVERYTHING is initially queued from the main queue; however, at some point you switch from the main queue to a background queue, and you should NOT expect a synchronous queue to wait for what it has enqueued on another queue. They are unrelated. If that were the case, then always, no matter what, everything would have to wait for whatever it asked to run.*
So here's what I see happening:
queue1 happily finishes; it has done everything it was supposed to do by kicking off the async work on another queue <-- that's all it was supposed to do. Since the 'async stuff' is async... it won't wait for it to finish. <-- in fact, if you set breakpoints inside the async block, you would see execution jump straight to the closing }, meaning it doesn't wait for the background queue; it just jumps to the end, since the work is no longer on the main thread.
Then, since it was dispatched with sync, the caller waits until the item is done. Once done, it goes through its notify... and here's where it gets tricky: depending on how fast the async work runs, either 'success' gets printed first or "queue2" does; here queue2 is obviously returning/finishing sooner.
similar to what you see in this answer.
*: A mother (the main queue) tells child1 "go to your room and bring book1 back", then tells child2 "go to your room and bring book2 back", then tells child3 "go to your room and bring book3 back". Each child runs on its own queue (not the mother's queue).
The mother doesn't wait for child1 to come back before she can tell child2 to go. The mother only says child1 go... then child2 go... then child3 go.
However, child2 is not told to go before child1, nor child3 before child2 or child1 <-- this is because of the serial-ness of the main queue. They get dispatched serially, but their completion order depends on how fast each child/queue finishes.
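The analogy translates almost directly into code. A small sketch (the queue label, delays, and prints are made up to show the ordering; Double.random needs Swift 4.2):

let mother = DispatchQueue(label: "com.example.mother")   // serial, like the main queue

mother.async {
    for book in 1...3 {
        DispatchQueue.global().async {
            // Each "child" runs on the concurrent global queue.
            Thread.sleep(forTimeInterval: Double.random(in: 0.1...0.5))
            print("child \(book) came back with book \(book)")   // order not guaranteed
        }
        print("told child \(book) to go")   // always prints in order 1, 2, 3
    }
}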

Unity3d Parse FindAsync method freezes UI

I'm running a simple Parse FindAsync method, as shown below (in Unity3d):
Task queryTask = query.FindAsync();
Debug.Log("Start");
Thread.Sleep(5000);
Debug.Log("Middle");
while (!queryTask.IsCompleted) {
    Debug.Log("Waiting");
    Thread.Sleep(1);
}
Debug.Log("Finished");
I'm running this method on a separate thread, and I put a loading spinner on the UI. The spinner freezes (for about 1 second) somewhere in the middle of the Thread.Sleep call. It looks like when FindAsync finishes processing, it freezes the UI until it completes its job. Is there anything I can do?
PS: This works perfectly in the editor; the problem is on Android devices.
PS2: I'm running Parse 1.4.1.
PS3: I already tried the ContinueWith method, but the same problem happens.
IEnumerator RunSomeLongLastingTask () {
    Task queryTask = query.FindAsync();
    Debug.Log("Start");
    //Thread.Sleep(5000); // Replace with the call below
    yield return new WaitForSeconds(5); // Try this
    Debug.Log("Middle");
    while (!queryTask.IsCompleted) {
        Debug.Log("Waiting");
        //Thread.Sleep(1);
        yield return new WaitForSeconds(0.001f);
    }
    Debug.Log("Finished");
}
To call this function, use:
StartCoroutine(RunSomeLongLastingTask());
Making the thread sleep might not be a good idea, mainly because the number of threads available is different on each device.
Unity has a built-in scheduler that uses coroutines, so it is better to use that.
IEnumerator RunSomeLongLastingTask()
{
    Task queryTask = query.FindAsync();
    while (!queryTask.IsCompleted)
    {
        Debug.Log("Waiting"); // consider removing this log, because it also impacts performance
        yield return null; // wait until next frame
    }
}
Now, one possible issue: if your task takes too much CPU, the UI will still not be responsive. If possible, try to give this task a lower priority.

Semaphore is not waiting (Swift)

I'm trying to do 3 async requests, controlling the load with a semaphore so I know when all of them have loaded.
I init the semaphore this way:
let sem = dispatch_semaphore_create(2);
Then I send the semaphore-waiting code to the background:
let backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)
dispatch_async(backgroundQueue) { [unowned self] () -> Void in
    println("Waiting for filters load")
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    println("Loaded")
}
Then I signal it 3 times (in each request's onSuccess and onFailure):
dispatch_semaphore_signal(sem)
But by the time the signal code runs, execution has already passed the semaphore wait; it never waits to decrement the semaphore count.
Why?
You've created the semaphore with dispatch_semaphore_create and a parameter of 2 (which is like calling dispatch_semaphore_signal twice), and then you signal it three more times (for a total of five), but you appear to have only one wait (which won't wait at all, because you started your semaphore with a count of 2).
That's obviously not going to work. Even if you fixed that (e.g., created the semaphore with zero and then issued three waits), this whole approach is inadvisable, because you're unnecessarily tying up a thread waiting for the other requests to finish.
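For completeness, a sketch of that fix in the question's pre-Swift-3 style (still inadvisable, for the reason above): create the semaphore with zero and wait once per request.

let sem = dispatch_semaphore_create(0)

let backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)
dispatch_async(backgroundQueue) {
    for _ in 0..<3 {
        // Each wait consumes one signal; this blocks until all three requests signal.
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER)
    }
    println("Loaded")
}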
This is a textbook candidate for dispatch groups. So you would generally use the following:
Create a dispatch_group_t:
dispatch_group_t group = dispatch_group_create();
Then call dispatch_group_enter three times, once before each request.
In each of the three onSuccess/onFailure block pairs, call dispatch_group_leave in both blocks.
Create a dispatch_group_notify block that will be performed when all of the requests are done.
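In modern Swift syntax, those four steps look roughly like this; request1 through request3 and their onSuccess/onFailure callbacks are hypothetical stand-ins for your three requests:

let group = DispatchGroup()                 // step 1: create the group

for request in [request1, request2, request3] {
    group.enter()                           // step 2: enter before each request
    request.start(
        onSuccess: { _ in group.leave() },  // step 3: leave in both blocks
        onFailure: { _ in group.leave() })
}

group.notify(queue: .main) {                // step 4: runs once all three have left
    print("All three requests finished")
}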

Python-Multithreading Time Sensitive Task

from random import randrange
from time import sleep
#import thread
from threading import Thread
from Queue import Queue
'''The idea is that there is a Seeker method that would search a location
for tasks. I have no idea how many tasks there will be; it could be 1,
could be 100. Each task needs to be put into a thread, do its thing, and
finish. I have stripped down a lot of what this is really supposed to do,
just to focus on the correct queuing and threading aspects of the program.
The locking was just me experimenting with locking.'''
class Runner(Thread):
    current_queue_size = 0

    def __init__(self, queue):
        self.queue = queue
        data = queue.get()
        self.ID = data[0]
        self.timer = data[1]
        #self.lock = data[2]
        Runner.current_queue_size += 1
        Thread.__init__(self)

    def run(self):
        #self.lock.acquire()
        print "running {ID}, will run for: {t} seconds.".format(ID = self.ID,
                                                                t = self.timer)
        print "Queue size: {s}".format(s = Runner.current_queue_size)
        sleep(self.timer)
        Runner.current_queue_size -= 1
        print "{ID} done, terminating, ran for {t}".format(ID = self.ID,
                                                           t = self.timer)
        print "Queue size: {s}".format(s = Runner.current_queue_size)
        #self.lock.release()
        sleep(1)
        self.queue.task_done()

def seeker():
    '''Gathers data that would need to enter its own thread.
    For now it just uses a count and random numbers to assign
    both a task ID and a time for each task'''
    queue = Queue()
    queue_item = {}
    count = 1
    #lock = thread.allocate_lock()
    while (count <= 40):
        random_number = randrange(1, 350)
        queue_item[count] = random_number
        print "{count} dict ID {key}: value {val}".format(count = count, key = random_number,
                                                          val = random_number)
        count += 1

    for n in queue_item:
        #queue.put((n, queue_item[n], lock))
        queue.put((n, queue_item[n]))
        '''I assume it is OK to put a tuple in and pull it out later'''
        worker = Runner(queue)
        worker.setDaemon(True)
        worker.start()

    worker.join()
    '''Which one of these is necessary and why? The queue object
    joining or the thread object'''
    #queue.join()

if __name__ == '__main__':
    seeker()
I have put most of my questions in the code itself, but to go over the main points (Python 2.7):
I want to make sure I am not creating some massive memory leak for myself later.
I have noticed that when I run it with a count of 40 in PuTTY or VNC on my Linux box, I don't always get all of the output, but when I use IDLE and Aptana on Windows, I do.
Yes, I understand that the point of Queue is to stagger your threads so you are not flooding your system's memory, but the tasks at hand are time-sensitive, so they need to be processed as soon as they are detected, regardless of how many or how few there are; I have found that with a Queue I can clearly dictate when a task has finished, as opposed to letting the garbage collector guess.
I still don't know why I am able to get away with using either the .join() on the thread or on the queue object.
Tips, tricks, general help.
Thanks for reading.
If I understand you correctly, you need a thread to monitor something to see if there are tasks that need to be done. If a task is found, you want it to run in parallel with the seeker and the other currently running tasks.
If this is the case, then I think you might be going about this wrong. Take a look at how the GIL works in Python; I think what you might really want here is multiprocessing.
Take a look at this from the pydocs:
CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously.