I'm writing a script that, among other things, has to launch a command-line SIP client in a separate cmd/PowerShell window. The client connects/registers to a SIP trunk and then has to stay open in the background in a "listen mode" state.
After that, the initial script has to run other tasks, and at a certain point I have to "reuse" the separate shell where I launched the SIP client in order to place a call to a phone number...
How can I reuse that separate cmd/PowerShell window and run commands in it whenever I need to?
I have been trying to fuzz with both AFL and libFuzzer. One distinct difference I have come across is that when AFL is executed, it runs continuously until it is manually stopped by the developer.
libFuzzer, on the other hand, stops the fuzzing process when a bug is identified. I know that it allows parallel fuzzing through the -jobs=N flag, but those processes still stop when a bug is identified.
Is there any reason behind this behavior?
Also, is there any flag that allows libFuzzer to run continuously until the developer stops the fuzzing process?
This question is old, but I also needed to run libFuzzer without stopping.
This can be accomplished with the flag -fork=<N of jobs> combined with -ignore_crashes=1.
Be aware that Ctrl+C then no longer works: it is treated as a crash and just spawns a new job. But I think this is a bug, see here.
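For reference, a typical invocation might look like the following (the binary name my_fuzzer and the corpus/ directory are placeholders for your own target):

```shell
# -fork=4 runs four parallel child jobs while the parent supervises them.
# -ignore_crashes=1 makes the parent restart jobs after a crash instead
# of exiting, so fuzzing keeps running until you kill the process.
./my_fuzzer -fork=4 -ignore_crashes=1 corpus/
```

There are companion flags in the same family (-ignore_ooms, -ignore_timeouts) if you also want out-of-memory and timeout findings to be logged without stopping the run.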
I read Apple's documentation to understand why I should use a run loop to implement tasks on the main dispatch queue.
According to Apple docs,
The main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. This queue works with the application’s run loop (if one is present) to interleave the execution of queued tasks with the execution of other event sources attached to the run loop. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application.
but I still can't understand 'why' a run loop is needed. It sounds like 'it needs a run loop because it needs a run loop'. I would really appreciate it if someone could explain this to me. Thank you.
why i should use run loop to implement task in main dispatch queue
Normally, you don’t, because you are already using one!
In an application project, there is a main queue run loop already. For example, an iOS app project is actually nothing but one gigantic call to UIApplicationMain, which provides a run loop.
That is how it is able to sit there waiting for the user to do something. The run loop is, uh, running. And looping.
But in, say, a Mac command-line tool, there is no automatic run loop. The program runs its main function and exits immediately. If you need it not to do that, you supply a run loop yourself.
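A minimal sketch of that last point, as a hypothetical macOS command-line tool: without the dispatchMain() call at the end, main would return and the process would exit before either closure ever ran.

```swift
import Foundation

// Schedule some work on a background queue...
DispatchQueue.global().async {
    let result = (1...10).reduce(0, +)
    // ...then hop back to the main queue with the result.
    DispatchQueue.main.async {
        print("sum = \(result)")   // runs on the main thread
        exit(0)                    // end the demo once we're done
    }
}

// Park the main thread and service the main queue forever.
// Comment this out and the program exits immediately.
dispatchMain()
```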
DispatchQueue.main.async is for when you have code running on a background queue and you need a specific block of code to be executed on the main queue.
In your code, viewDidLoad is already running on the main queue, so there is little reason to use DispatchQueue.main.async.
But it isn't necessarily wrong to use it. It does, however, change the order of execution: the async closure is queued up to run after the current run-loop pass completes.
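To illustrate the ordering point, here is a hypothetical view controller; everything runs on the main queue, yet the async closure is still deferred until the current pass finishes:

```swift
import UIKit

class DemoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        print("1")                 // runs first
        DispatchQueue.main.async {
            print("3")             // queued behind the current main-queue work
        }
        print("2")                 // runs before the async closure
    }
}
// Console output: 1, 2, 3
```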
i can't understand 'why' run loop is needed
Generally, a run loop is not needed for command line apps. You can use run loops if you have a special need for one (e.g. you have some dynamic UI that is performing some tasks while you wait for user input), but the vast majority of command line apps don’t require run loops.
As the docs say:
A run loop is an event processing loop that you use to schedule work and coordinate the receipt of incoming events. The purpose of a run loop is to keep your thread busy when there is work to do and put your thread to sleep when there is none.
So, if you need to have your app wait for some incoming events or you’re dispatching tasks asynchronously between queues, then, fine, employ run loops, but otherwise, don’t bother. Most command line apps don’t need to use run loops at all.
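For the rare command-line case where you do need one, a sketch (the three-tick cutoff is just to make the demo terminate): a repeating Timer is exactly the kind of incoming event source the docs mention, and it only fires if some run loop is servicing it.

```swift
import Foundation

var ticks = 0
let timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    ticks += 1
    print("tick \(ticks)")
    if ticks == 3 { exit(0) }   // stop the demo after three events
}

// Without this line the program would exit immediately
// and the timer would never fire.
RunLoop.main.run()
```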
I have a set of scripts that run and spit out various bits of output. Sometimes they'll just stop until I hit enter. I have nothing in my script that prompts for information from the user.
At first I thought maybe it just wasn't flushing the output, but I've sat and waited to see what would happen, and it doesn't act as if it had been processing in the background without flushing to the console (it would be further along by then).
The strange thing is that it happens at different points in the script.
Does anyone have any input on this? Anything I can look at specifically to identify this? This script will eventually be kicked off by another process and I can't have it randomly waiting and sitting.
I have a Perl program that takes a long time to run. The user may exit it occasionally, and I want to implement a mechanism to let the program recover from where it exited.
My idea is to use the Storable/Dumper modules to save the state of the program before it exits and restore that state after it resumes.
But how can I move the program back to where it exited? Can I just set a recovery point where it exited and jump to that recovery point directly after it resumes?
You can use the concept of transactions and design the program around that, but having the user kill a process as an expected way of interacting with it doesn't sound like a good idea.
Maybe giving the user better feedback about the program's state would solve this issue instead of dealing with hacky behaviour.
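That said, if you do want the checkpoint approach from the question, a minimal sketch with Storable might look like this. Note that everything here (the state hash, the file name, the work loop) is hypothetical: Perl can't jump back into the middle of arbitrary code, so you have to structure the program so that each unit of work is resumable from saved state, e.g. a loop index.

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

my $checkpoint = 'state.sto';   # hypothetical checkpoint file

# Resume from the last checkpoint if one exists, else start fresh.
my $state = -e $checkpoint ? retrieve($checkpoint) : { next_item => 0 };

for my $i ($state->{next_item} .. 99) {
    process_item($i);            # hypothetical unit of work
    $state->{next_item} = $i + 1;
    store($state, $checkpoint);  # good enough for a sketch; write to a
                                 # temp file and rename for crash safety
}

unlink $checkpoint;              # finished cleanly, nothing to resume

sub process_item { my ($i) = @_; }   # stub
```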