How to make libFuzzer run without stopping, similar to AFL?

I have been trying to fuzz using both AFL and libFuzzer. One distinct difference I have come across is that AFL, once started, runs continuously until it is manually stopped by the developer.
libFuzzer, on the other hand, stops the fuzzing process when a bug is identified. I know that it allows parallel fuzzing through the -jobs=N flag, but those processes still stop once a bug is identified.
Is there any reason behind this behavior?
Also, is there any flag that allows libFuzzer to run continuously until the developer stops the fuzzing process?

This question is old, but I also needed to run libFuzzer without stopping.
It can be accomplished with the flag -fork=<N of jobs> combined with -ignore_crashes=1.
Be aware that Ctrl+C then no longer works: it is treated as a crash and simply spawns a new job. I think this is a bug, see here.
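For example, assuming a target built with clang's -fsanitize=fuzzer (the binary and corpus directory names below are placeholders), an invocation along these lines keeps fuzzing after crashes are found:

$ ./my_fuzzer -fork=4 -ignore_crashes=1 ./corpus

Crashing inputs are still written out as crash-* files, so nothing is lost; the fork-mode supervisor simply relaunches worker jobs instead of exiting.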

How to interact with the OpenModelica embedded OPC UA server

I have built and started an embedded OpenModelica OPC UA server with the BouncingBall model like so:
$ omc +s path/to/model
$ ./BouncingBall -embeddedServer=opc-ua -rt=1
Now I'm trying to interact with it using an OPC UA client. However, I don't understand how I'm supposed to interact with the server properly. As far as I know, this is undocumented.
1. The most promising approach seems to be to set enableStopTime to false and run to true. The simulation then seems to run indefinitely, and the values seem to make sense. However, it seems I'm only able to extract the values in real time. While the simulation is running, if I set run back to false, the server appears to enter an erroneous state and refuses to return any values.
2. If I restart the executable and instead set step to true, nothing seems to change, and after setting step to true a second time the server becomes unresponsive. The -rt=1 option doesn't seem to matter; it appears to enter the same state as in (1).
3. (After a restart) If I leave enableStopTime as true and set run to true, the simulation runs to the stop time and then the server quits with the message "The simulation finished successfully". Maybe this is intended, but it seems odd; it would make sense to be able to restart the simulation or trigger it with new options.
What I would hope to be able to do: start and stop the simulation, as well as rewind to a certain point to check the value at that point. It seems to me that the API affords this functionality, and it could probably be provided by wrapping the executable and API in a somewhat hacky way. Are the behaviors above bugs, or intended? What is the intended way to interact with an OPC UA server in these cases?
The OpenModelica compiler version is 1.16.0~1-g84b4a71
Please try the latest nightly build.
It includes the following commit.
That might solve it. I believe things worked without subscriptions before, since I could never reproduce this without them.
(By the way, do people go through our git commit feed and try to reproduce bugs fixed in the last 24 hours? We quite often get questions about issues that were only just fixed.)

How do I continue where I left off in Spring Batch?

So I wrote an ItemReader. When this app is run from the command line again, I want to continue reading from where I left off. How do I do that?
I've added spring-task. It seems to track certain things. Does that help with this?
Everything I have read online seems to be about restarting a job after a failure, which I don't think I need. I've put all of my state into the ExecutionContext. Should I use the JobRepository and start looking for the last successful execution?
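In case it helps, the piece that makes "continue where I left off" work inside Spring Batch itself is the ItemStream contract: the framework passes the step's ExecutionContext to open() when the step starts, and calls update() before each chunk commit so the position is persisted in the job repository. A minimal sketch, assuming a reader that tracks a numeric offset (the class name, context key, and data source are illustrative, not from the question):

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ItemStreamReader;

public class ResumableReader implements ItemStreamReader<String> {

    private static final String POSITION_KEY = "reader.position"; // illustrative key
    private long position = 0;

    @Override
    public void open(ExecutionContext ctx) throws ItemStreamException {
        // On a restart, the previously saved offset is already in the context.
        if (ctx.containsKey(POSITION_KEY)) {
            position = ctx.getLong(POSITION_KEY);
        }
    }

    @Override
    public void update(ExecutionContext ctx) throws ItemStreamException {
        // Called before each chunk commit; the framework persists this context.
        ctx.putLong(POSITION_KEY, position);
    }

    @Override
    public void close() throws ItemStreamException {
    }

    @Override
    public String read() {
        // Read the item at 'position' from the underlying source (omitted)
        // and advance; return null when the input is exhausted.
        position++;
        return null; // placeholder for the real item lookup
    }
}

One caveat: Spring Batch only re-injects a saved ExecutionContext when restarting a failed or stopped execution of the same job instance. To continue after a run that completed successfully, you would indeed query the JobRepository (or, more conveniently, a JobExplorer) for the last execution, read your keys out of its context, and seed the new run from them.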

Protractor - Why should I implement waits or sleeps in a test script?

I have read that "Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting".
But I had to implement waits or sleeps in my test scripts to make them all pass.
Can anyone help me understand this waiting?
Read at: http://www.protractortest.org/#/
Automatic Waiting:
You no longer need to add waits and sleeps to your test. Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting for your test and webpage to sync.
Right, I find this description as confusing as you do. I think it describes an ideal world with no network delays or timeouts, and no animations or layout issues.
The description originates from the following:
Protractor runs an extra command before performing any action on the
browser to ensure that the application being tested has stabilized.
This extra command is an async script which asks Angular to respond when the application is done with all timeouts and asynchronous requests, and ready for the test to resume.
Now, what does that "application is ready" statement mean? It basically means that there are no pending requests, promises, or "macro tasks" inside the running Angular application (source for Angular testability).
From what I understand, this covers most of the timing and waiting issues, but if there is pending JS code running outside of Angular, or if there are pending animations or other UI-related changes, this may still affect your test stability - for instance, an element might not be visible or clickable yet, an input may not be enabled yet, etc.
And this alone does not make the feedback from end-to-end tests stable and helpful - for example, in our project we often find ourselves adding browser.wait() calls here and there to tackle occasionally failing tests (see the sketch after the link below). Here is also a set of things that helped us tackle this flakiness:
Protractor flakiness
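To illustrate the kind of explicit wait mentioned above (the selector and timeout are made up for the example), browser.wait() is usually combined with Protractor's ExpectedConditions:

// Wait up to 5 seconds for the button to become clickable before using it.
var EC = protractor.ExpectedConditions;
var saveButton = element(by.css('.save-button')); // illustrative selector

browser.wait(EC.elementToBeClickable(saveButton), 5000,
             'Save button never became clickable');
saveButton.click();

The third argument is an optional message that makes the timeout failure readable in the test output.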

exe stops execution after a couple of hours

I have an exe which collects some information and, once the information is collected, saves it on the local machine. I have arranged the loop so that it performs the same task indefinitely.
But the exe stops executing after a couple of hours (approx. 5-6 hours); it neither crashes nor throws an exception.
I tried to find the reason in WinDbg, but I haven't found any exception in it.
Could anyone help me detect the problem?
Should I go for a Sysinternals tool or something else? Which debugger tool should I use?
A few things jump to mind that could be killing your program:
Out of memory condition
Stack overflow
Integer wrap in loop counter
Programs that run forever are notoriously difficult to write correctly, because your memory management must be perfect. Without more information, though, it's impossible to answer this question.
If the executable is not yours and is native C/C++ code, you may want to capture first-chance exception dumps or monitor the exe using Windows debugging tools (such as DebugDiag or ADPlus).
Alternatively, if you have access to the developer of the executable, they may add more tracing to the exe (ETW or otherwise) to understand the possible failure points in the code.
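For reference, ADPlus (part of the Debugging Tools for Windows) can be attached in crash mode so that dumps are written if the process dies; the process name and output directory below are placeholders:

adplus -crash -pn myapp.exe -o C:\dumps

In -crash mode ADPlus stays attached and produces dump files on first- and second-chance exceptions, which you can then open in WinDbg.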

How can I write a dtrace script to dump the stack of a crashing process on Solaris 10?

I have a process, running on Solaris 10, that is terminating due to a SIGSEGV. For various uninteresting reasons it is not possible for me to get a backtrace by the usual means (gdb, backtrace call, corefile are all out). But I think dtrace might be useable.
If so, I'd like to write a dtrace script that will print the thread stacks of the process when the process is killed. I'm not very familiar with dtrace, but this seems like it must be pretty easy for someone who knows it. I'd like to be able to run this in such a way as to monitor a particular process. Any thoughts?
In case anyone else stumbles across this, I'm making some progress experimenting on OS X with the following script I've cobbled together:
#!/usr/sbin/dtrace -s

/*
 * Print the user-land stack each time the target process takes a
 * fault (e.g. the page fault behind a SIGSEGV).
 * $1 is the pid passed as the first argument to the script.
 */
proc:::fault
/pid == $1/
{
	ustack();
}
I'll update this with a complete solution when I have one.
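For anyone trying it in the meantime, I invoke it along these lines (the file name is just what I saved the script as; DTrace requires root privileges):

$ sudo ./fault_stack.d <pid>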
A couple of Solaris engineers wrote a script for using DTrace to capture crash data and published an article on using it, which can now be found at Oracle Technology Network: Enabling User-Controlled Collection of Application Crash Data With DTrace.
One of the authors also published a number of updates to his blog, which can still be read at https://blogs.oracle.com/gregns/, but since he passed away in 2007, there haven't been any further updates.