I have a set of scripts that run and spit out various bits of output. Sometimes they'll just stop until I hit Enter, even though nothing in my script prompts the user for input.
At first I thought it just wasn't flushing the output, but I've sat and waited to see what would happen, and it doesn't act as if it had been processing in the background and simply not flushing to the console (it would be further along by then).
The strange thing is that it happens at different points in the script.
Does anyone have any input on this? Is there anything I can look at specifically to identify the cause? This script will eventually be kicked off by another process, and I can't have it randomly sitting and waiting.
Related
I have a Windows PowerShell script that needs to run 24/7.
Occasionally Windows requires an automated update and reboot. Is there any way I can detect when this is about to happen? I'd like to ensure the script does an orderly shutdown rather than being abruptly terminated.
I'm not looking for a week's notice of a reboot, but five minutes' warning would be long enough to ensure the script closes various database tables and does some basic housekeeping, rather than simply falling over in an ugly heap.
UPDATE - I know there are a lot of web articles describing how to detect when an update and/or reboot is pending, but none (that I can find) actually pin it down to a time. Some updates/reboots remain pending for hours or days. I'm looking for a flag or notification that 'this server will reboot within the next ten minutes' or something similar.
I have been trying to fuzz using both AFL and libFuzzer. One of the distinct differences I have come across is that AFL, once started, runs continuously unless it is manually stopped by the developer.
libFuzzer, on the other hand, stops the fuzzing process when a bug is identified. I know that it allows parallel fuzzing through the -jobs=N flag; however, those processes still stop when a bug is identified.
Is there any reason behind this behavior?
Also, is there any option that allows libFuzzer to run continuously until the developer stops the fuzzing process?
This question is old, but I also needed to run libFuzzer without it stopping.
It can be accomplished with the flags -fork=<N of jobs> combined with -ignore_crashes=1.
Be aware that Ctrl+C no longer works then: it is treated as a crash and just spawns a new job. I think this is a bug, though, see here.
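For illustration, here is a minimal sketch of the whole setup. The entry point is the standard libFuzzer one; the file, binary, and corpus names are placeholders:

```cpp
// fuzz_target.cc - minimal libFuzzer target (file/binary names are placeholders).
// Build (assuming a clang with libFuzzer support):
//   clang++ -g -O1 -fsanitize=fuzzer,address fuzz_target.cc -o fuzz_target
#include <cstddef>
#include <cstdint>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  // Hand the input to the code under test here. This deliberate "bug"
  // just gives the fuzzer something to find.
  if (size >= 4 && data[0] == 'F' && data[1] == 'U' &&
      data[2] == 'Z' && data[3] == 'Z') {
    __builtin_trap();
  }
  return 0;  // non-zero return values are reserved

  // Run in fork mode so crashes are logged while fuzzing continues:
  //   ./fuzz_target -fork=4 -ignore_crashes=1 corpus/
}
```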
I want to open two command line prompts (I am using CMDer) from the same directory and run different commands at the same time.
Would those two commands interrupt each other?
One is for compiling a web application I am building (takes like 7 minutes to compile), and the other is to see the history of the commands I ran (this one should be done quickly).
Thank you!
Assuming that CMDer does nothing more than issue the same commands to the operating system as a standard cmd.exe console would, the answer is a clear "Yes, they do interfere, but it depends" :D
Breakdown:
The first part, "opening multiple consoles", is certainly possible. You can open up N console windows and in each of them switch to the same directory without any problems (except maybe RAM restrictions).
The second part, "run commands which do or do not interfere", is the tricky part. If your idea is that a console window presents something like an isolated environment, where you can do as you like and, once you close the window, everything is back to normal as if you had never touched anything (think of a virtual machine snapshot which is lost/reverted when closing the VM), then the answer is: this is not the case. Cross-console effects will be observable.
Think about deleting a file in one console window and then opening that file in a second console window: it would not be very intuitive if the file had not vanished in the second console window as well.
However, there are sometimes delays until changes to the file system become visible to another console window. It could be that you delete the file in one console, run dir on its directory in another console, and still see the file in the listing. But if you try to access it, the operating system will certainly quit with an error message of the kind "File not found".
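A quick sketch of what a program run from the second console would observe (Win32 calls; the path is made up for illustration):

```cpp
// Probe run from the second console after the first console deleted the file.
#include <windows.h>
#include <cstdio>

int main() {
  HANDLE h = CreateFileA("C:\\temp\\deleted_elsewhere.txt", GENERIC_READ,
                         FILE_SHARE_READ, nullptr,
                         OPEN_EXISTING,  // fail if the file no longer exists
                         FILE_ATTRIBUTE_NORMAL, nullptr);
  if (h == INVALID_HANDLE_VALUE) {
    // ERROR_FILE_NOT_FOUND (2): the delete done in the other console
    // is visible here, no matter what a stale dir listing showed.
    printf("open failed, error %lu\n", GetLastError());
    return 1;
  }
  CloseHandle(h);
  return 0;
}
```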
Generally you should consider a console window to be a "View" on your system. If you do something in one window, the effect will be present in the others, because you changed the underlying system, which exists only once (the system is the "Model", as in the "Model-View-Controller" design pattern you may have heard of).
One exception to this is environment variables: they are copied from the current state when a console window is started, and if you change the value of such a variable, the other console windows stay unaffected.
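A small sketch of that per-process behavior (Win32 API; the variable name is made up):

```cpp
// Environment changes only touch this process's copy of the environment,
// which was inherited when the process started. MY_VAR is a made-up name.
#include <windows.h>
#include <cstdio>

int main() {
  SetEnvironmentVariableA("MY_VAR", "only visible in this process");

  char buf[256];
  DWORD len = GetEnvironmentVariableA("MY_VAR", buf, sizeof(buf));
  if (len > 0 && len < sizeof(buf)) {
    printf("MY_VAR=%s\n", buf);  // visible here...
  }
  // ...but any console window that is already open keeps its own copy,
  // taken at its startup, and never sees this change.
  return 0;
}
```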
So, in your scenario: if the build/compile operation creates, reads (locks), alters, or deletes files while it runs, and the other console window tries to access the same files, you have a possible conflict. This would be a so-called "race condition", i.e. a non-deterministic situation: which state of a file the second console window sees is undefined (for both windows, if the second one also changes files the first one wants to work with).
If there is no interference at the file level (reading the same file is allowed; writing to the same file is not), then there should be no problem letting both tasks run at the same time.
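The file-level rule can be made concrete with Win32 sharing modes (a sketch; the log path is hypothetical). Readers can coexist; a writer wanting exclusive access collides with them:

```cpp
// Sketch of "shared reading is fine, exclusive writing blocks others".
#include <windows.h>
#include <cstdio>

int main() {
  // First open: a reader that allows other readers but no writers.
  HANDLE reader = CreateFileA("C:\\build\\output.log", GENERIC_READ,
                              FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL, nullptr);
  if (reader == INVALID_HANDLE_VALUE) return 1;

  // Second open: asks for write access. Sharing checks apply per open
  // handle, so this fails with ERROR_SHARING_VIOLATION (32) - the same
  // error a second console hits while the first console's build holds
  // the file open.
  HANDLE writer = CreateFileA("C:\\build\\output.log", GENERIC_WRITE, 0,
                              nullptr, OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL, nullptr);
  if (writer == INVALID_HANDLE_VALUE)
    printf("write open failed, error %lu\n", GetLastError());
  else
    CloseHandle(writer);

  CloseHandle(reader);
  return 0;
}
```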
However, on a very detailed view, both processes do interfere in that they compete for the same limited, though plentiful, CPU and RAM resources of your system. This should not pose any problems given today's PC computing power: multiple separate cores, 16 GB of RAM, terabytes of hard drive storage or fast SSDs, and so on.
Unless, for example, there is a very demanding, highly parallelizable, high-priority task involved which eats up 98% of the CPU time; then there might be a considerable slowdown of other processes.
Normally, though, the operating system's scheduler does a good job of giving each user process enough CPU time to finish as quickly as possible, while still presenting a responsive mouse cursor, playing some music in the background, allowing a Chrome to run with more than 2 tabs ;) and uploading the newest telemetry data to some servers on the internet, all at the same time.
There are also techniques which make a file available as distinct snapshots for a given timestamp. The keyword under Windows is "Shadow Copy". Without going into details, this technique allows, for example, defragmenting a file while it is being edited in some application, or lets a backup copy a (large) file while a delete operation runs on the same file. The operating system takes the access time into account when a process requests access to a file. So the OS could let the backup finish first and only then schedule the delete operation, since the delete was started after the backup (in this example), or it could do even more sophisticated things to present a consistent file system state, even though that state is actually changing at the moment.
I have built a command line app that will listen for an argument and kick off a never-ending process. What I can't figure out is how to keep that process running until I send a different argument back to the command line app to end it. I can accomplish this easily in the main thread, but I want to get the command prompt back so that I can issue an end command to stop the process.
My first thought was threading, but I can't seem to get that to work. What solution would you use in this case?
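One common pattern for this on Windows (a sketch, not necessarily the asker's stack; the event name is made up) is to have the long-running instance wait on a named kernel event and let a second invocation of the same exe signal it:

```cpp
// "app" runs the loop; "app stop" from another prompt ends it.
// The event name is a made-up example.
#include <windows.h>
#include <cstdio>
#include <cstring>

int main(int argc, char **argv) {
  const char *kStopEvent = "Local\\MyAppStopEvent";  // hypothetical name

  if (argc > 1 && strcmp(argv[1], "stop") == 0) {
    // Second invocation: signal the running instance and exit.
    HANDLE evt = OpenEventA(EVENT_MODIFY_STATE, FALSE, kStopEvent);
    if (evt) {
      SetEvent(evt);
      CloseHandle(evt);
    }
    return 0;
  }

  // First invocation: create the event, then loop until it is signaled.
  HANDLE stop = CreateEventA(nullptr, TRUE, FALSE, kStopEvent);
  printf("running; use '%s stop' from another prompt to end\n", argv[0]);
  while (WaitForSingleObject(stop, 1000) == WAIT_TIMEOUT) {
    // ... one slice of the never-ending work goes here ...
  }
  CloseHandle(stop);
  return 0;
}
```

The stop command returns immediately, and the worker needs no extra threading to watch for it.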
I have an exe which collects some information and, once the information is collected, saves it on the local machine. I have arranged the loop so that it repeats the same task indefinitely.
But the exe stops executing after a couple of hours (approx. 5-6 hours); it neither crashes nor throws an exception.
I tried to find the reason in WinDbg, but I didn't get any exception there.
Could anyone help me track down the problem?
Should I go for a Sysinternals tool or something else? Which debugging tool should I use?
A few things jump to mind that could be killing your program:
Out of memory condition
Stack overflow
Integer wrap in loop counter
Programs that run forever are notoriously difficult to write correctly, because your memory management must be perfect. Without more information, though, it's impossible to answer this question.
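The third item on that list is an easy one to hit in code meant to run forever; a minimal sketch of how a wrapped counter can silently end an "infinite" loop:

```cpp
// How integer wrap can quietly end a "run forever" loop: no crash,
// no exception, the process just exits.
#include <cstdint>
#include <cstdio>

int main() {
  uint16_t iteration = 0;  // too small for an unbounded count
  uint16_t last = 0;
  for (;;) {
    ++iteration;  // wraps from 65535 back to 0
    if (iteration < last) {
      // A sanity check meant to catch corruption now fires after ~65k
      // iterations, and the program ends cleanly.
      printf("counter went backwards, bailing out\n");
      return 1;
    }
    last = iteration;
    // ... do the periodic work here ...
  }
}
```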
If the executable is not yours and is native C/C++ code, you may want to capture first-chance exception dumps or monitor the exe using Windows debugging tools (such as DebugDiag or ADPlus).
Alternatively, if you have access to the developer of the executable, they may add more tracing to the exe (ETW or otherwise) to understand the possible failure points in the code.
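If instrumentation is an option, a minimal sketch of ETW tracing with the TraceLogging API could look like this (the provider name and GUID below are invented; a real provider needs its own generated GUID):

```cpp
// Minimal ETW TraceLogging sketch; provider name and GUID are invented.
#include <windows.h>
#include <TraceLoggingProvider.h>

TRACELOGGING_DEFINE_PROVIDER(
    g_provider,
    "Example.CollectorExe",              // hypothetical provider name
    (0xa1b2c3d4, 0x1111, 0x4abc,         // hypothetical GUID
     0x9d, 0xef, 0x01, 0x23, 0x45, 0x67, 0x89, 0xab));

int main() {
  TraceLoggingRegister(g_provider);
  for (int i = 0; ; ++i) {
    TraceLoggingWrite(g_provider, "IterationStart",
                      TraceLoggingValue(i, "iteration"));
    // ... collect and save the information here ...
    TraceLoggingWrite(g_provider, "IterationEnd",
                      TraceLoggingValue(i, "iteration"));
  }
  // Unreachable in this sketch; a real exe would unregister on shutdown:
  // TraceLoggingUnregister(g_provider);
}
```

An ETW trace captured around the time the exe stops would then show the last iteration that started and whether it finished.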