Display time remaining in zenity progress bar

Is it possible for the zenity progress bar to display the time remaining or the transfer rate (MB/s) for the progress? For example, using
dd if=/dev/zero of=/dev/null status=progress
The command above will result in:
Log
So, if possible, I want the progress bar to show all the information from the log. If that's not possible, how can I make it so that when the cloning process runs, it shows both the status=progress log and the zenity progress bar at the same time?

I don't actually know a proper answer to your question, but I have solved a similar problem, so I'll post it here; maybe someone will find it useful.
To show the progress of archiving files, I used this command:
(pv -n $root_path/save/$backup_save_src_file |pigz -c > $backup_path/save/${backup_save_src_file%%.*}$backup_date.gz) 2>&1 | zenity --progress --percentage=0 --title="Backupping" --text="Cloning file into archive..." --auto-close
pv -n $root_path/save/$backup_save_src_file - reads the file and writes a raw progress value, one per line, to standard error (see man pv for more options)
pigz -c > $backup_path/save/${backup_save_src_file%%.*}$backup_date.gz - the piped contents are fed into the archiver for compression
(...) 2>&1 | zenity ... - redirects STDERR to STDOUT and finally pipes everything to zenity
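To the original question: zenity's progress dialog reads its stdin, treating bare numbers as the percentage and lines starting with # as updates to the text label, so dd's status=progress output can drive both at once. A minimal sketch, assuming GNU dd, coreutils stdbuf, and util-linux blockdev; the device names and block size are placeholders:
#!/bin/bash
#Sketch only: SRC/DST are hypothetical devices, adjust before use
SRC=/dev/sdX
DST=/dev/sdY
SIZE=$(blockdev --getsize64 "$SRC") #total bytes, needed for the percentage
dd if="$SRC" of="$DST" bs=4M status=progress 2>&1 |
stdbuf -oL tr '\r' '\n' | #dd ends its progress lines with \r
awk -v total="$SIZE" '
/bytes/ {
printf "%d\n", ($1 / total) * 100 #bare number sets the zenity percentage
print "#" $0 #leading "#" updates the zenity label (rate text)
fflush()
}' |
zenity --progress --title="Cloning" --text="Starting..." --auto-close
dd itself does not print an estimated time remaining, so the label only shows bytes copied and MB/s; if you need an ETA, pv with a known size (pv -s) is the easier route, since its default stderr display already includes one.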

Related

How to prevent the output from being truncated if a WinDbg command produces too many rows?

If a WinDbg command produces too many rows of output, say 100k rows, WinDbg ends up displaying only a few thousand of them and truncates the rest. So my question is: how do I prevent the output from being truncated, or write all of the rows to a local file so that none are lost? The "Write Window Text to File" option doesn't help.
Not sure if it would help, but the .logopen and .logclose commands might be useful in this case (they respectively open and close a log file that keeps a copy of the events and commands from the Debugger Command window).
See also Keeping a Log File in WinDbg.
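For illustration, a minimal session might look like this (the log path and the traced command kb are just placeholders):
0:000> .logopen c:\temp\session.log
0:000> kb
0:000> .logclose
Everything printed in the Debugger Command window between the two commands is copied to c:\temp\session.log.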
Sometimes simply piping works, especially when running cdb and quitting after executing just one command:
cdb -c "tc 100;q" calc >> foo.txt
You should have 100 calls; let's check:
grep -c !.*: foo.txt
256
Let's check how many sysenter instructions were executed and what the indexes of the syscalls were:
grep sysenter -B 4 foo.txt | grep eax | awk "{print $1}"
eax=000000ea
eax=0000014d
eax=000000fb
This way we can use the output even when the commands run for an indefinite amount of time, without running into file-locking issues, if .logopen/.logclose isn't an option.
Try opening an additional command window with Ctrl+N and executing the long-output command in it.

Does wget -w option not work with -p?

When I run wget64.exe -p -w 10 http://www.example.com on the Windows command line for a site with many images, I expect, based on the documentation for -w, that it will space out all its image downloads by 10 seconds each. But it does the whole thing with no waits. Is this because -w isn't meant to work with -p? Does grabbing the images linked in a page somehow "not count" as making an additional request to the server? Or am I using incorrect syntax?
wget64.exe -r -l 1 --wait=10 http://www.example.com should do what you want. It fetches the page requisites as separate retrievals and applies the wait time between them, instead of combining them all into a single page request.

Script response if md5sum returns FAILED

Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check md5 CRC's of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit" || continue #skip this location if the directory is missing
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start to try and recognize the output and perhaps trigger some sort of response by the system if there is a 'FAILED'?
I've worked a bit with PERL trying to parse log files before but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about it, but I'd want to put this script into a cron job that runs every minute. Some people have told me that an inotify job or script (I'm not familiar with this) would be better than doing it this way.
Any suggestions?
--- edit
I made another script to call the script above and send its output to a file. The new script then runs grep -q for 'FAILED' and, if it picks anything up, sounds the alarm (TBD what the alarm will be).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
# Do something
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit" || continue
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
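As for the inotify suggestion in the question: instead of re-hashing every minute from cron, you can react the moment a honeypot file is touched. A rough sketch, assuming the inotify-tools package (inotifywait) is installed and using the paths from the question:
#!/bin/bash
#Watch every honeypot directory and alarm on any change
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application" "/share/ConfData/Graphics")
#"${locations[@]/%//aaaCryptoAudit}" appends /aaaCryptoAudit to each entry
inotifywait -m -e modify -e delete -e move "${locations[@]/%//aaaCryptoAudit}" |
while read -r dir events file
do
#Note: file names containing spaces would need more careful parsing
echo "LIGHT THE SIGNAL FIRES: $events on $dir$file"
done
This only fires on changes to the honeypot files themselves, so the periodic md5sum pass is still useful as a catch-all.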

SLURM display the stdout and stderr of an unfinished job

I used to use a server with LSF but now I just transitioned to one with SLURM.
What is the equivalent command of bpeek (for LSF) in SLURM?
bpeek
bpeek Displays the stdout and stderr output of an unfinished job
I couldn't find the documentation anywhere. If you have some good references for SLURM, please let me know as well. Thanks!
You might also want to have a look at the sattach command.
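Usage is roughly as follows (the job and step ids are placeholders; steps are the ones created by srun inside the job):
sattach 12345.0
This attaches your terminal to the standard output/error of the running job step.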
I just learned that in SLURM there is no need for bpeek to check the current standard output and standard error, since they are written at run time to the files specified for stdout and stderr.
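Since those file names are set when the job is submitted, you can simply follow them while the job runs. A small sketch with placeholder file names (%j expands to the job id):
#SBATCH --output=myjob_%j.out
#SBATCH --error=myjob_%j.err
Then, while job 12345 is running:
tail -f myjob_12345.out myjob_12345.err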
Here's a workaround that I use; it mimics the bpeek functionality from LSF.
Create a file bpeek.sh:
#!/bin/bash
# Take the SLURM job id as the first argument
jobid=$1
# Ask scontrol for the job's StdOut path, stripping everything up to "StdOut="
stdout=$(scontrol show job "$jobid" | sed -n 's/^ *StdOut=//p')
# Show the last 10 rows of the file if no second argument is given
nrows=${2:-10}
tail -f -n "$nrows" "$stdout"
Then you can use it:
sh bpeek.sh JOBID NROWS(optional)
Or add an alias to your ~/.bashrc file; note that aliases can't use positional parameters, so the arguments are simply appended when you call it:
alias bpeek="sh ~/bpeek.sh"
and then use it:
bpeek JOBID NROWS(optional)

Why isn't this command taking the diff of two directories?

I've been asked to diff two directories using Perl, but I think something is wrong with my command:
$diff = system("sudo diff -r '/Volumes/$vol1' '/Volumes/$vol2\\ 1/' >> $diff.txt");
It doesn't display any output. Can someone help me with this? Thanks!
It seems that you want to store all differences in a string.
If this is the case, the command in the question is not going to work for a few reasons:
It's hard to tell whether it's intended or not, but the $diff variable is being used to set the filename storing the differences. Perhaps this should be diff.txt, not $diff.txt
The result of the diff command is saved in $diff.txt, so it doesn't display anything on STDOUT. This can be remedied by omitting the >> $diff.txt part. If the output also needs to be stored in a file, consider the tee command:
sudo diff -r dir1/ dir2/ | tee diff.txt
When a system call is assigned to a variable, it will return 0 upon success. To quote the documentation:
The return value is the exit status of the program as returned by the wait call.
This means that $diff won't store the differences, but the command exit status. A more sensible approach would be to use backticks. Doing this will allow $diff to store whatever is output to STDOUT by the command:
my $diff = `sudo diff -r dir1/ dir2/ | tee diff.txt`; # Not $diff.txt
Do you really need to use sudo? Avoid it if at all possible:
my $diff = `diff -r dir1/ dir2/ | tee diff.txt`; # Not $diff.txt
A final recommendation
Let a good CPAN module take care of this task, as backtick calls can only go so far. Some have already been suggested here; it may be well worth a look.
Is sudo diff being prompted for a password?
If possible, take out the sudo from the invocation of diff, and run your script with sudo.
"It doesn't display and output." -- this is becuase you are saving the differences to a file, and then (presumably) not doing anything with that resulting file.
However, I expect "diff two directories using Perl" does not mean "use system() to do it in the shell and then capture the results". Have you considered doing this in the language itself? For example, see Text::Diff. For more nuanced control over what constitutes a "difference", you can simply read in each file and craft your own algorithm to perform the comparisons and compile the similarities and differences.
You might want to check out Test::Differences for a more flexible diff implementation.