I've just moved servers, and for the moment the crontabs are running on both machines.
Both servers are set to BST, but one sends me the log at
08:00 BST and the other (the old one) at 09:00 BST.
The crontab entry for both is
0 9 * * * /root/phpmaillog.sh > /dev/null 2>&1
Mystery?
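One way to check which timezone each cron daemon is actually applying is a temporary entry that just logs the date (the log path is only an example):
* * * * * date >> /root/cron_tz_check.log 2>&1
If the two logs disagree by an hour, the daemon that was started before the timezone change is probably still using the old zone and typically needs a restart to pick up the new one.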
Process Explorer has columns for CPU Time (down to milliseconds) and CPU Cycles. For WinDbg I am aware of the !runaway command (and !runaway 7 for more details), but it shows CPU time only.
Are the CPU cycles also available somehow in a user mode crash dump?
What I have tried:
I looked at dt nt!_KTHREAD and I see it has a CycleTime property
ntdll!_KTHREAD
+0x000 Header : _DISPATCHER_HEADER
+0x018 CycleTime : Uint8B
I tried to query that property in a !for_each_thread, but WinDbg responds that it's available in kernel mode only.
Why do I want those CPU cycles?
I am working on a training course for JetBrains dotTrace. It has an option to count CPU cycles and I'd like to explain where these cycles come from. The kernel structure above plus Process Explorer is probably enough, but it would be awesome to see it live or post mortem in a user mode dump. I explain a lot of the basics with WinDbg.
Following the implementation of GetProcessTimes() in ReactOS, you can see that the information is copied from the process' KPROCESS. So, indeed, it's only physically present in a dump that includes kernel memory.
C:\tw>ls -l
total 0
C:\tw>cdb -c ".dump /ma .\tw.dmp;q" calc.exe | grep writ
Dump successfully written
C:\tw>cdb -c "lm;!peb;.dump /ma .\tw1.dmp;q" calc.exe | grep writ
Dump successfully written
C:\tw>cdb -c ".ttime;q" -z tw.dmp | grep -B 3 quit
Created: Wed Apr 5 20:03:55.919 2017 ()
Kernel: 0 days 0:00:00.046
User: 0 days 0:00:00.000
quit:
C:\tw>cdb -c ".ttime;q" -z tw1.dmp | grep -B 3 quit
Created: Wed Apr 5 20:04:28.682 2017 ()
Kernel: 0 days 0:00:00.031
User: 0 days 0:00:00.000
quit:
C:\tw>
I am using Hyperion Planning. There are a lot of built-in command line utilities. May I know if all of these command line utilities can be scheduled to run using the Windows Task Scheduler?
You can run all of the commands from a batch file, then just call that batch file from Task Scheduler.
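A minimal wrapper might look like this; the directory, utility name and arguments below are placeholders for whichever Hyperion utility you actually want to run:
@echo off
rem run_hyperion_job.bat -- hypothetical wrapper around one of the Hyperion command line utilities
cd /d D:\Oracle\Middleware\user_projects\epmsystem1\Planning\planning1
call SomeHyperionUtility.cmd arg1 arg2 >> D:\logs\hyperion_job.log 2>&1
Appending the output to a log file makes it much easier to see why a scheduled run failed.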
On Linux we schedule things as follows:
[apphyp#ichypp0013 ~]$ crontab -l
### Minutes hours DayOfMonth Month Weekday(0-6 Sunday-Saturday)
###
###
### DAILY ONLINE BACKUP Plan0
00 20 * * 1-5 /app/ncia/scripts/stop.hyp.sh 2>&1
30 20 * * 1-5 /app/ncia/scripts/backup_Planning0.sh 2>&1
00 22 * * 1-5 /app/ncia/scripts/start.hyp.sh 2>&1
### Automatic Transfer
00 09 * * 1-5 /app/ncia/scripts/automatic.transfer.FMC_to_R2C.sh >> /app/ncia/log/automatic.transfer.FMC_to_R2C.log
00 23 * * 0-4 /app/ncia/scripts/automatic.transfer.HRC_to_HR2C.sh >> /app/ncia/log/automatic.transfer.HRC_to_HR2C.log
30 22 * * 0-4 /app/ncia/scripts/automatic.launch.business.rule.FMC_Admin_PrepareAllData_to_R2C.sh >> /app/ncia/log/automatic.launch.business.rule.FMC_Admin_PrepareAllData_to_R2C.log
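On Windows, the equivalent of, say, the 20:00 weekday stop job can be registered from the command line with schtasks (the task name and batch file path are just examples):
schtasks /Create /TN "Hyperion Stop" /TR "D:\scripts\stop_hyp.bat" /SC WEEKLY /D MON,TUE,WED,THU,FRI /ST 20:00
Alternatively, create the same task in the Task Scheduler GUI and point its action at the batch file.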
I just encountered some strange behavior with Perl 5.16.3 on FreeBSD 9.3-RELEASE-p3. We've got a cron job which runs every five minutes and generates some text status files. I just happened to list the contents of the output directory and saw that the timestamps for some of the files were in the future! The files are created like this:
if (open(OUT, "> $status_file_path")) {
print OUT "$status_info\n";
close OUT;
}
Now, the file handle OUT is used in several places; however, it is opened and closed within the same block, as shown above. And like I said, out of ten files, only a few had future dates when displayed using ls.
For example, files with the current date had timestamps like 04/02/2015 20:29:46, files with future timestamps were out in November, e.g. 11/10/2015 09:38:41.
What might be going on here?
EDIT
I've got two tests running:
1) A Perl script that runs a loop of 1000 iterations, sleeping a random time of up to 10 seconds between iterations, uses the open/print/close logic above to create an output file, and aborts if the file's modification time is in the future.
2) A cron entry to touch a test file every minute, e.g. touch /home/test/test_file_date_with_cron.txt
TEST RESULTS
Neither of the tests generated output files with a timestamp in the future.
This is scary.
EDIT 2
Here is the filesystem info; the files are written under the /usr directory.
# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/gpt/gprootfs 2G 133M 1.7G 7% /
devfs 1.0k 1.0k 0B 100% /dev
/dev/gpt/gpusrfs 431G 3.8G 392G 1% /usr
procfs 4.0k 4.0k 0B 100% /proc
EDIT 3
Running the script outside of cron for several hundred iterations didn't duplicate the problem. HOWEVER, I just found some other files, created by a CGI script, which have the future dates:
-rw-r--r-- 1 test test 5783 Nov 10 2015 Config.xml_20150210_104151
-rw-r--r-- 1 test test 34548 Nov 10 2015 Config2.xml_20150210_104151
-rw-r--r-- 1 test test 6105 Nov 10 2015 Config.xml_20151109_232210
-rw-r--r-- 1 test test 34554 Nov 10 2015 Config2.xml_20151109_232210
-rw-rw-r-- 1 root test 2075 Nov 9 2015 Config.xml_20151109_231055
-rw-rw-r-- 1 root test 1232 Nov 9 2015 Config2.xml_20151109_231055
These are archive files, which get moved and renamed with the file's mtime timestamp. Note that BOTH ls and Perl's stat() function report the future date -- stat() is used to generate the file's timestamp portion of the name.
Looking at the first entry, ls reports "Nov 10 2015", whereas when the CGI script processed it, Perl's stat() reported "20150210_104151", i.e. "Feb 10 2015", which is most likely correct.
Further down, ls shows "Nov 10 2015" while stat() reported "20151109_232210", i.e. "Nov 09 2015".
Finding those additional archived config files helped me track down the cause, which was, as others have suggested, that the system date and timezone had changed.
From: 1447147328 and America/Adak
To: 1426637771 and America/New_York
What was throwing me off was that I thought the cron script wrote ALL of the output files each time it executed, but that's not the case. The files have different "refresh intervals".
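For reference, decoding the two epoch values above (here in UTC, using FreeBSD's date -r) shows the clock jumping back roughly eight months, which accounts for the November mtimes; the output should look something like:
# date -u -r 1447147328
Tue Nov 10 09:22:08 UTC 2015
# date -u -r 1426637771
Wed Mar 18 00:16:11 UTC 2015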
In my /etc/crontab file I write:
* * * * * PLACK_ENV=development -I /home/adrian/app/lib/ /home/adrian/app/script/db/log_to_db.pl
to make a cron job run every minute. The job runs the log_to_db.pl Perl script, which inserts data into my database.
When I run this in my terminal:
PLACK_ENV=development -I /home/adrian/app/lib/ /home/adrian/app/script/db/log_to_db.pl
It's OK! The script runs.
But the cron job isn't working!
What can be wrong?
PS: My script starts like this:
#!/usr/bin perl
....
My cron log prints:
Jul 8 20:29:01 dev0001 crond[1829]: (*system*) RELOAD (/etc/crontab)
Jul 8 20:29:01 dev0001 crond[1829]: (CRON) bad username (/etc/crontab)
Jul 8 20:30:01 dev0001 crond[1829]: (*system*) RELOAD (/etc/crontab)
Jul 8 20:30:01 dev0001 crond[1829]: (CRON) bad username (/etc/crontab)
Jul 8 20:30:01 dev0001 CROND[13504]: (root) CMD (/usr/lib64/sa/sa1 -S DISK 1 1)
You need to specify a username when putting the entry in the system crontab:
* * * * * adrian PLACK_ENV=development -I /home/adrian/app/lib/ /home/adrian/app/script/db/log_to_db.pl
But as @jithin said, putting this in your user crontab (crontab -e) might make more sense.
Don't edit the crontab file directly. Instead use crontab -e and add the cron entry.
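For example, a user crontab entry (added with crontab -e as adrian) could invoke the interpreter explicitly, so the script's shebang doesn't matter, and keep a log for debugging; the perl path and log file here are just assumptions:
* * * * * PLACK_ENV=development /usr/bin/perl -I /home/adrian/app/lib/ /home/adrian/app/script/db/log_to_db.pl >> /home/adrian/log_to_db.cron.log 2>&1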
I'm running a JMeter test plan from command line and it's currently outputting something along the lines of:
Created the tree successfully using C:\*****\TestPlan.jmx
Starting the test # Thu Oct 11 10:20:43 EDT 2012 (1349965243947)
Waiting for possible shutdown message on port 4445
Tidying up ... # Thu Oct 11 10:20:46 EDT 2012 (1349965246384)
... end of run
Is there any way to turn off this output and have the plan execute 'silently'?
Found a way to do this by following this article http://www.robvanderwoude.com/battech_redirection.php and appending > NUL to the command:
jmeter -n -t C:\***\TestPlan.jmx -Jhostname=%1 > NUL
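If anything still shows up, it is probably going to stderr; redirecting that stream to NUL as well should make the run completely silent (same placeholder path as above):
jmeter -n -t C:\***\TestPlan.jmx -Jhostname=%1 > NUL 2>&1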