How can I calculate the exact difference between a server's time and GMT time?

I want to send a POST request to a server at a specific time, but I know the server's clock is not accurate, so I want to calculate the inaccuracy of the server clock (in milliseconds) and send my request on time. (If I send the request too early, the server will block me.)
I tried this command on Ubuntu, but it only shows the server time:
curl -I --silent www.server.com | grep "^Date"
If I could calculate the difference between my PC's clock and the server's clock, that would be very helpful.

There are many options, of course. Here’s a suggestion.
Write a shell script or .bat file that runs your curl and grep commands and feeds the result into a program that you write.
Write the program in Java or another language that runs on the Java Virtual Machine, since the JVM probably has the best support for handling dates and times, though all the major programming languages can do it.
In your program you may use DateTimeFormatter.RFC_1123_DATE_TIME for parsing the server time into an OffsetDateTime. Get the PC time as another OffsetDateTime and use Duration.between() for finding the difference (positive or negative).
Beware that there is a delay between reading the server time and reading the PC time, so the result will not be exact. If you want upper and lower bounds on the offset, read the PC time both before and after reading the server time.
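If you'd rather stay in the shell, the same idea can be sketched directly, assuming GNU date (for its -d parsing of the RFC 1123 Date header); the URL below is a placeholder for your own server. Reading the PC clock both before and after the request brackets the network delay, as suggested above:

```shell
#!/bin/sh
# Sketch (GNU date + curl assumed): estimate how far a server's clock is
# from this machine's, using the HTTP Date response header.

# offset = server_epoch - midpoint(before, after); a positive result
# means the server's clock is ahead of ours.
clock_offset() {  # args: epoch_before epoch_after server_epoch
    echo $(( $3 - ($1 + $2) / 2 ))
}

# Live measurement (run as: ./script.sh https://www.example.com).
if [ -n "${1:-}" ]; then
    before=$(date -u +%s)
    hdr=$(curl -sI "$1" | tr -d '\r' | awk -F': ' 'tolower($1)=="date"{print $2}')
    after=$(date -u +%s)
    server=$(date -u -d "$hdr" +%s)   # GNU date parses RFC 1123 directly
    echo "offset: $(clock_offset "$before" "$after" "$server")s" \
         "(uncertainty: about ±$(( (after - before + 1) / 2 ))s)"
fi
```

Note that the HTTP Date header only has one-second resolution, so this bounds the offset to roughly a second at best; for genuine millisecond accuracy, NTP is the appropriate tool.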
Links
Oracle tutorial: Date Time explaining how to use java.time.
Documentation:
DateTimeFormatter.RFC_1123_DATE_TIME
OffsetDateTime
Duration.between()

Related

Unix timestamp: everywhere the same?

If I request some Unix timestamps at the same time, in any system, in any programming language, anywhere in the world, will they always be the same? Or is it possible that the values differ?
As a precondition, I assume that each system has its time configured correctly. Additional question: nowadays, can I assume that devices with an internet connection have the correct time?
So, how reliable is the use of the Unix timestamp? E.g., if I'd like to set an alert for different users around the world at a certain time and I broadcast just the timestamp, can I assume that the alerts happen in the same second?
(Journeys at the speed of light should be disregarded here, I guess.)
Unix timestamps are the number of seconds elapsed since 1970-01-01 00:00:00 UTC, so if the system time is set correctly, the value will be the same everywhere.
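A quick way to convince yourself, assuming GNU date (for the -d flag): the epoch count for a given instant does not change with the configured time zone, only its human-readable rendering does:

```shell
# The same instant yields the same epoch count under any configured time
# zone; only the formatted display differs. (Requires GNU date's -d flag.)
a=$(TZ=UTC date -d "2015-10-09 12:00:00 UTC" +%s)
b=$(TZ=Asia/Tokyo date -d "2015-10-09 12:00:00 UTC" +%s)
c=$(TZ=US/Pacific date -d "2015-10-09 12:00:00 UTC" +%s)
echo "$a $b $c"   # three identical values
```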

Get job average elapsed time

I need to get the average elapsed time for each job in Active Job Environment in order to produce a report.
I've tried to extract it from SMF records, but I don't seem to get the right one. I've also tried keystroke language, but it's too slow: the job takes around 15 minutes to collect all the data. I thought about using CTMJSA, but since I only have examples for UPDATE and DELETE of the statistics, I thought it would be wiser not to use it.
There must be a file that loads the Statistics screen, and I'd like to ask if anyone knows which one it is, or how else I could get that information.
Thank you!!
ctmruninf is a better utility to use in this case. I use it on Unix to produce totals (via Perl), but you should be able to adapt it to the mainframe and get averages. To list everything between fixed dates, run:
ctmruninf -list 20151101120101 20151109133301 -JOBNAME pdiscm005
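The averaging itself can be sketched in the shell. The field positions below are an assumption about the listing format (job name in column 1, elapsed seconds in column 2), so adjust the awk references to whatever your ctmruninf version actually prints:

```shell
# Hypothetical post-processing of a "ctmruninf -list" report: average the
# elapsed time per job name. The two-column layout is an assumption --
# adapt the $1/$2 references to the real report format.
avg_elapsed() {
    awk '{ sum[$1] += $2; n[$1]++ }
         END { for (j in sum) printf "%s %.1f\n", j, sum[j] / n[j] }'
}

# Example with made-up records: jobname elapsed_seconds
printf 'pdiscm005 900\npdiscm005 840\nothjob001 60\n' | avg_elapsed | sort
```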

Postgresql now() returning incorrect value

I have two PostgreSQL servers - one running locally on my Windows machine and one running on a beta Linux server.
I am running this command on both (at very close to the same time):
select current_setting('TIMEZONE'), now();
Local DB result:
"US/Pacific";"2015-10-09 12:29:51.053-07"
Beta DB result:
"US/Pacific";"2015-10-09 12:23:00.121953-07"
As you can see, the time zones are the same, but the times are not: the time on the local database is correct, but the time on the beta server is about 6 minutes and 51 seconds behind.
EDIT based on answers:
Also, I checked the dates and times on both operating systems and both are correct.
The selects are not part of a big transaction. I am using pgAdmin to run just those statements. Also, I ran timeofday() alongside now(), and it returned exactly the same times as the now() calls.
Any idea how this is happening?
Are you sure you checked the current date/time of the OS on both machines? It looks like their clocks are not synchronized.
Also, are you running that select inside a long-running transaction? now() always returns the time frozen at the beginning of the current transaction. To get the running time inside a transaction, use the timeofday() function.
You checked the dates on both machines - but did you also check the times? The best way to keep the time in sync is using NTP across all your machines. Also if you are unsure about the transactions, you can use timeofday() to get the current system time.
SELECT timeofday();
PostgreSQL directly calls the underlying operating system's date and time routines to get the timestamp, adjusting only for time zone (if needed) and epoch - which is fixed.
Most likely your clocks are not in fact in sync between the two hosts. Set up network time sync. The problem will go away. If you're really, really sure they are, check the output of the date +%H:%M:%S command on the unix system, and time /T on the Windows command line. Are they really the same?
If they're different in PostgreSQL but not as reported by the operating system and you have set up network time sync, please report a bug to pgsql-bugs or follow up here. But please be very sure they're truly the same, and do make sure network time sync is active and working.
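To put a number on the gap when comparing the two machines, a tiny helper (plain shell plus awk, names hypothetical) can diff two wall-clock readings, e.g. one from `date +%H:%M:%S` on Linux and one read off the Windows clock:

```shell
# Hypothetical helper: difference in seconds between two wall-clock
# readings in HH:MM:SS form (positive when the first is later).
hms_to_s() { echo "$1" | awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }'; }
drift() { echo $(( $(hms_to_s "$1") - $(hms_to_s "$2") )); }

# The two now() readings from the question, ignoring fractional seconds:
drift 12:29:51 12:23:00
```

Run on those two readings it reports 411 seconds, i.e. the roughly 6 minute 51 second gap described above.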

Telnet memcached stats - number of keys read in a time duration

Is it possible to connect to a memcached server using telnet and fetch the number of keys read in the last 24 hours (or any time duration)?
Thanks
The standard memcached server doesn't provide that information out of the box. It's easy enough to regularly interrogate the daemon and store any/all of the relevant information in order to produce stats and graphs. An easy example of that is shown by scripts such as memcache-stats.sh.
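Concretely, you can sample the cumulative counters that memcached's "stats" command does expose and diff two samples yourself. A sketch, where the host name and the nc invocation are illustrative:

```shell
# memcached's "stats" output is a list of "STAT <name> <value>" lines with
# cumulative counters such as cmd_get (total get commands served).
# Extract one counter from raw stats output fed via stdin:
stat_value() {  # usage: ... | stat_value <counter-name>
    awk -v k="$1" '$1 == "STAT" && $2 == k { print $3 }'
}

# Live use (host and port are placeholders):
#   printf 'stats\r\nquit\r\n' | nc my-memcached-host 11211 | tr -d '\r' | stat_value cmd_get
# Sample the counter now and again 24 hours later; the difference between
# the two values is the number of get commands issued in that window.
```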

iPhone - Getting a reliable timestamp from the app to be stored in an external MySQL database

I want to write a timestamp in real milliseconds (I mean, not just seconds x 1000) into a MySQL database.
How can I get such a timestamp in the app? I saw that some methods on NSDate could work, but they are based on the iPhone's date/time, which can be changed by the user, so using those methods would result in "fake" timestamps.
Any timestamp generated off the local clock will be subject to the same attack, so you'll have to find a reliable, trustworthy source of time information in its stead. An SSL-secured server would do. You'll also have to account for lag in contacting the server, which can be hundreds of milliseconds or more on an EDGE WWAN connection.
From there, you would write the data to the database the same way you would write any other data.