Using NTP to synchronise systems using different timezones - redhat

I have been trying to synchronize the time on two of my RHEL servers (Node1 and Node2), as I have to install a database which requires each node to be in sync. Both Node1 and Node2 point to the same server, Node3, in /etc/ntp.conf.
Node2 is perfectly synchronized with Node3, but Node1 is way off (5.5 hours). However, the 'ntpq -p' command doesn't show this difference:
remote refid st t when poll reach delay offset jitter
==============================================================================
*Node3 Node4 3 u 59 64 377 0.156 0.180 0.024
It shows an offset of just 0.180 ms.
After spending some time trying to figure out the cause, I found that Node1 is in a different time zone (checked with: date +"%Z %z")! The time really is in sync if you account for the time zones the nodes are operating in, but since I'm not sure how the DB will behave in such a scenario, I want to bring Node1 onto the same time zone as Node2.
Basically, I want to know the recommended steps to get Node1 to use the same time zone as Node2/Node3.
RHEL release: 6.9

It turns out that all I had to do was create a symlink to the zoneinfo file for the time zone I wanted to switch to (deleting /etc/localtime first if it already exists). As I wanted to change the time zone to IST (Indian Standard Time), I used the following command (as root):
ln -s /usr/share/zoneinfo/Asia/Kolkata /etc/localtime
And now, the date command on both of my nodes shows the same time.
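To double-check, the zone abbreviation and UTC offset can be compared on both nodes, and on RHEL 6 it is worth keeping /etc/sysconfig/clock in agreement as well, since tools such as system-config-date read the zone from that file. A quick sketch (the ZONE value is the one used above):
date +"%Z %z"                # should now print the same abbreviation and offset on Node1 and Node2
cat /etc/sysconfig/clock     # should contain ZONE="Asia/Kolkata"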

Related

mongodb: how to force master at startup

I defined a replica set on MongoDB v3.6.8 (Ubuntu 20.04) with two nodes.
At the beginning, the first node was the master (primary) and the second was a secondary.
Due to a problem modifying the Unix service (cf. the --replSet option), at one point the master was stopped while the secondary was up.
Now I have two secondary nodes; any command to add a third node or to remove the second one freezes.
Any clue how to start the first node as master?
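One way out of a no-primary state, sketched here on the assumption that the first node is reachable and that forcing the configuration is acceptable (host, port, and the member index below are placeholders), is to reconfigure the set from the node you want as primary:
mongo --host node1.example.com --port 27017   # connect to the node that should become primary
cfg = rs.conf()                               # in the mongo shell: fetch the current config
cfg.members = [cfg.members[0]]                # keep only this member (assumes it is index 0)
rs.reconfig(cfg, {force: true})               # force the single-member config
rs.status()                                   # this node should report PRIMARY shortly
The other node can then be re-added with rs.add() once its service is fixed.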

show pool_nodes returns only one standby

I am working with PostgreSQL 14 and pgpool2 4.3.1.
I have configured streaming replication (with two standbys) and a three-node pgpool2 setup.
After configuring pgpool2, when I run show pool_nodes it shows one primary and one standby.
I gave node ID 0 to node1, 1 to node2, and 2 to node3. show pool_nodes returns node1 (primary) and node3 (standby), but the node IDs it returns are 0 and 1.
pg_is_in_recovery() results are correct.
Why is show pool_nodes not returning all the nodes (standbys)? Am I doing something wrong?
Any help with this issue will be appreciated.
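Two checks that may help narrow this down (host names, ports, the PCP user and the node ID below are placeholders): ask pgpool itself which backends it knows about, and reattach a backend that it has marked down. A row that is missing entirely, rather than shown with status down, usually means the backend is not declared in the backend_hostnameN/backend_portN entries of pgpool.conf on that pgpool node.
psql -h <pgpool_host> -p 9999 -U postgres -c "SHOW pool_nodes;"   # query pgpool, not PostgreSQL directly
pcp_attach_node -h <pgpool_host> -p 9898 -U <pcp_user> -n 1       # reattach backend node id 1 if it is down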

cronjob runs with UTC timezone in AWS EKS cluster

I am running several CronJobs on a Kubernetes cluster and want them to run in the EDT time zone (America/New_York). I would like to find out how to ensure that my jobs run at a specific EDT time. At present, all these jobs run in the UTC time zone.
The pod images have been verified to all have the EDT time zone.
I can set the time zone manually by going into each host machine/container.
There was a suggestion about finding the Kubernetes controller and setting the time zone on that particular host/container. I would appreciate it if someone could shed light on:
a. How can one find the Kubernetes admin controller?
b. How can one set the time zone on the container automatically via the command line or a YAML file?
I came across the following Git repo, and it helped me solve the problem:
https://github.com/hiddeco/cronjobber
Overall, it allows one to set a time zone along with the cron job specification.
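For reference, a TZCronJob looks much like a regular CronJob with an extra timezone field. The sketch below follows the layout in the cronjobber README, with a placeholder name, image and schedule; the apiVersion and field names should be verified against the repo, and the cronjobber controller must be installed first:
kubectl apply -f - <<'EOF'
apiVersion: cronjobber.hidde.co/v1alpha1
kind: TZCronJob
metadata:
  name: nightly-report                 # placeholder name
spec:
  schedule: "0 2 * * *"                # run at 02:00 ...
  timezone: "America/New_York"         # ... in New York local time instead of UTC
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: busybox             # placeholder image
            args: ["sh", "-c", "date; echo running on local-time schedule"]
          restartPolicy: OnFailure
EOF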

Cassandra pod is taking more bootstrap time than expected

I am running Cassandra as a Kubernetes pod. Each pod has one Cassandra container. We are running Cassandra version 3.11.4 with auto_bootstrap set to true. I have 5 nodes in production, holding 20 GB of data.
When I restart any Cassandra pod because of some maintenance activity, it takes 30 minutes to bootstrap before it comes up in the Up/Normal state. In production, 30 minutes is a huge amount of time.
How can I reduce the startup time for the Cassandra pod?
Thank you!
If you're restarting an existing node and the data is still there, then it's not a bootstrap of the node; it's just a restart.
One potential problem is that you're not draining the node before the restart, so all commit logs need to be replayed on startup, and this can take a lot of time if you have a lot of data in the commit log (you can check system.log to see what Cassandra is doing at that time). So the solution could be to execute nodetool drain before stopping the node.
If the node is restarted because of a crash or something similar, you can think in the direction of regularly flushing the data from the memtables, for example via nodetool flush, or configuring the busiest tables with a periodic flush via the memtable_flush_period_in_ms option. But be careful with that approach, as it may create a lot of small SSTables, and this will add more load on the compaction process.
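As a concrete illustration of the drain step (pod name and namespace are placeholders, assuming the pod is managed by a StatefulSet that recreates it after deletion):
kubectl exec -n cassandra cassandra-0 -- nodetool drain     # flush memtables and stop accepting writes
kubectl delete pod -n cassandra cassandra-0                 # the StatefulSet recreates the pod
kubectl exec -n cassandra cassandra-0 -- nodetool status    # wait until the node reports Up/Normal (UN)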

Problems with memcached and ntpd on CentOS

We are having a problem with a virtual machine that's running our frontend website. Once it's running everything is fine, but after a reboot memcached is going bonkers. What happens is that we put items in there set to expire in 15 to 30 seconds, but they don't expire for about an hour! So after a while all data we're serving is highly outdated.
We've been investigating the issue for a bit and found that during startup ntp is changing the clock a lot, putting it almost an hour forward.
We found that memcached doesn't use the system clock but keeps its own clock, so once the system clock changes and the expiry is set against the new time, memcached is an hour behind and will keep the items for an hour.
We've already swapped the boot order of ntpd (now S58) and memcached (now S59), but that hasn't resolved the issue.
Restarting memcached manually after a reboot is not really an option, because our host reboots the server regularly after patches and we're not always around afterwards.
Does anyone have any idea how to resolve this? We've googled high and low, but can't find anyone with the same problem. Surely we're not the first to have this problem?
virt-what reports that the VPS is running on VMware.
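Two things that may be worth checking, sketched for a typical CentOS sysvinit layout (the runlevel directory and NTP server below are placeholders): whether the new S58/S59 ordering is actually in effect, and whether the one-shot ntpdate service is enabled so the clock is stepped once early in the boot, before memcached starts, leaving ntpd to slew only small corrections afterwards.
ls -l /etc/rc3.d/ | grep -Ei 'ntp|memcached'            # confirm the actual start order for the default runlevel
chkconfig ntpdate on                                    # step the clock once at boot using /etc/ntp/step-tickers
echo '0.centos.pool.ntp.org' >> /etc/ntp/step-tickers   # placeholder NTP server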