How does lsyncd handle one of multiple destination servers being down?

If we run lsyncd on a server and want to sync from that server to 2 or more servers, and one or more of those destination servers is down at that moment, how does lsyncd handle it? Is there some mechanism to automatically update the down servers as soon as they are back up?

There is no automatic process in lsyncd itself to handle this.
While one of the destination hosts is down, lsyncd obviously cannot perform the rsync to that host; those runs simply fail.
When the host is up again, lsyncd will synchronize the difference between the last consistent state of that server and the current state of the source.
That is a normal situation: lsyncd is based on rsync, and the process is the same.
But perhaps your real question is: "When the server comes back up, how will lsyncd detect it?"
Lsyncd synchronizes on differences. If a file changes on the "master" server (where lsyncd runs), it is synchronized to the targeted servers.
It does not matter whether the target server was down for hours, for example.
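To make that concrete, here is a rough sketch of the rsync behaviour that lsyncd relies on: one run per destination, where each run only transfers what differs. This is not lsyncd code and not how lsyncd is implemented internally; the hostnames and paths are invented, and it assumes rsync plus passwordless SSH to the targets.

```python
import subprocess

SOURCE = "/var/www/"                             # hypothetical source directory
TARGETS = ["web2:/var/www/", "web3:/var/www/"]   # hypothetical destination servers

def sync(target: str) -> bool:
    """One rsync pass: transfers only the differences between source and target."""
    result = subprocess.run(["rsync", "-az", "--delete", SOURCE, target])
    return result.returncode == 0

for target in TARGETS:
    # A host that is down just fails its run; nothing has to be remembered,
    # because the next successful run computes the whole outstanding diff anyway.
    if sync(target):
        print(f"{target}: synced")
    else:
        print(f"{target}: unreachable, will catch up on the next run")
```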
I hope this answers your question.
Kind regards,

Related

Is there a way to stop VS Code Remote SSH from failing after it is disconnected?

I find that if I leave my VS Code Remote SSH connection open it disconnects automatically after a certain amount of time. Following automatic disconnection I find the Remote SSH then fails: when I try to log in again I get repeated requests for my remote password and every time I enter my password I just get another password prompt.
My current workaround is to go to the Command Palette and do "Remote-SSH: Kill VS Code Server on Host". Sometimes I need to do this multiple times for it to take effect. Then when I next log in there is a lengthy VS Code installation script that needs to run before I can start coding again.
Is there a way of setting up VS Code Remote SSH that avoids this issue? I have tried some of the suggestions on this page - https://code.visualstudio.com/docs/remote/troubleshooting. However I feel like I am completely in the dark regarding what the underlying issue is. I do not even know how I could go about generating informative diagnostics / a log.
Maybe the problem is that the remote machine limits the number of processes you can run at the same time. When VS Code disconnects automatically, that session's processes keep running on the remote host, so you cannot create a new session because you are over the process limit.
In my case, asking for my processes on the remote machine to be killed (done manually by the technician who administers that machine) worked.
A better solution would be to close the VS Code sessions from your own machine, so that you can start a new one again.
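If you suspect a per-user process limit, you can inspect the remote side before killing anything. A minimal sketch, assuming a Linux remote host reachable as user@remote-host (hypothetical) and the default ~/.vscode-server install location; it only runs read-only commands over ssh:

```python
import subprocess

HOST = "user@remote-host"  # hypothetical, replace with your SSH target

def remote(cmd: str) -> str:
    """Run a single command on the remote machine over ssh and return its output."""
    out = subprocess.run(["ssh", HOST, cmd], capture_output=True, text=True)
    return out.stdout.strip()

# Compare the per-user process limit with how many processes you already own.
print("process limit  :", remote("ulimit -u"))
print("processes now  :", remote("ps -u $(whoami) | wc -l"))
# Leftover VS Code server processes from dead sessions; killing these is roughly
# what "Remote-SSH: Kill VS Code Server on Host" does.
print("vscode servers :", remote("pgrep -fc vscode-server || echo 0"))
```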

Deploy code to multiple production servers under the load balancer without continuous deployments

I am the only (full-stack) developer in my company, and right now I have too much other work to automate the deployments. In the future we may hire a DevOps engineer for that.
Problem: We have 3 servers under a load balancer. I don't want to block the 2nd and 3rd servers until the 1st server is updated (and then repeat the same for the 2nd and 3rd), because a single server might initially face heavy traffic and fail at some point before the other servers go live.
                                  Server 1
Users ----> Load Balancer ---->   Server 2   ----> Database
                                  Server 3
Personal opinion: Is there a way we can pull the code by writing scripts on the load balancer? I could replace the managed DigitalOcean load balancer with an Nginx server acting as a reverse proxy.
NOTE: I know there are plenty of other questions about this on Stack Overflow, but none of them solves my problem.
Solutions I already know about:
Git hooks: I know a bit about Git hooks, but I don't want to use them, because if I commit to the master branch by mistake it must not get synced to production, where it would create havoc on the live server for live users.
Opening multiple server tabs and doing it manually (the current scenario). Believe me, it's a pain in the ass :)
Any suggestions or pointers to solutions would be really helpful. Thanks in advance.
One of the solutions is to write an Ansible playbook for this. With Ansible you can run the play against one host at a time (serial: 1), and as the last step include a verification check that your application responds with a 200 status code, or that queries some endpoint which reports the application's status. If the check fails, Ansible stops the execution. In your case, for example, Server 1 deploys fine but the deploy fails on Server 2: the playbook stops, and you still have Servers 1 and 3 running.
I have done it myself, and it works fine in environments without continuous deployments.
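The real thing would be an Ansible playbook (serial: 1 plus a check task such as uri), but the shape of the rolling update is easy to see in a plain script. Below is a minimal sketch of that pattern only; every hostname, deploy command, and health-check URL is invented for illustration, and it assumes SSH access to the servers:

```python
import subprocess
import urllib.request

SERVERS = ["server1", "server2", "server3"]            # hypothetical hostnames
DEPLOY_CMD = "cd /var/www/app && git pull --ff-only"   # hypothetical deploy step
HEALTH_URL = "http://{host}/health"                    # hypothetical status endpoint

def healthy(host: str) -> bool:
    """Treat the host as healthy if its status endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(HEALTH_URL.format(host=host), timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

for host in SERVERS:                      # one host at a time, like serial: 1
    subprocess.run(["ssh", host, DEPLOY_CMD], check=True)
    if not healthy(host):
        # Abort the rollout: the remaining hosts keep serving the old version.
        raise SystemExit(f"deploy verification failed on {host}, stopping here")
    print(f"{host} updated and healthy")
```

During each iteration the other servers keep taking traffic through the load balancer, which is exactly the behaviour asked about in the question.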

Two master instances on same database

I want to use PostgreSQL on Windows Server 2012 R2 for one of our projects that needs 24/7 uptime.
I would like to ask the community whether I can have 2 master instances on 2 different servers, A and B, that 'work' on the same database located on shared file storage on the LAN. Only the master instance on server A would normally be online; when it goes offline for some reason, a PowerShell script would (I suppose) recognize that the PostgreSQL service has stopped and start the service on server B. The same script would continuously check that only one of the services on servers A and B is running, to avoid conflicts.
I'd like to ask whether this is possible, or whether there is a better approach for my setup.
(I can't use replication, because when server A shuts down, server B is in read-only mode, which I don't want.)
If you manage to start two instances of PostgreSQL on the same data directory, serious data corruption will happen.
Normally there is a postmaster.pid file that prevents that, but a PostgreSQL server process on a different machine that accesses the same file system will happily unlink that after spewing some log messages, thinking it was left behind from a crash.
So you are really walking on thin ice with a solution like that.
Another issue you haven't considered is the script that is supposed to check whether the server is still running. What if that script fails because, for example, the network connection between the two servers is down, but the server is still up and running happily? Such a “split brain” scenario will cause data corruption with your setup.
Another word of caution: since you seem to be using Windows (Powershell?), you probably envision a CIFS file system when you are talking of shared storage. A Windows “network share” is not a reliable file system — last time I checked, it did not honor _commit.
Creating a reliable failover cluster is harder than you think, and I'd recommend that you check existing solutions before you try to roll your own.

lsyncd does not delete files at the receiver side

I have successfully gotten lsyncd to work between two RHEL servers. Everything works great, with one single exception.
My expectation, which is confirmed by the documentation, is that if a file no longer exists on the source, it will be deleted on the destination. What is actually happening is that files which exist on the destination but not on the source are only deleted when I restart the lsyncd service. Is that expected behavior, or am I missing something?
This is the designed behavior.
Lsyncd is coded to keep the destination synchronous to the source assuming nobody else messes around with the destination.

Need an opinion on a method for pulling data from a file with Perl

I am having a conflict of ideas with a script I am working on. The conflict is that I have to read a bunch of lines from VMware files. Right now I just use SSH to probe every file for each virtual machine while the files stay on the server. The reason I am starting to see this as a problem is that I have 10 virtual machines and about 4 files per machine that I probe for file paths and such. A new SSH channel is opened every time I refer to the SSH object I have created using Net::OpenSSH, so when all is said and done I have probably opened about 16-20 SSH channels. Would it just be easier in a lot of ways to SCP the files over to the machine that needs to process them and then have most of the work done on the local side? The script I am making is a backup script for ESXi, and it will end up storing the files I need to read from anyway.
Any opinion would be most helpful.
If the VMs do the work locally, it's probably better in the long run.
In the short term roughly the same amount of resources will be used either way, but if you were to migrate these instances to other hardware, then of course you'd see gains from distributing the processing.
Also, from a maintenance perspective it's probably more convenient for each VM to host the local process, since I'd imagine that if you need to tweak it for a specific box, it makes more sense to keep it there.
Aside from the scalability benefits, there aren't really any other pros or cons.