lsyncd does not delete files at the receiver side

I have successfully gotten lsyncd to work between two RHEL servers. Everything works great, with one single exception.
My expectation, which seems to be confirmed by the documentation, is that a file that no longer exists on the source will be deleted from the destination. What actually happens is that files that exist on the destination but not in the source are only deleted when I restart the lsyncd service. Is that expected behavior, or am I missing something?

This is the designed behavior.
Lsyncd is coded to keep the destination synchronous with the source, assuming nobody else messes around with the destination. After the full sync it performs at startup, it only reacts to change events raised on the source, so files that appear on the destination by other means are not noticed until the next startup sync.
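As an illustration, here is a minimal sketch of the one-way flow; the paths, the hostname, and the exact lsyncd invocation are placeholders, not taken from the question:

# Start a one-way sync; the startup run does a full rsync of the source
# to the target (removing files missing from the source), after which
# lsyncd only reacts to filesystem events on the source side.
lsyncd -rsyncssh /srv/data backup01 /srv/data

# Files created directly on backup01 raise no event on the source, so they
# survive until the next full sync. To reconcile without restarting lsyncd,
# a one-off rsync with --delete does the same job as the startup sync:
rsync -a --delete /srv/data/ backup01:/srv/data/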

context.storagePath is empty (except meta.json) but cached data is still available while debugging [REMOTE SSH]

I have a really spooky issue while debugging my extension. Since I am using the workspaceState to cache information, I tried to figure out where that state is usually located.
ExtensionContext.storagePath resolves to the path I was expecting: /home/<user>/.vscode-server/data/User/workspaceStorage/a002010c26b7b33d865d62202553fe33/myname.myextension
This folder only contains a meta.json (no other hidden files or folders).
But the strange thing is that the cached data is still available. Any ideas where else it can be located?
I already removed the whole ".vscode-server" directory, and the cached data is still being loaded from somewhere else!?
I found it out myself.
The cached data is stored on my local machine (not the remote computer I am connected to).
So even though ExtensionContext.storagePath refers to a path that is local to the remote machine, the cached data may end up being stored locally.
In my case the local machine is a Windows machine, and the storage path was:
C:\Users\<user>\AppData\Roaming\Code\User\workspaceStorage\a002010c26b7b33d865d62202553fe33
I assume this may also have something to do with the extension's extensionKind.

Google Cloud Storage - rsync or cp for initial upload?

We're using GCS for our archive backup and I was curious what people think is better for the initial upload - rsync or cp?
I've gotten hung up twice (once on a non-unicode character and again on what seemed like a long path) and would like to be able to pick up where I left off.
Any advice would be appreciated!
(and if this is a bad question, can someone tell me exactly why it's bad or how to fix it? It seems I suck at asking questions here!)
rsync is better suited to archives/backups, for the reason you hinted at: if you start uploading data and then encounter a problem partway through, restarting a cp would re-upload files that were already successfully uploaded, while rsync only uploads files that weren't uploaded yet (or that changed since the last upload). Moreover, if some of the source files have been deleted since you last started uploading, rsync will remove them from the destination bucket, making the destination content match the source content.
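For a concrete sketch, assuming you are uploading with the gsutil tool (the local path and bucket name below are placeholders):

# Incremental, restartable upload: only new or changed files are copied,
# and -d additionally deletes objects that no longer exist in the source.
gsutil -m rsync -r -d /path/to/archive gs://my-archive-bucket/archive

# The cp route re-examines everything it is told to copy; -n at least
# skips objects that already exist at the destination when you restart.
gsutil -m cp -r -n /path/to/archive gs://my-archive-bucket/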

How does lsyncd handle one of multiple destination servers being down?

If we run lsyncd on a server and want to sync from that server to 2 or more servers, and one or more of those destination servers is down at that moment, how does lsyncd handle it? Is there some mechanism to automatically update the down servers as soon as they are back up?
There is no automatic process built into lsyncd to handle this.
While one of the hosts is down, lsyncd obviously cannot perform the rsync to it.
When that host is up again, lsyncd will synchronize the difference between the last consistent state the host had before going down and the current state of the source.
That's a normal situation.
Lsyncd is based on rsync, so the process is the same.
But maybe your real question is: "When the server comes back up, how will lsyncd detect it?"
Lsyncd synchronizes on differences: when a file changes on the "master" server (where lsyncd runs), it is synchronized to the target servers, regardless of whether a target was down for hours.
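If you don't want to wait for the next change on the master, a rough option (the hostname and path below are placeholders) is to trigger the catch-up yourself:

# One-off full catch-up of the recovered target; lsyncd keeps it updated afterwards.
rsync -a --delete /srv/data/ web02:/srv/data/

# Or simply restart lsyncd, whose startup sync walks the whole tree for every target.
systemctl restart lsyncd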
I hope I answered your question.
Kind regards,

robocopy error with ERROR 32 (0x00000020)

I have two drives, A and B. Using a Python script I am creating some files on drive A, and I am running a PowerShell script that copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in my PowerShell session:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination File \\x.x.x.x\share1\source\Dummy_100.txt
The process cannot access the file because it is being used by another process.
Waiting 30 seconds...
How will I overcome this error?
This happens because the file is locked by a running process. To fix it, download Process Explorer, then use Find > Find Handle or DLL to find out which process has the file locked. Use 'taskkill' to kill that process from the command line. You will be fine.
If you want to skip such files you can use /R:n, where n is the number of retries;
for example /W:3 /R:5 will retry 5 times, waiting 3 seconds between attempts.
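As a sketch of the full command (the source, destination share, and options are placeholders, not taken from the question):

:: Mirror the source to the share, retrying a locked file 5 times and waiting
:: 3 seconds between attempts instead of the defaults (1,000,000 retries, 30 s).
robocopy A:\source \\x.x.x.x\share1\dest /MIR /R:5 /W:3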
How will I overcome this error?
If backup is what you have in mind and you encounter in-use files frequently, look into Volume Shadow Copies (VSS), which allow files to be copied even while they are ‘in use’. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but it can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
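A rough sketch of combining the two with shadowspawn (paths are placeholders; check the project's README for the exact syntax):

:: Expose a shadow copy of the source directory as drive Q:, run robocopy
:: against that snapshot, then tear the shadow copy down again afterwards.
shadowspawn C:\source Q: robocopy Q:\ \\x.x.x.x\share1\dest /MIR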
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I had the same problem because robocopy claimed it was trying to write into a log file that was opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
It turned out it was not even the log file that was being locked: I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves, held by the SQL Server Agent. Like @Oseack said, there may have been the need to use another tool whilst the backup files themselves were still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.

What would happen if I deleted all the files associated with vBulletin?

I would like to completely take down the vBulletin forum running out of a subfolder of a site. I have already removed access to the bulletin via .htaccess, but now I would like to get rid of the whole shebang.
Can I just go in via ftp and remove all of the vBulletin files or will that cause problems?
The reason I want to get rid of the bulletin now, other than for security and resource conservation, is because now, after a move to a new server, I am receiving emails of database errors (I am assuming this is because the bulletin didn't get hooked up to the database at the new server).
If it makes any difference, this is the error:
mysql_connect() [function.mysql-connect]: Unknown MySQL server host 'blah.blah.blah.some.url.associated.with.my.old.hosts.nameserver.com' (1)
/path/to/my/forum/includes/class_core.php on line 317
Thanks in advance for any advice/info you have.
To completely remove vBulletin you want to remove all the files via FTP and delete the database as well; the database is usually a lot larger than the forum files. But to just stop the errors you're getting, removing the files via FTP will work.
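As a rough sketch, assuming you also have shell access to the server (the forum path, database name, and credentials are placeholders):

# Remove the forum files from the subfolder the board lived in.
rm -rf /path/to/my/forum

# Drop the now-unused vBulletin database to reclaim that space as well.
mysql -u root -p -e "DROP DATABASE vbulletin;"

If you only have FTP and a hosting control panel, deleting the folder in your FTP client and dropping the database from something like phpMyAdmin achieves the same thing.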