Watchman doesn't notice changes on network folder

I am trying to get watchman running in order to monitor an NFS mounted folder.
I was able to get everything running within the local file system.
Now I have changed the config to monitor a network folder from my NAS, which is mounted locally.
The watchman server is running on the Linux client, and all watchman commands are run on the Linux client:
watchman watch /home/karsten/CloudStation/karsten/CloudStation/Karsten_NAS/fotos/zerene
watchman -- trigger /home/karsten/CloudStation/karsten/CloudStation/Karsten_NAS/fotos/zerene 'photostack' -- /home/karsten/bin/invoke_rawtherapee.sh
The folder is located on the NAS, according to mtab:
192.168.xxx.xxx:/volume1/homes /home/karsten/CloudStation nfs rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.xxx.xxx,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.xxx.xxx 0 0
If I move files into the folder on the local machine, they get recognized and watchman triggers the actions.
BUT if I move files into the same folder from a remote client connected to the same NAS share, nothing happens.
Any idea what I need to change to make watchman recognize the files dropped from another client into that folder?
Many thanks in advance
Karsten

Unfortunately, it is not possible.
From https://facebook.github.io/watchman/docs/install.html#system-requirements:
Watchman relies on the operating system facilities for file notification, which means that you will likely have very poor results using it on any kind of remote or distributed filesystem.
NFS doesn't tie into the inotify layer in any kernel (the protocol simply doesn't support this sort of change notification), so you will only be able to be notified of changes that are made to the mounted volume by the client (because those are looped back through inotify locally), not for changes made anywhere else.
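If polling is acceptable, one workaround is to poll the mount from the client instead of relying on inotify. A minimal sketch, assuming the watched path and trigger script from the question, and assuming invoke_rawtherapee.sh accepts one file per invocation:
WATCH_DIR=/home/karsten/CloudStation/karsten/CloudStation/Karsten_NAS/fotos/zerene
STAMP=/tmp/zerene.stamp
touch "$STAMP"
while sleep 30; do
    # stamp the start of this pass so files written during the scan aren't missed
    touch "$STAMP.next"
    # pick up files modified since the last pass, regardless of which NFS client wrote them
    find "$WATCH_DIR" -type f -newer "$STAMP" -print0 |
        xargs -0 -r -n1 /home/karsten/bin/invoke_rawtherapee.sh
    mv "$STAMP.next" "$STAMP"
done
Note that NFS attribute caching can delay a new file's visibility on the client by a few seconds, so the poll may lag slightly behind the actual write.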

Related

How does NFS process requests for data?

When I used someone else's framework, I found that it shares a specified folder via NFS before performing distributed computing.
For example, suppose there are two parts, 'part1' and 'part2', in this folder. If machine 1 reads 'part1' and machine 2 reads 'part2', and machine 1 then wants the content of 'part2', should it make a request directly to machine 2, or read a local 'part2' file?
My understanding is that NFS synchronizes the folder across the machines, so that the file is stored on each machine rather than being a link to a location on some machine. I'm not sure if this understanding is correct.
NFS makes files available over a network. Using your example, if machine 1 and machine 2 are clients of the NFS server, they won't refer to each other when attempting to retrieve data. As such, when machine 1 wants 'part2', it will make the request to the NFS server rather than to machine 2 (despite the fact machine 2 has read 'part2').
The reasoning for this is that the version of 'part2' that exists on the NFS server may have changed in the time between machine 2 reading 'part2', making machine 2's copy of 'part2' out of date. By making all requests to the NFS server, clients can ensure that they are getting the most recent version of a file at any given time.
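To make this concrete (the server name and export path here are made up):
# on machine 1; 'fileserver' exports /export/data containing part1 and part2
sudo mount -t nfs fileserver:/export/data /mnt/data
cat /mnt/data/part2   # read requests go to fileserver, not to machine 2
Machine 1 may serve repeated reads from its local page cache, but it revalidates the file's attributes with the server (close-to-open consistency), never with other clients.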
The behaviour you're describing is more akin to the behaviour of BitTorrent (https://en.wikipedia.org/wiki/BitTorrent). BitTorrent solves the out-of-date file problem by not allowing files to ever change and distributing hashes of the files. Knowing this, your torrent client can request parts of a folder or file from anyone in a 'swarm' and independently verify that the parts you received are correct.

sshfs -o follow_symlinks mounts with broken softlinks

Up until a day ago I was perfectly able to mount a drive via sshfs with the follow_symlinks option given.
In fact I set up an alias for this command and used it for several weeks.
Since yesterday, using the same alias, the volume still mounts correctly but all the symlinks within that volume are broken.
Logging in using a regular ssh session confirms the symlinks actually are functioning.
Are there any configuration files that may interfere with what I try to do?
I was modifying /etc/ssh/ssh_config and /etc/hosts because I experienced severe login delays when starting an ssh session from a friend's place. But I reverted any changes later on.
Could a wrong configuration in these files cause my issue?
Btw, I'm using Ubuntu 16.04.
It turns out that the permissions on the particular machine I was mounting the folder from changed over the weekend.
It now only allows access to certain folders from within the same network. That is why my symlinks (pointing to now permission-restricted content) seemed broken when mounting from my home network.
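For anyone debugging something similar, a quick way to separate a mount problem from a server-side permission problem (host and paths are hypothetical):
sshfs -o follow_symlinks user@host:/remote/dir /mnt/remote
ls -lL /mnt/remote/somelink                   # -L stats the link target through the mount
ssh user@host 'ls -lL /remote/dir/somelink'   # the same check in a plain ssh session
If the plain ssh check succeeds where the mounted one fails, the link target's permissions on the server (as in my case) are the likely culprit, not sshfs.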

Vagrant: what is the difference between "shared" and "synced" directories?

Vagrant uses the words "share" and "sync" seemingly interchangeably. Is there a difference? If so, what is the difference?
IMO, "sync" implies that the data is duplicated in two places, and Vagrant does some magic to ensure that changes to one are also made to the other. These are slightly different semantics from "sharing". Which is Vagrant doing, or can it do both?
EDIT: for example, say, I want a VM running MySQL server, but storing the database files on the host. Is this kind of setup the kind of thing that shared/syncd directories are appropriate for? E.g., do I have a guarantee of atomicity/transactionality? Sharing semantics would guarantee this, but syncing semantics possibly wouldn't.
(To make things worse, there's also Vagrant Share, which is unrelated to syncing or sharing.)
Shared folders (v1 terminology) vs. synced folders (renamed in v2)
In short: shared folders (the v1 feature) are VirtualBox-specific (vboxsf) and have known performance issues as the number of files grows.
The Vagrant v2 (Vagrant 1.1.x, 1.2.x+) docs use the more generic name synced folders, which now covers several options: the default vboxsf, rsync, SMB/CIFS, and NFS.
By default, Vagrant syncs the project directory (where the Vagrantfile resides) to /vagrant within the guest. This can be disabled by explicitly disabling it in the Vagrantfile and doing a vagrant reload.
e.g. config.vm.synced_folder ".", "/vagrant", disabled: true
To see a long story, see this answer: https://stackoverflow.com/a/18529697/1801697
Let's talk about sync
For vboxsf and NFS, host and guest folders (I mean synced folders) are always in sync (changes made on either side are synced to the other).
NOTE: SMB/CIFS should be the same but I've never used it.
In Vagrant 1.5, the rsync type was added, which makes manual syncing possible; by default it syncs from host to guest upon the first vagrant up. I personally prefer rsync when real-time sync between host and guest is NOT needed.
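For reference, a minimal Vagrantfile sketch of the rsync type (the exclude pattern is just an example):
config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/"
After that, vagrant rsync re-syncs on demand, and vagrant rsync-auto watches the host side for changes.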
BTW: Vagrant Share is something different; it shares SSH access or other services via a cloud gateway.

Copying updated Files between Networks

Is there a way that I can copy updated files only from one network to another? The networks aren't connected in any way, so the transfer will need to go via CD (or USB, etc.)
I've had a look at things like rsync, but that seems to require the two networks to be connected.
I am copying from a Windows machine, but it's going onto a network with both Windows and Linux machines (although Windows is preferable due to the way the network is set up).
You can rsync from the source to the USB drive, use the USB drive as a buffer, and then rsync from the USB drive to the target. To benefit from the rsync algorithm and reduce the amount of copied data, you need to keep the data on the USB drive between runs.
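A sketch of the two runs (paths are assumptions; on the Windows side you'd need an rsync port such as cwRsync or the one shipped with Cygwin):
# on the source network: refresh the buffer on the USB drive
rsync -av --delete /data/project/ /media/usb/project/
# on the target network: apply only the differences
rsync -av /media/usb/project/ /target/project/
Keeping the data on the USB drive between runs means both steps transfer only the files that changed.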

Powershell script run against share on server

I'm running a PowerShell script that's on my local PC against a file share that's on a server. I had code in the script to let the user select to delete something permanently (using Remove-Item) or to send it to the Recycle Bin using this code:
Add-Type -AssemblyName Microsoft.VisualBasic   # loads the FileIO.FileSystem type used below
[Microsoft.VisualBasic.FileIO.FileSystem]::DeleteFile($file.FullName,'OnlyErrorDialogs','SendToRecycleBin')
When run locally (either from my desktop or from the server) against a folder that's local to that respective location, it works fine. A file that is identified gets deleted and immediately shows up in the Recycle Bin.
However, if run from my desktop to the file share, it deletes the file, but it doesn't show up in either the server's recycle bin or the local one either. I've tried UNC naming and mapped drive naming, and have come to believe this may be by design.
Is there a workaround for this?
Only files deleted from redirected folders end up in the Recycle Bin. If you want to be able to undelete files deleted across the network, then you need to use a third-party utility.
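If a third-party utility isn't an option, a common workaround is to emulate a recycle bin on the share itself by moving files into a holding folder instead of deleting them. A sketch (the share path and folder name are hypothetical):
# move the file into a holding folder at the share root instead of deleting it
$recycle = Join-Path '\\server\share' '.RecycleBin'
if (-not (Test-Path $recycle)) { New-Item -ItemType Directory -Path $recycle | Out-Null }
Move-Item -LiteralPath $file.FullName -Destination $recycle
A scheduled task on the server can then purge anything older than a chosen retention period.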