Up until a day ago I was perfectly able to mount a drive via sshfs with the follow_symlinks option given.
In fact I set up an alias for this command and used it for several weeks.
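For reference, the alias wraps a command roughly like this (the host and paths here are placeholders):
sshfs -o follow_symlinks user@remotehost:/remote/dir /mnt/remote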
Since yesterday, using the same alias, the volume still mounts correctly, but all the soft symlinks within that volume are broken.
Logging in using a regular ssh session confirms the symlinks actually are functioning.
Are there any configuration files that may interfere with what I'm trying to do?
I was modifying /etc/ssh/ssh_config and /etc/hosts because I experienced severe login delays when starting an ssh session from a friend's place. But I reverted those changes later on.
Could a wrong configuration in these files cause my issue?
Btw, I'm using Ubuntu 16.04.
It turns out that the permissions on the particular machine I was trying to mount the folder from changed over the weekend.
It is now only allowing access to certain folders from within the same network. That is why my soft links (pointing to now permission-restricted content) seemed broken when mounting from my home network.
I ran various containers on two different Windows 10 Pro machines and thought that I had the data drives mapped correctly, but now I'm finding out that it isn't writing the data there at all. One example was MongoDB, where I mapped /mongodb/database:/data/db. I upgraded Docker, and when it restarted MongoDB... POOF! No data. I thought that was weird and looked in /mongodb/database, and the directory is empty. Thankfully, the app is still in the development phase, and it's not critical that the data was lost...
The lines from the docker-compose file:
volumes:
  - /mongodb/database:/data/db
Different machine:
I installed Gogs/gogs image, mapping the data:
docker run --name=Gogs-Git -p 10022:22 -p 10080:3000 -v /var/gogs:/Docker/Gogs-GitServer/Data gogs/gogs
It seemed to work perfectly, so I was thinking everything was fine. I pushed a repo up to it... and today I looked at \Docker\Gogs-gitserver\data and there are no files... so where did it write the data?
I also installed TeamCity, mapping that data... nope, it has no logs, no data...
This feature seems to just not work at all. I found a reference from 2016 saying I need to look at the 'Shared' tab (below General), and check C: to be shared, but well, no, that isn't a tab, so it isn't that.
There is no way someone would write a system that just quietly wrote the data some other place, or didn't bother actually mapping it without giving an error - that would be nuts.
So, there must be some other explanation... One of the machines has Hyper-V enabled in the BIOS, the other one doesn't even support it as far as I know.
I think some of the images are Linux, and some are Windows (TeamCity I'm pretty sure is)
OK, this is interesting... If I look at the volumes, and enter one that is in use, I get this:
The Target looks about like the right path, but I'm not sure about the /backup and the /data on the last two lines; if these are supposed to be directories under that, they don't exist. But if I click on the data tab, I can see the data: it is in Docker, hidden and not shared, in spite of there being a 'target' that points at the right directory... how do I get it to start writing this data correctly to that folder??
I've not confirmed this yet with the above configuration, but I found that for other containers I needed to specify the path as 'c:/data/MongoDb/Database'. When I created the container using that as the path, it worked, and I have data there now. I just need to go back and fix all these VMs so they have their data correctly...
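For what it's worth, the pattern that ended up working looks roughly like this (the container name and image are just illustrative; the host path is the one mentioned above):
docker run -d --name mongodb -v c:/data/MongoDb/Database:/data/db mongo
or the equivalent volumes entry in the compose file:
volumes:
  - c:/data/MongoDb/Database:/data/db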
I am trying to get watchman running in order to monitor an NFS mounted folder.
I was able to get everything running within the local file system.
Now, I have changed the config to monitor a network folder from my NAS. It is locally mounted.
Watchman server is running on the Linux client.
All watchman commands are run on the Linux client.
watchman watch
watchman -- trigger /home/karsten/CloudStation/karsten/CloudStation/Karsten_NAS/fotos/zerene 'photostack' -- /home/karsten/bin/invoke_rawtherapee.sh
The folder is located on the NAS, according to mtab:
192.168.xxx.xxx:/volume1/homes /home/karsten/CloudStation nfs rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.xxx.xxx,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.xxx.xxx 0 0
If I move files into the folder on the local machine they get recognized and watchman triggers the actions.
BUT if I move files into the same folder from a remote client connected to the same NAS folder, nothing happens.
Any idea what I need to change to make watchman recognize the files dropped from another client into that folder?
Many thanks in advance
Karsten
Unfortunately, it is not possible.
From https://facebook.github.io/watchman/docs/install.html#system-requirements:
Watchman relies on the operating system facilities for file notification, which means that you will likely have very poor results using it on any kind of remote or distributed filesystem.
NFS doesn't tie into the inotify layer in any kernel (the protocol simply doesn't support this sort of change notification), so you will only be able to be notified of changes that are made to the mounted volume by the client (because those are looped back through inotify locally), not for changes made anywhere else.
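A quick way to see this limitation for yourself, assuming the inotify-tools package is installed (the path is the NFS mount from the question):
inotifywait -m /home/karsten/CloudStation/karsten/CloudStation/Karsten_NAS/fotos/zerene
Files created from this client show up as events; files dropped into the same directory by another NFS client produce nothing, because the server never tells the local kernel about them, so there is nothing to feed into inotify.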
Vagrant uses the words "share" and "sync" seemingly interchangeably. Is there a difference? If so, what is the difference?
IMO, "sync" implies that the data is duplicated in two places, and Vagrant does some magic to ensure that changes to one are also made to the other. This is a slightly different semantics to "sharing". Which is Vagrant doing, or can it do both?
EDIT: for example, say, I want a VM running MySQL server, but storing the database files on the host. Is this kind of setup the kind of thing that shared/syncd directories are appropriate for? E.g., do I have a guarantee of atomicity/transactionality? Sharing semantics would guarantee this, but syncing semantics possibly wouldn't.
(To make things worse, there's also Vagrant Share, which is unrelated to syncing or sharing.)
Shared folder (v1 terminology) vs. synced folder (renamed in v2)
In short: Shared Folders is more VirtualBox-specific (vboxsf) and has known performance issues as the number of files grows.
Vagrant v2 (Vagrant 1.1.x, 1.2.x+) docs use a more generic name, Synced Folder, which now includes many options: the default vboxsf, rsync, Samba/CIFS, and NFS.
By default, Vagrant syncs the project directory (where the Vagrantfile resides) with /vagrant within the guest. This can be disabled by explicitly disabling it in the Vagrantfile and doing a vagrant reload.
e.g. config.vm.synced_folder ".", "/vagrant", disabled: true
For the long story, see this answer: https://stackoverflow.com/a/18529697/1801697
Let's talk about sync
For vboxsf and nfs, host and guest folders (I mean synced folders) are always in sync (changes made on either side are synced to the other).
NOTE: SMB/CIFS should be the same but I've never used it.
In Vagrant 1.5, the rsync type was added, which makes manual sync possible; by default it syncs from host to guest upon the first vagrant up. I personally prefer rsync if real-time sync between host and guest is NOT needed.
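As a rough sketch of that workflow (the folder paths are placeholders), the Vagrantfile entry and the manual sync commands look something like:
config.vm.synced_folder ".", "/vagrant", type: "rsync"
vagrant rsync         # one-shot sync from host to guest
vagrant rsync-auto    # watch the host folder and re-sync on changes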
BTW: Vagrant Share is something different; it shares SSH access or other services via a cloud gateway.
I'm looking for a portable way (application, file format, library/API, CMS, DBMS, whatever) to deny read and write access to a collection of text files unless the user enters the password. This would be for personal use, i.e. the files would be stored on my computer, which I share with other people.
I've already tried:
password-protected archives: but even a minor edit to one file requires unpacking and re-packing everything, which is quite annoying
a wiki backed by a DBMS, with a single password-protected account: but the DBMS root user will be able to read my stuff
... any ideas?
I use TrueCrypt to mount an encrypted volume on my PC. It's also available for Mac and Linux.
I mount the volume when I want to work with the files (the volume shows up as a new drive letter), and unmount it when done. The mount does not survive a reboot, so shutting down the computer guarantees that the volume will have to be re-mounted before it can be accessed again.
CentOS 5.5
I have a web application running on a server and it needs access to another CentOS server's file system running in the same network (via private IP). After doing a bunch of googling it looks like mounting the drive via NFS is a good way to go, but I'm not finding any good step-by-step instructions on how to go about it. I've read the man docs on the mount command and read some docs on the CentOS wiki as well, but I feel like I'm missing something. Here is what I'm trying:
mount -t nfs my.ip.address:/somePath /somePath/mount
I keep getting a 'no route to host' error but I can ping the server just fine. I'm guessing that I am possibly missing a port I need to open or something, but again, can't find information that makes sense to a non-sysadmin like myself.
Thanks for any help.
I ran across this, followed it step by step, and now I'm up and running!
http://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/
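For anyone who prefers a summary, the basic shape of that setup is roughly the following. This is a sketch only, assuming CentOS 5 on both ends, with the export path and client network as placeholders; note that a 'no route to host' error on a server you can ping usually means the default iptables rules are rejecting the NFS/portmapper ports.
# on the NFS server
echo "/somePath 192.168.1.0/24(rw,sync)" >> /etc/exports
service portmap start
service nfs start
exportfs -a
# open ports 111 (portmapper) and 2049 (nfs) in iptables, or stop iptables while testing
# on the client
mount -t nfs my.ip.address:/somePath /somePath/mount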