NFS mount occasionally displays the wrong export path - operating-system

We use the command **mount -t nfs -o nfsvers=3,rw,bg,soft,nointr,rsize=262144,wsize=262144,tcp,actimeo=0,timeo=600,retrans=3,nolock,sync ip1:/pathA /mnt** and the mount succeeds. But when we check with mount, it displays ip1:/pathB as mounted on /mnt.
However, pathB is a file system that was created a long time ago and no longer exists.
We suspect there is some cache on the NFS server?
The NFS client is CentOS Linux release 7.6.1810 (Core).
We want to know the root cause.
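A few diagnostics that can help separate client-side state from what the server actually exports (a minimal sketch; the server address is a placeholder):
# What the kernel itself believes is mounted (bypasses /etc/mtab)
grep /mnt /proc/mounts
# Per-mount NFS details and options as seen by the client
nfsstat -m
# Exports the NFS server currently advertises
showmount -e <nfs-server-ip>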

Related

Postgres volume mounting on WSL2 and Docker desktop: Permission Denied on PGDATA folder

There are some similar posts, but this one is specifically about running Postgres with the WSL2 backend on Docker Desktop. WSL2 brings a full Linux experience to Windows. Volumes can be mounted on both the Windows and the Linux file systems, but the best practice is to use the Linux file system for performance reasons; see the Docker documentation:
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources where ~ is expanded by the Linux shell to $HOME.
My WSL distro is Ubuntu 20.04 LTS. I'm bind-mounting the Postgres data directory to a directory on the Linux filesystem, and I'm also configuring PGDATA to use a subdirectory, as instructed in the official Docker image docs:
PGDATA
This optional variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data. If the data volume you're using is a filesystem mountpoint (like with GCE persistent disks) or remote folder that cannot be chowned to the postgres user (like some NFS mounts), Postgres initdb recommends a subdirectory be created to contain the data.
So this is how I start Postgres with the volume mounted on the WSL2 Ubuntu file system:
docker run -d \
--name some-postgres -e POSTGRES_PASSWORD=root \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v ~/custom/mount:/var/lib/postgresql/data \
postgres
I can exec into the running container and verify that the data folder exists and is configured correctly.
From the host machine (WSL2 Linux), however, trying to access that folder gives permission denied.
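A minimal sketch of that host-side check, assuming the bind-mount path from the run command above:
# Listing the data directory as the regular WSL2 user fails once the container has initialized it
ls -ld ~/custom/mount/pgdata
ls ~/custom/mount/pgdata
# Inspecting it as root shows who actually owns the files
sudo ls -la ~/custom/mount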
I would appreciate it if anyone can provide a solution. None of the existing posts resolved the issue.
This has got nothing to do with PostgreSQL. Docker containers run as root and so any directory created by Docker will also belong to root.
When you attach to the container and list the directory under /var/lib/postgresql/data it shows postgres as the owner.
Check "Arbitrary --user Notes" section in the official documentation here
The second option "bind-mount /etc/passwd read-only from the host" worked for me.
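A minimal sketch of that second option, reusing the paths and environment variables from the question; the UID/GID expansion and exact flag set are illustrative, not a definitive recipe:
# Run the container as the current host user, with /etc/passwd bind-mounted read-only
docker run -d \
  --name some-postgres \
  --user "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -e POSTGRES_PASSWORD=root \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v ~/custom/mount:/var/lib/postgresql/data \
  postgres
With the container running under your own UID, the files under ~/custom/mount stay readable from the WSL2 side.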
Two things that were blocking us from working with WSL2 on Windows were:
The folder C:\Program Files\WindowsApps didn't have the admin account listed as owner.
McAfee was blocking WSL. To disable the blocking we had to remove the following rule: open McAfee -> Threat Prevention -> Show Advanced (button in the upper right corner) -> scroll down to Rules -> the rule is named "Executing Subsystem for Linux".

Mounting NFS volume in Google Container Engine with Container OS (COS)

After migrating the image type from container-vm to cos for the nodes of a GKE cluster, it no longer seems possible to mount an NFS volume for a pod.
The problem seems to be missing NFS client libraries, as a mount command from the command line fails on all COS versions I tried (cos-stable-58-9334-62-0, cos-beta-59-9460-20-0, cos-dev-60-9540-0-0).
sudo mount -t nfs mynfsserver:/myshare /mnt
fails with
mount: wrong fs type, bad option, bad superblock on mynfsserver:/myshare,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
But this contradicts the supported volume types listed here:
https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support
Mounting an NFS volume in a pod works in a pool with image type container-vm but not with cos.
With cos I get the following messages from kubectl describe pod:
MountVolume.SetUp failed for volume "kubernetes.io/nfs/b6e6cf44-41e7-11e7-8b00-42010a840079-nfs-mandant1" (spec.Name: "nfs-mandant1") pod "b6e6cf44-41e7-11e7-8b00-42010a840079" (UID: "b6e6cf44-41e7-11e7-8b00-42010a840079") with: mount failed: exit status 1
Mounting command: /home/kubernetes/containerized_mounter/mounter
Mounting arguments: singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1 nfs []
Output: Mount failed: Mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1]
Output: mount.nfs: Failed to resolve server singlefs-1-vm: Temporary failure in name resolution
Martin, are you setting up the mounts manually (executing mount yourself), or are you letting kubernetes do it on your behalf via a pod referencing an NFS volume?
The former will not work. The latter will. As you've discovered, COS does not ship with NFS client libraries, so GKE gets around this by setting up a chroot (at /home/kubernetes/containerized_mounter/rootfs) with the required binaries and calling mount inside that.
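A quick check that illustrates this (my own assumption, not part of the answer): the NFS mount helper is missing from the COS host itself but should be present inside the mounter chroot.
# No mount.nfs helper on the COS host
ls /sbin/mount.nfs*
# ...but the containerized mounter chroot ships one
sudo find /home/kubernetes/containerized_mounter/rootfs -name 'mount.nfs*'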
I've nicked the solution #saad-ali mentioned above, from the kubernetes project, to make this work.
To be concrete, I've added the following to my cloud-config:
# This script creates a chroot environment containing the tools needed to mount an nfs drive
- path: /tmp/mount_config.sh
  permissions: 0755
  owner: root
  content: |
    #!/bin/sh
    set +x # For debugging
    export USER=root
    export HOME=/home/dockerrunner
    mkdir -p /tmp/mount_chroot
    chmod a+x /tmp/mount_chroot
    cd /tmp/
    echo "Sleeping for 30 seconds because toolbox pull fails otherwise"
    sleep 30
    toolbox --bind /tmp /google-cloud-sdk/bin/gsutil cp gs://<uploaded-file-bucket>/mounter.tar /tmp/mounter.tar
    tar xf /tmp/mounter.tar -C /tmp/mount_chroot/
    mount --bind /tmp/mount_chroot /tmp/mount_chroot
    mount -o remount,exec /tmp/mount_chroot
    mount --rbind /proc /tmp/mount_chroot/proc
    mount --rbind /dev /tmp/mount_chroot/dev
    mount --rbind /tmp /tmp/mount_chroot/tmp
    mount --rbind /mnt /tmp/mount_chroot/mnt
The uploaded-file-bucket contains the chroot image the kube team has created, downloaded from: https://storage.googleapis.com/kubernetes-release/gci-mounter/mounter.tar
Then, the runcmd for the cloud-config looks something like:
runcmd:
  - /tmp/mount_config.sh
  - mkdir -p /mnt/disks/nfs_mount
  - chroot /tmp/mount_chroot /bin/mount -t nfs -o rw nfsserver:/sftp /mnt/disks/nfs_mount
This works. Ugly as hell, but it'll have to do for now.

How to Mount Disk for Google Cloud Compute Engine to use with /home?

I have a VM instance with a small 10GB boot disk running CentOS 7 and would like to mount a larger 200GB persistent disk to hold the data from a previous dedicated server's /home directory (likely transferred via scp).
Here's what I tried:
Attempt #1, Symlinks. Might work, but I have some questions.
mounted the disk to /mnt/disks/my-persistent-disk
created folders on the persistent disk that mirror the folders in the old server's /home directory.
created a symlink in the /home directory for each folder, pointing to the persistent disk.
scp'd from the old server to the VM's /home/example_account for the first account. Realized scp does not follow symlinks (oops), and therefore the files went to the boot drive instead of the disk.
I suppose I could scp to /mnt/disks/my-persistent-disk and manage the symlinks and folders. Would this pose a problem? Would making an image of the VM with this configuration carry over to new instances (with autoscaling etc)?
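For reference, a minimal sketch of the symlink layout described in Attempt #1, with the mount point and account name taken from above and everything else illustrative:
# Make sure the mounted disk is writable, then symlink a disk-backed path into /home
sudo mkdir -p /mnt/disks/my-persistent-disk
sudo chmod a+w /mnt/disks/my-persistent-disk
sudo ln -s /mnt/disks/my-persistent-disk/example_account /home/example_account
# Copy straight to the disk path so the data lands on the persistent disk, not the boot disk
scp -r olduser@old-server:/home/example_account /mnt/disks/my-persistent-disk/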
Attempt #2, Mounting into /home.
Looking for a more 'natural' configuration that works with ftp, scp, etc., I mounted the disk at /home/example_account:
$ sudo mkdir -p /home/example_account
$ sudo mount -o discard,defaults /dev/sdc /home/example_account
$ sudo chmod a+w /home/example_account
#set the UUID for mounting at startup
$ sudo blkid /dev/sdc
$ sudo nano /etc/fstab
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   10G  0 disk
└─sda1   8:1    0   10G  0 part /
sdc      8:32   0  200G  0 disk /home/example_account
scp from the old server to the VM's /home/example_account works fine. Yay. However, I would like to have more than just one folder in the /home directory. I suppose I could partition the disk, but this feels a bit cumbersome and I'm not exactly sure how many accounts I will use in the future.
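For completeness, the fstab entry that goes with the blkid step above would look roughly like this (the UUID is a placeholder for the value blkid prints):
# /etc/fstab entry so the disk mounts at boot
UUID=<uuid-from-blkid>  /home/example_account  ext4  discard,defaults,nofail  0  2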
Attempt #3, Mount as /home
I felt the best solution was to have the persistent disk mount as the /home directory. This would allow for easily adding new accounts within /home without symlinks or disk partitions.
Attempted to move the /home directory to /home.old, but realized Google Cloud Compute Engine would not allow it since I was logged into the system.
Changed to the root user, but it still said myusername#instance was logged in and using the /home directory. As root, I issued pkill -KILL -u myusername and the SSH session terminated - apparently that is how Google Cloud Compute Engine handles its SSH windows.
Since I cannot change the /home directory, this method does not seem viable unless there is a workaround.
My thoughts:
Ideally, I think #3 is the best solution, but perhaps there is something I'm missing (a #4 solution), or one of the above approaches is preferable with better execution.
My question:
In short, how do I move an old server's data to a Google Cloud VM with a persistent disk?

Docker mongodb - add database on disk to container

I am running Docker on windows and I have a database with some entries on disk at C:\data\db.
I want to add this database to my container. I have tried numerous ways to do this but failed.
I tried: docker run -p 27017:27017 -v //c/data/db:/data/db --name mongodb devops-mongodb
In my dockerfile I have:
RUN mkdir -p /data/db
VOLUME /data/db
But this doesn't add my existing on-disk database to the container. It creates a fresh /data/db directory and only persists the data I add to it afterwards.
The docs here https://docs.docker.com/userguide/dockervolumes/ under 'Mount a host directory as a data volume' specifically told me to use -v //c/data/db:/data/db, but this isn't working.
Any ideas?
You're using Boot2Docker (which runs inside a Virtual Machine). Boot2Docker uses VirtualBox guest additions to make directories on your Windows machine available to Docker running inside the Virtual Machine.
By default, only the C:\Users directory (on Windows), or /Users/ directory (on OS X) is shared with the virtual machine. Anything outside those directories is not shared with the Virtual Machine, which results in Docker creating an empty directory at the specified location for the volume.
To share directories outside C:\Users\ with the virtual machine, you have to manually configure Boot2Docker to share them. You can find the steps needed in the VirtualBox guest additions section of the README:
If some other path or share is desired, it can be mounted at run time by doing something like:
$ mount -t vboxsf -o uid=1000,gid=50 your-other-share-name /some/mount/location
It is also important to note that in the future, the plan is to have any share which is created in VirtualBox with the "automount" flag turned on be mounted during boot at the directory of the share name (ie, a share named home/jsmith would be automounted at /home/jsmith).
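A hedged sketch of creating such a share from the Windows host; the VM name and share name are assumptions, and the VM typically needs to be stopped first:
# Add C:\data as an automounted VirtualBox shared folder for the Boot2Docker VM
VBoxManage sharedfolder add "boot2docker-vm" --name "c/data" --hostpath "C:\data" --automount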
Please be aware that using VirtualBox guest additions has a really bad impact on performance (reading/writing to the volume will be really slow). This can be fine for development, but should be used with caution.

How can I move postgresql data to another directory on Ubuntu over Amazon EC2?

We've been running PostgreSQL 8.4 for quite some time. As with any database, we are slowly reaching our threshold for space. I added another 8 GB EBS drive, mounted it on our instance, and configured it to work properly on a directory called /files.
Within /files, I manually created the directory structure for the new data directory.
Correct me if I'm wrong, but I believe all PostgreSQL data is stored in /var/lib/postgresql/8.4/main.
I backed up the database and ran sudo /etc/init.d/postgresql stop, which stops the PostgreSQL server. I tried to copy and paste the contents of /var/lib/postgresql/8.4/main into the /files directory, but that turned out to be a HUGE MESS due to file permissions. I had to go in and chmod the contents of that folder just so that I could copy them, and some files did not copy fully because of root permissions. I then modified the data_directory parameter in postgresql.conf to point to the new location:
data_directory = '/files/postgresql/main'
Then I ran sudo /etc/init.d/postgresql restart and the server failed to start, again probably due to permission issues. Amazon EC2 only lets you log in as ubuntu by default; you can only become root from within the terminal, which makes everything a lot more complicated.
Is there a much cleaner and more efficient step by step way of doing this?
Stop the server.
Copy the datadir while retaining permissions - use cp -aRv.
Then (easiest, as it avoids the need to modify initscripts) just move the old datadir aside and symlink the old path to the new location.
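A minimal sketch of those steps, assuming the paths from the question (and that /files/postgresql exists but /files/postgresql/main does not yet):
# Stop the server first
sudo /etc/init.d/postgresql stop
# Copy the data directory while preserving ownership and permissions
sudo cp -aRv /var/lib/postgresql/8.4/main /files/postgresql/
# Move the old datadir aside and point the old path at the new location
sudo mv /var/lib/postgresql/8.4/main /var/lib/postgresql/8.4/main.old
sudo ln -s /files/postgresql/main /var/lib/postgresql/8.4/main
sudo /etc/init.d/postgresql start
With the symlink in place, the original data_directory setting keeps working unchanged.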
Thanks for the accepted answer. Instead of the symlink you can also use a bind mount; that way it is independent of the file system. If you want to use a dedicated hard drive for the database, you can also mount it directly on the data directory (a bind-mount sketch follows the script below).
I did the latter. Here are my steps if someone needs a reference. I ran this as a script on many AWS instances.
# stop postgres server
sudo service postgresql stop
# create new filesystem in empty hard drive
sudo mkfs.ext4 /dev/xvdb
# mount it
mkdir /tmp/pg
sudo mount /dev/xvdb /tmp/pg/
# copy the entire postgres home dir content
sudo cp -a /var/lib/postgresql/. /tmp/pg
# mount it to the correct directory
sudo umount /tmp/pg
sudo mount /dev/xvdb /var/lib/postgresql/
# see if it is mounted
mount | grep postgres
# add the mount point to fstab
echo "/dev/xvdb /var/lib/postgresql ext4 rw 0 0" | sudo tee -a /etc/fstab
# when database is in use, observe that the correct disk is being used
watch -d grep xvd /proc/diskstats
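And for the bind-mount variant mentioned above, a rough /etc/fstab entry, with paths following the original question:
# Bind-mount the new location onto the old data directory path
/files/postgresql/main  /var/lib/postgresql/8.4/main  none  bind  0  0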
A clarification: it is the particular AMI that you used that sets ubuntu as the default user; this may not apply to other AMIs.
In essence, if you are trying to move data manually, you will probably need to do so as the root user, and then make sure it is available to whatever user postgres is running as.
You also have the option of snapshotting the volume and creating a larger volume from the snapshot. Then you could replace the volume on your instance with the new volume (you will probably have to resize the partition to take advantage of all the space).
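A hedged outline of that snapshot route using the AWS CLI; all IDs, the device name, and the availability zone are placeholders:
# Snapshot the existing data volume, then create a larger volume from that snapshot
aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "pg data before resize"
aws ec2 create-volume --snapshot-id snap-xxxxxxxx --size 16 --availability-zone us-east-1a
# After detaching the old volume and attaching the new one in its place,
# grow the filesystem (run growpart first if the volume is partitioned)
sudo resize2fs /dev/xvdf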