Mount: Permission Denied - Solaris

I am trying to mount a directory from a Solaris SPARC 5.10 machine on a Solaris SPARC 5.11 machine like this:
root> mount S5.10Machine:/mydir /mydir
and I am getting
nfs mount S5.10Machine:/mydir permission denied.
I have given 777 permissions on S5.10Machine:/mydir.

If you are trying to mount an NFS share, make sure the share is defined with the right permissions in /etc/dfs/dfstab on the server. See
man share
for details; the rw option is what grants read-write access.
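For example, a share line in /etc/dfs/dfstab on the 5.10 server might look like this (the client name in rw= is a placeholder):
share -F nfs -o rw=S5.11Machine /mydir
After editing dfstab, running shareall publishes the share.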
And if you want to mount the NFS share, the correct command is:
mount -F nfs S5.10Machine:/mydir /mydir

Related

Solaris 11 smbfs mount utility fails to mount SMB 2.0 and later Windows shares

We are using the smbfs mount utility in Solaris 11 to mount a Windows SMB2 share, but it fails as shown below.
Command (executed as root):
sudo mount -F smbfs -o user=administrator,uid=oracle //win-t370714v98p/TestMnt /mnt/TestMnt
Explanation:
//win-t370714v98p/TestMnt - Windows SMB 2.0 share
/mnt/TestMnt - local mount point on the Solaris 11 server
Error: /usr/lib/fs/smbfs/mount: //win-t370714v98p: login failed: syserr = Connection reset by peer
For SMB 1.0 shares, the smbfs mount utility performs the mount successfully, but it fails for SMB 2.0 and later shares.
Does the Solaris 11 smbfs mount support SMB 2.0?
No, it does not. The maximum protocol version for the client side is SMB1.0, as documented in https://docs.oracle.com/cd/E88353_01/html/E37852/smb-5.html.

NFS mount occasionally displays the wrong path

We use the command
mount -t nfs -o nfsvers=3,rw,bg,soft,nointr,rsize=262144,wsize=262144,tcp,actimeo=0,timeo=600,retrans=3,nolock,sync ip1:/pathA /mnt
and the mount succeeds. But when we check with mount, it displays ip1:/pathB as mounted on /mnt.
However, pathB is a file system that was created a long time ago and no longer exists.
We guess there is some cache on the NFS server?
The NFS client is CentOS Linux release 7.6.1810 (Core).
We want to know the root cause.
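A few commands that might help narrow down whether the stale name comes from the client or the server (ip1 is the placeholder server from the question):
# What the client kernel itself thinks is mounted on /mnt:
grep ' /mnt ' /proc/mounts
# Per-mount NFS details as seen by the client:
nfsstat -m
# What the server currently exports:
showmount -e ip1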

Mounting NFS volume in Google Container Engine with Container OS (COS)

After migrating the node image type from container-vm to cos for a GKE cluster, it no longer seems possible to mount an NFS volume for a pod.
The problem seems to be missing NFS client libraries, as a mount command from the command line fails on all COS versions I tried (cos-stable-58-9334-62-0, cos-beta-59-9460-20-0, cos-dev-60-9540-0-0).
sudo mount -t nfs mynfsserver:/myshare /mnt
fails with
mount: wrong fs type, bad option, bad superblock on mynfsserver:/myshare,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
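One quick way to confirm what the error hints at, assuming a shell on the COS node:
# mount -t nfs needs a mount.nfs helper binary, which COS does not ship:
ls /sbin/mount.nfs /usr/sbin/mount.nfs 2>/dev/null || echo "no mount.nfs helper found"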
But this contradicts the supported volume types listed here:
https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support
Mounting a NFS volume in a pod works in a pool with image-type container-vm but not with cos.
With cos I get the following messages from kubectl describe pod:
MountVolume.SetUp failed for volume "kubernetes.io/nfs/b6e6cf44-41e7-11e7-8b00-42010a840079-nfs-mandant1" (spec.Name: "nfs-mandant1") pod "b6e6cf44-41e7-11e7-8b00-42010a840079" (UID: "b6e6cf44-41e7-11e7-8b00-42010a840079") with: mount failed: exit status 1
Mounting command: /home/kubernetes/containerized_mounter/mounter
Mounting arguments: singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1 nfs []
Output: Mount failed: Mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1]
Output: mount.nfs: Failed to resolve server singlefs-1-vm: Temporary failure in name resolution
Martin, are you setting up the mounts manually (executing mount yourself), or are you letting kubernetes do it on your behalf via a pod referencing an NFS volume?
The former will not work; the latter will. As you've discovered, COS does not ship with NFS client libraries, so GKE works around this by setting up a chroot (at /home/kubernetes/containerized_mounter/rootfs) with the required binaries and calling mount inside it.
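For the "letting Kubernetes do it" route, a minimal pod spec referencing an NFS volume might look like this (the pod/container names and image are made up; the server and path come from the question's logs):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  volumes:
  - name: nfs-vol
    nfs:
      server: singlefs-1-vm
      path: /data/mandant1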
I've nicked the solution #saad-ali mentioned above, from the kubernetes project, to make this work.
To be concrete, I've added the following to my cloud-config:
# This script creates a chroot environment containing the tools needed to mount an nfs drive
write_files:
- path: /tmp/mount_config.sh
  permissions: 0755
  owner: root
  content: |
    #!/bin/sh
    set -x # For debugging
    export USER=root
    export HOME=/home/dockerrunner
    mkdir -p /tmp/mount_chroot
    chmod a+x /tmp/mount_chroot
    cd /tmp/
    echo "Sleeping for 30 seconds because toolbox pull fails otherwise"
    sleep 30
    toolbox --bind /tmp /google-cloud-sdk/bin/gsutil cp gs://<uploaded-file-bucket>/mounter.tar /tmp/mounter.tar
    tar xf /tmp/mounter.tar -C /tmp/mount_chroot/
    mount --bind /tmp/mount_chroot /tmp/mount_chroot
    mount -o remount,exec /tmp/mount_chroot
    mount --rbind /proc /tmp/mount_chroot/proc
    mount --rbind /dev /tmp/mount_chroot/dev
    mount --rbind /tmp /tmp/mount_chroot/tmp
    mount --rbind /mnt /tmp/mount_chroot/mnt
The uploaded-file-bucket contains the chroot image the Kubernetes team has created, downloaded from: https://storage.googleapis.com/kubernetes-release/gci-mounter/mounter.tar
Then, the runcmd for the cloud-config looks something like:
runcmd:
- /tmp/mount_config.sh
- mkdir -p /mnt/disks/nfs_mount
- chroot /tmp/mount_chroot /bin/mount -t nfs -o rw nfsserver:/sftp /mnt/disks/nfs_mount
This works. Ugly as hell, but it'll have to do for now.

How to Mount Disk for Google Cloud Compute Engine to use with /home?

I have a VM Instance with a small 10GB boot disk running CentOS 7 and would like to mount a larger 200GB Persistent Disk to contain data relating to the /home directory from a previous dedicated server (likely via scp).
Here's what I tried:
Attempt #1, Symlinks. Might work, but some questions.
mounted the disk to /mnt/disks/my-persistent-disk
created folders on the persistent disk that mirror the folders in the old server's /home directory.
created a symlink in the /home directory for each folder, pointing to the persistent disk (see the sketch after this list).
scp'd from the old server to /home/example_account on the VM for the first account. Realized scp does not follow symlinks (oops), so the files went to the boot drive instead of the disk.
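A sketch of the first three steps, assuming the persistent disk appears as /dev/sdc (as in attempt #2) and a single account named example_account:
# Mount the persistent disk (device name is an assumption):
sudo mkdir -p /mnt/disks/my-persistent-disk
sudo mount -o discard,defaults /dev/sdc /mnt/disks/my-persistent-disk
# Mirror one of the old /home folders on the disk and symlink it into /home:
sudo mkdir -p /mnt/disks/my-persistent-disk/example_account
sudo ln -s /mnt/disks/my-persistent-disk/example_account /home/example_account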
I suppose I could scp to /mnt/disks/my-persistent-disk and manage the symlinks and folders. Would this pose a problem? Would making an image of the VM with this configuration carry over to new instances (with autoscaling etc)?
Attempt #2, Mounting into /home.
Looking for a more 'natural' configuration that works with ftp, scp, etc., I mounted the disk at /home/example_account:
$ sudo mkdir -p /home/example_account
$ sudo mount -o discard,defaults /dev/sdc /home/example_account
$ sudo chmod a+w /home/example_account
# set the UUID for mounting at startup (see the fstab sketch after this listing)
$ sudo blkid /dev/sdc
$ sudo nano /etc/fstab
$ lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0   10G  0 disk
└─sda1     8:1    0   10G  0 part /
sdc        8:32   0  200G  0 disk /home/example_account
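The fstab entry itself is not shown above; a sketch of what it might look like, assuming an ext4 filesystem (the UUID is a placeholder for the value blkid prints, and nofail is an optional safeguard, not something from the question):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /home/example_account ext4 discard,defaults,nofail 0 2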
scp from the old server to /home/example_account on the VM works fine. Yay. However, I would like to have more than just one folder in the /home directory. I suppose I could partition the disk, but this feels a bit cumbersome and I'm not exactly sure how many accounts I will use in the future.
Attempt #3, Mount as /home
I felt the best solution was to have the persistent disk mount as the /home directory. This would allow for easily adding new accounts within /home without symlinks or disk partitions.
Attempted to move the /home directory to /home.old, but realized Google Cloud Compute Engine would not allow it since I was logged into the system.
Changed to the root user, but it still said myusername@instance was logged in and using the /home directory. As root, I issued pkill -KILL -u myusername and the SSH session terminated - apparently how Google Cloud Compute Engine works with their SSH windows.
As I cannot change the /home directory, this method does not seem viable unless there is a workaround.
My thoughts:
Ideally, I think #3 is the best solution, but perhaps there is something I'm missing (a #4 solution), or one of the above approaches is preferable, perhaps with better execution.
My question:
In short, how do I move an old server's data to a Google Cloud VM with a persistent disk?

Mounting WD My Cloud in Raspbian fails

I am trying to mount a folder of my WD My Cloud NAS in Raspbian, but I always get Permission denied.
What I've done:
Added the folder on the My Cloud to /etc/exports:
/nfs/raspberry 192.168.178.26(rw,subtree_check)
then
service nfs-kernel-server restart
exportfs -a
On the client, showmount -e 192.168.178.23 shows
/nfs *
/nfs/raspberry 192.168.178.26
Then trying to mount and test:
sudo mount -t nfs -o v3,rw,soft,nolock,wsize=8192,rsize=16384 192.168.178.23:/nfs/raspberry nas/
pi@raspberrypi ~ $ ls nas
ls: cannot open directory nas: Permission denied
It shows that I don't have permission to /nfs/raspberry.
Did I miss something?
The problem was that I had added the folder I wanted to mount with the WD GUI.
When I instead add the folder via SSH, set its owner/group to root/share, and then add the folder to /etc/exports, it works fine.
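A sketch of that fix, run on the My Cloud over SSH (the path, client IP, and export options are taken from the question; the root/share ownership comes from the answer):
# Create the export folder with the ownership the firmware expects:
mkdir -p /nfs/raspberry
chown root:share /nfs/raspberry
# Export it and reload the NFS server:
echo "/nfs/raspberry 192.168.178.26(rw,subtree_check)" >> /etc/exports
exportfs -a
service nfs-kernel-server restart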