How can I transfer the 150 GB of storage from sda2 to the ol-root logical volume on sda2?
lsblk
Thank you for your help!
I tried to extend the root storage.
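For reference, a minimal sketch of the usual LVM approach, assuming the 150 GB is unallocated space inside the sda2 physical volume, the volume group/logical volume pair is ol/root (hence /dev/mapper/ol-root), and the root filesystem is xfs, the Oracle Linux default; adjust the names to whatever lsblk actually shows:
$ sudo pvresize /dev/sda2                         # let LVM see the full size of sda2
$ sudo lvextend -l +100%FREE /dev/mapper/ol-root  # hand all free space to the root LV
$ sudo xfs_growfs /                               # grow the xfs filesystem online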
My CentOS 8 server does not detect the WD external hard drives, which are mounted under / in different folders like PHD10, PHD11, etc.
It faults when the machine is running.
In /etc/fstab the UUID entries list all the disks, and the file system is ext4,
but lsblk and fdisk -l do not show those disks.
Can someone help?
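A hedged diagnostic sketch (standard commands, nothing WD-specific): compare what the kernel actually detects with what /etc/fstab expects, and stop a missing disk from faulting the boot:
$ lsblk -o NAME,UUID,FSTYPE,MOUNTPOINT  # what the kernel sees right now
$ sudo blkid                            # UUIDs of every detected filesystem
$ dmesg | grep -i usb                   # do the external drives enumerate at all?
If a UUID listed in /etc/fstab never shows up in blkid output, adding the nofail mount option to that entry at least keeps the machine booting while you investigate.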
I am learning Ceph storage (Luminous) with one admin node and two nodes for OSD, MON, etc. I am following the doc http://docs.ceph.com/docs/master/start/quick-ceph-deploy/ to set up my initial storage cluster and got stuck after executing the command below. As per the document, the command should output 6 files, but the file "ceph.bootstrap-rbd.keyring" is missing from the admin node directory where I execute the ceph-deploy commands.
ceph-deploy --username sanadmin mon create-initial
I am not sure whether this is normal behaviour or I am really missing something. I appreciate your help on this.
Thanks.
It is not important, because RBD is a native service for Ceph. Do not worry about that.
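If you want the file anyway, a hedged sketch that pulls it from the cluster (run where admin credentials are available; the 'allow profile bootstrap-rbd' capability is an assumption based on the Luminous bootstrap profiles):
$ ceph auth get-or-create client.bootstrap-rbd mon 'allow profile bootstrap-rbd' -o ceph.bootstrap-rbd.keyring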
I have a VM instance with a small 10GB boot disk running CentOS 7 and would like to mount a larger 200GB persistent disk to contain the data from a previous dedicated server's /home directory (transferred likely via scp).
Here's what I tried:
Attempt #1, Symlinks. Might work, but I have some questions.
Mounted the disk at /mnt/disks/my-persistent-disk.
Created folders on the persistent disk that mirror the folders in the old server's /home directory.
Created a symlink in the /home directory for each folder, pointing to the persistent disk.
Ran scp from the old server to /home/example_account on the VM for the first account. Realized scp does not follow symlinks (oops), and therefore the files went to the boot drive instead of the disk.
I suppose I could scp to /mnt/disks/my-persistent-disk and manage the symlinks and folders. Would this pose a problem? Would making an image of the VM with this configuration carry over to new instances (with autoscaling etc)?
Attempt #2, Mounting into /home.
Looking for a more 'natural' configuration that works with ftp, scp, etc., I mounted the disk at /home/example_account:
$ sudo mkdir -p /home/example_account
$ sudo mount -o discard,defaults /dev/sdc /home/example_account
$ sudo chmod a+w /home/example_account
# set the UUID for mounting at startup
$ sudo blkid /dev/sdc
$ sudo nano /etc/fstab
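For reference, a sketch of the resulting fstab entry (the UUID placeholder stands for whatever blkid printed; ext4 is an assumption, and nofail just keeps the instance booting if the disk is ever detached):
UUID=<uuid-from-blkid> /home/example_account ext4 discard,defaults,nofail 0 2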
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /
sdc 8:32 0 200G 0 disk /home/example_account
scp from the old server to /home/example_account on the VM works fine. Yay. However, I would like to have more than just one folder in the /home directory. I suppose I could partition the disk, but this feels a bit cumbersome and I'm not exactly sure how many accounts I will use in the future.
Attempt #3, Mount as /home
I felt the best solution was to have the persistent disk mount as the /home directory. This would allow for easily adding new accounts within /home without symlinks or disk partitions.
Attempted to move the /home directory to /home.old but realized Google Cloud Compute Engine would not allow it since I was logged into the system.
Changed to the root user, but it still said myusername@instance was logged in and using the /home directory. As root, I issued pkill -KILL -u myusername and the SSH session terminated - apparently this is how Google Cloud Compute Engine works with its SSH windows.
As I cannot change the /home directory, this method does not seem viable unless there is a workaround.
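One hedged workaround sketch, assuming the 200GB disk from attempt #2 and that a brief reboot is acceptable (the UUID placeholder stands for the value blkid reports): pre-populate the disk with the current /home contents, point /etc/fstab at /home, and let the mount happen at boot, before any SSH session is holding /home open.
$ sudo mount -o discard,defaults /dev/sdc /mnt/disks/my-persistent-disk
$ sudo rsync -a /home/ /mnt/disks/my-persistent-disk/  # copy the existing home dirs
$ echo 'UUID=<uuid-from-blkid> /home ext4 discard,defaults,nofail 0 2' | sudo tee -a /etc/fstab
$ sudo reboot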
My thoughts:
Ideally, I think #3 is the best solution, but perhaps there is something I'm missing (a #4 solution), or perhaps one of the above approaches is preferable with better execution.
My question:
In short, how do I move an old server's data to a Google Cloud VM with a persistent disk?
I am following this guide to set up my cluster. It all works fine.
However, when I install fabric8 in this cluster, I run out of disk on the minions. The image, kube.vmdk, is only about 6GB. It is /var/lib/docker that fills up. How do I solve this?
In the VMware GUI, the option to resize the disk is greyed out.
Should I attach a second disk to the minions and then mount this disk? Where should I mount it? /var/lib/docker?
I would appreciate any input.
Docker's images are stored in /var/lib/docker (more precisely, in the storage driver's directory, e.g. /var/lib/docker/aufs when using the aufs storage driver), so when Kubernetes reports that the disk is filling up, it is checking that directory.
So you can (a sketch of the commands follows this list):
Remove all the images in Docker (not strictly necessary; you can instead copy everything over to the new disk).
Stop the Docker daemon.
Mount your new disk at /var/lib/docker (or at the storage driver's directory, e.g. /var/lib/docker/aufs).
Start the Docker daemon.
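A hedged sketch of those steps (assuming a systemd-based node and a new disk that shows up as /dev/sdb1; both are assumptions, so substitute your device and use service docker stop/start on non-systemd nodes):
$ sudo systemctl stop docker                         # stop the Docker daemon
$ sudo mv /var/lib/docker /var/lib/docker.old        # keep the old contents for now
$ sudo mkdir /var/lib/docker
$ sudo mount /dev/sdb1 /var/lib/docker               # mount the new disk
$ sudo cp -a /var/lib/docker.old/. /var/lib/docker/  # optional: carry the old data over
$ sudo systemctl start docker                        # start the Docker daemon
Remember to add a matching /etc/fstab entry so the mount survives a reboot.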
If you are not sure which storage driver your Docker is using, type docker info on your node; you will get something containing this:
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 139
Dirperm1 Supported: true
It seems that you have run out of disk space. You can remove all the files in /var/lib/docker and mount the second disk there. Finally, you need to restart dockerd.
I am trying to reproduce the following partition layout:
/
/var
/usr
/tmp
/home
[PGDATA]
[PGDATA]pg-xlog
[PGDATA]base
[PGDATA] / [tablespace]
[PGDATA] xlog-archive
but I'm a bit confused by [PGDATA]. I'm using a virtual Ubuntu Server 14.04 host and I'm stuck at the HDD partitioning stage. Any references to read about this?
Thanks in advance
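For what it's worth, [PGDATA] is just PostgreSQL's data directory, not a special device; on Ubuntu 14.04 the packaged default is /var/lib/postgresql/9.3/main. So the layout above is built from ordinary mount points, and a hedged sketch of the corresponding fstab entries could look like this (the device names are assumptions, and pg-xlog corresponds to the pg_xlog directory inside the data directory):
/dev/sdb1 /var/lib/postgresql ext4 defaults,noatime 0 2
/dev/sdc1 /var/lib/postgresql/9.3/main/pg_xlog ext4 defaults,noatime 0 2
PostgreSQL itself only sees directories, so which of them are separate partitions is entirely up to the partitioning stage of the installer.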