External hard disk not detected in my CentOS 8 server?

My CentOS 8 server is not detecting the WD external hard drives, which are mounted under / in separate folders such as PHD10, PHD11, etc.
The fault occurs while the machine is running.
In /etc/fstab, all the disks are listed by UUID and the file system is ext4,
but neither lsblk nor fdisk -l shows those disks.
Can someone help?
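A useful first diagnostic (a general approach, not something from this thread) is to check whether the kernel sees the drives at all; a minimal sketch, assuming the drives are USB-attached and usbutils is installed:
# check whether the USB devices are registered at all
lsusb
# look for recent attach/detach or I/O error messages from the kernel
dmesg | tail -n 50
# list every block device the kernel knows about, with file system details
lsblk -f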

Related

IBM Cloud: How do I see the newly attached block storage in a virtual server?

I created a virtual server but found that the space was too small, so I wanted to add additional disk space to it. After attaching it, I'm not able to see the newly attached disk using df -h in the virtual server. How can I see the newly attached disk?
There are several commands that you can use: fdisk -l and lsblk.
The first disk drive should appear as xvda. Usually there is a second disk drive, xvdb, that is used for swap. Your new disk will appear as xvdc. Note that naming is OS-specific.
You will need to partition the new disk, create a file system, and then mount the file system on a directory. The exact steps are OS-dependent.
Note: the following steps may vary by OS. The example below is specific to an IBM VSI running Ubuntu 18.04 LTS.
First, confirm the disk is attached to the VSI (virtual server instance) and check the disks present on the node with the fdisk command:
fdisk -l | grep xvdc
Create a directory on which the new file system will be mounted:
mkdir /data1
Make the partition using the fdisk command:
fdisk /dev/xvdc
fdisk will prompt for some input:
n # then keep hitting Enter to accept the defaults until the partition is created
Format the partition using the mkfs.ext4 command (note that the target is the new partition, /dev/xvdc1, not the whole disk):
mkfs.ext4 /dev/xvdc1
Mount the partition on the new directory:
mount /dev/xvdc1 /data1
Check the disks and their mount points:
df -h
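To keep the mount across reboots, it would typically also be added to /etc/fstab; a minimal sketch, assuming the partition is /dev/xvdc1 as above:
# find the partition's UUID
blkid /dev/xvdc1
# append an fstab entry, replacing <UUID> with the value blkid reported
echo "UUID=<UUID> /data1 ext4 defaults 0 2" >> /etc/fstab
# verify the entry mounts cleanly
mount -a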

How to change the path of the /tmp folder in Ubuntu 16

I am using Ubuntu 16.04 server. I have a couple of apps that are running, so I can't reboot occasionally. Because of this my /tmp folder is filling up pretty fast, and sometimes I have problems logging in to the server.
I want help with redirecting /tmp to another path, like /mnt, so I will be able to clear it periodically.
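One common approach (my own suggestion, not from this thread) is to bind-mount a directory on a roomier file system over /tmp; a minimal sketch, assuming /mnt has free space:
# create the new tmp location with the sticky, world-writable mode /tmp requires
mkdir -p /mnt/tmp
chmod 1777 /mnt/tmp
# bind-mount it over /tmp; the old contents are hidden, not deleted
mount --bind /mnt/tmp /tmp
# make the bind mount survive reboots
echo "/mnt/tmp /tmp none bind 0 0" >> /etc/fstab
Note that apps already holding open files under the old /tmp will keep using them until they are restarted.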

Kickstart: create lvm volume group without partition

I am trying to set up a Red Hat 7 kickstart for a server with 2 disks.
On the second disk I want to use the whole disk in LVM, without partitioning it.
Once the system is installed, this configuration works:
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -l +100%FREE -n data data
mkfs.xfs /dev/mapper/data-data
echo -e "/dev/mapper/data-data\t/data\txfs\tdefaults\t0 1" >> /etc/fstab
mount /data
But I cannot manage to get that layout to work from kickstart partitioning.
As far as I know, kickstart's partitioning directives will only create a partition on /dev/sdb, so the volume ends up being created on /dev/sdb1.
I managed to work around the issue with a %post script, but I have compiled packages to install into this directory, so I would need the formatting to be done earlier, at least in a %pre script, if partitionless LVM is not possible.
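A minimal sketch of that %pre approach (my own illustration, assuming /dev/sdb is the data disk and that the kickstart excludes it from installer partitioning, e.g. with ignoredisk --drives=sdb):
%pre
# wipe any stale signatures so pvcreate does not refuse the disk
wipefs -a /dev/sdb
# put LVM directly on the whole disk, mirroring the manual steps above
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -l +100%FREE -n data data
mkfs.xfs /dev/mapper/data-data
%end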

How to mount a disk on Google Cloud Compute Engine to use with /home?

I have a VM Instance with a small 10GB boot disk running CentOS 7 and would like to mount a larger 200GB Persistent Disk to contain data relating to the /home directory from a previous dedicated server (likely via scp).
Here's what I tried:
Attempt #1: Symlinks. Might work, but I have some questions.
mounted the disk to /mnt/disks/my-persistent-disk
created folders on the persistent disk that mirror the folders in the old server's /home directory.
created a symlink in the /home directory for each folder, pointing to the persistent disk.
scp'd from the old server to /home/example_account on the VM for the first account. Realized scp does not follow symlinks (oops), and therefore the files went to the boot drive instead of the persistent disk.
I suppose I could scp to /mnt/disks/my-persistent-disk and manage the symlinks and folders. Would this pose a problem? Would making an image of the VM with this configuration carry over to new instances (with autoscaling etc)?
Attempt #2: Mounting into /home.
Looking for a more 'natural' configuration that works with ftp, scp, etc., I mounted the disk at /home/example_account:
$ sudo mkdir -p /home/example_account
$ sudo mount -o discard,defaults /dev/sdc /home/example_account
$ sudo chmod a+w /home/example_account
# set the UUID for mounting at startup
$ sudo blkid /dev/sdc
$ sudo nano /etc/fstab
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /
sdc 8:32 0 200G 0 disk /home/example_account
scp from the old server to /home/example_account on the VM works fine. Yay. However, I would like to have more than just one folder in the /home directory. I suppose I could partition the disk, but this feels a bit cumbersome, and I'm not exactly sure how many accounts I will use in the future.
Attempt #3: Mount as /home.
I felt the best solution was to have the persistent disk mount as the /home directory. This would allow for easily adding new accounts within /home without symlinks or disk partitions.
Attempted to move the /home directory to /home.old, but realized Google Cloud Compute Engine would not allow it since I was logged into the system.
Changed to the root user, but it still said myusername@instance was logged in and using the /home directory. As root, I issued pkill -KILL -u myusername and the SSH session terminated - apparently that is how Google Cloud Compute Engine works with its SSH windows.
As I cannot change the /home directory, this method does not seem viable unless there is a workaround.
My thoughts:
Ideally, I think #3 is the best solution, but perhaps there is something I'm missing (a #4 solution), or one of the approaches above is preferable but needs better execution.
My question:
In short, how do I move an old server's data to a Google Cloud VM with a persistent disk?
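One possible way to make attempt #3 work (my own sketch, not from the thread; it assumes the persistent disk is /dev/sdb, already formatted ext4, and that no users are logged in for the final step):
# stage the disk out of the way and copy the current /home onto it
sudo mkdir -p /mnt/disks/newhome
sudo mount /dev/sdb /mnt/disks/newhome
sudo rsync -a /home/ /mnt/disks/newhome/
# mount the disk as /home at boot; nofail keeps the VM bootable if the disk is detached
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb) /home ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
# after a reboot, /home is served from the persistent disk
sudo reboot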

Docker MongoDB - add a database on disk to a container

I am running Docker on Windows, and I have a database with some entries on disk at C:\data\db.
I want to add this database to my container. I have tried numerous ways to do this but failed.
I tried: docker run -p 27017:27017 -v //c/data/db:/data/db --name mongodb devops-mongodb
In my dockerfile I have:
RUN mkdir -p /data/db
VOLUME /data/db
But this doesn't add my existing on-disk database to the container. It creates a fresh /data/db directory and persists only the data I add to it.
The docs here https://docs.docker.com/userguide/dockervolumes/ under 'Mount a host directory as a data volume' specifically say to use -v //c/data/db:/data/db, but this isn't working.
Any ideas?
You're using Boot2Docker (which runs inside a Virtual Machine). Boot2Docker uses VirtualBox guest additions to make directories on your Windows machine available to Docker running inside the Virtual Machine.
By default, only the C:\Users directory (on Windows) or the /Users/ directory (on OS X) is shared with the Virtual Machine. Anything outside those directories is not shared with the Virtual Machine, which results in Docker creating an empty directory at the specified location for the volume.
To share directories outside C:\Users\ with the Virtual Machine, you have to configure Boot2Docker manually to share them. You can find the steps needed in the VirtualBox guest additions section of the README:
If some other path or share is desired, it can be mounted at run time by doing something like:
$ mount -t vboxsf -o uid=1000,gid=50 your-other-share-name /some/mount/location
It is also important to note that, in the future, the plan is to have any share created in VirtualBox with the "automount" flag turned on be mounted during boot at the directory of the share name (i.e., a share named home/jsmith would be auto-mounted at /home/jsmith).
Please be aware that VirtualBox guest additions have a really bad impact on performance (reading from and writing to the volume will be really slow). That can be fine for development, but should be used with caution.
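Given that default share, the simplest workaround is usually to keep the database somewhere under C:\Users and point the volume there; a sketch, assuming a hypothetical path C:\Users\you\data\db:
docker run -p 27017:27017 -v //c/Users/you/data/db:/data/db --name mongodb devops-mongodb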