Kickstart: create LVM volume group without partition - Red Hat

I am trying to set up a Red Hat 7 kickstart for a server with 2 disks.
On the second disk I want to use the full disk in LVM, without partitioning it.
Once the system is installed, this configuration works:
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -l +100%FREE -n data data
mkfs.xfs /dev/mapper/data-data
echo -e "/dev/mapper/data-data\t/data\txfs\tdefaults\t0 1" >> /etc/fstab
mount /data
But I cannot manage to get this layout to work as expected in the kickstart.
The kickstart partitioning directives, as far as I know, will only create a partition on /dev/sdb, and the volume ends up on /dev/sdb1.
I managed to work around the issue with a %post script, but I have compiled packages that install into this folder, so I would need the formatting to be done earlier, at least in a %pre script, if direct partitioning is not possible.
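For reference, one way this might be wired into the kickstart itself is to keep anaconda away from the second disk and do the whole-disk LVM setup in %pre, leaving only the fstab entry for %post. This is a sketch only, not a verified configuration; it assumes the disks are sda/sdb and that the LVM tools and mkfs.xfs are available in the installer environment:
# keep the installer's partitioning on the first disk only
ignoredisk --only-use=sda
clearpart --all --initlabel --drives=sda
%pre
# whole-disk PV on the second disk, created before package installation
pvcreate -ff -y /dev/sdb
vgcreate data /dev/sdb
lvcreate -l +100%FREE -n data data
mkfs.xfs -f /dev/mapper/data-data
%end
%post
mkdir -p /data
echo -e "/dev/mapper/data-data\t/data\txfs\tdefaults\t0 0" >> /etc/fstab
%end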

Related

Postgresql path and LVM path issues while mapping to directory

I have created an EC2 instance, attached 3 EBS volumes (gp3=3, io1=4gb, io2=4), and mounted them.
I installed PostgreSQL v8.4.18 from source on it and created a database with 2 million sample entries.
The PostgreSQL data directory is /usr/local/pgsql/data.
Now my root volume is 95% full. I created an LVM volume from these 3 EBS volumes with pvcreate, vgcreate, and lvcreate, formatted it with ext4, and mounted it on the directory /usr/local/pgsql.
Now when I try to start PostgreSQL by doing su - postgres and then /usr/local/pgsql/data/bin/pg_ctl -D usr/local/pgsql/data -l logfile start, it does not start and I get the error bash: /usr/local/pgsql/data/bin/pg_ctl: No such file or directory.
And if I do cd /usr/local/pgsql/data, all the data is gone (e.g. the pg config files, logs, hba files, etc.). Also, when I open /home/postgres/usr/local/pgsql/data/postgresql.conf I see a blank page.
If I mount the LVM volume on other directories it works and I can see the conf files etc. I want to mount it on the same directory so that, once my root volume is full, the further sample tables I create are stored on the LVM volume when there is no space on the root volume.
I tried checking the conf files and uninstalled PostgreSQL, but did not get results. I tried the same on PostgreSQL v12 and faced the same error; then I just uninstalled PostgreSQL and re-installed it with its default directory and it worked there, but not on v8.4.18.
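What has most likely happened is that mounting the new (empty) LVM filesystem on /usr/local/pgsql hides the files that already lived there on the root volume; they are not lost, just covered by the mount. A recovery sketch, assuming a hypothetical logical volume /dev/mapper/pgvg-pglv and that PostgreSQL is stopped first:
# stop PostgreSQL, then uncover the original files on the root volume
umount /usr/local/pgsql
# mount the new LV somewhere temporary and copy the data across, preserving permissions
mount /dev/mapper/pgvg-pglv /mnt
cp -a /usr/local/pgsql/. /mnt/
umount /mnt
# now mount the LV on the real path again and start PostgreSQL
mount /dev/mapper/pgvg-pglv /usr/local/pgsql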

IBM Cloud: How do I see the newly attached block storage in a virtual server?

I created a virtual server but found that the space was too small, so I wanted to add additional disk space to it. After attaching it, I'm not able to see the newly attached disk using df -h in the virtual server. How can I see the newly attached disk?
There are several commands that you can use: fdisk -l and lsblk.
The first disk drive should appear as xvda. Usually, there is a second disk drive xvdb that is used for swap. Your new disk will appear as xvdc. Note that naming is OS-specific.
You will need to partition the new disk, create a file system on it, and then mount the file system on a directory. The exact steps are OS-dependent.
Note: the following steps may vary depending on the OS. The example below is specific to an IBM VSI running Ubuntu 18.04 LTS.
First, confirm the disk is attached to the VSI (virtual server instance) and check the disks on the node with fdisk:
fdisk -l | grep xvdc
Create a directory where the new partition will be mounted:
mkdir /data1
Create the partition using fdisk:
fdisk /dev/xvdc
fdisk will prompt for some information:
n # create a new partition, accept the defaults by pressing Enter, then type w to write the partition table.
Format the partition using mkfs.ext4 (the partition created above will typically be /dev/xvdc1):
mkfs.ext4 /dev/xvdc1
Mount the partition on the new directory:
mount /dev/xvdc1 /data1
Check the disk and its mount point:
df -h
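The mount above will not survive a reboot; to make it persistent you would typically add it to /etc/fstab as well. A sketch, with the UUID placeholder to be replaced by the value blkid prints for /dev/xvdc1:
blkid /dev/xvdc1
# replace <uuid-from-blkid> with the UUID printed above
echo "UUID=<uuid-from-blkid> /data1 ext4 defaults,nofail 0 2" >> /etc/fstab
mount -a   # verify the entry mounts cleanly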

How to Mount Multiple CephFS on Client-Node?

I created three CephFS file systems and tried to mount them on a client node, but did not find any way to mount a specific one. I tried
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
But it fails. Is there any other way to mount multiple file systems on a client node with the kernel driver (mount.ceph) or ceph-fuse?
It is possible to select a specific CephFS with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... FUSE client (ceph-fuse)
I am pretty sure that -o mds_namespace did not work due to an old kernel version. If you are using CentOS 7, please test it with ceph-fuse 12.2.4 or a later version using --client_mds_namespace. It worked fine in my environment.
If you are using a Debian-based system, you can install the ceph-fs-common package with apt: apt-get install -y ceph-fs-common.
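For example, the two variants might look like this (a sketch; /etc/ceph/admin.secret is a placeholder for a file containing the client.admin key, and webfs is the file system name from the question - newer kernels also accept fs=<name> instead of mds_namespace, as in the fstab example below):
# kernel driver: pick the file system with mds_namespace
mount -t ceph mon-node:/ /mnt/apachefs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs
# FUSE client: pick the file system with --client_mds_namespace
ceph-fuse --client_mds_namespace=webfs /mnt/apachefs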
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
#/etc/fstab
### one
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### two
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=openshift,noatime,_netdev 0 2

How to Mount Disk for Google Cloud Compute Engine to use with /home?

I have a VM Instance with a small 10GB boot disk running CentOS 7 and would like to mount a larger 200GB Persistent Disk to contain data relating to the /home directory from a previous dedicated server (likely via scp).
Here's what I tried:
Attempt #1: symlinks. Might work, but I have some questions.
Mounted the disk to /mnt/disks/my-persistent-disk.
Created folders on the persistent disk that mirror the folders in the old server's /home directory.
Created a symlink in the /home directory for each folder, pointing to the persistent disk.
Ran scp from the old server to the VM's /home/example_account for the first account. Realized scp does not follow symlinks (oops), and therefore the files went to the boot drive instead of the disk.
I suppose I could scp to /mnt/disks/my-persistent-disk and manage the symlinks and folders. Would this pose a problem? Would making an image of the VM with this configuration carry over to new instances (with autoscaling etc)?
Attempt #2: mounting into /home.
Looking for a more 'natural' configuration that works with ftp, scp, etc., I mounted the disk in /home/example_account:
$ sudo mkdir -p /home/example_account
$ sudo mount -o discard,defaults /dev/sdc /home/example_account
$ sudo chmod a+w /home/example_account
#set the UUID for mounting at startup
$ sudo blkid /dev/sdc
$ sudo nano /etc/fstab
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /
sdc 8:32 0 200G 0 disk /home/example_account
scp from the old server to the VM's /home/example_account works fine. Yay. However, I would like to have more than just one folder in the /home directory. I suppose I could partition the disk, but this feels a bit cumbersome and I'm not exactly sure how many accounts I will use in the future.
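For completeness, the fstab entry for mounting at startup (hinted at by the blkid step above) might look like this - a sketch, assuming the disk was formatted ext4, with the UUID placeholder replaced by the value blkid printed for /dev/sdc:
UUID=<uuid-from-blkid> /home/example_account ext4 discard,defaults,nofail 0 2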
Attempt #3: mount as /home.
I felt the best solution was to have the persistent disk mount as the /home directory. This would allow easily adding new accounts within /home without symlinks or disk partitions.
I attempted to move the /home directory to /home.old but realized Google Cloud Compute Engine would not allow it since I was logged into the system.
I changed to the root user, but it still said myusername#instance was logged in and using the /home directory. As root, I issued pkill -KILL -u myusername and the SSH session terminated - apparently that is how Google Cloud Compute Engine works with its SSH windows.
As I cannot change the /home directory, this method does not seem viable unless there is a workaround.
My thoughts:
Ideally, I think #3 is the best solution, but perhaps there is something I'm missing (a #4 solution), or one of the approaches above is preferable with better execution.
My question:
In short, how do I move an old server's data to a Google Cloud VM with a persistent disk?
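For what it's worth, attempt #3 can usually be made to work by staging the copy while the disk is mounted elsewhere and only mounting it over /home at the end; the old files simply remain on the boot disk underneath the mount. A sketch, assuming the persistent disk is /dev/sdc with an ext4 filesystem, run from a session that does not depend on /home (e.g. root on the serial console):
# stage a copy of the current /home onto the persistent disk
sudo mount -o discard,defaults /dev/sdc /mnt/disks/my-persistent-disk
sudo cp -a /home/. /mnt/disks/my-persistent-disk/
sudo umount /mnt/disks/my-persistent-disk
# from now on the disk backs /home; add a matching UUID-based line to /etc/fstab for boot
sudo mount -o discard,defaults /dev/sdc /home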

How can I move postgresql data to another directory on Ubuntu over Amazon EC2?

We've been running PostgreSQL 8.4 for quite some time. As with any database, we are slowly reaching our space threshold. I added another 8 GB EBS volume, mounted it on our instance, and configured it to work properly on a directory called /files.
Within /files, I manually created
Correct me if I'm wrong, but I believe all PostgreSQL data is stored in /var/lib/postgresql/8.4/main.
I backed up the database and ran sudo /etc/init.d/postgresql stop, which stops the PostgreSQL server. I tried to copy and paste the contents of /var/lib/postgresql/8.4/main into the /files directory, but that turned out to be a HUGE MESS due to file permissions. I had to chmod the contents of that folder just so I could copy and paste them, and some files did not copy fully because of root permissions. I modified the data_directory parameter in postgresql.conf to point to the new directory
data_directory = '/files/postgresql/main'
and I ran sudo /etc/init.d/postgresql restart and the server failed to start, again probably due to permission issues. Amazon EC2 only lets you log in as ubuntu by default; you can only become root from within the terminal, which makes everything a lot more complicated.
Is there a much cleaner and more efficient step-by-step way of doing this?
Stop the server.
Copy the datadir while retaining permissions - use cp -aRv.
Then (easiest, as it avoids the need to modify initscripts) just move the old datadir aside and symlink the old path to the new location.
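Concretely, for the paths in the question that might look like this (a sketch; it assumes the new volume is already mounted at /files):
sudo /etc/init.d/postgresql stop
sudo mkdir -p /files/postgresql/main
# copy the datadir while preserving ownership and permissions
sudo cp -aRv /var/lib/postgresql/8.4/main/. /files/postgresql/main/
# move the old datadir aside and symlink the old path to the new location
sudo mv /var/lib/postgresql/8.4/main /var/lib/postgresql/8.4/main.old
sudo ln -s /files/postgresql/main /var/lib/postgresql/8.4/main
sudo /etc/init.d/postgresql start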
Thanks for the accepted answer. Instead of the symlink you can also use a bind mount; that way it is independent of the file system. If you want to use a dedicated hard drive for the database, you can also mount it directly on the data directory.
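A bind-mount variant of the same move might look like this (a sketch, again assuming the data has already been copied to /files/postgresql/main with permissions intact):
sudo mount --bind /files/postgresql/main /var/lib/postgresql/8.4/main
echo "/files/postgresql/main /var/lib/postgresql/8.4/main none bind 0 0" | sudo tee -a /etc/fstab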
I did the latter (mounting a dedicated drive directly on the PostgreSQL directory). Here are my steps if someone needs a reference. I ran this as a script on many AWS instances.
# stop postgres server
sudo service postgresql stop
# create new filesystem in empty hard drive
sudo mkfs.ext4 /dev/xvdb
# mount it
mkdir /tmp/pg
sudo mount /dev/xvdb /tmp/pg/
# copy the entire postgres home dir content
sudo cp -a /var/lib/postgresql/. /tmp/pg
# mount it to the correct directory
sudo umount /tmp/pg
sudo mount /dev/xvdb /var/lib/postgresql/
# see if it is mounted
mount | grep postgres
# add the mount point to fstab
echo "/dev/xvdb /var/lib/postgresql ext4 rw 0 0" | sudo tee -a /etc/fstab
# when database is in use, observe that the correct disk is being used
watch -d grep xvd /proc/diskstats
A clarification: it is the particular AMI that you used that sets ubuntu as the default user; this may not apply to other AMIs.
In essence, if you are trying to move data manually, you will probably need to do so as the root user and then make sure it is available to whatever user PostgreSQL runs as.
You also have the option of snapshotting the volume and creating a larger volume from the snapshot. Then you can replace the volume on your instance with the new one (you will probably have to resize the partition and filesystem to take advantage of the extra space).
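For the snapshot route, the final resize on a typical Ubuntu instance might look like this (a sketch; it assumes the root filesystem is ext4 on the first partition of /dev/xvda and that growpart from cloud-guest-utils is installed):
sudo growpart /dev/xvda 1     # grow partition 1 to fill the enlarged volume
sudo resize2fs /dev/xvda1     # grow the ext4 filesystem to fill the partition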