VERITAS: VxVM vxvol ERROR V-5-1-1654 keyword clean not recognized for init operation - solaris

I have installed Veritas Volume Manager on Solaris 10 and I am trying to create volumes from the VM disks. I created the VM disks and then created a disk group with 2 of them. After creating the disk group, I created 3 subdisks (of the same length) on each VM disk. I then created 3 plexes from the 6 subdisks and created a logical volume from the 3 plexes with the command below.
# vxmake -g testgrp -Uraid5 vol testvol1 plex=testplex,testplex-2,testplex-3
But when I run the command below, I can see that the plexes and the volume are disabled.
# vxprint -hg testgrp
I then tried to run the command below before starting the volume, since the volume is empty.
# vxvol -g testgrp init clean testvol1
But at this point I get the error "VxVM vxvol ERROR V-5-1-1654 keyword clean not recognized for init operation".
Could anyone please help me solve this issue? Thanks in advance.

If RAID-5 is the type of volume you want to create, I think you can use the following command:
vxassist -g testgrp make testvol1 <length> layout=raid5 disk1 disk2 disk3 ...
It will take care of creating the appropriate plexes and subdisks on the specified disks in the given disk group (<length> is the volume size, e.g. 10g).
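As a rough sketch of how that plays out, assuming the disk media names in the disk group are testdg01, testdg02 and testdg03 (placeholders; use the names shown by vxdisk list), and skipping the RAID-5 log (nlog=0) for brevity:
vxassist -g testgrp make testvol1 1g layout=raid5 nlog=0 testdg01 testdg02 testdg03
vxprint -hg testgrp
vxassist starts the volume as part of the make, so vxprint should now report it as ENABLED/ACTIVE. If you would rather keep the vxmake-built volume, the error you hit simply means "clean" is not a valid init keyword for a RAID-5 usage-type volume; as far as I remember, vxvol -g testgrp init zero testvol1 (which zeroes the data and initializes parity) or just vxvol -g testgrp start testvol1 brings it online, but verify that against the vxvol man page.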

Related

Installing Postgresql in AWS EC2 CentOS 7 on secondary volume

My AWS EC2 instance has two volumes, primary and secondary, with the secondary volume being larger. I am looking to install Postgres on this EC2 instance. As the database gets used, I anticipate it will outgrow the primary volume. So:
1 - How can I install it such that the database sits on the secondary volume? I am referencing this article for installation. In particular, the following command installs it on the primary volume:
sudo yum install postgresql postgresql-server postgresql-devel postgresql-contrib postgresql-docs
2 - Is it advisable to install it on the secondary volume? If not, why?
Thanks.
1 - How can I install it such that the database sits on the secondary volume?
See the documentation; basically, you can initialize a database in any folder:
https://www.postgresql.org/docs/13/app-initdb.html
Example:
initdb -D /mnt/data
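Once the cluster is initialized there, the server is started against the same directory; a minimal example (the log file path is only an illustration, and this assumes pg_ctl is on the postgres user's PATH):
pg_ctl -D /mnt/data -l /mnt/data/server.log start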
2 - Is is advisable to install it on the secondary volume? If no, why?
Sure, it's easier to maintain and resize a non-root volume.
Regardless, on AWS you could also consider running AWS RDS, where a lot of maintenance tasks (e.g. storage auto-scaling) are offloaded to AWS.
The standard pattern I see for this is to install the Postgres packages the normal way to the normal place, and then set the Postgres data directory to a mountpoint on a different volume. This separates the Postgres application files (which stay on the same volume as the rest of the OS filesystem) from the Postgres data (which lives on the secondary volume). It can be advisable for a few reasons: isolating database disk usage from system disk usage is a good one; another is being able to scale throughput and size independently and to monitor usage independently.
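A minimal sketch of that pattern on a CentOS 7 EC2 instance, assuming the secondary volume shows up as /dev/xvdb and is mounted at /var/lib/pgsql-data (both names are assumptions; check lsblk for the real device):
sudo mkfs -t ext4 /dev/xvdb
sudo mkdir -p /var/lib/pgsql-data
sudo mount /dev/xvdb /var/lib/pgsql-data
sudo chown postgres:postgres /var/lib/pgsql-data
sudo -u postgres initdb -D /var/lib/pgsql-data
Then point the service at the new location, e.g. data_directory = '/var/lib/pgsql-data' in postgresql.conf (or PGDATA in the systemd unit), and add the mount to /etc/fstab so it persists across reboots.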

Spoof free space available in Docker or tricking Postgres and RabbitMQ

I'm using Google Cloud Run to host some solutions. When the containers start, programs can write to disk, and the data persists until the container stops. However, from a system point of view, all partitions of the container always report zero free space. I confirmed this in a few ways:
Running df from start.sh shows zero free space when the container starts
Deleting a large file and then running df from start.sh still shows zero free space
It is possible to write to disk via start.sh, PHP scripts, etc., so the system DOES have space to write to (the filesystem is memory-backed), yet df still reports zero free space
(All of the above apply once the container is deployed to Cloud Run. Manually running the same container via docker from Cloud Shell and executing df does report free space.)
The problem is that certain applications perform disk space checks when they start, and they fail to load in Google Cloud Run. For example, MariaDB uses df in its init script, so commenting out these lines makes it possible to add a static yet functional MariaDB instance to a Cloud Run container.
MariaDB made it easy. Now, I'm trying to do the same thing with PostgreSQL and RabbitMQ, but I'm having trouble figuring out how to override their disk space checks. Here are the two options I am considering:
Keep digging through the source of PostgreSQL and RabbitMQ until I find the disk space check and override it. I don't speak Erlang, so this is a pain, and I would have to do it for every application with this issue
Programs are probably using coreutils to determine disk size. I could edit the source and rebuild it as part of my Dockerfile routine so the system always reports free space available (this could have unintended side effects); a configuration-level alternative for RabbitMQ is sketched after the error messages below
Is anyone familiar with the source of Postgres or RabbitMQ, or does anyone have a system-wide solution that I could implement to "spoof" the free space available?
EDIT: Here are the error messages given by RabbitMQ and PostgreSQL
RabbitMQ:
{error,{cannot_log_to_file,"/var/log/rabbitmq/rabbit#localhost.log",{error,einval}}}
Postgres:
Error: /usr/lib/postgresql/10/bin/pg_ctl /usr/lib/postgresql/10/bin/pg_ctl start -D /var/lib/postgresql/10/main -l /var/log/postgresql/postgresql-10-main.log -s -o -c config_file="/etc/postgresql/10/main/postgresql.conf" exited with status 1:
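One more angle on the RabbitMQ side (an assumption on my part, not something tested on Cloud Run): its disk space check is the disk alarm, whose threshold is configurable, so lowering it in rabbitmq.conf may sidestep patching the Erlang source:
disk_free_limit.absolute = 1
Note that the {error,einval} above is a cannot_log_to_file failure rather than a failed space check, so the log target may need attention as well.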

How listing objects in ceph works

I know that object locations in ceph are computed from the cluster map using the hash of the object. On the other hand, we have commands like this that list objects:
rados -p POOL_NAME ls
How does this command work? Are object names stored somewhere? If yes, is it all in the monitor database? What will happen in ceph when we run this command?
The monitors keep the pool -> PG mapping in their database, so when you run rados -p POOL_NAME ls the client first asks a monitor for the PGs associated with that pool. Each PG has an up/acting set that records the OSDs currently serving that PG. The client then asks the primary OSD of each PG to return the objects within it.
You can find more info in the source code: https://github.com/ceph/ceph/blob/master/src/tools/rados/rados.cc#L2399
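You can watch this mapping yourself with a couple of commands; a quick sketch, where the pool name testpool and the object name obj1 are placeholders:
ceph pg ls-by-pool testpool
ceph osd map testpool obj1
rados -p testpool ls
The first lists the PGs of the pool with their acting OSD sets, the second shows which PG and OSDs a given object name hashes to, and the third then enumerates the objects PG by PG via the primary OSDs.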

ERROR: could not create directory "base/16386": No space left on device

I receive the same error when I try to create a database with
CREATE DATABASE dwh;
and
createdb dwh;
namely:
createdb: database creation failed: ERROR: could not create directory "base/16385": No space left on device
and
ERROR: could not create directory "base/16386": No space left on device
I am using a Postgres AMI on AWS (PostgreSQL/Ubuntu provided by OpenLogic): https://aws.amazon.com/marketplace/ordering/ref=dtl_psb_continue?ie=UTF8&productId=13692aed-193f-4384-91ce-c9260eeca63d&region=eu-west-1
It is provisioned on an m2.xlarge machine, which should have 17 GB RAM and 350 GB of SSD storage.
Based on the description provided, you have not mapped your Postgres /data directory to your actual 350GB partition.
If you are running a production server, first of all try to clean up the logs (the pg_log folder) to free disk space and bring the box back to normal operation, AND create a backup of your database.
Run df -h to see disk utilization and lsblk to see what is mounted where. It is highly likely that AWS by default gave you a volume that has not been extended to the full 350GB. You have 2 options:
Add a new disk (take a look at the Ubuntu procedure for adding a new drive) and map it to your Postgres data folder
Try to resize the existing file system with resize2fs; a relevant answer can be found on AskUbuntu. A sketch of both the diagnosis and the resize is below.
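A minimal sketch, assuming the volume shows up as /dev/xvda with the filesystem on partition 1 (the device names are assumptions; check lsblk for yours):
df -h
lsblk
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
growpart (from the cloud-utils / cloud-guest-utils package) grows the partition to fill the underlying volume, and resize2fs then grows the ext4 filesystem to fill the partition.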

On Solaris, how do you mount a second zfs system disk for diagnostics?

I've got two hard disks in my computer, and have installed Solaris 10u8 on the first and OpenSolaris 2010.3 (dev onnv_134) on the second. Both systems use ZFS and were independently created with a zpool named 'rpool'.
While running Solaris 10u8 on the first disk, how do I mount the second ZFS hard disk (at /dev/dsk/c1d1s0) on an arbitrary mount point (like /a) for diagnostics?
If you have upgraded your OpenSolaris zpool to a newer version, for example to take advantage of deduplication, you won't be able to import it from the older Solaris 10u8.
If there is no version issue, you can use this command:
zpool import -f -R /a 3347820847789110672 oldrpool
Replace 3347820847789110672 (the pool id) with the one displayed by "zpool import" run with no other options.
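A sketch of the full cycle, reusing the pool id from the example above (the altroot /a and the temporary name oldrpool are just placeholders):
zpool import
zpool import -f -R /a 3347820847789110672 oldrpool
zfs list -r oldrpool
zpool export oldrpool
Running zpool import with no arguments lists the importable pools and their ids; -R /a keeps the imported pool's mountpoints under /a so they don't collide with the running rpool; importing under the name oldrpool avoids the duplicate 'rpool' name; and zpool export releases the pool again when you are done.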
If you need to mount the pool for diagnostic purposes, it is better to boot from a CD that contains the latest OpenSolaris distribution.