How to inspect drive for zfs? - ubuntu-16.04

I have one drive with Ubuntu 16.04 and two drives making up a ZFS pool. I moved all three drives to another computer and now cannot see the ZFS pool on the two drives: sudo zpool list returns nothing, and every zpool import I try complains about:
cannot import '/dev/disk/ata-ST340014AS_5MQ40HNH-part1': no such pool available
Any idea how I can re-import the ZFS pool? Thanks

You seem to have tried to import a disk instead of the pool.
Reboot, then try zpool import without arguments to list all pools that are found (the reboot rules out disk-recognition errors from hot-plugging the disks). You should get a list of importable pools, which you can then import with zpool import <id> or zpool import <name>.
If your export/shutdown on the old system was unclean, you might have to add -f, but try it without it first.
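A minimal sketch of the usual sequence; the pool name "tank" and the numeric id here are placeholders for whatever your own listing actually shows:
sudo zpool import                        # scan all disks and list the pools that can be imported
sudo zpool import tank                   # import by name ...
sudo zpool import 1234567890123456789    # ... or by the numeric id shown in the listing
sudo zpool import -d /dev/disk/by-id     # point the scan at a specific device directory if nothing shows up
sudo zpool import -f tank                # last resort if the pool was never cleanly exported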

Related

Spoof free space available in Docker or tricking Postgres and RabbitMQ

I'm using Google Cloud Run to host some solutions. When the containers start, programs can write to disk, and the data persists until the container stops. However, from a system point of view, all partitions of the container always report zero free space. I confirmed this in a few ways:
Running df from start.sh shows zero free space when the container starts
Deleting a large file and then running df from start.sh still shows zero free space
It is possible to write to disk from start.sh, PHP scripts, etc., so the system DOES have space available to write to, yet df still reports zero free space
(All of the above are once the container is deployed to Cloud Run. Manually running the same container via docker from the Cloud Shell and executing df reports free space).
The problem is that certain applications perform disk space checks when they start, and they fail to load in Google Cloud Run. For example, MariaDB uses df in its init script, so commenting out these lines makes it possible to add a static yet functional MariaDB instance to a Cloud Run container.
MariaDB made it easy. Now, I'm trying to do the same thing with PostgreSQL and RabbitMQ, but I'm having trouble figuring out how to override their disk space checks. Here are the two options I am considering:
Keep digging through the source of PostgreSQL and RabbitMQ until I find the disk space check and override it. I don't speak Erlang, so this is a pain, and I would have to do it for every application with this issue
Programs are probably using coreutils to determine disk size. I could edit the coreutils source and rebuild it as part of my Dockerfile routine so that df always reports free space available (which could have unintended side effects)
Is anyone familiar with the source of Postgres or RabbitMQ, or does anyone have a system-wide solution I could implement that would "spoof" the free space available?
EDIT: Here are the error messages given by RabbitMQ and PostgreSQL
RabbitMQ:
{error,{cannot_log_to_file,"/var/log/rabbitmq/rabbit#localhost.log",{error,einval}}}
Postgres:
Error: /usr/lib/postgresql/10/bin/pg_ctl /usr/lib/postgresql/10/bin/pg_ctl start -D /var/lib/postgresql/10/main -l /var/log/postgresql/postgresql-10-main.log -s -o -c config_file="/etc/postgresql/10/main/postgresql.conf" exited with status 1:
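Before patching PostgreSQL or RabbitMQ source, one cheap thing to try is shadowing the df binary itself, on the assumption that the failing check shells out to df rather than calling statvfs() directly (RabbitMQ's disk monitor does use df on Unix; the einval logging error above may be a separate problem). A rough, untested sketch for the image build, with made-up numbers:
# move the real df aside and install a tiny wrapper that always reports plenty of free space
# (the wrapper ignores whatever flags and paths the caller passes)
mv /bin/df /bin/df.real
cat > /bin/df <<'EOF'
#!/bin/sh
echo "Filesystem     1K-blocks    Used Available Use% Mounted on"
echo "overlay         10485760 1048576   9437184  10% /"
EOF
chmod +x /bin/df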

How listing objects in ceph works

I know that object locations in ceph are computed from the cluster map using the hash of the object. On the other hand, we have commands like this that list objects:
rados -p POOL_NAME ls
How does this command work? Are object names stored somewhere? If yes, is it all in the monitor database? What will happen in ceph when we run this command?
Monitors keep the pool -> PG map in their database, so when you run rados -p POOL_NAME ls the client first asks a monitor for the PGs associated with that pool. Each PG has an up/acting set that records the OSDs currently serving that PG. The client then asks each PG, on its primary OSD, to return the objects stored in it.
You can find more details in the source code: https://github.com/ceph/ceph/blob/master/src/tools/rados/rados.cc#L2399
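If you want to watch that mapping by hand, a few read-only commands cover each step the answer describes; POOL_NAME and OBJECT_NAME are placeholders:
ceph pg ls-by-pool POOL_NAME        # the PGs belonging to the pool, with their up/acting OSD sets
ceph osd map POOL_NAME OBJECT_NAME  # which PG (and which OSDs) a given object name hashes to
rados -p POOL_NAME ls               # enumerate the objects, PG by PG, via each primary OSD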

Where is the DATA_PUMP_DIR in SQL Developer

I'm trying to import a .dmp file using the Data Pump Import tool in oracle sql developer.
I'm connected to an oracle database running in a container on my local machine.
When I get to the step where I specify the dump file to import, where should I place the .dmp file?
DATA_PUMP_DIR is a default Oracle directory object. It isn't part of SQL Developer; the import tool is really just giving you a GUI equivalent of running impdp from the command line.
You can find the operating system location that Oracle directory object points to by querying the data dictionary:
select directory_path from all_directories where directory_name = 'DATA_PUMP_DIR';
The path that query returns is on the database server (in your case that will be inside your container too), and your dump file needs to go there.
You might want to create additional directory objects pointing to other locations, and grant users suitable privileges to access them; but they all need to be on the DB server and readable/writable by the Oracle process owner on that server.
(They could be remote filesystems mounted on the server rather than local storage, but that's another issue and more operating-system specific. Again, in your case you might be able to share a folder on your local machine with the container if you don't want to copy the file into the container.)
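For example, with a typical Oracle database container image the host-side steps could look roughly like this; the container name oracle-db, the dpdump path, the imp_dir location and the user scott are all placeholders, so run the query above to get the real DATA_PUMP_DIR path first:
# copy the dump file from the host into the path the DATA_PUMP_DIR query reported
docker cp export.dmp oracle-db:/opt/oracle/admin/ORCLCDB/dpdump/

# or create a separate directory object and grant it to the importing user
# (the same SQL can be run from a SQL Developer worksheet as a privileged user;
#  the '/tmp/import' path must already exist inside the container)
docker exec -i oracle-db sqlplus -s / as sysdba <<'SQL'
create or replace directory imp_dir as '/tmp/import';
grant read, write on directory imp_dir to scott;
SQL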

postgresql initdb - directory not empty

I am installing PostgreSQL 8.4 on an Ubuntu Lucid server. (Yes, we are still on the "lucid" LTS release on that server, so an upgrade is not possible yet, although we are going to start testing the system on Precise quite soon.)
I have set up a dedicated partition for the /var/lib/postgresql/8.4/main directory with an ext4 file system. (Those of you who are really into Postgres installs know what is coming...) Since ext4 puts a lost+found directory in the root of every file system, initdb will not use that directory as the data directory, because it is initially not empty:
initdb: directory "/var/lib/postgresql/8.4/main" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/8.4/main" or run initdb
with an argument other than "/var/lib/postgresql/8.4/main".
The easiest way to proceed would be to remove lost+found and recreate it after initdb has done its job. Could that cause any problems? Does lost+found have any special attributes or anything else that makes it impossible to recreate, and is it needed at any time other than when fsck finds something it needs to put there?
Another way would be to run initdb with the .../main/ filesystem unmounted, temporarily mount that filesystem somewhere else, move the newly created files over, and then mount it back in place. That seems to be a bit more work than the "easiest way".
Or is there some way to make initdb ignore that the directory is not empty? (I couldn't see any command-line switch for that.)
Could a lost+found directory inside the Postgres main directory cause any problems?
At the moment I am running the system on a virtual machine for testing, so it really doesn't matter if I mess things up, but before making this the official way of installing a mission-critical system, it would be nice to hear some thoughts on it.
lost+found has preallocated blocks that make it easier for fsck to move recovered data into it when the partition is short on free blocks. To recreate it, it is better to use the mklost+found command rather than mkdir.
If you don't recreate it, fsck will do it anyway when it's needed.
But if it comes to the point where fsck finds corruption within PGDATA, I'd think about going for a backup rather than counting on lost+found to retrieve anything.
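If you go with the "easiest way", the whole dance would look roughly like this; the initdb path is the usual Debian/Ubuntu location, and the mount point must already be owned by the postgres user:
cd /var/lib/postgresql/8.4/main
rmdir lost+found            # it is empty on a freshly created filesystem, so rmdir is enough
sudo -u postgres /usr/lib/postgresql/8.4/bin/initdb -D /var/lib/postgresql/8.4/main
mklost+found                # from e2fsprogs; recreates lost+found with its preallocated blocks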

On Solaris, how do you mount a second zfs system disk for diagnostics?

I've got two hard disks in my computer, and have installed Solaris 10u8 on the first and OpenSolaris 2010.3 (dev onnv_134) on the second. Both systems use ZFS and were independently created with a zpool named 'rpool'.
While running Solaris 10u8 on the first disk, how do I mount the second ZFS hard disk (at /dev/dsk/c1d1s0) on an arbitrary mount point (like /a) for diagnostics?
If you have upgraded your OpenSolaris zpool to a newer version, for example to take advantage of deduplication, you won't be able to import it from Solaris 10u8, because the older release doesn't understand the newer pool version.
If there is no version issue, you can use this command:
zpool import -f -R /a 3347820847789110672 oldrpool
Replace 3347820847789110672 (the pool id) with the one displayed by "zpool import" run with no other options. The trailing oldrpool imports the pool under that new name, so it doesn't clash with the already-active rpool, and -R /a mounts its datasets relative to /a.
If you just need to mount the pool for diagnostic purposes, it is better to boot from a CD that contains the latest OpenSolaris distribution.
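Put together, a diagnostic session from the running Solaris 10u8 side would look something like this (the numeric id is the example from the answer; substitute the one your own listing shows):
zpool import                                          # list importable pools with their names and ids
zpool import -f -R /a 3347820847789110672 oldrpool    # import under /a, renamed so it cannot clash with the live rpool
zfs list -r oldrpool                                  # inspect the datasets ...
zpool export oldrpool                                 # ... and detach cleanly when finished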