I have successfully installed FreeBSD onto a raw image file using the QEMU emulator, and I formatted the image with ZFS (a ZFS pool).
Using the commands below I have successfully mapped the image file's partitions so the pool is ready to be imported with zpool:
sudo losetup /dev/loop0 [path-to-file].img
sudo kpartx -l /dev/loop0
sudo kpartx -av /dev/loop0
However, with the next command shown below,
sudo zpool import -R [MOUNT-PATH] -d /dev/mapper
I get the following error message:
The pool can only be accessed in read-only mode on this system. It
cannot be accessed in read-write mode because it uses the following
feature(s) not supported on this system:
com.delphix:spacemap_v2 (Space maps representing large segments are more efficient.)
The pool cannot be imported in read-write mode. Import the pool with
"-o readonly=on", access the pool on a system that supports the
required feature(s), or recreate the pool from backup.
I cannot find anything online about the feature called 'spacemap_v2'. How do I install it, or how can I mount my ZFS pool so that it is writable? I know I can import it read-only, but that defeats the purpose: I want to be able to copy data onto the pool once it is mounted.
Does anyone know how to achieve this? I would be grateful for a response.
Regards
What version of FreeBSD are you using? And where did this ZFS pool come from?
I'm guessing it's a ZFS on Linux pool which, as the message says, uses a feature that FreeBSD's ZFS doesn't currently support.
The only way around it at the moment is to create another pool without that feature on a system that does support it, zfs send the data to the new pool, and then import that pool into FreeBSD.
Note FreeBSD is going to support this feature Soon(tm).
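A rough sketch of that send/receive workaround, assuming a ZFS on Linux box where the original pool imports read-write; the pool names, snapshot name and target disk are placeholders:
# Create a new pool with the unsupported feature disabled (example device)
zpool create -o feature@spacemap_v2=disabled newpool /dev/sdb
# Snapshot the source pool recursively and replicate it to the new pool
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool
# Export the new pool so it can be imported on the FreeBSD side
zpool export newpool
After that, importing newpool on FreeBSD should work read-write, since the new pool never enabled spacemap_v2.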
I changed the mongod.conf.orig of the MongoDB instance running on ECS, but when I restart it, the changes are gone.
Here are the details:
I have a MongoDB instance running on ECS that keeps crashing because it runs out of memory.
I have found the reason: I set the ECS memory to 8 GB, but because MongoDB runs in a container, it detected more memory than that.
When I run db.hostInfo()
I get a memSizeMB higher than 16 GB.
As a result, when I run db.serverStatus().wiredTiger.cache
I get a "maximum bytes configured" higher than 8 GB,
so I need to reduce wiredTigerCacheSizeGB in the config file.
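The change I want to make in mongod.conf would look something like this (4 GB is just an example value for an 8 GB task; the rest of the file is unchanged):
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4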
I used the command copilot svc exec -c /bin/sh -n mongo to connect to it.
Then I found a file named mongod.conf.orig.
I ran apt-get install vim to install an editor and edited mongod.conf.orig.
But after I restart the mongo task, all my changes are gone, including the vim I just installed.
Did anyone meet the same problem? Any information will be appreciated.
ECS containers have ephemeral storage. In your case, you could create an EFS file system, mount it in the container, and keep the configuration there.
If you use CloudFormation, look at mount points.
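A minimal sketch with the AWS CLI (the creation token, file system, subnet and security-group IDs are placeholders):
# Create the EFS file system that will hold the persistent MongoDB config
aws efs create-file-system --creation-token mongo-config
# Make it reachable from the subnet the ECS tasks run in (placeholder IDs)
aws efs create-mount-target --file-system-id fs-12345678 --subnet-id subnet-12345678 --security-groups sg-12345678
In the task definition (or the CloudFormation AWS::ECS::TaskDefinition resource) you would then declare a volume backed by that file system ID and a matching mount point in the container, for example over /etc/mongo, and point mongod at a config file stored there.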
On a Proxmox machine I noticed that backups of some VMs were failing, so I wanted to investigate.
While testing, the whole host stopped responding and I forced a reboot.
After the reboot I seem to have lost the whole data store.
Almost every zfs command results in a freeze.
zpool status, zpool list, you name it: it locks up and you can't even Ctrl-C out of it.
I can still create a new SSH session and try other things though.
In an attempt to see what is causing the commands to hang I thought about running
zpool set failmode=continue storage-vm
hoping it will show me an error, but as you can guess, that command also hangs.
It's a pool created on two nvme drives. The original command to create the pool was
zpool create -f -o ashift=12 storage-vm /dev/nvme0n1 /dev/nvme1n1
The first thing I thought was that one of the NVMe drives had gone bad, so I checked the SMART status, but it shows both drives as perfectly healthy.
Then, before trying other stuff, I decided to back up the drives to an NFS share with the dd command.
dd if=/dev/nvme0n1 of=/mnt/pve/recovery/nvme0n1
dd if=/dev/nvme1n1 of=/mnt/pve/recovery/nvme1n1
Both commands completed, and on the NFS share I now have two images of exactly the same size (2 TB each).
Then I did a non-destructive read/write test with dd on both NVMe drives and got no errors.
To rule out as much as possible, I built another Proxmox machine from spare hardware (same brand and type, etc.) and placed the drives in there.
On the new machine all zpool commands also hang. If I run zpool status with the drives removed from the motherboard, it does not hang, but obviously it has nothing to show.
So I placed the nvme's back in the original machine.
zdb -l /dev/nvme0n1 gives
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
which kind of worries me. It does the same for the other NVMe drive.
And now I'm running out of ideas. I have little knowledge of ZFS and don't know what can be done to save the data.
Obviously, the drives are not really dead, as SMART tells me they are healthy and I can dd an image from them.
Things like faulty RAM or a faulty motherboard are pretty much ruled out as well by the hardware swap.
Is there a way to recover at least some VM's from that storage?
Help/pointers will be greatly appreciated.
The issue was eventually solved and this is what I did.
Since the pool was made out of two NVMe drives, I created two loop devices from the dd images.
losetup -fP /mnt/pve/recovery/nvme0n1
losetup -fP /mnt/pve/recovery/nvme1n1
You can check the attached loop devices with lsblk and detach them with losetup -d /dev/loop[X].
Finally, I imported the pool from the loop devices in read-only mode and was able to access/recover all my data:
zpool import -f -d /dev/loop0p1 -d /dev/loop1p1 -o readonly=on storage-vm
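Note that the import points at the p1 partitions: when ZFS on Linux is given whole disks it partitions them, so the labels live on the first partition, which is presumably also why zdb -l against the raw /dev/nvme0n1 found nothing.
Once the pool is imported read-only, the Proxmox VM disks show up as zvols under the pool and can be copied off; the dataset name below is an example, the real names come from zfs list:
# List everything in the recovered pool, including the zvols holding VM disks
zfs list -t all -r storage-vm
# Copy a VM disk image off to the NFS share (dataset name is an example)
dd if=/dev/zvol/storage-vm/vm-100-disk-0 of=/mnt/pve/recovery/vm-100-disk-0.raw bs=1M status=progress
Snapshots cannot be created on a read-only pool, so dd from the zvol device (or zfs send of snapshots that already exist) is the way to get the data out.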
I'm using CentOS 7 via AWS.
I'd like to store MongoDB data on an attached EBS volume instead of the default /var/lib path.
However, when I edit /etc/mongod.conf to point to a new dbpath, I'm getting a permission denied error.
Permissions are set correctly to mongod.mongod on the dir.
What gives?
TL;DR - The issue is SELinux, which restricts what daemons can access. Run setenforce 0 to disable it temporarily.
You're using a flavour of Linux that uses SELinux.
From Wikipedia:
SELinux can potentially control which activities a system allows each
user, process and daemon, with very precise specifications. However,
it is mostly used to confine daemons[citation needed] like database
engines or web servers that have more clearly defined data access and
activity rights. This limits potential harm from a confined daemon
that becomes compromised. Ordinary user-processes often run in the
unconfined domain, not restricted by SELinux but still restricted by
the classic Linux access rights
To fix temporarily:
sudo setenforce 0
This should disable SELinux policies and allow the service to run.
To fix permanently:
Edit /etc/sysconfig/selinux and set this:
SELINUX=disabled
Then reboot.
The service should now start up fine.
The data dir will also work with Docker, i.e. something like:
docker run --name db -v /mnt/path-to-mounted-ebs:/data/db -p 27017:27017 mongo:latest
Warning: Both solutions DISABLE the security that SELinux provides, which will weaken your overall security. A better solution is to understand how SELinux works, and create a policy on your new data dir that works with mongod. See https://wiki.centos.org/HowTos/SELinux for a more complete tutorial.
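A sketch of that approach, assuming the new data dir is /mnt/ebs/mongo and that the mongod_var_lib_t type from MongoDB's RHEL/CentOS policy applies on your system (verify the path and type before relying on this):
# Install the SELinux management tools if semanage is missing
sudo yum install -y policycoreutils-python
# Label the new data directory (example path) as MongoDB data
sudo semanage fcontext -a -t mongod_var_lib_t "/mnt/ebs/mongo(/.*)?"
# Apply the label to the files that are already there
sudo restorecon -R -v /mnt/ebs/mongo
# Keep SELinux enforcing
sudo setenforce 1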
I am trying to create logical volumes (on a device like /dev/sdb or so) inside a running CentOS Docker container. If anyone has tried doing so successfully, please help!
After installing lvm2 and running lvmetad, when I tried creating a logical volume I got the error below:
bash-4.2# lvcreate -L 2G stackit
/dev/mapper/control: open failed: Operation not permitted
Failure to communicate with kernel device-mapper driver.
Check that device-mapper is available in the kernel.
striped: Required device-mapper target(s) not detected in your
kernel.
Run `lvcreate --help' for more information.
I'm not sure exactly what you are trying to do, but Docker containers run with restricted privileges by default.
Try adding (old way)
--privileged=true
Or (new way)
--cap-add=ALL
to give the container full privileges. Then you can narrow down which capabilities you actually need to give the container.
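A minimal sketch (the centos:7 image and the stackit volume group are assumptions based on the question's setup):
# Old way: full privileges, which also exposes the host's device nodes
docker run -it --privileged centos:7 /bin/bash
# New way: grant capabilities and pass through the device-mapper control node
docker run -it --cap-add=ALL --device /dev/mapper/control centos:7 /bin/bash
# Inside the container, the failing command should then reach device-mapper
lvcreate -L 2G stackit
Note that even with all capabilities, newly created device-mapper nodes may not appear under the container's /dev, so --privileged is the simpler option while you work out the minimal set.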
We've been running PostgreSQL 8.4 for quite some time. As with any database, we are slowly reaching our space threshold. I added another 8 GB EBS volume, attached it to our instance, and mounted it on a directory called /files.
Within /files, I manually created the postgresql/main directory structure.
Correct me if I'm wrong, but I believe all PostgreSQL data is stored in /var/lib/postgresql/8.4/main.
I backed up the database and ran sudo /etc/init.d/postgresql stop to stop the PostgreSQL server. I then tried to copy the contents of /var/lib/postgresql/8.4/main into the /files directory, but that turned out to be a huge mess because of file permissions: I had to chmod the contents of that folder just to be able to copy them, and some files still did not copy fully because they were owned by root. I modified the data_directory parameter in postgresql.conf to point to the new directory
data_directory = '/files/postgresql/main'
and ran sudo /etc/init.d/postgresql restart, but the server failed to start, again probably due to permission issues. Amazon EC2 only lets you log in as the ubuntu user by default; you can only become root from within a terminal session, which makes everything a lot more complicated.
Is there a much cleaner and more efficient step by step way of doing this?
Stop the server.
Copy the datadir while retaining permissions - use cp -aRv.
Then (easiest, as it avoids the need to modify initscripts) just move the old datadir aside and symlink the old path to the new location.
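A rough sketch of those steps for the paths from the question (adjust to your layout; main.old is just an example name for the parked copy):
# Stop the server
sudo /etc/init.d/postgresql stop
# Copy the data directory to the new volume, preserving owners and permissions
sudo mkdir -p /files/postgresql
sudo cp -aRv /var/lib/postgresql/8.4/main /files/postgresql/
# Move the old data directory aside and symlink the old path to the new one
sudo mv /var/lib/postgresql/8.4/main /var/lib/postgresql/8.4/main.old
sudo ln -s /files/postgresql/main /var/lib/postgresql/8.4/main
# Start the server again
sudo /etc/init.d/postgresql start
With the symlink in place, data_directory can stay at its default, so postgresql.conf does not need to change.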
Thanks for the accepted answer. Instead of the symlink you can also use a bind mount; that way it is independent of the file system. If you want to use a dedicated drive for the database, you can also mount it directly on the data directory.
I did the latter. Here are my steps if someone needs a reference. I ran this as a script on many AWS instances.
# stop postgres server
sudo service postgresql stop
# create new filesystem in empty hard drive
sudo mkfs.ext4 /dev/xvdb
# mount it
mkdir /tmp/pg
sudo mount /dev/xvdb /tmp/pg/
# copy the entire postgres home dir content
sudo cp -a /var/lib/postgresql/. /tmp/pg
# mount it to the correct directory
sudo umount /tmp/pg
sudo mount /dev/xvdb /var/lib/postgresql/
# see if it is mounted
mount | grep postgres
# add the mount point to fstab
echo "/dev/xvdb /var/lib/postgresql ext4 rw 0 0" | sudo tee -a /etc/fstab
# start the postgres server again
sudo service postgresql start
# when the database is in use, observe that the correct disk is being used
watch -d grep xvd /proc/diskstats
A clarification: it is the particular AMI you used that sets ubuntu as the default user; this may not apply to other AMIs.
In essence, if you are trying to move the data manually, you will probably need to do so as the root user, and then make sure it is available to whatever user postgres runs as.
You also have the option of snapshotting the volume and creating a larger volume from the snapshot. Then you could replace the volume on your instance with the new one (you will probably have to resize the partition or file system to take advantage of the extra space).
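A sketch of that route with the AWS CLI; every ID, the availability zone, the size, and the device name below are placeholders:
# Snapshot the existing data volume (stop PostgreSQL and unmount it first)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pg data"
# Create a bigger volume from the snapshot in the instance's availability zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 100 --availability-zone us-east-1a
# Swap the volumes on the instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvdb
# On the instance, grow the file system to use the new space (ext4, unpartitioned,
# as in the script above; if the volume is partitioned, grow the partition first)
sudo resize2fs /dev/xvdb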