Raspberry Pi backups - raspberry-pi

I need help with backups using the dd command (or gparted). I have problems restoring them in a bootable way.
Hi everybody,
I was using an SD card and making backups of the entire disk with dd, but now I have switched to an SSD and I have problems doing the same:
I have a 480GB SSD, whereas my micro SD card was 128GB, so I have two partitions on my SSD (BOOT and ROOTFS) and a third one for storing data like a hard drive. I only want to back up /dev/sda1 and /dev/sda2 (excluding /dev/sda3, which is the "hard drive"), and the easiest solution I've found was making two backups: dd if=/dev/sda1 status=progress | gzip -c > BOOT.img.gz and dd if=/dev/sda2 status=progress | gzip -c > ROOTFS.img.gz.
Here's the PROBLEM:
When I restore those partitions I can read them if I connect the device to any computer, but I don't get a bootable SD card. I guess I have to edit /boot/cmdline.txt and /etc/fstab or something like that, from what I've read, but nothing seems to work, so I would really need some help 😥. (I've also tried gparted, but the problem is with the partitions.)
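A minimal sketch of one way to restore such images bootably, assuming the target card shows up as /dev/sdb (an assumption; check with lsblk first) and the images were made as above:

# 1. Recreate the partition layout on the card first, e.g. with fdisk /dev/sdb:
#    a FAT32 (type c) boot partition and a Linux (type 83) root partition,
#    each at least as large as the original.
# 2. Restore the two images:
gunzip -c BOOT.img.gz | sudo dd of=/dev/sdb1 bs=4M status=progress
gunzip -c ROOTFS.img.gz | sudo dd of=/dev/sdb2 bs=4M status=progress
# 3. Raspberry Pi OS refers to partitions by PARTUUID, and the new partition
#    table gets a new disk identifier, so look up the new values:
sudo blkid /dev/sdb1 /dev/sdb2
# 4. Edit cmdline.txt on the boot partition (the root=PARTUUID=... part) and
#    /etc/fstab on the root partition to match the new PARTUUIDs.

Alternatively, writing the old disk identifier back with fdisk's expert menu (x, then i) should make the old PARTUUIDs valid again, avoiding the edits in step 4.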

Related

How to extend boot disk ubuntu

We created an instance with 10GB of storage and loaded a few apps; now our PostgreSQL server won't start because of low disk space.
We tried to edit the instance by changing the boot disk size from 10GB to 30GB.
We also added a new 200GB disk.
How can we extend the boot disk sda1 (10GB)?
Off-topic: I think you mistagged your question; it's not related to Postgres itself.
Also, you haven't provided enough info about your environment (what's your filesystem? are you using LVM?).
To solve your problem without manipulating partitions, you can move the Postgres data to your new disk and make a symbolic link to it:
mv /path/to/data/ /new/disk/
ln -s /new/disk/data /path/to/data
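If you do want to grow the boot disk itself after resizing it in the console, a minimal sketch, assuming an ext4 root filesystem directly on /dev/sda1 and no LVM:

sudo apt-get install -y cloud-guest-utils   # provides growpart
sudo growpart /dev/sda 1                    # grow partition 1 to fill the enlarged disk
sudo resize2fs /dev/sda1                    # grow the ext4 filesystem online
df -h /                                     # verify the new size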

GCE persistent disk data management

GCE beginner here... Basic question: how can I send data to a persistent disk?
I have attached a persistent disk to an instance and tried sending files through the instance using the copy-file instruction. The disk seems correctly mounted (see below)
$ sudo fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000935ca
Device Boot Start End Blocks Id System
/dev/sda1 2048 20969472 10483712+ 83 Linux
Disk /dev/sdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
I was able to send files to the instance itself, targeting the /tmp directory on the instance.
I haven't succeeded however in sending the files to the persistent disk.
Should I send the data to the instance first, then move the data to the attached drive? Or can that be done directly? Either way some directions would help.
Thanks in advance
You have to format and mount the disk before use:
Formatting disks
Before you can use non-root persistent disks in Compute Engine, you need to format and mount them. Compute Engine provides a tool, safe_format_and_mount, that can be used to assist in this process. The tool can be found at the following location on your virtual machine instance:
/usr/share/google/safe_format_and_mount
The tool performs the following actions:
Format the disk (only if it is unformatted)
Mount the disk
This can be helpful if you need to use a non-root persistent disk from a startup script, because the tool prevents your script from accidentally reformatting your disks and erasing your data.
safe_format_and_mount works much like the standard mount tool:
$ sudo mkdir MOUNT_POINT
$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" DISK_LOCATION MOUNT_POINT
Alternatively, you can format and mount disks using standard tools such as mkfs and mount.
Caution: If you are formatting disks from a startup script, you risk data loss if you do not take precautions to prevent reformatting your data on boot. Make sure to back up all important data and set up data recovery systems.
Source:
https://cloud.google.com/compute/docs/disks/persistent-disks
Then you can copy data to the folder you mounted the disk to :)
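Putting that together with the standard tools, a minimal sketch, assuming the persistent disk shows up as /dev/sdb and using /mnt/pd0 as a hypothetical mount point:

# Format only if blkid finds no existing filesystem (guards against
# reformatting on reboot, per the caution above):
if ! sudo blkid /dev/sdb > /dev/null; then
    sudo mkfs.ext4 -F /dev/sdb
fi
sudo mkdir -p /mnt/pd0
sudo mount /dev/sdb /mnt/pd0
# Files you already copied to /tmp on the instance can now simply be
# moved onto the persistent disk:
sudo mv /tmp/myfile /mnt/pd0/   # myfile is a placeholder name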

What are important mongo data files for backup

If I want to back up a database by copying raw files, what files do I need to copy? Only db-name.ns, db-name.0, db-name.1..., or the whole folder (local.ns..., journal)? I'm running a replica set. I understand the procedure for locking a hidden secondary node and then copying files to a new location. But I'm wondering whether I need to copy the whole folder or just some files.
Thanks
Simple answer: All of them. As obvious as it might sound. And here is why:
If you don't copy a namespaces file, your database will most likely not work.
When not copying all datafiles, some of your data is missing and your indices will point to void locations. The database in question might work (minus the data stored in the missing data file), but I would not bet on that – and since the data was important enough to create a backup in the first place, you don't want this to happen, do you?
Config, admin and local databases are vitally necessary for their respective features – and since you used the feature, you probably want to use it after a restore, too.
How do I back up all files?
The best solution I have found so far, aside from MMS backup, is to create LVM snapshots of the filesystem the MongoDB data resides on. In order for this to work, the journal needs to be included. Usually, you don't need a dedicated backup node for this approach. It is a bit complicated to set up, though.
Preparing LVM backups
Let's assume you have your data in the default data directory /data/db and you have not changed any paths. Then you would mount a logical volume to /data/db and use it to hold the data. Assuming that you don't have anything like this, here is a step-by-step guide:
Create a logical volume big enough to hold your data; I will call it /dev/VolGroup/LogVol1 from now on (a hypothetical lvcreate call is sketched after this list). Make sure that you only use about 80% of the available disk space in the volume group for creating the logical volume.
Create a filesystem on the logical volume. I prefer XFS, so we create an xfs filesystem on /dev/VolGroup/LogVol1:
mkfs.xfs /dev/VolGroup/LogVol1
Mount the newly created filesystem on /mnt:
mount /dev/VolGroup/LogVol1 /mnt
Shut down mongod:
killall mongod
(Note that the upstart scripts sometimes have problems shutting down mongod, and this command gracefully stops mongod anyway).
Copy the datafiles from /data/db to /mnt by issuing:
cp -a /data/db/* /mnt
Adjust your /etc/fstab so that the logical volume gets mounted on reboot:
# The noatime parameter increases io speed of mongod significantly
/dev/VolGroup/LogVol1 /data/db xfs defaults,noatime 0 1
Unmount the logical volume from its current mount point and remount it on the correct one:
cd && umount /mnt/ && mount /data/db
Restart mongod
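For the first step above, a hypothetical lvcreate call, assuming a volume group named VolGroup with 100GB of space and following the 80% rule:

lvcreate -L 80G -n LogVol1 VolGroup   # leaves ~20GB free for snapshots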
Creating a backup
Creating a backup now becomes as easy as
Create a snapshot:
lvcreate -l100%FREE -s -n mongo_backup /dev/VolGroup/LogVol1
Mount the snapshot:
mount /dev/VolGroup/mongo_backup /mnt
Copy it somewhere. The reason we need to do this is that the snapshot can only be held as long as the changes to the data files do not exceed the space in the volume group that you did not allocate during preparation. For example, if you have a 100GB disk and you allocated 80GB for /dev/VolGroup/LogVol1, the snapshot size would be 20GB. As long as the changes on the filesystem since the point you took the snapshot are less than 20GB, everything runs fine; after that, the filesystem will refuse to take any changes.
So you aren't in a hurry, but you should definitely move the data to an offsite location, an FTP server, or whatever you deem appropriate. Note that compressing the datafiles can take quite long and you might run out of "change space" before finishing. Personally, I like to have a slower HDD as a temporary place to store the backup, doing all further operations on the HDD. So my copy command looks like
cp -a /mnt/* /home/mongobackup/backups
when the HDD is mounted on /home/mongobackup.
Destroy the snapshot:
umount /mnt && lvremove /dev/VolGroup/mongo_backup
The space allocated for the snapshot is released and the restrictions to the amount of changes to the filesystem are removed.
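Tied together, a minimal cron-able sketch of the backup steps (same assumed names as above; a sketch, not a production script):

#!/bin/bash
# Snapshot, copy off, and clean up; aborts on the first failing command.
set -e
lvcreate -l100%FREE -s -n mongo_backup /dev/VolGroup/LogVol1
mount /dev/VolGroup/mongo_backup /mnt
cp -a /mnt/* /home/mongobackup/backups
umount /mnt
lvremove -f /dev/VolGroup/mongo_backup   # -f skips the confirmation prompt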
The whole db data folder, plus wherever you have your logs and journaling.
The best solution to back up data on MongoDB would be to use MongoDB Monitoring Service (MMS). All other solutions, including copying files manually, mongodump, and mongoexport, are way behind MMS.

You are using a GPT bootdisk on a non efi system

While installing CentOS on "HP ProLiant DL380p Gen8 Server"
(The server has six 2TB disks, with 4 disks in RAID 5 and 2 spares, giving me a total of 5.5TB.)
I am facing this warning during installation:
"You are using a GPT bootdisk on a non-EFI system. This may not work.
Depending upon your BIOS........ (some long message)"
and after the install completes, the system is not booting...
I have this partition table:
/ 6GB
/home 1TB
/usr 80GB
/opt 80GB
/var 300GB
SWAP 128GB
/xmldata 1TB
/mysqldata 3TB
Please advise.
Thanks
Ali
Convert the disk from GPT to MBR.
You can use a Linux live CD to boot, use gparted (or install it from EPEL if you're using CentOS), and convert it to MBR.
Also, I assume no MS Windows OS is installed on the disk; it gets a bit trickier if you want to dual boot.
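One common route for the conversion is gdisk's recovery and transformation menu, sketched here for /dev/sda (follow the actual prompts on your version; and note that MBR cannot address beyond 2TiB with 512-byte sectors, so a 5.5TB array cannot be carried over in full):

sudo gdisk /dev/sda
# At the gdisk prompt:
#   r   open the recovery and transformation menu
#   g   convert GPT into MBR
#   w   write the new MBR table when asked to confirm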

should I let mongodb make use of the new hard disk in this way?

I have mongodb v2.4.6 running on Ubuntu 13.04. It is known that mongodb stores all data in /var/lib/mongodb. Now mongodb is running out of hard disk space. Fortunately, I got a new hard disk, which is installed, partitioned with fdisk, formatted, and named /dev/sda3. Unfortunately, I don't know how to let mongodb make use of the new hard disk because my knowledge of Ubuntu and mongodb is very limited. After some research on the internet, it seems that I should execute the following command:
sudo mount /dev/sda3 /var/lib/mongodb
Is this what I need to do to let mongodb use the new disk? If so, will mongodb automatically and intelligently grow its data onto this disk? Are there any other things I should do? Thank you.
Unfortunately this will not be that straightforward. Even if the mount succeeds, it will not move the existing files at all. What you can do is:
mount the disk elsewhere (mkdir /var/lib/mongodb1, mount /dev/sda3 /var/lib/mongodb1)
stop mongo
copy the files from /var/lib/mongodb to /var/lib/mongodb1 (this only helps if the new disk is bigger)
reconfigure mongo to use the new directory as its db dir, or swap the directory names with mv commands
start mongo
if everything went fine and mongo started (check it first!!!), you can delete the old data.
If the new disk is the same size, you will run into the same problem again after moving the data; if you need more space than a single disk provides, you should play around with RAID and/or LVM and more disks.
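Concretely, the steps above might look like this, assuming Ubuntu's stock mongodb upstart service and config file (names may differ on your setup):

sudo mkdir /var/lib/mongodb1
sudo mount /dev/sda3 /var/lib/mongodb1
sudo service mongodb stop
sudo cp -a /var/lib/mongodb/* /var/lib/mongodb1/
sudo chown -R mongodb:mongodb /var/lib/mongodb1
# Point dbpath in /etc/mongodb.conf at /var/lib/mongodb1 (or swap the
# directory names with mv), then:
sudo service mongodb start
# Only delete the old files after verifying mongo is healthy.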