Ubuntu 14.04
MongoDB shell version: 2.4.9
I'm backing up the MongoDB database used for Ceilometer in OpenStack Kilo, and I get a no-space error during the backup. Where is the partial backup file? What do I delete to get rid of the partial backup, and how do I recover the space taken up by the failed backup?
stack@cloud:~$ mongodump --username ceilometer --password mypassword --host 3.2.0.10 --port 27017 --db ceilometer
...
Thu Apr 13 18:33:33.033 Collection File Writing Progress: 39821000/94803354 42% (objects)
Thu Apr 13 18:33:43.960 Collection File Writing Progress: 39824300/94803354 42% (objects)
Thu Apr 13 18:33:48.731 Collection File Writing Progress: 39827600/94803354 42% (objects)
assertion: 14035 couldn't write to file: errno:28 No space left on device
stack@cloud:/$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 49474116 4 49474112 1% /dev
tmpfs 9897052 1552 9895500 1% /run
/dev/mapper/cloud--vg-root 381244660 361897752 0 100% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 49485248 0 49485248 0% /run/shm
none 102400 4 102396 1% /run/user
/dev/sda1 240972 237343 0 100% /boot
Since you didn't specify --out or -o on the mongodump command line, the output defaults to a dump/ directory under the current working directory; in your case that is ~/dump, since you ran the command from your home directory. You can verify by running ls -la ~/dump, and reclaim the space by removing it with rm -rf ~/dump.
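For the next attempt, you can point --out at a filesystem that has enough free space (the /mnt/backup path below is just an example):
mongodump --username ceilometer --password mypassword --host 3.2.0.10 --port 27017 --db ceilometer --out /mnt/backup/dump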
I've built a Linux image with Yocto Poky kirkstone (4.0.2) for corei7-64-poky-linux 'core-image-minimal'.
rootfs is mounted on RAM as read-only using:
IMAGE_FEATURES += "read-only-rootfs"
I'm now trying to create a new rw partition (mounted on /usr/local), or a read-only one that can be remounted as rw, to store and update my application when needed.
I tried to add my own fstab using a base-files/base-files_%.bbappend (https://stackoverflow.com/a/47250915/2482513), adding something like:
/usr/local /usr/local ext2 defaults,rw 0 0
But this doesn't work, I can see my custom fstab (/etc/fstab) on the target, but it seems that it is not used at all.
mount -v shows:
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=852628k,nr_inodes=213157,mode=755)
/dev/loop0 on / type ext4 (ro,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /var/volatile type tmpfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /var/lib type tmpfs (rw,relatime)
I suspect Yocto is using recipes-core/initrdscripts/files/init-install.sh or something similar instead of my custom fstab.
This link https://www.digi.com/resources/documentation/digidocs/embedded/dey/3.2/cc8x/yocto_t_read-only-rootfs
suggests using volatile binds on a read-write partition, but doesn't explain how to create that read-write partition as part of my image.
I found people using .wks files to create partitions in the final wic image, but I'm using hddimg (IMAGE_FSTYPES += " hddimg") for compatibility with the hardware bootloader, so I'm not sure whether this could work, or how to make it work.
Yocto - Create and populate a separate /home partition
I'm new to all of this, so thank you in advance for your help.
Turns out that I didn't need hddimg; you can simply create a wic image that boots with a legacy BIOS using the bootimg-pcbios option:
part /boot --source bootimg-pcbios --sourceparams="loader=systemd-boot,initrd=microcode.cpio" --ondisk sda --label msdos --active --align 1024 --use-uuid
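For reference, a fuller .wks sketch that also carves out the read-write /usr/local partition discussed above (the file name, labels, and 512 MB size are illustrative; wic can also append the matching /etc/fstab entry for partitions that declare a mount point):
# hypothetical-disk.wks: legacy-BIOS boot, rootfs, and a rw /usr/local
part /boot --source bootimg-pcbios --sourceparams="loader=systemd-boot,initrd=microcode.cpio" --ondisk sda --label msdos --active --align 1024 --use-uuid
part / --source rootfs --ondisk sda --fstype=ext4 --label root --align 1024 --use-uuid
part /usr/local --ondisk sda --fstype=ext4 --label usrlocal --size 512 --align 1024
bootloader --timeout=5
Select it with WKS_FILE = "hypothetical-disk.wks" and IMAGE_FSTYPES = "wic" in local.conf.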
I am running CentOS 7 and this is my df -h output.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 2.5G 48G 5% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 50M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda1 1014M 202M 813M 20% /boot
/dev/mapper/centos-home 1.4T 434M 1.4T 1% /home
tmpfs 6.3G 0 6.3G 0% /run/user/1000
tmpfs 6.3G 0 6.3G 0% /run/user/1002
I have my web files stored under /var/www and need to allocate space for them. Looking at the output above, I assume they are stored under the root filesystem. I also notice that /home is where the majority of my space is, and I would prefer to use most of that for /var/www, since it will contain my web files. I'm also guessing 32GB is plenty for the /home directory, since that just contains the user-created folders? I'm also unsure what the last two tmpfs file systems are being used for.
I have done a lot of research but could not find any solid answer on how to do this. Any help would be appreciated. I've seen some suggestions about moving my /var/www files to the /home directory, but I would prefer to keep them where they are. I also use SELinux, so I don't want to run into SELinux permission issues by moving them.
I personally would prefer to make a symbolic link from /var/www to /home/yourname/www.
To do this:
mkdir /home/yourname/www
rsync -av /var/www/ /home/yourname/www/
chown -R apache:apache /home/yourname/www
rm -r /var/www
ln -s /home/yourname/www /var/www
(On CentOS the web server user is apache; www-data is the Debian/Ubuntu equivalent. The trailing slashes on the rsync paths make sure hidden files are copied too.)
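Since you mentioned SELinux: files copied into /home will carry home-directory labels and Apache will be denied access. A sketch of the relabeling, assuming the targeted policy and the semanage tool (from policycoreutils-python):
semanage fcontext -a -t httpd_sys_content_t "/home/yourname/www(/.*)?"
restorecon -Rv /home/yourname/www
setsebool -P httpd_enable_homedirs on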
You can also perform local mounting from some path in your /home dir to /var/www by defining it in your /etc/fstab
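For example, a bind-mount entry in /etc/fstab would look like this (the left-hand path is illustrative):
/home/yourname/www /var/www none bind 0 0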
Hope this helps.
A symbolic link would be a good option.
In the process of trying to rescue an unbootable Debian Jessie system, I get the following error when trying to chroot:
chroot: failed to run command ‘/bin/bash’: No such file or directory
I have been googling around and it's supposedly related to a 64-bit/32-bit clash (chrooting from 32-bit into 64-bit or vice versa), yet I don't see how that could apply here, since I am rescuing a 64-bit system with a 64-bit live-hybrid-Debian-USB-stick.
/bin/bash is in the chroot directory, and so are the library dependencies, as per ldd.
Does anyone have an idea what is causing the error?
Below are my mount points, and an ls:
# mount |grep mnt
/dev/mapper/centos_vh200-root on /mnt/vh2 type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /mnt/vh2/boot type ext4 (rw,relatime,data=ordered)
none on /mnt/vh2/proc type proc (rw,relatime)
devtmpfs on /mnt/vh2/dev type devtmpfs (rw,nosuid,size=10240k,nr_inodes=414264,mode=755)
sys on /mnt/vh2/sys type sysfs (rw,relatime)
# ls -l /mnt/vh2/bin/bash
-rwxr-xr-x 1 root root 1029624 Nov 12 2014 /mnt/vh2/bin/bash
This is ldd output for bash:
# ldd /mnt/vh2/bin/bash
linux-vdso.so.1 (0x00007ffd49bcc000)
libncurses.so.5 => /lib/x86_64-linux-gnu/libncurses.so.5 (0x00007fad99f1a000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007fad99cf0000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fad99aec000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fad99743000)
/lib64/ld-linux-x86-64.so.2 (0x00007fad9a13f000)
Terminal session:
# mount /dev/centos_vh200/root /mnt/vh2
# mount /dev/sda1 /mnt/vh2/boot/
# mount -t proc none /mnt/vh2/proc/
# mount -o bind /dev /mnt/vh2/dev/
# mount -t sysfs sys /mnt/vh2/sys/
# chroot /mnt/vh2/ /bin/bash
chroot: failed to run command ‘/bin/bash’: No such file or directory
ldd /mnt/vh2/bin/bash was run outside the chroot, so it resolves libraries against your live system, not against the chroot. chroot's "No such file or directory" usually means the ELF interpreter or a shared library is missing inside the chroot, even though /bin/bash itself exists. Look for the libraries under /mnt/vh2/, not under /.
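One way to check is to print the ELF interpreter that the chroot's bash requests, then confirm that the loader really exists inside the chroot (the paths below match the x86-64 layout from your ldd output):
# readelf -l /mnt/vh2/bin/bash | grep interpreter
# ls -l /mnt/vh2/lib64/ld-linux-x86-64.so.2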
The MongoDB version is v2.6.3 on my server, and mongod is running:
ubuntu@koala:/var/log/mongodb$ ps -ef | grep mongo
root 7434 1 17 Jun16 ? 06:57:26 mongod -f /etc/mongodb-onepiece.conf --fork
I am using logrotate to daily rotate the log file of MongoDB. A strange problem just occurred with logrotate.
I check the log file:
ubuntu@koala:/var/log/mongodb$ ls -lth | grep mongodb
-rw-r--r-- 1 ubuntu ubuntu 1.9G Jun 18 10:23 mongodb-onepiece.log.1
-rw-r--r-- 1 ubuntu ubuntu 0 Jun 17 07:35 mongodb-onepiece.log
-rw-r--r-- 1 ubuntu ubuntu 838M Jun 15 07:35 mongodb-onepiece.log.3.gz
-rw-r--r-- 1 ubuntu ubuntu 22 Jun 14 20:52 mongodb-onepiece.log.2.gz
-rw-r--r-- 1 ubuntu ubuntu 1.1G Jun 4 17:10 mongodb-onepiece.log.4.gz
-rw-r--r-- 1 ubuntu ubuntu 53M May 29 19:14 mongodb-onepiece.log.5.gz
The most up-to-date log file is .log.1 instead of .log. When I tail the .log.1 file, I can see that log entries are still being appended to it, and it keeps growing:
ubuntu@koala:/var/log/mongodb$ tail -fn 2 mongodb-onepiece.log.1
2015-06-18T10:36:50.163+0800 [initandlisten] connection accepted from 192.168.1.52:50278 #2507 (49 connections now open)
2015-06-18T10:36:50.163+0800 [conn2503] command koala.$cmd command: isMaster { ismaster: 1 } keyUpdates:0 numYields:0 reslen:178 0ms
This means that MongoDB is logging to a file it isn't supposed to. As can be seen from the mongod config file, MongoDB should log to the logpath:
ubuntu@koala:/var/log/mongodb$ vim /etc/mongodb-onepiece.conf
dbpath=/var/lib/mongodb-onepiece
logpath=/var/log/mongodb/mongodb-onepiece.log
logappend=true
bind_ip = 192.168.1.*
port = 47017
fork=true
journal=true
master = true
From the above, I assume the problem is not with the logrotate config, but with MongoDB writing to the wrong file. Every day when logrotate starts, it checks only the .log file, finds it empty, and stops rotating the log.
If I restart the mongod daemon, the logpath is correct for a while (it writes to the right log file). For that day the .log file is not empty, so it is successfully rotated to the .log.1 file. But the same problem happens again after log rotation, i.e., MongoDB keeps logging to the .log.1 file afterwards. The cycle repeats.
The logrotate config file is given here:
ubuntu@koala:/var/log/mongodb$ vim /etc/logrotate.d/mongodb
/var/log/mongodb/*.log {
daily
rotate 52
missingok
copytruncate
notifempty
compress
delaycompress
}
The same logrotate config works fine with other MongoDB logs on another server running MongoDB v2.6.5, so I suppose postrotate is not the trick here (I have also tried postrotate, but without luck).
How to solve this problem?
I'm not a mongo expert, but:
You should be following the official documentation https://docs.mongodb.org/v2.6/tutorial/rotate-log-files/
If you are going to use a logrotate config file, as you indicated, then you need a postrotate line in your config (failing to signal mongod is why it continues to log to the .log.1 file):
postrotate
    kill -SIGUSR1 `cat /var/run/mongodb.pid` >/dev/null 2>&1 || true
endscript
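Putting it together with your existing options, the whole file would look something like this (the /var/run/mongodb.pid path is an assumption; check where your init script actually writes the pid file):
/var/log/mongodb/*.log {
    daily
    rotate 52
    missingok
    copytruncate
    notifempty
    compress
    delaycompress
    postrotate
        kill -SIGUSR1 `cat /var/run/mongodb.pid` >/dev/null 2>&1 || true
    endscript
}
On SIGUSR1, mongod 2.6 performs its own log rotation (it renames the current file and opens a new one at logpath), which is what brings logging back to the .log file.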
In the document (jbossperformancetuning.pdf), it suggests enabling large page memory for the JVM.
But actually after I added the following to our command-line / script start-up:
"-XX:+UseLargePages"
It didn't work, so I investigated further, enabled large page memory at the OS level first, and then added "-XX:+UseLargePages -XX:LargePageSizeInBytes=2m" to the start-up script.
Unfortunately that didn't work either, so could someone give us some suggestions on how to enable large page memory for the JVM successfully?
Here are some details of our server:
[root@localhost ~]# cat /proc/meminfo
MemTotal: 37033340 kB
MemFree: 318108 kB
Buffers: 179452 kB
Cached: 5934940 kB
SwapCached: 0 kB
...
HugePages_Total: 10251
HugePages_Free: 10251
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
[root@localhost ~]# ps aux | grep java
root 22525 0.2 20.3 28801756 7552420 ? Sl Nov03 31:54 java -Dprogram.name=run.sh -server -Xms1303m -Xmx24g -XX:MaxPermSize=512m -XX:+UseLargePages -XX:LargePageSizeInBytes=2m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dsun.lang.ClassLoader.allowArraySyntax=true -verbose:gc -Xloggc:/tmp/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Djava.net.preferIPv4Stack=true -Djava.endorsed.dirs=/opt/jboss-as/lib/endorsed -classpath /opt/jboss-as/bin/run.jar org.jboss.Main -c default -b 0.0.0.0
root 31962 0.0 0.0 61200 768 pts/2 S+ 22:46 0:00 grep java
[root@localhost ~]# cat /etc/sysctl.conf
...
# JBoss is running as root, so the group id is 0
vm.hugetlb_shm_group = 0
# The pages number
vm.nr_hugepages = 12288
Finally I fixed this issue: first set the large-page pool bigger than the JVM heap size, then just reboot the server, because on RHEL 6.0 there is no way to make it take effect without a reboot, unless you upgrade to a newer kernel.
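For reference, a minimal sequence under those constraints (the numbers are illustrative: with -Xmx24g, the 12288 pages of 2 MB above come to exactly 24 GB, which leaves no headroom for PermGen and JVM overhead, so reserve more):
# /etc/sysctl.conf: reserve ~25 GB of 2 MB huge pages, above the 24 GB heap
vm.nr_hugepages = 12800
vm.hugetlb_shm_group = 0
# Older HotSpot JVMs allocate large pages through SysV shared memory,
# so kernel.shmmax must also cover the heap (value below is an example, ~26 GB)
kernel.shmmax = 27917287424
Reboot so the kernel can reserve the pages as contiguous memory, then start the JVM with the same -XX:+UseLargePages -XX:LargePageSizeInBytes=2m flags. If HugePages_Free in /proc/meminfo drops after the JVM starts, large pages are actually being used.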