I am running CentOS 7 and this is my df -h output.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 2.5G 48G 5% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 50M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda1 1014M 202M 813M 20% /boot
/dev/mapper/centos-home 1.4T 434M 1.4T 1% /home
tmpfs 6.3G 0 6.3G 0% /run/user/1000
tmpfs 6.3G 0 6.3G 0% /run/user/1002
My web files are stored under /var/www and I need to allocate space for them. Looking at the output above, I assume they are stored on the root filesystem. I also notice that /home is where the majority of my space is, and I would prefer to use most of that for /var/www, since it will contain my web files. I'm also guessing 32 GB is plenty for the /home directory, since that just contains the user-created folders? I'm also unsure what the last two tmpfs file systems are being used for.
I have done a lot of research but could not find a solid answer on how to do this. Any help would be appreciated. I've seen suggestions about moving my /var/www files to the /home directory, but I would prefer to keep them where they are. I also use SELinux, so I don't want to run into SELinux permission issues by moving them.
I personally would prefer to make a symbolic link from /var/www to /home/yourname/www.
To do this:
mkdir /home/yourname/www
rsync -avz /var/www/ /home/yourname/www/
chown -R apache:apache /home/yourname/www
(On CentOS 7 the httpd user is apache; www-data is the Debian/Ubuntu equivalent.)
rm -r /var/www
ln -s /home/yourname/www /var/www
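Since the question mentions SELinux: with the stock targeted policy, content served out of /home also needs a web-readable label, otherwise httpd will be denied access. A minimal sketch, assuming the path above and the default policy (semanage comes from policycoreutils-python on CentOS 7):
semanage fcontext -a -t httpd_sys_content_t "/home/yourname/www(/.*)?"
restorecon -Rv /home/yourname/www
setsebool -P httpd_enable_homedirs on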
You can also bind-mount a path in your /home directory onto /var/www by defining it in /etc/fstab.
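For example, an /etc/fstab entry along these lines (the source path is just the one from the example above):
/home/yourname/www  /var/www  none  bind  0  0
A bind mount keeps the files visible at /var/www itself, which some daemons handle better than a symlink; the SELinux labelling note above still applies either way.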
Hope this helps.
A symbolic link would be a good option.
I've built a Linux image with Yocto Poky kirkstone (4.0.2) for corei7-64-poky-linux, using 'core-image-minimal'.
The rootfs is mounted in RAM as read-only using:
IMAGE_FEATURES += "read-only-rootfs"
I'm now trying to create a new read-write partition (mounted on /usr/local), or a read-only one that can be remounted read-write, to store and update my application when needed.
I tried to add my own fstab using a base-files/base-files_%.bbappend (https://stackoverflow.com/a/47250915/2482513), adding something like:
/usr/local /usr/local ext2 defaults,rw 0 0
But this doesn't work: I can see my custom fstab (/etc/fstab) on the target, but it doesn't seem to be used at all.
mount -v shows:
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=852628k,nr_inodes=213157,mode=755)
/dev/loop0 on / type ext4 (ro,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /var/volatile type tmpfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /var/lib type tmpfs (rw,relatime)
I suspect Yocto is using recipes-core/initrdscripts/files/init-install.sh or something similar instead of my custom fstab.
This link https://www.digi.com/resources/documentation/digidocs/embedded/dey/3.2/cc8x/yocto_t_read-only-rootfs suggests using volatile binds on a read-write partition, but it doesn't explain how to create that read-write partition as part of my image.
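From what I can tell, the volatile binds that page refers to are driven by the VOLATILE_BINDS variable of the volatile-binds recipe in openembedded-core, so something like this in a volatile-binds_%.bbappend might be the shape of it (the /data source path is only a guess at an existing read-write partition):
VOLATILE_BINDS += "/data/usr-local /usr/local\n"
But that still leaves the question of how to create the read-write partition itself.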
I found people using .wks files to create partitions in the final wic image, but I'm using hddimg (IMAGE_FSTYPES += " hddimg") for compatibility with the hardware bootloader, so I'm not sure whether that could work, or how to make it work.
I also found this related question: Yocto - Create and populate a separate /home partition.
I'm new to all of this, so thank you in advance for your help.
It turns out that I didn't need hddimg: you can simply create a wic image that boots with a legacy BIOS by using the bootimg-pcbios option:
part /boot --source bootimg-pcbios --sourceparams="loader=systemd-boot,initrd=microcode.cpio" --ondisk sda --label msdos --active --align 1024 --use-uuid
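For anyone who also needs the separate read-write partition, a rough .wks sketch (the disk name, sizes and labels are assumptions to adapt):
# legacy-BIOS bootable layout with an extra writable partition for /usr/local
part /boot --source bootimg-pcbios --sourceparams="loader=systemd-boot,initrd=microcode.cpio" --ondisk sda --label msdos --active --align 1024 --use-uuid
part / --source rootfs --ondisk sda --fstype=ext4 --label rootfs --align 1024 --use-uuid
part /usr/local --ondisk sda --fstype=ext4 --label local --size 512 --align 1024 --use-uuid
bootloader --timeout=5
As far as I know, wic also adds the extra mount points it creates to the image's /etc/fstab (unless --no-fstab-update is used), so the base-files append shouldn't be needed for this.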
I shut down my non-global zone and unmounted its ZFS zonepath.
The command used to unmount:
zfs unmount -f zones-pool/one-zone
details:
zfs list | grep one
zones-pool/one-zone 15,2G 9,82G 32K /zones-fs/one-zone
zones-pool/one/rpool/ROOT/solaris 15,2G 9,82G 7,83G /zones-fs/one/root
The output above shows that space is in use on the dataset (15.2G used, 9.82G available).
more details:
# zfs get mountpoint zones-pool/one-zone
NAME PROPERTY VALUE SOURCE
zones-pool/one-zone mountpoint /zones-fs/one-zone local
# zfs get mounted zones-pool/one-zone
NAME PROPERTY VALUE SOURCE
zones-pool/one-zone mounted no -
But if I mount the ZFS dataset, I cannot see its contents.
Step 1, mount:
zfs mount zones-pool/one-zone
Step 2, check the mount with df -h:
df -h | grep one
zones-pool/one-zone/rpool/ROOT/solaris 25G 32K 9,8G 1% /zones-fs/one-zone/root
zones-pool/one-zone 25G 32K 9,8G 1% /zones-fs/one-zone
Step 3, list the contents:
ls -l /zones-fs/one-zone/root
total 0
Why? Also, in step 2 you can see that df -h reports only 1% used, which I don't understand.
To view the contents of a zoned dataset you need to either boot the zone or mount the dataset directly.
The zone's files (its root filesystem) are located in the dataset
zones-pool/one-zone/rpool/ROOT/solaris
To mount it manually you need to set its "zoned" property to off and set its "mountpoint" property to the path where you want it mounted.
This can be done with:
zfs set zoned=off zones-pool/one-zone/rpool/ROOT/solaris
zfs set mountpoint=/zones-pool/one-zone-root-fs zones-pool/one-zone/rpool/ROOT/solaris
Space in a dataset may also be occupied by snapshots and clones; you can check for them with these commands:
zfs list -t snap zones-pool
zfs get -H -r -o value,name origin zones-pool | grep -v '^-'
The first command lists all snapshots; the second lists datasets that depend on a snapshot (i.e. whose origin property is not "-").
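If one of the listed snapshots turns out to be holding the space, it can be destroyed to reclaim it; a sketch with a placeholder snapshot name (destroying is irreversible, so check for dependent clones first):
zfs list -t snapshot -r zones-pool/one-zone
zfs destroy zones-pool/one-zone@old-snapshot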
Ubuntu 14.04
MongoDB shell version: 2.4.9
I'm backing up the MongoDB database used by Ceilometer in OpenStack Kilo and got a "no space" error during the backup. Where is the partial backup file? What do I delete to get rid of it, and how do I recover the space taken up by the failed backup?
stack#cloud:~$ mongodump --username ceilometer --password mypassword --host 3.2.0.10 --port 27017 --db ceilometer
...
Thu Apr 13 18:33:33.033 Collection File Writing Progress: 39821000/94803354 42% (objects)
Thu Apr 13 18:33:43.960 Collection File Writing Progress: 39824300/94803354 42% (objects)
Thu Apr 13 18:33:48.731 Collection File Writing Progress: 39827600/94803354 42% (objects)
assertion: 14035 couldn't write to file: errno:28 No space left on device
stack#cloud:/$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 49474116 4 49474112 1% /dev
tmpfs 9897052 1552 9895500 1% /run
/dev/mapper/cloud--vg-root 381244660 361897752 0 100% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 49485248 0 49485248 0% /run/shm
none 102400 4 102396 1% /run/user
/dev/sda1 240972 237343 0 100% /boot
Since you didn't specify --out or -o on the mongodump command line, the output defaults to a dump/ directory under the current working directory.
I would suggest you verify that by running ls -la ~/dump (your prompt shows the command was run from your home directory). You can remove the partial dump by running rm -rf ~/dump.
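For the next run, the dump can be pointed at a filesystem that still has free space instead of the root filesystem; for example (the target directory is only a placeholder):
mongodump --username ceilometer --password mypassword --host 3.2.0.10 --port 27017 --db ceilometer --out /path/with/space/ceilometer-dump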
In the process of trying to rescue an unbootable Debian Jessie system, I get the following error when trying to chroot:
chroot: failed to run command ‘/bin/bash’: No such file or directory
I have been googling around and it is supposedly related to a 64-bit/32-bit clash (chrooting from a 32-bit system into a 64-bit one or vice versa), yet I don't see how that could apply here, since I am rescuing a 64-bit system with a 64-bit live hybrid Debian USB stick.
/bin/bash is in the chroot directory, and so are its library dependencies, according to ldd.
Does anyone have an idea what is causing the error?
Below are my mount points, and an ls:
# mount |grep mnt
/dev/mapper/centos_vh200-root on /mnt/vh2 type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /mnt/vh2/boot type ext4 (rw,relatime,data=ordered)
none on /mnt/vh2/proc type proc (rw,relatime)
devtmpfs on /mnt/vh2/dev type devtmpfs (rw,nosuid,size=10240k,nr_inodes=414264,mode=755)
sys on /mnt/vh2/sys type sysfs (rw,relatime)
# ls -l /mnt/vh2/bin/bash
-rwxr-xr-x 1 root root 1029624 Nov 12 2014 /mnt/vh2/bin/bash
This is ldd output for bash:
# ldd /mnt/vh2/bin/bash
linux-vdso.so.1 (0x00007ffd49bcc000)
libncurses.so.5 => /lib/x86_64-linux-gnu/libncurses.so.5 (0x00007fad99f1a000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007fad99cf0000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fad99aec000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fad99743000)
/lib64/ld-linux-x86-64.so.2 (0x00007fad9a13f000)
Terminal session:
# mount /dev/centos_vh200/root /mnt/vh2
# mount /dev/sda1 /mnt/vh2/boot/
# mount -t proc none /mnt/vh2/proc/
# mount -o bind /dev /mnt/vh2/dev/
# mount -t sysfs sys /mnt/vh2/sys/
# chroot /mnt/vh2/ /bin/bash
chroot: failed to run command ‘/bin/bash’: No such file or directory
ldd /mnt/vh2/bin/bash is run outside the chroot, so it resolves against your live system's libraries. Look for the libraries under /mnt/vh2/, not under /.
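A quick way to check that from the rescue environment is to ask the binary which interpreter it requests and then look for that loader and the libraries under the target root (paths below are taken from the ldd output above):
readelf -l /mnt/vh2/bin/bash | grep interpreter
ls -l /mnt/vh2/lib64/ld-linux-x86-64.so.2
ls -l /mnt/vh2/lib/x86_64-linux-gnu/libtinfo.so.5 /mnt/vh2/lib/x86_64-linux-gnu/libncurses.so.5
If the dynamic loader is missing under the target root, chroot reports exactly this "No such file or directory" error even though /bin/bash itself exists.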
I have a small script that executes fine from my home folder, but it fails when moved to a different folder on a different partition (ext4):
$ ls -lah ./build.sh
-rwxrwxr-x 1 olmec(me) olmec(me) 510 Oct 31 20:00 ./build.sh
$ ./build.sh
bash: ./build.sh: Permission denied
I have tried chmod 777 build.sh, but it makes no difference.
The script is in the folder /media/data/source.
The data drive partition is mounted via /etc/fstab as:
UUID=affd0ac6-f3da-4f88-ac22-65d94dc5da8c /media/data ext4 user,user 0 0
Resolved by modifying the fstab entry to:
UUID=affd0ac6-f3da-4f88-ac22-65d94dc5da8c /media/data ext4 auto,users,exec 0 0
Most probably the script is on a volume that was mounted with the noexec option; I'd check that. If that's not the case, you can still try to find out more from the output of strace bash ./build.sh.
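A quick way to confirm this (note that the user option in fstab implies noexec, nosuid and nodev, which matches the original mount line):
findmnt -no OPTIONS /media/data
sudo mount -o remount,exec /media/data
The first command shows the active mount options; the second remounts with exec enabled as a temporary test, without editing fstab.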