Is it possible to write a wks.in with multiple exclude paths, like:
--exclude-path [src,settings]
part / --source rootfs --ondisk mmcblk --fstype=ext4 --label root --align 4096 --exclude-path src,settings
The syntax above is incorrect, but I really need to exclude a couple of directories from the rootfs (they are mounted in separate partitions).
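For reference, the shape I'm after would be something like the line below; whether --exclude-path may be repeated (or given several space-separated paths) is exactly what I'm unsure about:
part / --source rootfs --ondisk mmcblk --fstype=ext4 --label root --align 4096 --exclude-path src/ --exclude-path settings/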
I've recently started creating a distribution for a qemux86_64 machine using the dunfell LTS branch. To be able to enable the read-only-rootfs image feature and still have a writable home directory, I've added a custom wks file:
part /boot --source bootimg-pcbios --label boot --active --align 1024
part / --source rootfs --ondisk sda --fstype=ext4 --label root --exclude-path=home/
part /home --source rootfs --rootfs-dir=${IMAGE_ROOTFS}/home --ondisk sda --label home
bootloader --timeout=0 --append="rw oprofile.timer=1 rootfstype=ext4"
With this file in place I get separate boot and home partitions, which works great.
The next thing I wanted to add was a user in its own recipe, based on the description in meta-skeleton/recipes-skeleton/useradd/useradd-example.bb. My recipe looks like the following:
SUMMARY = "Create test users"
DESCRIPTION = "test"
LICENSE = "MIT"
inherit useradd
USERNAME="test"
USERADD_PACKAGES = "${PN}"
USERADD_PARAM_${PN} = "-u 1000 -d /home/${USERNAME} -r -P 'test' -s /bin/bash ${USERNAME};"
do_install () {
    install -d -m 755 ${D}/home/${USERNAME}

    # The new users and groups are created before the do_install
    # step, so you are now free to make use of them:
    chown -R ${USERNAME} ${D}/home/${USERNAME}
}
FILES_${PN} = "/home/${USERNAME}"
INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
...but unfortunately it doesn't seem to work correctly. The /home/test directory is created, but with the uid and gid of the user who triggered the image build. The same issue occurs with the /home/root directory.
The issue disappears as soon as I remove the home partition from the wks file, but I wouldn't think that a separate partition is unusual, especially when you need a location for persistent data with the read-only-rootfs feature enabled.
That said, I'm confident that I've messed up something, but unfortunately I can't find the missing piece... It would be great if someone could help me resolve this issue.
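To illustrate the ownership problem, this is roughly how I inspect the generated home partition (the image path is a placeholder for my build output; partition 3 is /home in the wks above):
$ sudo losetup -Pf --show tmp/deploy/images/<machine>/<image>.wic   # prints e.g. /dev/loop0
$ sudo mount /dev/loop0p3 /mnt
$ ls -ln /mnt/test        # numeric uid/gid should be 1000, but they are my build user's
$ sudo umount /mnt && sudo losetup -d /dev/loop0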
Hi everyone, I want to add u-boot environment files to my /etc directory. bitbake-layers show-recipes shows my current u-boot, which is:
u-boot-ti-staging:
  meta-ti 1:2021.01+git999
  meta-ti 1:2020.01+gitAUTOINC+2781231a33
I've created a separate layer with a higher priority containing the folder recipes-bsp/u-boot. The u-boot folder has a sub-directory "files" with the two files in it. This is what my u-boot-ti-staging%.bbappend looks like:
```
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"

SRC_URI:append = " \
    file://env.cfg \
    file://fw_env.config \
"

FILES_${PN} += " \
    ${sysconfdir}/fw_env.config \
    ${sysconfdir}/env.cfg \
"

do_install:append() {
    install -m 755 ${WORKDIR}/fw_env.config ${D}${sysconfdir}
    install -m 755 ${WORKDIR}/env.cfg ${D}${sysconfdir}
}
```
Unfortunately the files do not show up in my rootfs under the /etc directory.
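As a sanity check I'd also verify whether the files end up in the package at all (assuming the package name simply matches the recipe name):
$ oe-pkgdata-util list-pkg-files u-boot-ti-staging
$ oe-pkgdata-util find-path /etc/fw_env.config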
I am trying to add iptables to my imx6ullevk image, but the kernel modules do not get included.
I added the following to build/conf/local.conf:
CORE_IMAGE_EXTRA_INSTALL += " kernel-modules"
IMAGE_INSTALL_append = " iptables "
IMAGE_FSTYPES += "tar.bz2"
After setting the new kernel configuration with menuconfig I can see the new options in the .config file. I have created a BSP layer with a defconfig based on that configuration (a sketch of how it is hooked in follows the excerpt below).
> CONFIG_NETFILTER=y
> CONFIG_NF_TABLES=y
> CONFIG_NETFILTER_XTABLES=y
> CONFIG_NFT_REJECT=y
...
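For reference, the defconfig is hooked into the kernel build through a bbappend shaped roughly like this (the kernel recipe name linux-imx and the layer paths are assumptions on my part; the excerpt only shows the extra symbols that normally build ip_tables.ko and iptable_filter.ko as modules):
# recipes-kernel/linux/linux-imx_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://defconfig"

# Excerpt from recipes-kernel/linux/linux-imx/defconfig
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_FILTER=m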
After the build I can't find the "net" kernel modules in the image: the /lib/modules/4.9.11.../kernel/net/ folder is empty. So iptables is in the image, but the ip_tables kernel modules are not.
After changing the machine to qemux86, the modules do get included.
When I test the image on the device, I see that iptables is looking for modules under a different kernel version. Just to check whether the modules are included at all, I copy the kernel modules to the version iptables expects and test again.
root@imx6ullevk:~# iptables -L
modprobe: can't change directory to '4.9.88+g5e23f9d61147': No such file or directory
iptables v1.8.3 (legacy): can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
root@imx6ullevk:/lib/modules# ls -l
drwxr-xr-x 3 root root 1040 Jan 1 1970 4.14.98-imx+g1175b59
# Copy the kernel modules just to see if they are included
root@imx6ullevk:/lib/modules# cp -r 4.14.98-imx+g1175b59 4.9.88+g5e23f9d61147
root@imx6ullevk:/lib/modules# ls -l
drwxr-xr-x 3 root root 1040 Jan 1 1970 4.14.98-imx+g1175b59
drwxr-xr-x 3 root root 1040 Dec 8 13:11 4.9.88+g5e23f9d61147
root@imx6ullevk:/lib/modules# iptables -L
modprobe: module ip_tables not found in modules.dep
iptables v1.8.3 (legacy): can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
I am using the https://github.com/Freescale/fsl-community-bsp-platform repo for Yocto and I have tried the zeus, rocko and dunfell branches.
What could be the problem? Any help would be appreciated.
Thanks.
I'm trying to create a tar archive of the '/home/store/' directory content.
tar cvf store.tar /home/store/
While doing so, I can see that the .snapshot directories are also getting included. My understanding is that snapshots are a kind of backup. Can I skip them? If so, how? I tried excluding a test directory using the command below, run from /home/store/:
tar cvfX store.tar <(echo /home/store/test) /home/store/
But this does not exclude the test directory from the created tar.
I also tried this:
tar cvf store.tar /home/store/ --exclude-file=exclude.txt
Output:
a /home/store// 0K
a /home/store//.profile 1K
a /home/store//local.profile 1K
a /home/store//.vas_logon_server 1K
a /home/store//.vas_disauthcc_611400381 1K
a /home/store//.bash_history 7K
a /home/store//test/ 0K
a /home/store//test/1.txt 1K
a /home/store//test/migrate-perf3.txt 3958K
a /home/store//test.txt 1K
a /home/store//exclude.txt 1K
a /home/store//.snapshot/hourly.0/d2/dd/d5d/f82-1 59K
a /home/store//.snapshot/hourly.0/d2/dd/d5d/f83-1 58K
.....
tar: --exclude-file=exclude.txt: No such file or directory
/home/store/exclude.txt has the entry 'test'. I tried entering the following as well and got the same error:
/home/store/test/
/home/store/test/1.txt
When I gave the full path to 'exclude.txt', like this:
`tar cvf store.tar /home/store/ --exclude-file=/home/store/exclude.txt`
it gives the error below:
tar: can't change directories to --exclude-file=/home/store: No such file or directory
tar -h
Usage: tar {c|r|t|u|x}[BDeEFhilmnopPqTvw#[0-7]][bfk][X...] [blocksize] [tarfile] [size] [exclude-file...] {file | -I include-file | -C directory file}...
Thanks in advance!
Van Peer
Try this:
tar cvfX /var/tmp/src.tar /var/tmp/excl.txt /var/tmp/src/
Your exclude file should contain the path:
/home/store//.snapshot
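Adapted to your paths that would look roughly like this, with /var/tmp/exclude.txt containing one path per line, matching the names tar prints (the archive and exclude-file locations are arbitrary, and the exact pattern matching depends on your tar implementation):
/home/store//.snapshot
tar cvfX /var/tmp/store.tar /var/tmp/exclude.txt /home/store/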
It is best practice not to use the full path of the directory you are archiving, because in the future you could overwrite your /etc (for example) when extracting the archive from /var/tmp.
For example:
sudo tar -zcvpf /backup/farm-backup-$(date +%d-%m-%Y).tar.gz --exclude ".snapshots" --exclude ".cache" farm
Note that the command does not use a leading slash for the directory (i.e. farm, not /farm). Execute the tar command from the /home directory to back up the "farm" user, creating the backup in the /backup directory at the root of the filesystem.
OS: openSUSE 15.1
I run wget to create a warc archive as follows:
$ wget --warc-file=/tmp/epfl --recursive --level=1 http://www.epfl.ch/
$ l -h /tmp/epfl.warc.gz
-rw-r--r-- 1 david wheel 657K Sep 2 15:18 /tmp/epfl.warc.gz
$ find .
./www.epfl.ch/index.html
./www.epfl.ch/public/hp2013/css/homepage.70a623197f74.css
[...]
I only need the epfl.warc.gz file. How do I prevent wget from creating all the individual files?
I tried as follows:
$ wget --warc-file=/tmp/epfl --recursive --level=1 --output-document=/dev/null http://www.epfl.ch/
ERROR: -k or -r can be used together with -O only if outputting to a regular file.
tl;dr Add the options --delete-after and --no-directories.
Option --delete-after instructs wget to delete each downloaded file immediately after its download is complete. As a consequence, the maximum disk usage during execution will be the size of the WARC file plus the size of the single largest downloaded file.
Option --no-directories prevents wget from leaving behind a useless tree of empty directories. By default wget creates a directory tree that mirrors the one on the host, and downloads each file into the appropriate directory of the mirrored tree. wget does this even when the downloaded file is temporary due to --delete-after. To prevent that, use option --no-directories.
The following demonstrates the result, using your example (slightly altered).
$ cd $(mktemp -d)
$ wget --delete-after --no-directories \
--warc-file=epfl --recursive --level=1 http://www.epfl.ch/
...
Total wall clock time: 12s
Downloaded: 22 files, 1.4M in 5.9s (239 KB/s)
$ ls -lhA
-rw-rw-r--. 1 chadv chadv 1.5M Aug 31 07:55 epfl.warc
If you forget to use --no-directories, you can easily clean up the tree of empty directories with find -type d -delete.
For individual files (without --recursive) the option -O /dev/null will keep wget from creating a file for the output. For recursive fetches /dev/null is not accepted (I don't know why). But why not just write all the output concatenated into one single file via -O tmpfile and delete that file afterwards?
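A sketch of that approach, using the example URL from the question (the temporary file name is arbitrary):
$ wget --warc-file=/tmp/epfl --recursive --level=1 \
      --output-document=/tmp/epfl-body.tmp http://www.epfl.ch/
$ rm /tmp/epfl-body.tmp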