useradd doesn't seem to work correctly on a separate partition for home directories - yocto

I've recently started creating a distribution for a qemux86_64 machine using the dunfell LTS branch. To enable the read-only-rootfs image feature and still have a writable home directory, I've added a custom wks file:
part /boot --source bootimg-pcbios --label boot --active --align 1024
part / --source rootfs --ondisk sda --fstype=ext4 --label root --exclude-path=home/
part /home --source rootfs --rootfs-dir=${IMAGE_ROOTFS}/home --ondisk sda --label home
bootloader --timeout=0 --append="rw oprofile.timer=1 rootfstype=ext4"
With this file in place I get separate boot and home partitions, which works great.
The next thing I wanted to add was a user within its own recipe, based on the description in meta-skeleton/recipes-skeleton/useradd/useradd-example.bb, which looks like the following:
SUMMARY = "Create test users"
DESCRIPTION = "test"
LICENSE = "MIT"
inherit useradd
USERNAME="test"
USERADD_PACKAGES = "${PN}"
USERADD_PARAM_${PN} = "-u 1000 -d /home/${USERNAME} -r -P 'test' -s /bin/bash ${USERNAME};"
do_install () {
    install -d -m 755 ${D}/home/${USERNAME}
    # The new users and groups are created before the do_install
    # step, so you are now free to make use of them:
    chown -R ${USERNAME} ${D}/home/${USERNAME}
}
FILES_${PN} = "/home/${USERNAME}"
INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
...but unfortunately it doesn't seem to work correctly. The /home/test directory is created, but it ends up with the uid and gid of the user who triggered the image build. The same issue affects the /home/root directory.
The issue disappears as soon as I remove the home partition from the wks file, but I wouldn't consider a separate partition unusual, especially if you need a location for persistent data with the read-only-rootfs feature enabled.
That said, I'm confident that I've messed something up, but unfortunately I can't find the missing piece... It would be great if someone could help me resolve this issue.
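One way to confirm the symptom on the booted target is to check the numeric ownership with stat; a minimal sketch, with a scratch directory standing in for /home/test:

```shell
# Inspect the numeric uid:gid and owner name of a path, the same check
# you would run for /home/test on the target; a scratch directory
# stands in for the real path here.
dir=$(mktemp -d)
stat -c '%u:%g %U %n' "$dir"
rmdir "$dir"
```

If the uid printed for /home/test on the target matches your build-host user rather than 1000, the ownership was lost when the home partition image was assembled.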


VS Code Remote-Containers: cannot create directory ‘/home/appuser’:

I'm trying to use the Remote - Containers extension for Visual Studio Code, but when I "Open Folder in Container", I get this error:
Run: docker exec 0d0c1eac6f38b81566757786f853d6f6a4f3a836c15ca7ed3a3aaf29b9faab14 /bin/sh -c set -o noclobber ; mkdir -p '/home/appuser/.vscode-server/data/Machine' && { > '/home/appuser/.vscode-server/data/Machine/.writeMachineSettingsMarker' ; } 2> /dev/null
mkdir: cannot create directory ‘/home/appuser’: Permission denied
My Dockerfile uses:
FROM python:3.7-slim
...
RUN useradd -ms /bin/bash appuser
USER appuser
I've also tried:
RUN adduser -D appuser
RUN groupadd -g 999 appuser && \
    useradd -r -u 999 -g appuser appuser
USER appuser
Both of these work if I build them directly. How do I get this to work?
What works for me is to create a non-root user in my Dockerfile and then configure the VS Code dev container to use that user.
Step 1. Create the non-root user in your Docker image
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN groupadd --system --gid ${GROUP_ID} MY_GROUP && \
    useradd --system --uid ${USER_ID} --gid MY_GROUP --home /home/MY_USER --shell /sbin/nologin MY_USER
Step 2. Configure the .devcontainer/devcontainer.json file in the root of your project (it should be created when you start remote development)
"remoteUser": "MY_USER" <-- this is the setting you want to update
If you use docker compose, it's possible to configure VS Code to run the entire container as the non-root user by configuring .devcontainer/docker-compose.yml, but I've been happy with the process described above so I haven't experimented further.
You might get some additional insight by reading through the VS Code docs on this topic.
Go into your WSL2 instance and check your local (non-root) UID using the id command.
In my case it is UID=1000(ubuntu).
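For example, on the host:

```shell
# Print the current user's numeric UID and login name on the host;
# the container user should be created with a matching UID.
id -u
id -un
```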
Change your Dockerfile to something like this:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /home/ubuntu
COPY . /home/ubuntu
# Creates a non-root user and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN useradd -u 1000 ubuntu && chown -R ubuntu /home/ubuntu
USER ubuntu
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "app.py"]

Yocto Warrior Cannot Set Password for root or other users

I am using the meta-tegra warrior branch layer to build an sd card image for the Nvidia Jetson Nano. The image completes and the board boots, but I cannot log in if I try to set any kind of password in Yocto. I've tried creating users other than root and setting their passwords, but the same problem occurs where I cannot log in.
If I leave "debug-tweaks" enabled, and do not attempt to modify the root password at all, I can successfully log in without a password.
I am using warrior branch for OE and haven't modified other layers. How can I set a password for root?
Here are my local.conf password related lines:
# Password Stuff
INHERIT += "extrausers"
#EXTRA_IMAGE_FEATURES = "debug-tweaks"
EXTRA_USERS_PARAMS = "usermod -P mypassword123 root; "
EXTRA_USERS_PARAMS = " useradd testing; \
useradd mts; \
usermod -p 'testing12345' testing; \
usermod -p 'comp12345' comp; \
usermod with -p (lowercase p) expects a password hash, for example one generated with the openssl passwd command, so you need to set the Yocto variable as follows:
EXTRA_USERS_PARAMS = "usermod -p $(openssl passwd <some_password>) root;"
If you want to append something to a bitbake variable, you need to use the _append or += operators, e.g.:
EXTRA_USERS_PARAMS_append = " useradd testing;"
EXTRA_USERS_PARAMS_append = " useradd mts;"
...
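To pre-compute the hash yourself (assuming openssl is available on the build host), a minimal sketch:

```shell
# Pre-compute a SHA-512 crypt hash on the build host; 'mypassword123'
# is the example password from the question. -6 selects SHA-512 crypt.
hash=$(openssl passwd -6 'mypassword123')
echo "$hash"
# The hash starts with $6$ and can then be pasted into
# EXTRA_USERS_PARAMS; quote it carefully, since crypt hashes
# contain '$' characters.
```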

File is not being copied to WORKDIR from image recipe in Yocto

I am trying to install a simple file into the /etc directory of the target rootfs. I am building core-image-sato. The "raxy_test" file (in the recipe below) is not even being copied into WORKDIR.
Am I doing anything wrong?
I am able to do the same with a normal recipe, but not with an image recipe.
What is the difference between a normal recipe and an image recipe?
DESCRIPTION = "Image with Sato, a mobile environment and visual style for \
mobile devices. The image supports X11 with a Sato theme, Pimlico \
applications, and contains terminal, editor, and file manager."
IMAGE_FEATURES += "splash package-management x11-base x11-sato ssh-server-dropbear hwcodecs"
LICENSE = "MIT"
inherit core-image
TOOLCHAIN_HOST_TASK_append = " nativesdk-intltool nativesdk-glib-2.0"
TOOLCHAIN_HOST_TASK_remove_task-populate-sdk-ext = " nativesdk-intltool nativesdk-glib-2.0"
LICENSE="CLOSED"
LIC_FILES_CHKSUM=""
SRC_URI = "\
file://raxy_test \
"
do_install() {
    install -d ${D}${sysconfdir}
    install -m 0755 raxy_test ${D}${sysconfdir}
}
I expect the "raxy_test" file to be present in WORKDIR as well as in the /etc directory of the target.
Any help would really be appreciated. Thanks!
Multiple things:
You use an image recipe (core-image-sato) to add a file to your image. You should use a separate recipe for this modification;
The install is not correct (WORKDIR is not used);
You do not populate the packages (FILES_${PN} is not present).
For the separate recipe, create a file (for example myrecipe.bb, or whatever you want) in a recipes-* subdirectory (you need to place it at the same folder level as the other recipes!). I did not test it, but I think this can be a base:
DESCRIPTION = "My recipe"
LICENSE="CLOSED"
PR = "r0"
PV = "0.1"
SRC_URI = " file://raxy_test "
# Create package specific skeleton
do_install() {
    install -d ${D}${sysconfdir}
    install -m 0755 ${WORKDIR}/raxy_test ${D}${sysconfdir}/raxy_test
}
# Populate packages
FILES_${PN} = "${sysconfdir}"
You can notice that some things have changed:
The install must include the ${WORKDIR} path:
install -m 0755 ${WORKDIR}/raxy_test ${D}${sysconfdir}
And we need to populate the package:
FILES_${PN} = "${sysconfdir}"
This will add the files in ${sysconfdir} into the package ${PN} (which is by default the recipe name).
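As a side note, the effect of the two do_install lines can be sketched outside BitBake with plain shell, using temporary directories as stand-ins for ${WORKDIR} and ${D}:

```shell
# Stand-ins for the BitBake variables (illustrative paths only).
WORKDIR=$(mktemp -d)
D=$(mktemp -d)
echo 'sample content' > "$WORKDIR/raxy_test"

# Same steps as the recipe's do_install:
install -d "$D/etc"                                      # create ${D}${sysconfdir}
install -m 0755 "$WORKDIR/raxy_test" "$D/etc/raxy_test"  # copy with mode 0755

ls -l "$D/etc/raxy_test"
```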

Where to get the eligible library tag file in Android O

In https://source.android.com/devices/architecture/vndk/deftool, it is mentioned that Google provides a tag file to classify the framework shared libraries, including LL-NDK, SP-NDK, VNDK, VNDK-SP, etc. However, after searching this website and googling, I'm not able to find the tag file. Where does Google provide it?
Thanks
Jincan
I found out how to get these files.
You need the vendor.img and system.img files, which are the images deployed to the vendor partition and the system partition on a device.
Step 1
Visit Driver Binaries for Nexus and Pixel Devices.
There are images for two devices.
taimen (Pixel 2 XL)
walleye (Pixel 2)
Step 2: Extract the files
Read the README.md.
It contains the following commands:
$ simg2img system.img system.raw.img
$ simg2img vendor.img vendor.raw.img
$ mkdir system
$ mkdir vendor
$ sudo mount -o loop,ro system.raw.img system
$ sudo mount -o loop,ro vendor.raw.img vendor
$ sudo python3 vndk_definition_tool.py vndk \
    --system system \
    --vendor vendor \
    --aosp-system /path/to/aosp/generic/system \
    --tag-file eligible-list-v3.0.csv
For details, please see the README.md.
Thank you
git clone https://android.googlesource.com/platform/development
~/tools/development/vndk/tools/definition-tool/datasets[master]$ ls
eligible-list-o-mr1-release.csv eligible-list-o-release.csv minimum_dlopen_deps.txt minimum_tag_file.csv

Yocto post script to move two deployed image files into a folder

I'm trying to create a recipe that moves two image archives into a directory inside Yocto's deploy directory /tmp/deploy/images. I have already created a new image that simply includes the other two recipes, but I haven't been able to use any of the available scripting functions to copy the generated images into a separate folder scheme. I've tried using do_install_append() to simply touch a new file, but to no avail, and no warnings/errors are shown in the terminal during image creation.
Essentially, the workflow would be as follows inside of my-image.bb
....
require my-1st-image.bb
require my-2nd-image.bb
post_script(){
# rm -rf ${WORKDIR}/images/<machine>/USB
# mkdir ${WORKDIR}/images/<machine>/USB
# cp <my-1st-image.tar.gz> ${WORKDIR}/images/<machine>/USB
# cp <my-2nd-image.tar.gz> ${WORKDIR}/images/<machine>/USB
}
You need to use install. Try the following:
post_script() {
    install -d ${WORKDIR}/images/<machine>/USB
    install -m 0755 <my-1st-image.tar.gz> ${WORKDIR}/images/<machine>/USB
    install -m 0755 <my-2nd-image.tar.gz> ${WORKDIR}/images/<machine>/USB
}
The first of the three install commands above creates the destination directory; the other two copy the files into it.
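The same sequence can be sketched as plain runnable shell, with a temporary directory standing in for the deploy path and empty placeholder archives (all names below are illustrative):

```shell
# Placeholder deploy directory and image archives (illustrative names).
DEPLOY=$(mktemp -d)
touch "$DEPLOY/my-1st-image.tar.gz" "$DEPLOY/my-2nd-image.tar.gz"

# Create the USB folder and copy both archives into it.
install -d "$DEPLOY/USB"
install -m 0755 "$DEPLOY/my-1st-image.tar.gz" "$DEPLOY/USB/"
install -m 0755 "$DEPLOY/my-2nd-image.tar.gz" "$DEPLOY/USB/"

ls "$DEPLOY/USB"
```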