Removing busybox completely from a Yocto-generated image

I'm trying to build a Yocto image without busybox and without any busybox
applets deployed.
I have tried configuring my distro.conf file this way:
DISTRO_FEATURES_remove = " busybox"
VIRTUAL-RUNTIME_base-utils = ""
PREFERRED_PROVIDER_virtual/base-utils = ""
Nonetheless, the busybox binary and two related applets (syslog and udhcpc) are
still installed in the generated image:
$ rpm -qa | grep busybox
busybox-syslog-1.24.1-r0.corei7_64
busybox-1.24.1-r0.corei7_64
busybox-udhcpc-1.24.1-r0.corei7_64
I have tried disabling the syslog applet by appending this to my distro.conf file:
VIRTUAL-RUNTIME_syslog ?= ""
But the syslogd applet is still installed:
# ls -l /sbin/syslogd
lrwxrwxrwx 1 root root 19 Feb 15 14:03 /sbin/syslogd -> /bin/busybox.nosuid
Is there some way to remove busybox completely from the generated image?

You need to break apart packagegroup-core-boot:
make a copy of it;
remove busybox from it;
install the copy in your image instead of the original (a rough sketch follows below).
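A rough sketch of that approach, assuming a custom layer (the layer path, recipe name, and package list are illustrative; copy the real list from the packagegroup-core-boot recipe in your poky/oe-core release and drop the busybox-related entries):
# meta-mylayer/recipes-core/packagegroups/packagegroup-core-boot-nobusybox.bb
SUMMARY = "Minimal boot packagegroup without busybox"
PACKAGE_ARCH = "${MACHINE_ARCH}"
inherit packagegroup
RDEPENDS_${PN} = " \
    base-files \
    base-passwd \
    ${VIRTUAL-RUNTIME_init_manager} \
    netbase \
"
Then, in the image recipe, pull in the copy instead of the original:
IMAGE_INSTALL_remove = "packagegroup-core-boot"
IMAGE_INSTALL += "packagegroup-core-boot-nobusybox"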

Finally, I found the right answer to my question. The trick to disabling busybox completely is to define these variables in the distro.conf file:
VIRTUAL-RUNTIME_base-utils = ""
VIRTUAL-RUNTIME_login_manager = "shadow"
The last variable (login_manager) is only needed if you install "packagegroup-core-boot" in your custom image, as in my case.
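After rebuilding the image with these settings, the check from the question should come back empty:
$ rpm -qa | grep busybox    # no output expected once busybox is fully removed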
So, this question is solved. Thank you so much for all your support! :-)

Related

Yocto & wpa_supplicant - installing libwpa_client.so

I'm working with this Yocto/BitBake recipe for wpa_supplicant (v2.9), which is a bbappend for this recipe. I had significant difficulties getting the associated library (libwpa_client.so) included in the install process during my Yocto/Poky build. Eventually I created my own bbappend which contained:
# Force build of library libwpa_client.so
do_configure_append() {
    echo 'CONFIG_BUILD_WPA_CLIENT_SO=y' >> wpa_supplicant/.config
}
# Avoid QA errors:
INSANE_SKIP_${PN} += " ldflags"
INHIBIT_PACKAGE_STRIP = "1"
INHIBIT_SYSROOT_STRIP = "1"
SOLIBS = ".so"
FILES_SOLIBSDEV = ""
# Force install of libwpa_client.so
do_install_append() {
    install -d ${D}${libdir}
    install -m 0644 ${S}/wpa_supplicant/libwpa_client.so ${D}${libdir}
}
Using the config option CONFIG_BUILD_WPA_CLIENT_SO=y on its own failed to install the library, so that's why I went with the do_install_append() method. I find it hard to believe this hasn't come up for others before, so I think I'm missing something; I can't find similar hacks/patches.
Can someone point out if I'm missing something obvious? There's a patch file for v2.10 which may be associated with this issue, but I think it's more an issue with the Makefile, as the library was successfully built for v2.9, just not populated to the final image.
If this wasn't a waste of half a day, perhaps this will help others.
Regards,

Is there a way to automatically create a container when starting Azurite?

For test purposes I create and run an Azurite docker image, in a test pipeline.
I would like the blob container to be created automatically after Azurite is started, though, as it would simplify things.
Is there any good way to achieve this?
For the Postgres image we use, we can specify an init.sql which is run on startup. If something similar is available for Azurite, that would be awesome.
You can use the following Dockerfile to install the azure-storage-blob Python package on the Alpine-based azurite image. The resulting image size is ~400MB, compared to the ~1.2GB azure-cli image.
ARG AZURITE_VERSION="3.17.0"
FROM mcr.microsoft.com/azure-storage/azurite:${AZURITE_VERSION}
# Install azure-storage-blob python package
RUN apk update && \
    apk --no-cache add py3-pip && \
    apk add --virtual=build gcc libffi-dev musl-dev python3-dev && \
    pip3 install --upgrade pip && \
    pip3 install azure-storage-blob==12.12.0
# Copy init_azurite.py script
COPY ./init_azurite.py init_azurite.py
# Copy local blobs to azurite
COPY ./init_containers init_containers
# Run the blob emulator and initialize the blob containers
CMD python3 init_azurite.py --directory=init_containers & \
    azurite-blob --blobHost 0.0.0.0 --blobPort 10000
The init_azurite.py script is a local Python script that uses the azure-storage-blob package to batch upload files and directories to the azurite blob storage emulator.
import argparse
import os
from time import sleep
from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient, ContainerClient
def upload_file(container_client: ContainerClient, source: str, dest: str) -> None:
    """
    Upload a single file to a path inside the container.
    """
    print(f"Uploading {source} to {dest}")
    with open(source, "rb") as data:
        try:
            container_client.upload_blob(name=dest, data=data)
        except ResourceExistsError:
            pass
def upload_dir(container_client: ContainerClient, source: str, dest: str) -> None:
    """
    Upload a directory to a path inside the container.
    """
    prefix = "" if dest == "" else dest + "/"
    prefix += os.path.basename(source) + "/"
    for root, dirs, files in os.walk(source):
        for name in files:
            dir_part = os.path.relpath(root, source)
            dir_part = "" if dir_part == "." else dir_part + "/"
            file_path = os.path.join(root, name)
            blob_path = prefix + dir_part + name
            upload_file(container_client, file_path, blob_path)
def init_containers(
    service_client: BlobServiceClient, containers_directory: str
) -> None:
    """
    Iterate on the containers directory and do the following:
    1- create the container.
    2- upload all folders and files to the container.
    """
    for container_name in os.listdir(containers_directory):
        container_path = os.path.join(containers_directory, container_name)
        if os.path.isdir(container_path):
            container_client = service_client.get_container_client(container_name)
            try:
                container_client.create_container()
            except ResourceExistsError:
                pass
            for blob in os.listdir(container_path):
                blob_path = os.path.join(container_path, blob)
                if os.path.isdir(blob_path):
                    upload_dir(container_client, blob_path, "")
                else:
                    upload_file(container_client, blob_path, blob)
if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Initialize azurite emulator containers."
    )
    parser.add_argument(
        "--directory",
        required=True,
        help="""
        Directory that contains subdirectories named after the
        containers that we should create. Each subdirectory will contain the files
        and directories of its container.
        """,
    )
    args = parser.parse_args()
    # Connect to the localhost emulator (after 5 secs to make sure it's up).
    sleep(5)
    blob_service_client = BlobServiceClient(
        account_url="http://localhost:10000/devstoreaccount1",
        credential={
            "account_name": "devstoreaccount1",
            "account_key": (
                "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq"
                "/K1SZFPTOtr/KBHBeksoGMGw=="
            ),
        },
    )
    # Only initialize if not already initialized.
    if next(blob_service_client.list_containers(), None):
        print("Emulator already has containers, will skip initialization.")
    else:
        init_containers(blob_service_client, args.directory)
This script will be copied to the azurite container and will populate the initial blob containers every time the azurite container is started unless some containers were already persisted using docker volumes. In that case, nothing will happen.
Following is an example docker-compose.yml file:
services:
  azurite:
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        AZURITE_VERSION: 3.17.0
    restart: on-failure
    ports:
      - 10000:10000
    volumes:
      - azurite-data:/opt/azurite
volumes:
  azurite-data:
Using such volumes will persist the emulator data until you destroy them (e.g. by using docker-compose down -v).
Finally, init_containers is a local directory that contains the containers and their folders/files. It will be copied to the azurite container when the image is built.
For example:
init_containers:
  container-name-1:
    dir-1:
      file.txt
      img.png
    dir-2:
      file.txt
  container-name-2:
    dir-1:
      file.txt
      img.png
I've solved the issue by creating a custom docker image and executing azure-cli tools from a health check. There could certainly be better solutions, and I will update the accepted answer if someone posts a better solution.
In more detail:
A solution to create the required data on startup is to run my own script. I chose to trigger the script from a health check defined in docker-compose. What it does is use azure-cli tools to create a container and then verify that it exists.
The script:
AZURE_STORAGE_CONNECTION_STRING="UseDevelopmentStorage=true"
export AZURE_STORAGE_CONNECTION_STRING
az storage container create -n images
az storage container show -n images
exit $?
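For reference, a minimal sketch of how such a script could be wired up as a docker-compose health check (the service name, image name, script path, and timings are illustrative assumptions):
services:
  azurite:
    image: my-azurite-with-cli   # hypothetical custom image with azure-cli, azurite, and the script baked in
    ports:
      - 10000:10000
    healthcheck:
      test: ["CMD", "sh", "/init-storage.sh"]   # the script shown above
      interval: 5s
      timeout: 10s
      retries: 10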
However, the azurite image is based on Alpine, which doesn't have apt, so installing azure-cli was a bit tricky. So I did it the other way around and based my image on mcr.microsoft.com/azure-cli:latest. With that done, I installed Azurite like this:
RUN apk add npm
RUN npm install -g azurite --silent
All that's left is to actually run azurite, see the official azurite dockerfile for details.
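Putting it together, a hypothetical Dockerfile along those lines might look like this (the script name and the CMD are assumptions that mirror the official azurite image):
FROM mcr.microsoft.com/azure-cli:latest
# Install Azurite on top of the Alpine-based azure-cli image
RUN apk add npm && \
    npm install -g azurite --silent
# The initialization script shown above, invoked from the docker-compose health check
COPY init-storage.sh /init-storage.sh
EXPOSE 10000
# Run the blob endpoint only; see the official azurite Dockerfile for the full command
CMD ["azurite-blob", "--blobHost", "0.0.0.0", "--blobPort", "10000"]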
It is possible to do this without azure-cli and use curl instead (and, with that, avoid the azure-cli docker image). However, getting the authentication header right that way was a bit complicated, so using azure-cli was easier.

Include systemd-journal-remote with Bitbake

I am using an embedded Linux system based on Yocto/OpenEmbedded and the systemd-journal-remote program is missing.
When I look at the systemd recipe, the program is mentioned. It seems like it is not compiled or added to the image by default. I understand how to add normal recipes, but unfortunately I don't understand how to add such a "subpackage".
The Bitbake documentation is unfortunately overwhelming for a beginner like me. Can someone help me?
Create a bbappend for systemd in your meta-layer at the path recipes-core/systemd/systemd_%.bbappend containing:
PACKAGECONFIG_append = " \
    microhttpd \
"
You can then add it to your image .bb or .bbappend file with the following parameter:
IMAGE_INSTALL += "systemd-journal-remote"
This will add systemd-journal-remote to your image. Install the image on your target board, log in to your target, and configure the file /etc/systemd/journal-remote.conf.
Then enable the service with systemctl enable systemd-journal-remote and restart it with systemctl restart systemd-journal-remote.
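As an illustration only (the exact settings depend on your setup), a minimal /etc/systemd/journal-remote.conf for receiving logs over plain HTTP might look like this; see the journal-remote.conf(5) man page for the full set of options:
[Remote]
# Store received logs in one journal file per sending host (the default)
SplitMode=host
# Do not seal received journal files
Seal=false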

Yocto remove unused init system (on per-image basis)

I would like to change the init system on a per-image basis.
I have created a sample image as pointed out here.
This works well, but I also want to remove the unused init system (in this case SysVinit) from rootfs.
Therefore I tried something like this inside my distro config: (REQUIRED_DISTRO_FEATURES = "systemd" is set inside my image.bb)
DISTRO_FEATURES_BACKFILL_CONSIDERED = "${@bb.utils.contains('REQUIRED_DISTRO_FEATURES', 'systemd', 'sysvinit', '', d)}"
Finally, it results in exactly what I expect:
$ bitbake sample-image-systemd -e | grep DISTRO_FEATURES_BACKFILL_CONSIDERED=
DISTRO_FEATURES_BACKFILL_CONSIDERED="sysvinit"
So far so good. But the final rootfs still contains sysvinit scripts (/etc/init.d/*).
If I do the following inside my distro config, everything works well and /etc/init.d is not created:
DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"
So I don't really understand the difference and why my solution doesn't work.
The difference is that the systemd recipe does not have a dependency on your image recipe, which is good, because if it did there would be a circular dependency. So, variables defined in your image recipe are not expanded or accessible by the systemd recipe.
To see this for yourself, you can run the following command:
$ bitbake systemd -e | grep DISTRO_FEATURES_BACKFILL_CONSIDERED=
The result should be
DISTRO_FEATURES_BACKFILL_CONSIDERED=""
This happens because the sample-image-systemd recipe's variables are not expanded or used while you are building/packaging the systemd recipe. To make a variable global, i.e. accessible by all recipes, you should add it to your distro config or your local.conf.
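A minimal sketch of the distro-level alternative, assuming you maintain a separate distro config selected via DISTRO when building systemd-based images (the file name below is illustrative):
# conf/distro/mydistro-systemd.conf (hypothetical distro config used only for systemd images)
DISTRO_FEATURES_append = " systemd"
DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"
VIRTUAL-RUNTIME_init_manager = "systemd"
VIRTUAL-RUNTIME_initscripts = ""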

SQLGetPrivateProfileString failed with

Typing the command: odbcinst -q -s on RHEL 6, I get the following error message:
odbcinst: SQLGetPrivateProfileString failed with .
My DSNs also do not show up when I run:
odbcinst -q -d
Type the command env | grep ODBC to check whether the ODBCSYSINI and ODBCINI variables are set. If no results are returned, you need to set them: ODBCSYSINI pointing to the directory and ODBCINI to the path of the odbc.ini file (in my case, on RHEL 6, it is located at /etc; others may have it under /usr/local/etc):
Edit ~/.bash_profile and add the following lines:
export ODBCSYSINI=/etc
export ODBCINI=/etc/odbc.ini
You are good to go!
In my case (Ubuntu 16.04) it was related to this bug, just with /etc/odbc.ini instead of ~/.odbc.ini. Adding the following line to /etc/odbc.ini:
[empty-sys]
fixed the problem.
It's probably too late to answer this question, but this is for those who still couldn't get it resolved using Kapil Vyas's answer.
Adding to his answer: you will need to log out and then log back in for the export commands (saved in .bash_profile) to take effect.
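Alternatively, you can pick up the new variables in the current shell without logging out, for example:
source ~/.bash_profile
odbcinst -q -s    # should now list your data sources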
When I had this problem, I edited /usr/local/etc/odbcinst.ini to add:
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc8a.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libodbcmyS.so
FileUsage = 1
Pooling = Yes
CPTimeout = 120
I hope this is helpful.