Yocto sunxi machine name

While building a nanopi-neo image with Yocto it throws the following error.
In local.conf: MACHINE ??= "nanopi-neo"
ERROR: OE-core's config sanity checker detected a potential misconfiguration.
Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
Following is the list of potential problems / advisories:
MACHINE=nanopi-neo is invalid. Please set a valid MACHINE in your local.conf, environment or other configuration file.
Can anyone please tell me how to fix this error?

Apparently, nanopi-neo is an unknown target device for your setup.
MACHINE ??= "nanopi-neo" looks like a default value, so you most probably should set this variable to a target that is available in your BSP layer, which typically has a name like meta-bsp-smth. You can find the list of available devices in the meta-bsp-smth/conf/machine folder (e.g. meta-bsp-smth/conf/machine/some_dev_name.conf). Then add to local.conf:
MACHINE ?= "some_dev_name"
If the error remains, also check in conf/bblayers.conf which layers are enabled; the BSP layer's full path should be in the list of BBLAYERS.
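For example, from the build directory (a hedged sketch; meta-bsp-smth and some_dev_name are placeholders for your actual BSP layer and target):
# list the machine configs the layer provides; each file name minus .conf is a valid MACHINE
ls ../meta-bsp-smth/conf/machine/
# confirm the layer's full path appears in BBLAYERS
grep -A 10 'BBLAYERS' conf/bblayers.conf
# then pin the machine in local.conf
echo 'MACHINE ?= "some_dev_name"' >> conf/local.conf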
Update
You can also check the available products by running
# bitbake-layers show-products
and check the first column for availability and the correct name of the product. Then you can check the available layers by running:
# bitbake-layers show-layers
and check if meta-sunxi is in the output list.
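For example (a hedged one-liner; adjust the pattern to the layer you expect):
bitbake-layers show-layers | grep -i sunxi
If nothing is printed, the layer is not part of the build yet and has to be added first (see the next answer).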

You need to add meta-sunxi to your layer mix.
git clone git://git.yoctoproject.org/poky
cd poky
git clone https://github.com/linux-sunxi/meta-sunxi
. ./oe-init-build-env
bitbake-layers add-layer ../meta-sunxi
MACHINE=nanopi-neo-air bitbake core-image-minimal
You can bitbake any image you like. If you don't want to set MACHINE on the command line, it can be added to local.conf for persistence:
MACHINE = "nanopi-neo-air"

Related

Yocto build immediately failed

I have a PC that is usually used for Yocto image building. Now I need to add ROS2 packages to the same image. It turned out the disk was full, so I connected an external SSD to build the image on it. I did the same steps as before, ran the same command etc., but after the build starts it crashes at the first package. I've reinstalled all the sources from scratch, I've deleted tmp and sstate-cache, but nothing helped. I don't understand what this error says.
This is the error trace log.
As I see, Yocto fails to write something into sstate-cache/61; I don't really know what that is. The user has read/write permissions.
The build system: Ubuntu 20.04
Yocto version: zeus
In the linked error log, the relevant part is:
SignatureGeneratorOEBasicHash.dump_sigtask(fn='/media/sw/Samsung/yocto/sources/poky/meta/recipes-extended/texinfo-dummy-native/texinfo-dummy-native.bb', task='do_fetch', stampbase='/media/sw/Samsung/yocto/build-xwayland/sstate-cache/61/sstate:texinfo-dummy-native::1.0:r0::3:610ed4b8e8bf78bbcd4a667b6645a0276f5c8bfce5de4822923850d44d032bbe_fetch.tgz.siginfo', runtime='customfile:/media/sw/Samsung/yocto/build-xwayland/tmp/stamps/x86_64-linux/texinfo-dummy-native/1.0-r0'):
os.chmod(tmpfile, 0o664)
> os.rename(tmpfile, sigfile)
except (OSError, IOError) as err:
OSError: [Errno 22] Invalid argument: '/media/sw/Samsung/yocto/build-xwayland/sstate-cache/61/sigtask.twkjztl9' -> '/media/sw/Samsung/yocto/build-xwayland/sstate-cache/61/sstate:texinfo-dummy-native::1.0:r0::3:610ed4b8e8bf78bbcd4a667b6645a0276f5c8bfce5de4822923850d44d032bbe_fetch.tgz.siginfo'
It is likely that the new name is not valid for the target disk filesystem. Typically the : character is invalid on FAT/NTFS filesystems. Native Linux filesystems like Ext4, XFS and Btrfs will not have this limitation.
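A quick way to confirm this is to check which filesystem the external disk actually uses (a hedged sketch; the mount point is taken from the log above, the local.conf value is a placeholder):
# print the filesystem type of the disk holding the build
df -T /media/sw/Samsung
# if it reports vfat, exfat or ntfs, either reformat the disk with a native Linux
# filesystem such as ext4 (destructive!), or keep the cache on a Linux filesystem
# by pointing it elsewhere in conf/local.conf, e.g.:
#   SSTATE_DIR = "/home/<user>/yocto-sstate"
The colon in the sstate file names is only one symptom; keeping TMPDIR on such a filesystem will likely cause trouble as well.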

How to change timezone in read-only rootfs in Yocto poky warrior

I am trying to change the timezone on an embedded Linux (Yocto poky warrior) for Raspberry-pi Cm3.
But I am unable to do so. I get an error message stating
root@raspberrypi-cm3:~# timedatectl set-timezone "America/New_York"
Failed to set time zone: Failed to set time zone: Read-only file system
This worked before changing the rootfs to read-only.
How can I change timezone on read-only rootfs?
/etc/localtime is recreated (by an equivalent of ln -fs) by timedated when needed... which obviously can't be done because it's on an RO FS.
It's not really possible out of the box; you'll need to either pick (and maintain) a patch or use overlayfs or some other kind of work-around.
See this for full explanation.
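If you go the overlayfs route, a minimal, untested sketch would be to mount a writable layer over /etc so timedated can recreate the /etc/localtime symlink (run as root; with the upper directory on tmpfs the change is lost on reboot):
# upper/work dirs must live on a writable filesystem; /run is a tmpfs here
mkdir -p /run/etc-overlay/upper /run/etc-overlay/work
mount -t overlay overlay \
    -o lowerdir=/etc,upperdir=/run/etc-overlay/upper,workdir=/run/etc-overlay/work /etc
timedatectl set-timezone "America/New_York"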

"Unable to find component name" on myodbc-installer of driver

Trying to follow the directions for installing the MySQL ODBC driver.
When I try to run:
myodbc-installer -a -d -n "MySQL ODBC 8.0 Driver" -t "Driver=/usr/local/lib/libmyodbc8w.so"
It says:
[ERROR] SQLInstaller error 6: Unable to find component name
I've found a handful of cases of people reporting this same message, e.g., here and here. Yet there seems to be no resolution.
Noticing the slight variations in the -n name string for the various drivers, I wondered if perhaps the name was something subtly different and the documentation hadn't been updated. But I used a hex editor to look in /usr/local/lib/libmyodbc8w.so and the literal string "MySQL ODBC 8.0 Driver" is in it.
There may be some instances of a name mismatch causing the problem (e.g. in one of the linked-to cases, they use -n "MySQL" instead of the prescribed -n "MySQL ODBC 5.3" from the notes).
However...in my case it was a matter of not using sudo. The error message is not very helpful in indicating that the problem could be a matter of privileges! :-/ But at the very top of the linked instruction page it says (emphasis mine):
To install the driver from a tarball distribution (.tar.gz file), download the latest version of the driver for your operating system and follow these steps, substituting the appropriate file and directory names based on the package you download (some of the steps below might require superuser privileges)
What's going on is that unixodbc has system-wide odbcinst.ini and odbc.ini files. It is stated that you should not edit these files directly; instead they are edited via an API that unixodbc provides. That API is called by MySQL's helper utility, myodbc-installer:
The error message is delivered by this print_installer_error() routine
...which is called from add_driver() when the routine SQLInstallDriverExW() returns false
(Note: unixodbc on most platforms provides the (W)ide Character version of SQLInstallDriverEx(), but myodbc-installer defines its own SQLInstallDriverExW() if it is not available via a shim.)
This API apparently doesn't have a way of reporting that it can't get the necessary privileges on the files (in /usr/local/etc, or perhaps just /etc). So myodbc-installer is just parroting what it got. Sigh.
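In practice the fix can be as simple as re-running the same command with superuser privileges:
sudo myodbc-installer -a -d -n "MySQL ODBC 8.0 Driver" -t "Driver=/usr/local/lib/libmyodbc8w.so"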

How to load fish configuration from a remote repository?

I have a zillion machines in different places (home network, cloud, ...) and I use fish on each of them. The problem is that I have to synchronize their configuration every time I change something in there.
Is there a way to load the configuration from a remote repository? (= a place where it would be stored, not necessarily git but ideally I would manage them in GitHub). In such a case I would just have a one liner everywhere.
I do not care too much about startup time; loading the config each time would be acceptable.
I cannot push the configuration to the machines (via Ansible for instance) - not all of them are reachable from everywhere directly - but all of them can reach the Internet.
There are two parts to your question. Part one is not specific to fish. For systems I use on a regular basis I use Dropbox. I put my ~/.config/fish directory in a Dropbox directory and symlink to it. For machines I use infrequently, such as VMs I use for investigating problems unique to a distro, I use rsync to copy from my main desktop machine. For example,
rsync --verbose --archive --delete -L --exclude 'fishd.*' krader@macpro:.config .
Note the exclusion of the fishd.* pattern. That's part two of your question and is unique to fish. Files in your ~/.config/fish directory named with that pattern are the universal variable storage and are currently unique for each machine. We want to change that -- see https://github.com/fish-shell/fish-shell/issues/1912. The problem is that file contains the color theme variables. So to copy your color theme requires exporting those vars on one machine:
set -U | grep fish_color_
Then run set -U name value on the new machine for each line of output from the preceding command. Obviously, if you have other universal variables you want synced, you should just run set -U and import all of them.
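A rough, untested fish sketch of that copy step (assuming the remote login shell is fish; otherhost is a placeholder for your other machine):
# dump the color variables on the remote machine and re-create them locally
ssh otherhost 'set -U | grep fish_color_' | while read -l name value
    # split the remainder back into a list before storing it as a universal variable
    set -U $name (string split ' ' -- $value)
end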
Disclaimer: I wouldn't choose this solution myself. Using a cloud storage client as Kurtis Rader suggested, or a periodic cron job to pull changes from a git repository (+ symlinks), seems a lot easier and more foolproof.
On those systems where you can't or don't want to sync with your cloud storage, you can download the configuration file specifically, using curl for example. Some precious I/O time can be saved by using HTTP cache-control mechanisms. With or without cache control, you will still need to open a connection to a remote server each time (or every X runs, or every Y amount of time), and that already wastes quite some time.
Following is a suggestion for such a fish script, to get you started:
#!/usr/bin/fish
set -l TMP_CONFIG /tmp/shared_config.fish
curl -s -o $TMP_CONFIG -D $TMP_CONFIG.headers \
-H "If-None-Match: \"$SHARED_CONFIG_ETAG\"" \
https://raw.githubusercontent.com/woj/dotfiles/master/fish/config.fish
if test -s $TMP_CONFIG
mv $TMP_CONFIG ~/.config/fish/conf.d/shared_config.fish
set -U SHARED_CONFIG_ETAG (sed -En 's/ETag: "(\w+)"/\1/p' $TMP_CONFIG.headers)
end
Notes:
Warning: Not tested nearly enough
Assumes fish v2.3 or higher.
sed behavior varies from platform to platform.
Replace woj/dotfiles/master/fish/config.fish with the repository, branch and path that apply to your case.
You can run this from a cron job, but if you insist on updating the configuration file on every init, change the script to place the configuration in a path that's not already automatically loaded by fish, e.g.:
mv $TMP_CONFIG ~/.config/fish/shared_config.fish
and in your config.fish run this whole script file, followed by a
source ~/.config/fish/shared_config.fish
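If you go with a cron job instead, a hedged example crontab entry (the script path is a placeholder) that pulls the shared configuration once an hour:
0 * * * * /usr/bin/fish /home/<user>/bin/shared_config_pull.fish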

restore complete filesystem to default security context

I'm an SELinux newbie and had to change the security context of a Mercurial repo and config file on a CentOS box to get it served by httpd.
Accidentally I issued "chcon -Rv --type=httpd_sys_script_exec_t /", which I could only stop after masses of files and directories had already been modified.
I read about restorecon to restore something to its default context, but it doesn't work for me; I get "permission denied".
What can I do to restore the whole filesystem to its selinux defaults?
You could try doing a fixfiles relabel to get things back in order. Otherwise you could edit /etc/selinux/config and set the system to no longer enforce SELinux. Good luck!
You could do any of the following to fix this:
fixfiles
create a file /.autorelabel and reboot the system.
restorecon -f file
Usually the conf file will be /etc/selinux/targeted/contexts/files/file_contexts
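For reference, hedged examples of those options (run them as root; lack of root is also a likely cause of the restorecon "permission denied", and the paths in the last command are placeholders):
# relabel the whole filesystem on the next boot
touch /.autorelabel && reboot
# or relabel in place
fixfiles relabel
# or restore only specific paths to their default context
restorecon -Rv /var/www /srv/hg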