Yocto Rocko bitbake causes machine reboot on Ubuntu

Running a Yocto Rocko bitbake build on an Ubuntu 16.04 machine causes the machine to reboot after reaching a particular stage. The PC running Ubuntu has 16 GB of RAM. How can I overcome this issue?

TL;DR
Switch to another tty (by pressing CTRL + ALT + F[1-6]), log in, and run bitbake from there.
The root cause seems to be a signal sent by bitbake that is not correctly handled by the X server: http://lists.openembedded.org/pipermail/openembedded-core/2016-December/130621.html.
The first suggested workaround was to lower the number of concurrent bitbake processes by setting BB_NUMBER_THREADS to 4 or fewer (but I've experienced soft reboots even with 4 concurrent threads and had to lower it to 2 to be able to compile).
Unfortunately, this workaround implies longer build times (as if building were not already slow enough).
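For reference, this knob lives in the build's conf/local.conf; a minimal sketch with the values from above (PARALLEL_MAKE is the related make-level setting, which you may want to lower in step):
# conf/local.conf -- example values, adjust to taste
BB_NUMBER_THREADS = "2"
PARALLEL_MAKE = "-j 2"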
By the way, there is another workaround: instead of launching the bitbake command inside tty7, the default console where the X server is running, just switch to another tty (pressing CTRL + ALT + F[1-6]), log in, and run bitbake from there.
Doing this, I was able to build an entire image with 7 concurrent threads without experiencing soft reboots.
Another option is to use a tiling window manager such as i3.

Switching to another tty with CTRL + ALT + F1 helped me greatly; I can run bitbake with 8 running tasks.
(This is already answered by #garlix; I'm just highlighting it as the approach that was easiest for me.)

Related

Vmmem does not automatically shut down after closing WSL2/WSLg/VS Code on Windows 11

I've been using WSL with VS Code. I've noticed recently that after closing either VS Code or a single WSL terminal, a process called Vmmem keeps using lots of RAM.
I've found how to shut it down manually in CMD or PowerShell using the following command:
wsl --shutdown
But I've been trying to find a way to make it close itself when I either close the WSL terminal or VSCode.
This problem hasn't occurred in the past, meaning it was most likely caused by a Windows update.
I've tried updating my WSL and updating my Windows version, but all my research has led to nothing so far.
Here are the current versions I'm on for WSL:
WSL version: 0.70.4.0
Kernel version: 5.15.68.1
WSLg version: 1.0.45
MSRDC version: 1.2.3575
Direct3D version: 1.606.4
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22000.1098
Yes, Vmmem is a process that runs as part of the WSL2 Virtual Machine. And yes, a wsl --shutdown will terminate the VM itself (along with any distributions running inside it).
Typically, WSL2 terminates in two stages:
First, when there are no "interactively started processes" (foreground or background) in a WSL2 distribution, it will terminate after 15 seconds. This is currently non-configurable.
Second, when all running WSL2 distributions have terminated, the WSL2 VM itself will shut down after 60 seconds. This value is configurable on recent WSL2 releases on Windows 11. At this point, Vmmem should end and release its memory.
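For reference, on recent Windows 11 builds that timeout can be tuned with the vmIdleTimeout setting in %UserProfile%\.wslconfig; a minimal sketch (the value is in milliseconds, and 60000 here is just an example):
[wsl2]
# shut the idle WSL2 VM down after 60 seconds (example value)
vmIdleTimeout=60000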
The first thing to check when Vmmem won't terminate is whether a distribution is still running. From PowerShell or CMD:
wsl -l -v
# wsl --list --verbose
If any WSL2 distributions are still in the Running state (and my guess is that one is, for you), then the WSL2 VM (and thus the Vmmem process) will also still be running.
Assuming that you are running just one distribution:
wsl -e ps axjff
If you aren't running Systemd, the only things that should be running are:
init processes
The ps command itself
plan9, a fairly new process in WSL2 (starting in 0.70.0), but that won't prevent the distribution from terminating.
If you are running Systemd, then there will be a lot of additional services. However, anything started by Systemd itself (the /sbin/init) should not prevent a WSL2 distribution from terminating.
I noticed a spurious xsel in one of my (non-Systemd) distributions when checking just now; I think that was simply due to installing the Fish shell. It was, however, preventing Ubuntu from terminating, and thus preventing the WSL2 VM from shutting down.
Is anything from the VSCode "WSL server" left over for you? Or something from your development that is spawning a background process?

How to enable Test Mode when deploying a Windows 10 WIM image with DISM?

I am upgrading about 200 machines in my lab from Windows 7 to Windows 10, and as part of the upgrade, I am also converting the machines' disks to GPT.
I am doing this as an automated process with WinPE images loaded from my PXE server. The image contains a script that formats the hard drive with Diskpart, creates the EFI boot partition and the OS partition, and deploys the image like this:
dism /Apply-Image /ImageFile:M:\Images\[image file name].wim /Index:1 /ApplyDir:W:\
After deployment, it runs the bcdboot W:\Windows command so the PC will boot into Win10, then reboots the PC from the hard drive with the freshly deployed OS image.
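Condensed, the deploy-and-boot step looks roughly like this (a sketch only; M: and W: are the drive letters from the question, while S: for the EFI partition and the /s and /f arguments are my assumptions):
rem run from WinPE, after Diskpart has created the partitions
dism /Apply-Image /ImageFile:M:\Images\[image file name].wim /Index:1 /ApplyDir:W:\
bcdboot W:\Windows /s S: /f UEFI
wpeutil reboot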
Now it works fine, except for one problem:
For our needs (this is a testing lab), we use a proprietary driver that is unsigned and thus requires Windows to run in Test Mode.
With Win7 and MBR, I didn't have this problem, because I used Ghost to take an image of the whole HDD and just dump it onto the disk, without needing to overwrite the boot configuration.
Now, bcdboot W:\Windows disables Test Mode, and because of that I get a BSOD when loading the said driver.
How can I enable Test Mode when deploying with DISM, before booting into OS, using command line? Is there a way to do it with bcdboot command somehow?
I have to automate it, because I need to do it on 200 machines.
The OS is Windows 10 RS4 x64 Enterprise.
Thanks in advance for the answer.
I found a sort-of solution.
If Test Mode is not enabled, Windows 10 simply starts with the unsigned drivers disabled, unlike Windows 7, which gave a BSOD on startup.
So Test Mode can just be enabled after that with:
bcdedit /set testsigning on
followed by shutdown -r -t 0 to restart the machine.
I'd still like to know if there is an option to enable Test Mode before booting into Windows.
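One avenue that may work, though I haven't verified it: bcdedit can edit an offline BCD store via /store. Assuming the EFI partition is mounted as S: in WinPE, something like the following, run right after bcdboot, should set the flag before the first boot:
bcdedit /store S:\EFI\Microsoft\Boot\BCD /set {default} testsigning on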

Kernel panic - not syncing: Attempted to kill init! on CentOS running on my embedded board

I am currently working with CentOS running on an Intel Atom board. I mistakenly renamed the libc-2.17.so library to _libc-2.17.so on my board, and when I reboot the board it gives me the error below.
[ OK ] Reached target Initrd Default Target.
systemd-journald[136]: Received SIGTERM
Kernel panic - not syncing: Attempted to kill init! exitcode=0x00007f00
Is there any possible way to get back to the original state?
I entered the GRUB prompt and was able to see the file with cat /lib64/_libc-2.17.so. I'm not sure how to rename it back to its original name.
Thanks in advance.
Can you enter run-level 3 from GRUB? If so:
sudo mv /lib64/_libc-2.17.so /lib64/libc-2.17.so
If you can't enter run-level 3, you can try using a live DVD/USB to run the above command; you're just going to have to search manually for the right partition on which the incorrectly named file is located.
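For the live DVD/USB route, a sketch of the steps (/dev/sda2 is only an assumption for the root partition; identify the right one first with lsblk -f or fdisk -l):
sudo mkdir -p /mnt/root
sudo mount /dev/sda2 /mnt/root    # mount the CentOS root partition
sudo mv /mnt/root/lib64/_libc-2.17.so /mnt/root/lib64/libc-2.17.so
sudo umount /mnt/root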
Otherwise, I'm afraid you're going to need to reinstall the OS.

QEMU KVM kernel module: No such file or directory

I am currently taking an operating systems class and I need to use QEMU to run a small operating system that my professor provided. I am trying to use QEMU within an Ubuntu 12.04 virtual machine in VirtualBox on my MacBook Air (model 5,2). I know the problems I'm having probably have to do with nested virtualization, but the specific error I get when I try to run QEMU is:
Could not access the KVM kernel module: No such file or directory
failed to initialize KVM: no such file or directory
Back to tcg accelerator.
QEMU does start up the OS, but the window flickers quite a lot, and I would like to fix the KVM problem if possible. I've done some research, but I can't find a solution that I can understand or that works, so any help would be greatly appreciated.
Also, for the Ubuntu virtual machine in VirtualBox, I have both Enable VT-x/AMD-V and Enable Nested Paging checked under Hardware Virtualization. I've also tried using
modprobe kvm-intel
and I get this error:
FATAL: Error inserting kvm_intel (/lib/modules/3.5.0-22-generic/kernel/arch/x86/kvm/kvm-intel.ko): Operation not permitted.
In my case, virtualization was disabled, so sudo modprobe kvm-intel kept giving me the following error:
could not insert 'kvm_intel': Operation not supported
I just had to go into the BIOS and enable virtualization.
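A quick way to check whether the virtual CPU exposes the VT-x/AMD-V flags at all, before digging further into modprobe:
egrep -c '(vmx|svm)' /proc/cpuinfo
# 0 means no hardware virtualization is visible to this machine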
Try with sudo modprobe kvm-intel.
In order to have the module automatically loaded at the startup of the virtual machine, do the following:
Edit the corresponding file from the shell with sudo vim /etc/modules (on Ubuntu this is the list of modules loaded at boot).
Enter your password if prompted.
Press the key G to go to the end of the document and then o to begin inserting.
Write kvm-intel and press Enter, producing a new line.
Press Esc to return to the Normal mode of vim; "--INSERT--" will disappear from the bottom.
Save the file and exit vim by writing :wq.
You are done. Try to reboot and load the nested virtual machine.
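If you prefer to skip the editor, the same line can be appended in one shot (tee -a simply appends to the file):
echo kvm-intel | sudo tee -a /etc/modules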

Paste (Python) Web Server - Autoreload Problem

When I start the Paste web server in daemon mode, it seems to kill off its ability to reload when the timestamp of a source file is updated.
Here is how I start the daemon...
cd ${project} && ../bin/paster serve --reload --daemon development.ini; cd ..;
...which defeats one of the main points of using Paste (for me).
Has anyone come across this or know what I'm doing wrong?
To be complete, the file that I'm changing is a controller file.
The version is PasteScript 1.7.3.
I believe that the two options are essentially incompatible: the reloader stops the server with a SIGTERM, and the daemonized server is impervious to that. Since --daemon is intended for a production environment and --reload for a development/debugging environment, I guess their incompatibility is not seen as a big loss. A customized reloader, tailored to properly stop and restart the daemonized server, could certainly be developed, but I don't know of any existing one.
I had a similar problem and worked around it. I currently have paster running on a remote host, but I am still developing, so I needed a means to restart paster; doing it manually by hand was too time consuming, and --daemon didn't work. So I always had to keep a shell window open to the server, running paster without --daemon. Once I finished my work for the day and closed the shell, paster died, which is bad.
I worked around that by running paster non-daemonized inside a "screen".
Simply type screen in your shell of choice; depending on your Linux, you will usually be presented with a virtual terminal that keeps running even when you log out of your remote session. Start paster as usual in your new "window" (the screen) with --reload but without --daemon, then detach the window so you can return to your normal shell (detach = CTRL-A, then press D). You can re-enter that screen by typing screen -r. If you would like to kill it, reconnect (screen -r) and inside the screen press CTRL-A, then K.
Hope that helps.