Adding graphic environment to yocto image - yocto

I have built a Linux image with Yocto that has XFCE as the desktop environment, but what I actually need is just a terminal and a browser.
Is it possible to remove the desktop environment and instead create an application with two options, 'terminal' and 'browser', that the image boots into directly?
If this is possible, would it reduce the image size? I am looking for a minimal image size.

If you want to remove the desktop environment, add the line below to your distro .conf or local.conf:
DISTRO_FEATURES:remove = "x11 wayland"
(Note that the :remove operator takes = rather than +=; on older releases the spelling is DISTRO_FEATURES_remove.)
I don't think you can run a graphical browser without a display server, though.
Are you using an i.MX6 with Vivante graphics?
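As a minimal sketch, the local.conf fragment would look like this (the :remove override syntax is for current releases; pre-Honister releases spell it DISTRO_FEATURES_remove, and in both cases the operator is =, not +=):

```
# conf/local.conf (or your distro .conf)
# Drop the graphical stacks from the distro features:
DISTRO_FEATURES:remove = "x11 wayland"
```

Building from a small base image such as core-image-minimal, rather than trimming an XFCE image, is what actually shrinks the rootfs.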

Can't get Unity Hub to install the Unity Editor

So, when I try to install the Unity Editor from Unity Hub 2.4.3, it says I don't have enough memory. I have over 600 gigabytes free, which should be more than enough. I think it might be trying to install on another drive, and if it is, it won't give me an option to change which drive it downloads to. How do I make it install to the right location? If this isn't the right place to ask, please point me to where I should.
One of the potential reasons for Unity being unable to install an instance of Unity Editor as you mentioned could be a lack of space in the specific drive that you're trying to install the editor on.
To change the install location, open Preferences by clicking the gear icon in Unity Hub.
There you should see "Unity Editors Path", which lets you change the install location of all the Unity editors.
On macOS, the problem for me was that the folder was not writable. I changed the owner of the directory to my user and the installation worked.
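The permission fix above can be sketched as shell commands. The real install folder varies per machine, so a temporary directory stands in for it here; the point is only the writability step:

```shell
# Stand-in for the Unity install folder (the real path varies per machine).
dir=$(mktemp -d)

chmod a-w "$dir"    # reproduce the "not writable" state
chmod u+w "$dir"    # the fix: make it writable for your user (chown works too)

touch "$dir/probe" && echo "writable"
```

On macOS, for a folder you don't own, the equivalent would be `sudo chown -R "$USER" <install folder>`.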

How do I customise Rundeck with a logo?

I've just installed Community Rundeck 3.2.2 with RPM on a RHEL 8.1.
I've tried to customise it with my logo images with no success:
Created user-assets directory in /var/lib/rundeck
Copied the images there (png and jpg)
Defined the settings in /etc/rundeck/rundeck-config.properties
rundeck.gui.logo=logoTNC600x600grey.jpg
rundeck.gui.logoSmall=logoTNC200x200white.png
rundeck.gui.instanceNameLabelColor=#ededed
rundeck.gui.instanceNameLabelTextColor=#000000
rundeck.gui.title=TNC Rundeck
rundeck.gui.staticUserResources.enable=true
rundeck.gui.login.welcome=Welcome to TNC
Restarted Rundeck service to no avail: picture not shown in web browser
What have I missed?
Regards,
Raul Costa
You need to put your assets in the /var/lib/rundeck/user-assets path (create it if it doesn't exist, and remember that the "rundeck" user needs to be able to reach that path). Also, verify the file extensions.
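A sketch of the directory setup, using a temporary directory in place of /var/lib/rundeck so it runs anywhere; on the real host use the actual path and make sure the files end up readable by the rundeck service user:

```shell
# Stand-in for /var/lib/rundeck; use the real path on the Rundeck host.
base=$(mktemp -d)

mkdir -p "$base/user-assets"
# Put the logo files here (names taken from the question).
touch "$base/user-assets/logoTNC600x600grey.jpg" \
      "$base/user-assets/logoTNC200x200white.png"
# On the real host: chown -R rundeck:rundeck /var/lib/rundeck/user-assets

ls "$base/user-assets"
```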
Update: also note that the property name is rundeck.gui.staticUserResources.enabled (your config has .enable), and change the order of elements in your configuration. The following order works:
# custom logo
rundeck.gui.staticUserResources.enabled=true
rundeck.gui.logo=logo.jpg
rundeck.gui.logoSmall=logosmall.jpg
rundeck.gui.instanceNameLabelColor=#ededed
rundeck.gui.instanceNameLabelTextColor=#000000
rundeck.gui.title=TNC Rundeck

How to get ML Agents to run in Unity?

I've tried going through the documentation, but the steps mentioned there aren't quite clear enough. Is there a good step-by-step video tutorial that could help? The ones I saw on YouTube are pretty old and don't work with the latest updates on GitHub: https://github.com/Unity-Technologies/ml-agents
This will help you set up ML-Agents version 14.
https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md
I am currently using this version.
I suggest you create a new Python environment just for this purpose.
Don't use pip install mlagents in your Python environment; it was not updated to 14 when I installed it. Use the link above for the complete guide, but here's the important stuff.
Download the zip of the ml-agents master branch and extract it.
You should install the packages from the cloned repo rather than from PyPI. To do this, you need to install ml-agents and ml-agents-envs separately. Open cmd inside the extracted ml-agents-master folder, activate your Python environment, and from the repo's root directory run:
cd ml-agents-envs
pip3 install -e ./
cd ..
cd ml-agents
pip3 install -e ./
It is very important that both packages are installed from the same ml-agents folder; this will not work if the versions are not compatible. If installed from the same folder, both packages will have the same version, i.e. 14 in this case.
These two packages let you use the predefined PPO and SAC algorithms.
I suppose you have Unity 2018 or 2019 installed. Open it and go to File → Open Project.
In the open dialog box, select the Project folder inside the ml-agents-master folder that you downloaded.
Sorry that most of the things are named "Project", but don't be confused: earlier, Project was a folder inside ml-agents-master, but after opening it you will see a Project toolbar. Navigate to Assets\ML-Agents\Examples\3DBall\Scenes and double-click 3DBall.
This will open the scene. You can also see TFModels and Scripts; they are the predefined neural networks and code, respectively.
Select Agent in the Hierarchy toolbar on the left side. (This changes only that one instance of 3DBall; it is better to go to Prefabs and double-click 3DBall, which opens a single 3DBall whose settings apply to all instances. In the Hierarchy you will then see only one 3DBall; select its Agent. The rest is the same, but now the changes affect all copies of 3DBall. Prefabs control all the copies, which makes it possible to train multiple agents at the same time.) On the right side the Inspector view opens; inside Behavior Parameters you can see Model and its input box. Keep the Agent selected, otherwise the Inspector view will disappear.
Now go to the TFModels folder; you will see a 3DBall file that looks like a neural network. Drag it onto the Agent's Behavior Parameters → Model slot.
After following all these steps, click the Play button at the top. The predefined model will start playing, and you will see that it can balance the ball pretty well.
Now that you have seen how the trained model works, if you want to train again using the predefined PPO and SAC, follow this:
go to ml-agents-master\config, where you will find the file trainer_config.yaml. Open cmd, activate your environment, and run:
mlagents-learn trainer_config.yaml --run-id=firstRun --train
When the message "Start training by pressing the Play button in the Unity Editor" is displayed, press the ▶️ button in Unity to start training in the Editor. Press Ctrl+C to stop training; your trained model will be at models/<run-identifier>/<behavior_name>.nn, where behavior_name is the Behavior Name of the agents the model corresponds to.
Move your model file into Project/Assets/ML-Agents/Examples/3DBall/TFModels/.
Open the Unity Editor, and select the 3DBall scene as described above.
Select the 3DBall prefab Agent object.
Drag the <behavior_name>.nn file from the Project window of the Editor to the Model placeholder in the Ball3DAgent inspector window.
Press the ▶️ button at the top of the Editor.
Now, for your own RL algorithm in Python:
see this Jupyter notebook; it shows how to activate a Unity gym environment, get observations and rewards, and reset the environment. For this you can also create a Unity executable; it is just creating an exe file with some settings, which you will find here.
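The loop that notebook walks through follows the usual gym shape (reset, step until done, close). Here is a runnable sketch of that protocol with a tiny stub standing in for the Unity environment; none of the class or method bodies below come from ml-agents itself, only the reset/step/close pattern:

```python
import random

class StubUnityEnv:
    """Stand-in for a Unity gym environment (same reset/step/close shape)."""
    def __init__(self, episode_len=5):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0, 0.0, 0.0]           # initial observation

    def step(self, action):
        self.t += 1
        obs = [random.random() for _ in range(3)]
        reward = 1.0                      # dummy reward for the sketch
        done = self.t >= self.episode_len
        return obs, reward, done, {}

    def close(self):
        pass

env = StubUnityEnv()
total = 0.0
obs = env.reset()
done = False
while not done:
    action = random.choice([0, 1])        # your RL policy would choose here
    obs, reward, done, info = env.step(action)
    total += reward                       # accumulate episode return
env.close()
print(total)                              # 5 steps of 1.0 reward each
```

With the real environment you would swap the stub for the Unity gym wrapper and keep the loop unchanged.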
Hope this works without any issues. And Good Luck with this.

Touchscreen not working properly even after calibration using ts_calibrate

We are using a touchscreen with the ft5x06 touch driver. To calibrate and test the touch device, I downloaded tslib from GitHub and installed it to a separate folder with the following commands:
cd ~/tslib
./autogen.sh
./configure --prefix=/home/user2/Desktop/tslib_arm
make
sudo make install
Now my embedded board's desktop has a folder tslib_arm that contains the compiled tslib (bin, etc, lib, and so on).
When I run ts_calibrate (./ts_calibrate) from /home/user2/Desktop/tslib_arm/bin, the calibration screen comes up and calibrates. If I run ./ts_test, it offers options like drag and draw, which also work fine.
But after closing these apps (ts_calibrate or ts_test), touch on the desktop, in any application, or in normal operation still seems uncalibrated.
Why is that?
Do I need to copy tslib_arm, or any files from it, to the system's rootfs?
That's because your "Desktop or any application" most probably doesn't directly implement and use the API that tslib offers (to read touch input samples).
tslib includes documentation on how to use the filtered input in your environment.
What should always work is using the "ts_uinput" daemon program that comes with tslib (just like "ts_calibrate" does). It's a tslib driver that creates a second touchscreen input event device for you in /dev/input/. All you need to do is tell your desktop or application to use it; all desktop environments have options to choose which input device to use.
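As a sketch, a startup-script fragment for this setup might look like the following; the device node and file paths are assumptions, so adjust them for your board. The TSLIB_* environment variables tell tslib which raw device, calibration file, and configuration file to use, and ts_uinput -d -v daemonizes and prints the name of the new event device it creates:

```sh
# Assumed paths; adjust for your board.
export TSLIB_TSDEVICE=/dev/input/event0   # raw touchscreen device
export TSLIB_CALIBFILE=/etc/pointercal    # written by ts_calibrate
export TSLIB_CONFFILE=/etc/ts.conf        # tslib module/filter configuration
ts_uinput -d -v   # run as daemon; prints the new /dev/input/eventX to use
```

Point your desktop environment at the printed event device instead of the raw one.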

Where is my .rpi-sdimg (Yocto Project - Toaster)?

I am currently trying to build a simple image for my Raspberry Pi 2 using Toaster, the web GUI of the Yocto Project.
Everything works fine and the build succeeds.
Every tutorial I've found tells me to use dd on the file "core-image-weston-raspberrypi2.rpi-sdimg".
Sadly, there is no such file on my PC.
Has anyone had the same problem and managed to fix it?
Please find attached the configuration and BitBake variables:
Can you check what's the value of the IMAGE_FSTYPES variable in your build? Click the image recipe name in one of your finished builds, then the "configuration" link on the left hand side, and select the "BitBake variables" tab. Once there, search for IMAGE_FSTYPES.
If the value of IMAGE_FSTYPES does not include "rpi-sdimg", you will need to add it. If you are using Toaster from the Yocto Project master branch (it looks like you are from your screenshots), you can do that from Toaster. Go to the "BitBake variables" page in the project configuration, click the "change" icon next to the value of IMAGE_FSTYPES, then type "rpi-sdimg" at the end of the variable value. After that you will need to rebuild the core-image-weston-raspberrypi2 image.
In theory, this should generate the core-image-weston-raspberrypi2.rpi-sdimg file you need (unless something in the meta-raspberrypi layer is dictating otherwise).
If you are not using the master branch, Toaster will have a bug that prevents you from adding custom values to IMAGE_FSTYPES, but you can still do so by editing the configuration files.
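If you do end up editing the configuration files by hand, the fragment is small (variable name as in the answer above; on releases of that era the override spelling is IMAGE_FSTYPES_append, with a leading space in the value):

```
# conf/local.conf
IMAGE_FSTYPES_append = " rpi-sdimg"
```

Rebuild core-image-weston-raspberrypi2 afterwards and the .rpi-sdimg file should appear in the deploy directory.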