I am currently trying to build a simple image for my Raspberry Pi 2 using Toaster, the web GUI of the Yocto Project.
Everything works fine and the build succeeds.
Every tutorial I've found is telling me to use dd on the file "core-image-weston-raspberrypi2.rpi-sdimg".
Sadly there is no such file on my pc.
Has anyone had the same problem and managed to fix it?
Please find attached the Configuration and BitBake Variables:
Can you check what's the value of the IMAGE_FSTYPES variable in your build? Click the image recipe name in one of your finished builds, then the "configuration" link on the left hand side, and select the "BitBake variables" tab. Once there, search for IMAGE_FSTYPES.
If the value of IMAGE_FSTYPES does not include "rpi-sdimg", you will need to add it. If you are using Toaster from the Yocto Project master branch (it looks like you are from your screenshots), you can do that from Toaster. Go to the "BitBake variables" page in the project configuration, click the "change" icon next to the value of IMAGE_FSTYPES, then type "rpi-sdimg" at the end of the variable value. After that you will need to rebuild the core-image-weston-raspberrypi2 image.
In theory, this should generate the core-image-weston-raspberrypi2.rpi-sdimg file you need (unless something in the meta-raspberrypi layer is dictating otherwise).
If you are not using the master branch, Toaster will have a bug that prevents you from adding custom values to IMAGE_FSTYPES, but you can still do so by editing the configuration files.
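If you do end up editing the configuration files by hand, a minimal sketch of the change (assuming the standard build directory layout and the jethro-era append syntax) would be:

```
# build/conf/local.conf -- append the Raspberry Pi SD card image type
IMAGE_FSTYPES_append = " rpi-sdimg"
```

After rebuilding the image, the .rpi-sdimg file should appear under tmp/deploy/images/raspberrypi2/ in your build directory.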
I tried searching with as many different terms as I could and couldn't find exactly what I'm looking for.
I have a C++ Project developed in Visual Studio 2019 and I am trying to build and deploy it in Azure Pipelines. It uses Boost and OpenCV. I skipped trying to include these in Azure Artifacts because of a rabbit hole with Azure CLI errors that took me almost half a day.
So it seems that there is a task to publish pipeline artifacts in the .yml file. How do I do this when my project needs to reference a certain directory, instead of one specific file or .dll? Here are images for how this is configured in Visual Studio:
include directory for boost image
include directory settings for opencv image
Edit: Still trying, see my comment. Thinking about switching over to CircleCI.
I found out what to do. Hopefully no one else wastes as much time as I did.
The key was MSBuild. You first need to find out the values of $(IncludePath) and $(LibraryPath) by doing the following in Visual Studio:
Right-click on your project, choose "Properties"
Go to the Build Events tab, and click "Pre-Build Event"
Click on and expand the Command Line row, and click "Edit"
Now click the button that says "Macros>>"
You will see a bunch of different variables and their values. Find the values for LibraryPath and IncludePath, and copy and paste them into a text file.
Now, assuming you already set up a local agent, follow these steps:
Put the text file in the root folder of where your agent is installed. For me, this was "C:\agents"
Have the first line be "LibraryPath=value" and the other line be "IncludePath=value". Use double backslashes in the directory paths.
Rename the file to .env. If the agent is currently running, restart it so it can read in the environment variables it will use during your build.
In the MSBuild task of your pipeline, specify arguments. For my case, it was simply this: /p:IncludePath="C:\Program Files\boost_1_77_0;$(IncludePath)" /p:LibraryPath="$(LibraryPath)"
Run the pipeline. You can check your completed build on the local machine. For me, the path it kept going to was "C:\agents_work\2\s"
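For reference, the pipeline step might look roughly like the following sketch in the .yml file. The solution name, platform, and configuration here are placeholders; only the msbuildArgs values come from the steps above:

```yaml
# azure-pipelines.yml (sketch; solution name and paths are placeholders)
- task: VSBuild@1
  inputs:
    solution: 'MyProject.sln'
    msbuildArgs: >-
      /p:IncludePath="C:\Program Files\boost_1_77_0;$(IncludePath)"
      /p:LibraryPath="$(LibraryPath)"
    platform: 'x64'
    configuration: 'Release'
```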
I've tried going through the documentation, but the steps mentioned there aren't quite clear enough. Is there a good step-by-step video tutorial that can be helpful? The ones I saw on YouTube are pretty old and don't work with the latest updates on GitHub: https://github.com/Unity-Technologies/ml-agents
This will help you to set up ML-Agents version 14.
https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md
I am currently using this version.
I suggest you create a new python environment for only this purpose.
Don't use pip install mlagents in your Python environment terminal. It was not updated to version 14 when I installed it, so use the link above to see the complete guide, but here's the important stuff.
Download the zip file of the master branch:
ml-agents master
When you extract this zip, open the extracted folder.
Then open cmd at that location, activate your Python environment, and follow these steps:
You should install the packages from the cloned repo rather than from PyPI. To do this, you will need to install ml-agents and ml-agents-envs separately. Open cmd inside the ml-agents-master folder, activate your Python environment, and then, from the repo's root directory, run:
cd ml-agents-envs
pip3 install -e ./
cd ..
cd ml-agents
pip3 install -e ./
It is very important that both packages are installed from the same ml-agents folder; this will not work if the versions are not compatible. If installed from the same folder, both packages will have the same version, i.e. 14 in this case.
These two packages will let you use the predefined PPO and SAC algorithms.
I assume you have Unity 2018 or 2019 installed. Open it and go to File → Open Project.
In the open dialog box, select the Project folder inside the ml-agents-master folder that you downloaded.
Sorry that most of the things are named "Project", but don't be confused: earlier, Project was a folder inside ml-agents-master, but after opening it you will see a Project toolbar. Navigate to Assets\ML-Agents\Examples\3DBall\Scenes and double-click 3DBall.
This will open a scene, as you can see here. You can also see TFModels and Scripts; they are the predefined neural networks and code, respectively.
Select Agent in the Hierarchy toolbar on the left side. (This changes only that one instance of 3DBall. It is better to go to Prefabs and double-click 3DBall; this opens a single 3DBall whose settings are applied to all instances. In the hierarchy you will now see only one 3DBall; select its Agent. The rest is the same, but now changes will affect all copies of 3DBall. Prefabs are used to control all the copies, which helps to train multiple agents at the same time.) Then on the right side an Inspector view will open; inside Behavior Parameters you can see Model and its input box. Keep the agent selected, otherwise this Inspector view will disappear.
Now go to the TFModels folder; you will see a 3DBall file that looks like a neural network. Drag it onto the Agent's Behavior Parameters → Model.
After following all these steps, click the Play option at the top. The predefined model will start playing, and you will see that it can balance the ball pretty well.
Now that you have seen how the trained model works and want to train again using the predefined PPO and SAC, follow this:
Go to ml-agents-master\config; here you will find a file trainer_config.yaml. Now open cmd, activate your environment, and enter:
mlagents-learn trainer_config.yaml --run-id=firstRun --train
When the message "Start training by pressing the Play button in the Unity Editor" is displayed on the screen, you can press the ▶️ button in Unity to start training in the Editor. You can press Ctrl+C to stop the training, and your trained model will be at models/<run-identifier>/<behavior_name>.nn, where behavior_name is the Behavior Name of the agents corresponding to the model.
Move your model file into Project/Assets/ML-Agents/Examples/3DBall/TFModels/.
Open the Unity Editor, and select the 3DBall scene as described above.
Select the 3DBall prefab Agent object.
Drag the <behavior_name>.nn file from the Project window of the Editor to the Model placeholder in the Ball3DAgent inspector window.
Press the ▶️ button at the top of the Editor.
Now, for your own RL algorithm in python:
See this Jupyter notebook; it shows how to activate a Unity gym and get observations, rewards, and reset the environment. For this you can also create a Unity executable; it's just creating an exe file with some settings, which you will find here.
Hope this works without any issues. And Good Luck with this.
I'm trying out yocto (2.0, jethro) and I want to build an image starting from core-image-minimal. This works fine.
Every website out there mentions modifying the file build/conf/local.conf with (some of) my customizations. For example, the target machine (through MACHINE) or some global settings (through EXTRA_IMAGE_FEATURES).
I also need to modify some specific packages and the way to do it is to create a custom layer. So far so good.
What I don't understand is how to "save" all my configuration to version control. I want everything I change to be located in files that I can commit, so that anybody else can reproduce the exact same build (or even contribute to the project). Putting almost everything in build/conf/local.conf goes against that goal; the file is under a "build" directory, so I can't just clone a git repo and start building...
Is this really the way the Yocto Project works? Or am I missing a different configuration file where I need to put these settings? I thought I could place all these in a custom layer, but it does not seem to work...
Any idea or suggestion?
Thanks!
Thanks Ross, that clarified it!
Here's some notes about my file organization which I couldn't format into a comment to your answer.
Thanks. So all my custom configurations went into meta-mylayer/conf/distro/mylayer.conf
Almost all my customization went into a layer meta-mylayer, except:
DISTRO which is set in build/conf/local.conf. This is how you tell yocto what you want to build.
MACHINE, which is also set in build/conf/local.conf. The reason is that the same image/distro combination could be built for different machines, so this can't be hard-coded for every image.
Layers are manually added to build/conf/bblayers.conf. That's the last bit I wish I could move to my DISTRO or something. For now the folders are git submodules, and they are added using bitbake-layers add-layer.
In general everything in your local.conf that is "your project" should be moved to your own distro configuration (MACHINE, image features, package lists). Stuff like where DL_DIR is can be moved to a common site.conf if you wish. Eventually you should end up with a local.conf which just sets DISTRO and some other personal variables.
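Following that advice, a minimal sketch of such a distro configuration, using the meta-mylayer path mentioned above (the variable values here are illustrative):

```
# meta-mylayer/conf/distro/mylayer.conf
DISTRO = "mylayer"
DISTRO_NAME = "My Distro"
DISTRO_VERSION = "1.0"

# Project-wide settings that used to live in local.conf
EXTRA_IMAGE_FEATURES = "debug-tweaks ssh-server-openssh"
```

With this in place, build/conf/local.conf shrinks to little more than DISTRO = "mylayer", MACHINE, and personal settings like DL_DIR.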
I'm working with some third party tools that generate xcode project files for a few subcomponents. Their tools generate the project files with Generate Position-Dependent Code set to YES (potentially because the tool generates project files for OS X builds too and the latest update has it confused).
While I could simply turn these flags off in the GUI, it's not as convenient as my build process is scripted to generate each project file, build it, move binaries around, lipo them together, etc.
I'm fairly sure these settings can be overridden on the command line, but I'm curious what the setting key actually is. For instance, I don't know whether setting=value means the setting name is verbatim to how it displays in Xcode (Generate Position-Dependent Code), as there are spaces in it.
If anyone can provide a listing of all settings that can be passed to xcodebuild, that would be super.
The setting name is actually GCC_DYNAMIC_NO_PIC. "Generate Position-Dependent Code" is just the description.
For future reference, when I copy (Command+C) that setting when it is highlighted in project Build Settings...
...then paste into Text Edit I see the actual command line setting key.
//:configuration = Debug
//:configuration = Release
//:completeSettings = some
GCC_DYNAMIC_NO_PIC
That works for all build settings, too.
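For example, overriding the setting from the command line might look like the following (the project name and configuration here are placeholders for your own):

```shell
# Override the "Generate Position-Dependent Code" setting for this build only
xcodebuild -project MyProject.xcodeproj -configuration Release GCC_DYNAMIC_NO_PIC=NO build
```

You can also dump every build setting xcodebuild will use for a project with `xcodebuild -project MyProject.xcodeproj -showBuildSettings`, which is handy for discovering the key names in one listing.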
Is there a way to create a "build" that doesn't actually compile the output of the site? Basically I want to push the files live from source control in TFS to the final IIS folder destination.
I have used CopyDirectory on my other project builds, but that requires a BuildDetail.DropLocation (compiled Build). Maybe there is another option for the CopyDirectory Source I could use that wouldn't require a build DropLocation.
To simplify, I want to copy files directly from a tfs Source control to a folder, using a Build Template, but that won't compile the files. Is that possible?
While the default .xaml build workflow for Team Foundation Build is indeed a compilation build, it does not have to be. I usually recommend that teams have at least one compile and one deploy .xaml workflow.
1) The CompileMyStuff.xaml (DefaultBuildTemplate.xaml) should take my stuff from source control and do whatever is needed to create a Build Drop folder with my output. I may or may not need to actually compile before creating the drop, and it looks like you just want to copy to the drop location.
2) The DeployMyStuff.xaml should take a build number and deploy my code to an environment of my choice.
It looks like you want to skip the "Drop" and go straight to deploy. While I would never recommend this, you do have "BuildDetail.BuildLocation" for the local workspace where the build server has done a get of your code. You can just "CopyDirectory" from there to your server/host for the website.
If you are having a little trouble you could use the Community Build Extensions and fire up PowerShell to do your copy/deploy.
I figured out a solution to this problem. I created a new .xaml file, and the only item I put in the sequence was "DownloadFiles". Then I filled out the properties of the task, ran a "build", and it worked.