In my project, I want to take an existing Yocto setup for the Automotive Grade Linux distribution and add some layers with recipes for our own components.
There is a publicly available manifest file on their Gerrit site. What I'd like to do is basically this:
<manifest>
  <include url="<url of AGL manifest>" />
  <remote name="mysite" fetch="ssh://gerrit.mysite.com" />
  <project name="mylayer1" path="mylayer1" />
  <project name="mylayer2" path="mylayer2" />
</manifest>
The aim is that a repo init command pointed at my manifest first fetches all the repositories mentioned in the "included" manifest, then proceeds to fetch all of my own meta layers.
The problem is that the include element is meant for including other manifests within the same repository specified on the repo init command line.
I could simply copy their manifest into my own repository under a different name and create my own manifest alongside it, or just reproduce their file and edit it.
But maintaining that copy would be painful and extremely error prone, especially since the upstream manifest is used not only to specify the repositories, but also to pin each one to a specific commit as a form of version control within Yocto.
I can't believe such an obvious use-case hasn't been considered and addressed.
So, at the risk of being closed as "too broad" or for requesting recommendations, has anyone already solved this problem? If so, how?
I highly doubt there is a way to do this using the repo tool.
Wind River has a solution, and there has been talk of moving this into oecore:
https://github.com/Wind-River/wr-lx-setup
I'm not sure if this will do exactly what you are looking for, but it solves the problem that you are describing.
Historically, people have used repo (freescale-community-bsp), combo-layers (Ostro), or simply rolled their own solution. This setup tool is an attempt to standardize the way layers are assembled.
You can use a local manifest. Under .repo/, create a directory named local_manifests/ and add a file such as local_manifest.xml.
In it you can declare your own remote, default, and the projects that are to be fetched from that remote.
I have used this feature with repo 1.23.
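As a minimal sketch (the remote and project names below are hypothetical, matching the ones from the question), .repo/local_manifests/local_manifest.xml could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <!-- Hypothetical remote; replace with your own Gerrit/Git server -->
  <remote name="mysite" fetch="ssh://gerrit.mysite.com" />
  <!-- Extra meta layers, fetched in addition to the projects
       listed in the manifest given to `repo init` -->
  <project name="mylayer1" path="mylayer1" remote="mysite" revision="master" />
  <project name="mylayer2" path="mylayer2" remote="mysite" revision="master" />
</manifest>
```

On the next repo sync, these projects are checked out alongside everything from the main manifest, without touching the upstream manifest file.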
In my repo I have a .NET 5 web application project that (for now) references another library project that is also set up as an executable. Both of these projects have their own appsettings.json.
When I publish, I notice that the appsettings.json that ends up in the final folder is always the one from the project being referenced and not the one from the project actually being published. This seems truly odd behavior.
I realize the "right" approach is for the web app to not directly reference the other exe project (already have a story to fix that), but it is still a current concern.
Is there a command argument to have publish prefer appsettings.json from the project being published, as opposed to what seems like arbitrarily selecting one of multiple (or preferring the file from a lower dependency)?
For now, I suspect I can add an additional step to my build to copy the correct config file into place before the publish is "complete", but hoping for a better solution (one that doesn't require extra copy file steps).
Is there a command argument to have publish prefer appsettings.json from the project being published, as opposed to what seems like arbitrarily selecting one of multiple (or preferring the file from a lower dependency)?
The best solution is definitely to find the root cause of this behavior.
For example, if I add the following setting to the project file of the project actually being published:
<ItemGroup>
<Content Update="appsettings.json" CopyToPublishDirectory="Never" />
</ItemGroup>
Then dotnet publish will output the appsettings.json from the referenced project instead.
Of course, your problem may have a different cause; more details about your project would be needed to know for sure.
So, as a current workaround for this issue, you could add the above setting to the project being referenced.
With that setting in place, the appsettings.json from the referenced project will not be published.
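Concretely, the workaround goes into the referenced project's .csproj (the project name is whatever your library project is called):

```xml
<!-- In the referenced (library/exe) project's .csproj: keep its
     appsettings.json out of the publish output so the web app's
     own appsettings.json wins -->
<ItemGroup>
  <Content Update="appsettings.json" CopyToPublishDirectory="Never" />
</ItemGroup>
```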
Yocto has a set of independent repositories containing the base system (Poky) and various software components (all the meta-* repositories here, and also openembedded layer index). So when you want to build an image for specific device and purpose, you need a handful of repositories checked out.
These are all tied together by the conf/bblayers.conf and conf/local.conf files in the build directory. But that is a build directory—it is supposed to be disposable, containing only information that can be easily regenerated on request. And it does—except for the list of layers in conf/bblayers.conf and a couple of definitions like the MACHINE in the conf/local.conf that define the target system to build for.
How should I version this information?
Currently we have a rather hacky script that assembles the build directory and writes the config files, but it does not know how to properly update them when the configuration changes.
So is there a better option? Preferably one that would avoid any additional steps between checkout/update (with submodules or repo), oe-init-build-env init script (which creates the build directory if it does not exist) and running bitbake with appropriate target image?
Actually, repo is a convenient tool for managing manifest files with all the needed repositories.
Then you can use TEMPLATECONF to version local.conf and bblayers.conf. Here is how we do it: https://pelux.io/software-factory/master/chapters/baseplatform/building-PELUX-sources.html
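As a sketch (the layer and directory names here are hypothetical), TEMPLATECONF points at a directory containing bblayers.conf.sample and local.conf.sample, which oe-init-build-env copies into a freshly created build directory:

```sh
# Hypothetical layout: poky/ and meta-mylayer/ checked out side by side
# (e.g. by repo). oe-init-build-env only applies the templates when the
# build directory does not exist yet.
TEMPLATECONF=meta-mylayer/conf source poky/oe-init-build-env build
```

This way the two files that actually define your target system live versioned in your layer, and the build directory stays disposable.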
The Poky distribution itself is assembled with the Combo Layer tool, which seems designed to address this particular problem. However, it's not very clear what the workflow is supposed to look like when using this tool.
Regarding the default bblayers.conf and local.conf files, you can either version them anywhere in your project and have a script copy them into your build folder after calling oe-init-build-env, or simply use meta-poky/conf/bblayers.conf.sample and meta-poky/conf/local.conf.sample, which oe-init-build-env installs automatically when first creating the build directory.
Note that when you make changes or add layers, you will have to clear the build directory for changes to the .sample files to take effect.
So, I have a webapp I am creating using the three musketeers: yeoman, grunt, and bower.
My questions are:
What is best practice when it comes to uploading my webapp into a git/mercurial repo? Do I include the entire project? What about directories like 'node_modules' or 'test', etc?
Also, when deploying to live production site: Will my 'dist' folder be what I should be uploading?
My research has yielded no results (I could be searching for the wrong things?). I'm a bit new to this process, so any feedback is greatly appreciated. Thanks!
You should always commit all of your yeoman, grunt, and bower config files.
There are two schools of thought on committing the output they produce or dependencies they download:
One is, you should upload everything needed for another user to deploy the web app after cloning the repository, without performing any additional operations. The idea is, dependencies may not exist anymore, network connections might be down, etc.
Another is, keep the repository small and don't commit node_modules, etc, since they can be downloaded by the user.
As far as the dist folder goes, yes you'll be uploading it to your server, as it contains all of your minified files. Whether or not you want to commit it to the repository is a separate question. You might let the user build every time, assuming they can get all the dependencies one way or another (from above choice). Or you might want to commit it to tag it with a release version along with your source code.
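If you go with the second school of thought, a .gitignore along these lines is a common starting point (the paths assume the default generator layout; adjust to your project):

```
# Downloaded dependencies: restored with `npm install` and `bower install`
node_modules/
bower_components/

# Build output: regenerated with `grunt build`
dist/
.tmp/
```

Anyone cloning the repository then reinstalls dependencies and rebuilds locally instead of pulling them from version control.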
There's some more discussion on this here: http://addyosmani.com/blog/checking-in-front-end-dependencies/
Is there a way to maintain two different update sites/p2 repositories pointing to same folder having the plugins and features files?
I need to maintain two sites: one with the full-fledged feature list and one with a limited set of features. So instead of maintaining duplicate copies of the features and plugins, I want to point the limited site at the full-fledged site's features and plugins directories. How can I achieve this?
The p2 artifact repository format lets you configure the locations where it expects the artifacts through mapping rules. One of the default rules is
<rule filter='(& (classifier=osgi.bundle))' output='${repoUrl}/plugins/${id}_${version}.jar'/>
So if for example your full repository is at http://example.org/full/ and the limited repository at http://example.org/limited/, you could have the limited repository point to the artifact files in the full repository with the following rule:
<rule filter='(& (classifier=osgi.bundle))' output='${repoUrl}/../full/plugins/${id}_${version}.jar'/>
Just update all the rules in the same way, and it should work. Never tried this myself though.
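For illustration, here is a sketch of the <mappings> section in the limited repository's artifacts.xml with the three common default rules rewritten this way (the ../full/ path assumes the example URLs above):

```xml
<!-- Sketch: redirect all artifact lookups from the limited repository
     to the sibling "full" repository's directories -->
<mappings size='3'>
  <rule filter='(& (classifier=osgi.bundle))' output='${repoUrl}/../full/plugins/${id}_${version}.jar'/>
  <rule filter='(& (classifier=org.eclipse.update.feature))' output='${repoUrl}/../full/features/${id}_${version}.jar'/>
  <rule filter='(& (classifier=binary))' output='${repoUrl}/../full/binary/${id}_${version}'/>
</mappings>
```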
I am attempting to set up a deployment on AppHarbor for code hosted on Bitbucket. I have a number of projects that make use of a library project of mine, so I use subrepos to keep my code manageable. This prevents AppHarbor from deploying, because the subrepos aren't included in the download.
This post led me to why this problem occurs:
AppHarbor, BitBucket and SubRepo Work Around
What I'm struggling with is how to implement the workaround that they stated. Is this cron job/zip file something that I'm going to have to host myself or is it possible to do with just bitbucket and appharbor's post build events? Thanks for any help that can point me in the right direction.
Instead of complicated workarounds, you might want to look into adding your library dependency as a NuGet package from a private feed. Such a feed could be hosted at http://www.myget.org
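For example, a nuget.config in the repository root could register the private feed so the build restores the library from there (the feed name and URL below are hypothetical; substitute your own):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Hypothetical MyGet feed; replace with your feed's URL -->
    <add key="private-feed" value="https://www.myget.org/F/my-feed/api/v2" />
  </packageSources>
</configuration>
```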