Is it possible to include multiple doxygen config files in one config file?
Say you had ~15 Doxygen config files spread across multiple repositories, and say you were pulling source files from those repositories into one folder. If each config file has slightly different settings, you couldn't generate documentation by just picking one of them at random.
Can you include multiple config files, or would you have to create one amalgamated master Doxygen config that combines the settings from each of them?
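For what it's worth, the Doxygen config format does support an @INCLUDE tag, so a master Doxyfile can pull in shared settings and then override whatever must differ. A minimal sketch, assuming hypothetical repository paths:

    # Master Doxyfile (all paths are hypothetical)
    @INCLUDE_PATH = repo-a/docs repo-b/docs
    @INCLUDE      = Doxyfile-common

    # Assignments after an @INCLUDE override the included values
    PROJECT_NAME  = "Combined Docs"
    INPUT         = repo-a/src repo-b/src

Since the file is read top to bottom, settings placed after the @INCLUDE line should override the included ones.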
For example, I have three apps that I want to deploy, and a database as well. When running on a dev machine (docker-for-desktop context) or in an integration or test cluster, I would want to run 2 replicas of each app, plus have a SQL container that they all connect to. In staging or production, I want to be able to set the replicas as per traffic needs, and I want to connect to a different (external) SQL server.
I would then want those YAML files kept in source control, so that depending on the environment, the correct context is used when "creating" all of the YAML resources.
Is this possible with contexts? Or is this a namespace problem? Or do I just need to keep the YAML files in separate folders (a local folder, a staging folder, a production folder) with copies of the YAML files in each? Or is there some other option?
There are multiple options to maintain the same set of YAML files for different configurations, for example:
Helm
Kustomize
ksonnet
You can use these tools to keep only the per-environment configuration in separate files in Git; a minimal Kustomize overlay is sketched below.
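As an illustration, a Kustomize layout might keep a shared base plus one overlay per environment. All file names and the my-app Deployment name here are hypothetical, and the exact patch syntax varies with your kustomize version:

    # overlays/production/kustomization.yaml
    resources:
      - ../../base
    patchesStrategicMerge:
      - replicas.yaml

    # overlays/production/replicas.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 4

Running kubectl apply -k overlays/production then renders the shared base with the production replica count, while an overlays/dev variant could pin replicas to 2 and point at the in-cluster SQL container.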
Yocto has a set of independent repositories containing the base system (Poky) and various software components (the many meta-* repositories, plus the OpenEmbedded layer index). So when you want to build an image for a specific device and purpose, you need a handful of repositories checked out.
These are all tied together by the conf/bblayers.conf and conf/local.conf files in the build directory. But the build directory is supposed to be disposable, containing only information that can be easily regenerated on request. And it mostly is, except for the list of layers in conf/bblayers.conf and a couple of definitions, like MACHINE in conf/local.conf, that define the target system to build for.
How should I version this information?
Currently we have a rather hacky script that assembles the build directory and writes the config files, but it does not know how to properly update them when the configuration changes.
So is there a better option? Preferably one that avoids any additional steps between checkout/update (with submodules or repo), the oe-init-build-env init script (which creates the build directory if it does not exist), and running bitbake with the appropriate target image.
Actually, repo is a convenient tool for managing manifest files with all the needed repositories.
Then you can use TEMPLATECONF to version local.conf and bblayers.conf. Here is how we do it: https://pelux.io/software-factory/master/chapters/baseplatform/building-PELUX-sources.html
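In short, the pattern looks like this (the meta-custom layer name is hypothetical; recent Yocto releases look for templates under conf/templates/<name> instead of conf/):

    # Point the init script at versioned *.sample configs before the first run
    TEMPLATECONF=$(pwd)/meta-custom/conf source poky/oe-init-build-env build

On the first run, oe-init-build-env copies local.conf.sample and bblayers.conf.sample from that directory into build/conf, so the authoritative copies stay versioned inside the layer.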
The Poky distribution itself uses the Combo Layer tool, which seems designed to address this particular problem. However, it is not very clear what the workflow is supposed to look like when using this tool.
Regarding the default bblayers.conf and local.conf files, you can either version them anywhere in your project and have a script copy them into your build folder after calling oe-init-build-env (a minimal sketch follows below), or simply use meta-poky/conf/bblayers.conf.sample and meta-poky/conf/local.conf.sample, which are automatically installed by oe-init-build-env when first creating the build directory.
Now, when you make changes or add layers, you will have to clear the build directory for the changes in the .sample files to take effect.
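The copy approach can be as small as this, assuming the versioned files live in a hypothetical configs/ directory next to the build directory:

    # Create (or enter) the build directory, then overwrite the generated configs
    source poky/oe-init-build-env build
    cp ../configs/bblayers.conf ../configs/local.conf conf/

Since oe-init-build-env leaves you inside the build directory, the copy step always targets the freshly generated conf/ folder.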
In the product that I work on, there are many configuration tables. I need to find a way to track configuration changes (hopefully with some kind of version/changeset number), deploy the configuration changes to other environments using the changeset number, and if needed roll back particular configurations based on the changeset number.
I am wondering how I can do that.
One solution that I think could work is to write a script that takes all the configuration from the config tables and creates JSON files. I could then check those files into TFS or GitHub to maintain versioning, and write another script to load them into any environment. A sketch of the export step follows.
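A minimal sketch of that export script, assuming a Python DB-API connection and hypothetical table names:

    # export_config.py -- dump config tables to JSON for version control
    import json
    import sqlite3  # stand-in driver; any DB-API module works the same way

    CONFIG_TABLES = ["app_settings", "feature_flags"]  # hypothetical tables

    def export_config(conn, path):
        snapshot = {}
        for table in CONFIG_TABLES:
            cur = conn.execute(f"SELECT * FROM {table}")
            cols = [d[0] for d in cur.description]
            snapshot[table] = [dict(zip(cols, row)) for row in cur.fetchall()]
        with open(path, "w") as f:
            json.dump(snapshot, f, indent=2, sort_keys=True)  # stable diffs

    if __name__ == "__main__":
        export_config(sqlite3.connect("product.db"), "config-snapshot.json")

Sorting keys keeps the output deterministic, so each commit of config-snapshot.json is a clean, diffable changeset that an import script can replay in another environment.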
I know how to make a shared config file for traditional projects and add it to each project with the following tag:
<appSettings file="../other_project/foo.config">
How do I share application settings in VSTS, ensuring every role can access the shared config settings? I assume you can't directly reference other projects' config files using relative path names, like in my example above.
I would like to centralize my configuration and make my config transform file relatively short, as there are a lot of projects.
I assume you can't directly reference other projects' config files using relative path names, like in my example above.
You can keep the config file in the solution directory or at the root of your Git repo.
Then you can add the config file to each project separately (Add -> Existing Item).
Also publish the config file as a build artifact, so that even when deploying different projects to different machines, the config file is always accessible; a sketch follows.
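For the artifact step, a hedged sketch of a VSTS/Azure Pipelines build task (the path and artifact name are hypothetical):

    steps:
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: 'shared/foo.config'
        ArtifactName: 'shared-config'

Each release can then download the shared-config artifact, so every role picks up the same settings file regardless of the machine it deploys to.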
I have a scenario wherein I have to connect to multiple repositories on GitHub and extract the properties file of each individual project.
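A minimal sketch of that extraction, assuming public repositories and hypothetical repo names, file path, and default branch:

    # fetch_properties.py -- pull one properties file from several repos
    import urllib.request

    REPOS = ["org/project-a", "org/project-b"]  # hypothetical repositories
    PATH = "src/main/resources/app.properties"  # hypothetical file path

    for repo in REPOS:
        # raw.githubusercontent.com serves raw file contents directly;
        # "main" is an assumed default branch
        url = f"https://raw.githubusercontent.com/{repo}/main/{PATH}"
        with urllib.request.urlopen(url) as resp:
            print(f"# --- {repo} ---")
            print(resp.read().decode("utf-8"))

For private repositories you would instead call the GitHub contents API with an authentication token.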