What is the best practice for managing dependencies within a Simulink Project when the project is worked on across a team and the project has dependencies on different models and libraries?
A parallel example would be building an application with Gradle and declaring the project's dependencies, including the required version numbers. Gradle will resolve and download the versions that are required to build the project.
e.g. the following declares a dependency on version 2.1 of library and on version 1.0 or later of some-library, so that the latest available 1.x version (1.0, 1.1, 1.2, ...) will be downloaded and used:
dependencies {
    compile("com.example:library:2.1")
    compile("com.example:some-library:1.+")
}
The Simulink documentation (including the section covering manifests) talks about models within a project having version numbers, but it doesn't seem to mention libraries that are imported into the project. Models that are only used within a single project could all be contained in the overall project, but what happens if there are (for example) generic S-Functions defined in a separate project or library (or a library defined within a project) that are applicable across multiple projects? The requirement is ultimately about supporting an automatic build process triggered by a Continuous Integration server such as Jenkins.
I'm interested in a workflow that will easily support dependency management and automatic dependency resolution with a GitHub Flow branching policy.
I've spent a lot of time on this problem. In the end I didn't find a suitable solution online, but I'd like to share the workflow we are using now, which fulfills our needs.
In short: We created our own dependency management by using git submodules.
Assumption: This is really version management of persistent dependencies rather than a mechanism for dynamically adding new or removing old packages or libraries. That also works, but requires the corresponding git submodules to be added to or removed from the main git repository.
Objectives:
Consistent setup for everyone who works on the project.
Traceability of dependencies.
Continuous Integration with minimal effort.
How we do it (Example):
We have Project A and Project B which shall be used in Project C.
All three projects are under git version control and still under development.
We have set up additional release repositories for Project A and Project B, e.g. located on a network drive.
In Project C we add the release repositories of Project A and Project B as git submodules.
We have set up some kind of auto-deployment to push only the relevant files into these release repositories. For example, if we want to make changes to Project B accessible to Project C, we only create a version tag in Project B's repository and it gets pushed to Project B's release repository.
In Project C we update our git submodules and can check out a new submodule version if needed; a sketch of the git commands involved is shown below.
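A minimal sketch of that wiring, assuming the release repositories live on a network drive (all paths, folder names and the tag below are made up for illustration):

# In Project C: add the release repositories as submodules
git submodule add /network-drive/releases/project-a.git libs/project-a
git submodule add /network-drive/releases/project-b.git libs/project-b
git commit -m "Add Project A and Project B release repositories as submodules"

# Later, pick up a new release of Project B in Project C
cd libs/project-b
git fetch
git checkout v1.3.0        # tag previously pushed to the release repository
cd ../..
git add libs/project-b
git commit -m "Update Project B to v1.3.0"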
Advantages:
Since git stores the checked out version (commit) of git submodules in the main project, we can ensure that everyone works with the same files.
Changing the commit of a submodule is traceable in the main project.
The relation between the main project and the dependencies is always consistent.
Continuous Integration should work "out of the box". We are using GitLab and GitLab Runner and only had to set up our runner to fetch submodules recursively (in case of nested submodules); see the snippet below.
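A minimal sketch of that runner configuration (the build script is only a placeholder; GIT_SUBMODULE_STRATEGY is the documented GitLab CI variable):

# .gitlab-ci.yml
variables:
  GIT_SUBMODULE_STRATEGY: recursive   # fetch submodules, including nested ones, before each job

build:
  script:
    - ./run_build.sh                  # placeholder for the actual build step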
I think this approach works as long as the repositories don't get too big, since you fetch not only the version you need but the whole version history.
In a project I'm planning to have the following items/projects:
.Net Server, Ionic App, Angular Website and a C# Admin tool.
At first I created one repository with the folders Server, App, Website and AdminTool in the root. But since I want to use pipelines and structure my code in the best possible way, I'm thinking there might be advantages to creating a separate repository for each of these projects within my project.
That way I would trigger exactly the pipeline of the project that needs to be built, and the structure might be more modular.
But I also see the disadvantage of having to push multiple times for the same feature, once for each involved project (e.g. the Ionic app and the server). That makes it less clear what was changed across projects for a single feature, which would otherwise be visible in one push.
Which way to structure this would you recommend?
Generally, a Git repository on Azure Repos should be no larger than 10GB. This aims to ensure reliability and availability for all customers.
If you put too many projects into one repository, and these projects also contain some large files, it may dramatically increase the time needed to checkout, branch, fetch, and clone your code, which makes working with Git a poor experience. For more details, see "Git limits".
So, in your case, maybe you can consider using Submodules.
Create a repository for the main project.
Create a repository for each sub-project.
Set the repositories of sub-projects as the submodules of the main project's repository.
For source code of features that are shared by multiple projects, you can also set up a dedicated repository per feature, and then add those feature repositories as submodules of the project repositories that use them.
This way, you can set up a pipeline for each repository. You can also use the "pipeline-completion triggers" feature if you want changes in the submodule repositories to trigger the pipelines of the repositories that use those submodules.
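As a rough example (pipeline and step names are placeholders, not from any particular setup), a pipeline-completion trigger plus a submodule-aware checkout could look like this:

# azure-pipelines.yml of a project repository that consumes the feature submodule
resources:
  pipelines:
    - pipeline: featureLib        # local alias for the resource
      source: FeatureLib-CI       # name of the pipeline that builds the feature repository
      trigger: true               # run this pipeline when FeatureLib-CI completes

steps:
  - checkout: self
    submodules: recursive         # also fetch the submodule repositories
  - script: echo "build the main project here"   # placeholder build step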
A separate repository for each project is highly recommended and considered best practice.
With this you will have benefits like:
smaller-sized repos,
separate CI/CD integration for every project.
At any given moment you will only be updating a single app project, so there is no need to disturb the other running projects.
I have a situation where we have multiple C# projects that use a similar set of NuGet packages (e.g. Newtonsoft.Json, Microsoft Compilers and CodeDom, Owin, log4net, jQuery, EntityFramework, etc.).
I am trying to see how we can have a shared location for all NuGet packages, reducing the footprint of those binaries in Git by centralizing them in one place with a single repository for them.
One option I found is to use a Nuget.config in each project with repositoryPath set to point at the shared location. This works great for adding/upgrading/restoring NuGet packages in a project, but it is not very clean when a package gets removed from one project while still being required by a different one. Basically the package gets removed from the shared location and the change is committed to Git; then, when the other project requires it again, it gets restored and added back to Git. Not a perfect solution in my mind.
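For reference, the repositoryPath setting mentioned above goes into each project's Nuget.config, roughly like this (the shared folder path is just an example):

<!-- Nuget.config next to the solution -->
<configuration>
  <config>
    <!-- example path pointing at the shared packages folder -->
    <add key="repositoryPath" value="..\..\SharedPackages" />
  </config>
</configuration>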
I have a two-part question:
1. Is there a way to improve the above workflow when packages get removed?
2. What is the industry standard for handling third-party libraries delivered via NuGet? If there is none, can you share your experience handling NuGet packages across multiple projects?
If the concern is the footprint/organization of the Git repository, you can add the dependency folders to .gitignore to prevent Git from committing them to the repository. When you need to build the projects from source, just run dotnet restore or nuget restore to get the dependencies from the sources configured in Nuget.config; a sketch is shown below.
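A minimal sketch, assuming the classic packages.config-style restore folder (names vary with your layout):

# .gitignore
packages/      # NuGet restore folder
**/bin/
**/obj/

On the build agent, dotnet restore MySolution.sln (or nuget restore MySolution.sln) then pulls the packages from the feeds configured in Nuget.config; the solution name here is only a placeholder.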
Not sure if it is the industry standard, but we host our own Nuget server to control the libraries that the different teams can use. Microsoft has an article on Hosting your own NuGet feeds
Hope it helps.
VIPM stands for Virtual Instrument Package Manager. It is a manager of installable packages for NI LabVIEW, published by JKI Software, and a free version of it is distributed with LabVIEW.
Registered (paying) users can set up public or private VI Package repositories. I would like to set one up on GitHub.
I attempted to do so by first creating a VI Repository on my local hard drive, publishing some packages to it, then making a remote clone on GitHub. Using the VIPM Repository Manager, I added the repository by browsing to the index.vipr file on my remote GitHub clone. However, VIPM gives me an error saying that the repository was not found.
Has anyone managed to set up and subscribe to a VI package repository on GitHub?
The short answer is that GitHub and a VIPM repository are fundamentally different, and unless VIPM adds support for git repositories and GitHub, I doubt it will be possible.
If you are managing the dependencies of any project using GitHub as a source for your shared libraries, you might want to consider a package manager like yarn.
Yarn (and others like npm and bower) can fetch (clone) from GitHub and follow the common practice from the web-developer world (and others) of keeping all of a project's dependencies inside the project. This is a departure from the VIPM view, where you update your development environment (LabVIEW) by installing packages 'globally'.
A list of the project's installed libraries and their versions is stored in a human-readable file called package.json, which provides a portable way of getting the project set up on another machine.
As new releases of the libraries appear, you can choose when to update each library in your project by updating libraries selectively.
This approach works well with LabVIEW packed libraries (.lvlibp) as opposed to VIPM packages, as there is no install-into-LabVIEW-IDE step with packed libraries. If you have a hierarchy of packed libraries, they can also specify their dependent libraries using a package.json, and yarn can then install all the libraries recursively.
It is possible to configure Yarn to place libraries into a folder of your own choosing instead of the default node_modules (as used by Node.js).
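As an illustration (the organisation, repository name and tag below are invented), a package.json that pulls a packed library straight from GitHub could look like this:

{
  "name": "my-labview-project",
  "version": "1.0.0",
  "dependencies": {
    "shared-lvlibp": "github:example-org/shared-lvlibp#v1.2.0"
  }
}

Running yarn install --modules-folder packed-libs then places the libraries in a packed-libs folder instead of node_modules.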
The advantages of this are:
You can choose which versions of libraries to use per project
Package managers integrate nicely with automatic testing and building setups
You can use GitHub or other git-providers for publishing your libraries
The disadvantages are:
More setup
It is not a common approach in the LabVIEW development world
Your VIs won't be installed into the LabVIEW palettes unless you explicitly install them
I have a couple of projects which are developed and released on different branches, namely development and release. The process works pretty well but unfortunately it has some drawbacks and I have been wondering if there is a better versioning scheme to apply in my situation.
The main development happens on a development branch (the Subversion trunk in our case, but that doesn't matter much), where the team of developers commits their changes. After building and packaging the artifacts, Jenkins deploys them to the Maven repository and to the development integration application server. This is a DEVELOPMENT-SNAPSHOT, basically one common branch containing all developed features:
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>D.16-SNAPSHOT</version>
When one particular business change is done and requested by the QA team, this single change is merged to the release branch (branches/release). Jenkins deploys the resulting artifact to the QA application server:
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>R.16-SNAPSHOT</version>
The release then happens via the maven-release-plugin on the release-branch version of the software, which creates a maintenance tag/branch for quick bug fixing (R.16-SNAPSHOT => R.16).
The development and release branches are currently versioned as D.16-SNAPSHOT and R.16-SNAPSHOT respectively. This separates the artifacts in the Maven repository, but it causes problems with various Maven mechanisms that rely on the standard Maven versioning style, and it breaks OSGi versioning as well.
Now, how would you name and version Maven artifacts in such a scheme? Is there a better way? Maybe I could change the Maven structures in some other way than simply changing the versioning and naming schemes? But I need to keep the development and QA (release) SCM branches separate.
Would a Maven classifier of 'development'/'production' be a reasonable alternative?
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>16-SNAPSHOT</version>
<classifier>D</classifier>
As far as I know, the common convention is that a release artifact is just the artifact name with the version, nothing else, while the corresponding development artifact has the same name plus a -SNAPSHOT suffix.
For example, take twitter4j. The artifact name of the release version is
twitter4j-2.5.5
The snapshot of the development version is
twitter4j-2.6.5-SNAPSHOT
That is the naming convention almost everybody uses, and it is recognized by most tools. For example, my Nexus repository can specify a policy to ignore development releases, which basically means it ignores artifacts containing -SNAPSHOT in their name.
EDIT:
To your follow-up question:
Well, depending on your build tool, you can create your snapshots with a timestamp or some other unique identifier. However, I have never heard of branching logic being embedded in the artifact's name just so the continuous integration server can distinguish it. From the artifact's perspective, it is either a release or a SNAPSHOT; I don't see the benefit of embedding more logic into the name of the artifact just because your Hudson allows you to do so. To be honest, your release cycle seems OK to me, but it will require some fine tweaking of your Maven tooling. If you can't live with that, I would suggest using a classifier instead of relying on the name, as it is always easier to tweak the integration server than the many plugins that rely on the standard naming convention. In conclusion, I believe you are on the right track.
I think you could simplify the process by having only two types of version as far as Maven is concerned:
Snapshot (in perpetual development)
Releasable (with a version number that can be deployed to the Maven repository or released to production)
I would handle your branching a little differently. If you look at the iterative/Scrum development model, your code should be releasable/shippable at the end of an iteration/sprint:
The main Subversion trunk is where developers commit their code.
At the end of the sprint/iteration, branch the main trunk and call it the release branch (there should not be a separate QA branch; any code that is to be released is tested for quality).
Bug fixes happen on the release branch and are periodically merged back to the main trunk.
This way you keep creating a branch per release, and any bug fixes are committed to that branch.
Before creating a new branch from the main trunk, always make sure it contains all the merges from the previous branches.
The release plugin from Maven supports branching. It appears to work by assuming that the branch is created to support the next version of your code.
Personally, I'm more inclined to use the versions plugin and explicitly set my Maven project's version numbers; for example:
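With the question's R./D. prefixes, a sketch of that on the release branch would be:

# set the version in all POMs of the build on the release branch
mvn versions:set -DnewVersion=R.16-SNAPSHOT
# accept the change and remove the pom.xml.versionsBackup files
mvn versions:commit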
In the spirit of keeping my SVN trunk clean and ready for deployment, I've been utilizing the following source control model. For the impatient, the basic concept is that you create development branches to do actual development, and leave the trunk clean and ready for deployment, at any time (no junk in the trunk).
In addition to this, I am configuring TeamCity for continuous integration. Within TeamCity, I'd like to ensure that all development branches, as well as the deployment-ready branch (the trunk, in my case) build correctly and pass all unit tests.
This might be a stupid question, but not being overly familiar with TeamCity, should I create a new TeamCity project for each branch? The deployment-ready branch, in particular, has a few additional rules compared to the development branches. For example, releases should be saved in versioned directories on the file system (e.g., C:\Projects\MyProject\1.0.187..., C:\Projects\MyProject\1.0.188...) to enable easy access to the binaries at any point in time. On the other hand, saving versioned copies of the assemblies for the development branches is not necessary and would waste disk space.
Within TeamCity, I'd prefer to see only a single project for each software project. In other words, if my company is working on X number of development projects, I'd prefer to see that project listed only once, not X * 2 (assuming each project has only two branches).
You only need to create a single project, but you will need multiple build configurations, one for each branch. As far as I know, you can't customize the artifact folder name on disk (it's an auto-incremented number), but in TeamCity 4.5 you can download all artifacts as a zip file from the UI. There is also a scheduler included with TeamCity that lets you clean up artifacts so they don't consume too much disk space.
TeamCity 2018.1.5
TeamCity doesn't support multiple branches for SVN the way it does for Git, so I solved this with a configuration parameter that holds the active branch to build from; I can easily switch to another branch by running a custom build or by changing that configuration parameter.
After that you just need to configure triggers to start building from the specific branch.
On the project side you can then see the different branches, and you can easily switch between them by running a Custom Build and changing the branch parameter there; a rough sketch follows.
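A rough sketch of that setup (the parameter name and repository URL are invented): define a configuration parameter and reference it from the SVN VCS root URL, then override the parameter in a custom build to switch branches.

# Build configuration parameter (Parameters page)
ActiveBranch = branches/release-1.2

# SVN VCS root URL referencing the parameter
https://svn.example.com/myproject/%ActiveBranch%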