How to handle versioning of install4j project files - install4j

We have our code committed to Bitbucket, where we are able to handle versioning of the scripts and files used in our install4j project, but we are not able to version the install4j project XML files due to their overriding behavior;
we have to merge them manually before every commit.
Is there any way to handle versioning of install4j project XML files, or to avoid manually merging the changes made by all team members?

Related

Track changes in configuration tables and create automated scripts to deploy them to other environments

In the product I work on, there are many configuration tables. I need a way to track configuration changes (ideally with some kind of version/changeset number), deploy those changes to other environments using the changeset number, and, if needed, roll back a particular configuration based on the changeset number.
I am wondering how I can do that.
One solution that I think could work is to write a script (or scripts) that takes all the configurations from all the config tables and creates JSON file(s). I can then check those files into TFS or GitHub to maintain versioning, and write another script to load the configuration files into any environment.
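To make that idea concrete, here is a minimal sketch of what one exported, versioned configuration file could look like; the table, key and value names are made up for illustration, and the changeset number is whatever your export script assigns:

    {
      "changeset": 42,
      "exportedAt": "2021-06-01T00:00:00Z",
      "tables": {
        "PaymentSettings": [
          { "key": "MaxRetries", "value": "3" },
          { "key": "TimeoutSeconds", "value": "30" }
        ],
        "FeatureFlags": [
          { "key": "EnableNewCheckout", "value": "false" }
        ]
      }
    }

A deployment script can then read the file for a given changeset and apply it to the target environment; rolling back means re-applying the file from an earlier changeset.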

Should I publish the nuspec file in a repository?

When creating an open source library on GitHub or another public website, should I publish the .nuspec file that describes the corresponding NuGet package?
I've done this a couple of times (since no API key or other sensitive information is included in the .nuspec file) in order to allow myself to easily publish subsequent versions without keeping a private file, and to allow other people to fork the project and add their own descriptions easily. However, the developers of many top packages don't seem to publish .nuspec files in their repositories (sometimes they publish NuGet.exe along with a .targets file, and so on), so I'm wondering whether I'm doing something wrong.
The package authoring files should be considered part of the source code, since they are a required asset for building the fully usable output.
Some projects use special MSBuild-based tooling to create the .nuspec file during the build, so it looks as if there is none in the repository. The new "SDK-based" projects (e.g. .NET Standard libraries) have integrated NuGet tooling that can create a .nupkg file from the .csproj without the need for a .nuspec file. This tooling is also being adopted by some popular packages (e.g. Newtonsoft.Json).
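For reference, a minimal sketch of such an SDK-style project file, where the package metadata lives directly in the .csproj and the pack target produces the .nupkg without a separate .nuspec (the package id, version and author below are placeholders):

    <!-- Hypothetical example: NuGet package metadata in an SDK-style .csproj -->
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <PackageId>MyCompany.MyLibrary</PackageId>
        <Version>1.2.3</Version>
        <Authors>Your Name</Authors>
        <Description>Short description shown on the NuGet gallery.</Description>
      </PropertyGroup>
    </Project>

Running dotnet pack (or msbuild /t:pack) on such a project produces the .nupkg, so only the .csproj needs to be committed.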

NuGet: Is it a good idea to check in the packages folder?

I'm currently weighing the pros and cons of using NuGet. In our current software we store each external reference in a common reference folder (which is committed to our version control system). Over time this approach has become more and more painful, because we have to store different versions of the same library.
Since our devs are sometimes at a customer site (and not all customers offer internet connectivity ...), we can't rely on NuGet restore directly, because the packages can't be downloaded there.
Based on that, I'm now thinking about using NuGet and storing the packages folder in our version control system.
Does anybody know of any disadvantages of this solution? Does anybody have a better proposal?
Thanks.
I would argue against storing external NuGet packages in your version control system.
It's not your application's responsibility to archive third-party packages. If you need to mitigate that risk, build a solution intended for it (for example, a private NuGet repository that is properly backed up).
Avoid duplication in the code base: provided you use properly released packages, the packages.config file content is sufficient to reliably reproduce the exact dependencies your application needs (see the sketch after this list).
Synchronization is an effort: to keep packages.config and the packages folder in sync, every developer working with packages has to monitor and add or remove packages in source control.
If devs ever forget to add a package, the build from a clean checkout still fails.
If they forget to remove a piece that is no longer necessary, your checked-in set accumulates junk.
VCS dataset size: storing the packages needlessly enlarges your version control storage. Quite often a package contains DLLs for N different platforms, tools and whatnot, which adds up quickly. If you keep your dependencies constantly up to date, after 10 years your VCS history will contain a huge amount of irrelevant junk. Storage is cheap, but still...
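For illustration, the packages.config mentioned above is just a list of package ids and versions, which is enough for NuGet restore to reproduce the dependencies (the packages and versions below are placeholders):

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <!-- each entry pins an exact package version for restore -->
      <package id="Newtonsoft.Json" version="9.0.1" targetFramework="net45" />
      <package id="NLog" version="4.4.12" targetFramework="net45" />
    </packages>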
Instead, consider having a private NuGet repository whose purpose is to serve and archive the packages your application needs, and set up your project to check that repository first. If your developers need offline compile support, they can set up mirrors of the project repository on their build boxes and configure the following fallback order for package sources:
Developer-local project repository (e.g. a folder)
Shared project repository (e.g. NuGet.Server)
(nuget.org)
A guide on how to configure multiple repositories can be found here: How to configure local Nuget Repository.
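As a rough sketch, such a fallback could be expressed in a NuGet.config with several package sources; the folder path and server URL below are examples rather than real endpoints, and the exact source-selection behavior depends on the NuGet version in use:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageSources>
        <!-- 1. developer-local folder mirror (works offline) -->
        <add key="LocalMirror" value="C:\NuGetLocal" />
        <!-- 2. shared project repository, e.g. a NuGet.Server instance -->
        <add key="ProjectRepo" value="http://nuget.example.local/nuget" />
        <!-- 3. public feed as the last fallback -->
        <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
      </packageSources>
    </configuration>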

How to manage shared libraries between applications?

We develop enterprise software and we wish to promote more code reuse between our developers (to keep this problem simple, let's assume everything is .NET). We are about to move to a new VCS (most likely Mercurial) and I want to have a strategy in place for how we will share libraries.
What is the best process for managing shared libraries that meets the following use cases:
Black Box - only the public API of the library is known and there is no assumption that consuming developers will be able to "step into" the library or set breakpoints in it. The library is a black box. Often a dev does not care about the details and just wants the version of the lib that has always "worked".
Debug - the developer should be able to at least "step into" the library during development. Setting breakpoints would be a bonus too.
Parallel Development - while most likely the minority, there are seemingly valid use cases for developing the library in parallel with the consuming application. Often the author of the library and of the consuming application is the same developer. For better or worse, the applications and libraries can often be tightly coupled. Being able to make changes to, and debug into, both can be a very productive way for us to develop.
It should be noted that solving 3, may implicitly solve 2.
Solutions may involve additional tools (such as NuGet, etc.).
When sharing libraries, you must distinguish between:
source dependencies (you share the sources, implying a recompilation within your project)
binary dependencies (you share the delivery, compiled from the common sources, and link to it from your project).
Regarding both, NuGet (2.0) finally introduced "Package Restore During Build", so that you no longer have to commit to source control whatever is built into a Lib or ExternalDependencies folder.
NuGet (especially with its new hierarchical config, NuGet 2.1) is well suited for module management within a C# project, and will work with both Git and Mercurial.
Combine it with Mercurial subrepos, and you should be able to isolate the common code base you want to reuse in its own repository.
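As a minimal sketch of the subrepo part (the path and repository URL are placeholders), the consuming application's repository would carry a .hgsub file mapping a local path to the shared library's repository:

    libs/MyCompany.Common = https://hg.example.com/mycompany-common

Mercurial then records the exact subrepo changeset in .hgsubstate on every commit of the parent repository, so checking out an old revision of the application also pins the matching revision of the shared code.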
I have 2 possible solutions to this problem, neither of which seems ideal (which is why I posted the question).
Use the VCS to manage the dependencies. Specifically, use Mercurial subrepos and always share by source.
Advantages:
All 3 use cases are solved.
Only one tool is required for source control and dependency management
Disadvantages:
The subrepos feature is considered a feature of last resort by the Mercurial developers, and from experimentation and reading it has the following issues:
Tags cannot be easily or atomically applied to multiple repos.
Root/shell repos are inherently fragile (they can break if the paths to the subrepos change). The Mercurial developers suggest mitigating this issue by putting no content in the shell repo and using it only to define (and track the revisions of) the subrepos, thereby allowing a dev to manually recreate a moment in time even if the subrepo pathing is broken.
Branching cannot cross repo boundaries (most likely not a big issue, as one could argue that branches should only occur within a given subrepo).
Use Ivy or NuGet to manage the dependencies. There are two ways this could work.
Dependencies/packages can simply contain official binaries. A build server can be configured to publish a new dependency/package into the company repository when a developer submits a build for a new version. This solves case 1. NuGet seems to support symbol packages, which may solve case 2. Case 3 is not solved and leaves developers in that case out to dry, having to come up with their own solution (there is basically no way to commit applications to the VCS that include dependencies by source). This seems to be the traditional way that dependency management tools are used.
Dependencies/packages can contain a script that gets the source from Mercurial. The script could be executed automatically when the dependency/package is installed. Some magic has to be performed to have the .NET solution include the reference by project (rather than by browsing the file system), but in theory this could happen in the NuGet install script and be reversed in the uninstall script.
Switching between "source" and "binary" dependencies seems to be a manual step. I would argue devs should switch to binary dependencies for releases, and perhaps this could be enforced on the build server when creating a release. This is further complicated by the fact that the VS solution needs to be modified to reference a project vs. a binary (see the sketch after this post).
How many source packages exist? Does every binary package contain the script to fetch the source it was built from? Or do we create separate source packages that use the install-script magic to get the source? This leads to the question: is there a source package for every tag in Mercurial? Every changeset? Or simply one source package that just clones and updates to the tip, leaving the dev to update to a previous revision (but this creates the problem of knowing which revision to update to)?
If the dev then uses Mercurial to change the revision of the source, how can this be reflected in the consuming application? The dependency/package that was used to fetch the source has not changed, but the source itself has...
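To make the "project vs. binary" switch mentioned above concrete, here is a rough sketch of the two reference styles as they would appear inside a classic .csproj; the assembly name, version and paths are placeholders:

    <!-- Binary dependency: reference the assembly restored into the packages folder -->
    <ItemGroup>
      <Reference Include="MyCompany.Common">
        <HintPath>..\packages\MyCompany.Common.1.2.3\lib\net45\MyCompany.Common.dll</HintPath>
      </Reference>
    </ItemGroup>

    <!-- Source dependency: reference the library project brought in (e.g. via a subrepo) -->
    <ItemGroup>
      <ProjectReference Include="..\libs\MyCompany.Common\MyCompany.Common.csproj" />
    </ItemGroup>

Switching between the two means editing the project file (and usually the solution), which is the manual step described above.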

TeamCity Projects and Multiple SVN Branches

In the spirit of keeping my SVN trunk clean and ready for deployment, I've been using the following source control model. For the impatient, the basic concept is that you create development branches to do the actual development and leave the trunk clean and ready for deployment at any time (no junk in the trunk).
In addition to this, I am configuring TeamCity for continuous integration. Within TeamCity, I'd like to ensure that all development branches, as well as the deployment-ready branch (the trunk, in my case) build correctly and pass all unit tests.
This might be a stupid question, but not being overly familiar with TeamCity: should I create a new TeamCity project for each branch? The deployment-ready branch, in particular, has a few additional rules compared to the development branches. For example, releases should be saved in versioned directories on the file system (e.g., C:\Projects\MyProject\1.0.187..., C:\Projects\MyProject\1.0.188...) to enable easy access to the binaries at any point in time. On the other hand, saving versioned copies of the assemblies from the development branches is not necessary and would waste disk space.
Within TeamCity, I'd prefer to see only a single project for each software project. In other words, if my company is working on X development projects, I'd prefer to see X projects listed, not X * 2 (assuming each project has only two branches).
You only need to create a single project, but you will need multiple build configurations: one for each branch. As far as I know, you can't customize the artifact folder name on disk (it's an auto-incremented number); however, you can download all artifacts as a zip file from the UI in TeamCity 4.5. There is also a scheduler included with TeamCity that lets you clean up artifacts so they don't consume too much disk space.
TeamCity 2018.1.5
TeamCity doesn't support multiple branches for SVN the way it does for Git, so I solved this problem with a configuration parameter that holds the active branch to build from; I can then easily switch to another branch by running a custom build or by changing that configuration parameter.
After that, you just need to configure triggers to start building from the specific branch.
On the project side you can then see the different branches, and you can easily switch between them by running a Custom Build and changing the branch there.
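As a rough illustration of that setup (the parameter name, branch path and URL below are examples, not taken from the original screenshots), the branch is held in a configuration parameter that the SVN VCS root URL references:

    # configuration parameter, overridable per custom build
    svn.branch.path = branches/release-1.0

    # SVN VCS root URL referencing the parameter
    https://svn.example.com/myproject/%svn.branch.path%

Running a custom build and overriding svn.branch.path then builds from a different branch without touching the VCS root itself.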