a bit of background first...
I am setting up a version numbering system for our project, which currently has only a development branch, but we are now moving towards our first deployment. We are using TFS and we run nightly builds on our dev branch.
The way we will probably go with this is that when we get ready for a release we take a branch off dev and call it 1.x. This will be a test branch: we test it, fix it (then merge back to dev), test it some more, and when it is all good we take another branch off the 1.x branch and call it 1.0. It is this branch that gets deployed to production. Any fixes to production will be made on the 1.x branch, tested, and then a new branch 1.1 will be made.
My issue is with the testing of the 1.x branch. Before testing, the branch will be locked for obvious reasons. My issue is that QA requires that a round of testing be conducted against a "version number", and if testing fails the next round of testing will be against a new "version number". We developers want to tie the "version number" to the release and let testing iterate over that version...so there is a conflict here.
My first thought is to use the build number as the point in time that the code is tested against. When it's time to submit a new version for testing, the 1.x branch is locked again, a build is kicked off, and the VSTS number that is generated becomes "release candidate 1 for v1.0". Mapping the RC to a build is something we can do manually in a spreadsheet...
...then someone mentions labels, and that the code should be locked, labelled and built prior to testing. I have never used labelling before and have just read that a build itself creates a label in TFS.
I am now confused about what is the best way to go here. Is using a build number for a release candidate enough? Does manual labelling serve any purpose here (the only benefit I can see is that we can give it our own name and description)? Can I tell TFS NOT to create a label whenever it runs a build and just do all our own labelling at significant points in time (not every build is going to be a release candidate, for example)? If so, is NOT creating a label after each build a bad idea, and what does labelling give me?
I guess I am confused about where changesets, build numbers/names and labels all fit in with one another...
This is a broad question but it's one of those ones where I am not 100% sure what to ask. Any help appreciated.
...then someone mentions labels, and that the code should be locked, labelled and built prior to testing. I have never used labelling before and have just read that a build itself creates a label in TFS.
What you have read is correct. With TFS (unlike, say, SourceSafe), every server action constitutes a 'known point in time' which can later be referred to. You can see what I mean by doing a Get Specific Version... and looking in the Version dropdown: in TFS 2005 the relevant ones I see are Changeset, Date, Label. Now, as you correctly say, every build automatically creates a label. This means that at any future time it will be possible to retrieve the code exactly as it was after any given changeset; at any given date; and when any specific label was applied, thus including when any build was done.
The upshot is that you can use your own labels or not, entirely at your own discretion - the ability to retrieve a given snapshot of the code will be there anyway. I wouldn't suggest trying to inhibit TFS from producing a label for each build (I don't know if this is even possible) - labels cost you nothing.
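For illustration, here is a minimal sketch (assuming tf.exe is on the PATH and a mapped workspace exists; the server path and label name are hypothetical) of pulling a snapshot back by changeset or by label from a script:

    import subprocess

    # Hypothetical server path -- substitute your own branch.
    BRANCH = "$/OurProject/1.x"

    def tf_get(versionspec):
        """Run 'tf get' at a specific versionspec.

        Examples: 'C1234' (changeset), 'LRC1-v1.0' (label), 'D2010-05-01' (date).
        """
        subprocess.run(
            ["tf", "get", BRANCH, f"/version:{versionspec}", "/recursive"],
            check=True,
        )

    # Retrieve the code exactly as it was at changeset 1234, then as it was
    # when the (hypothetical) label 'RC1-v1.0' was applied.
    tf_get("C1234")
    tf_get("LRC1-v1.0")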
Your branch 1.x is a consolidation branch which will contain many incremental small evolutions.
Locking the branch is not the answer.
Setting a label (specifically named, say, "for QA test", and locked so that the label cannot be moved) is the usual way to signal to the QA team that they can set up their own workspace and retrieve exactly that label.
Then they can begin their tests against the code.
Creating a label after each build is not always practical, since not every build is meant to be tested by QA.
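For what it's worth, a rough sketch of that hand-off (the server path and label name are hypothetical; this assumes tf.exe is available and you have permission to lock the branch):

    import subprocess

    BRANCH = "$/OurProject/1.x"        # hypothetical server path
    LABEL = "QA-Round-2-for-v1.0"      # hypothetical label name for this QA round

    # Apply a named label to the current state of the branch for QA...
    subprocess.run(
        ["tf", "label", LABEL, BRANCH, "/recursive",
         "/comment:Handed to QA for round 2 of v1.0 testing"],
        check=True,
    )

    # ...and lock the branch against further check-ins while QA tests it.
    subprocess.run(
        ["tf", "lock", BRANCH, "/lock:checkin", "/recursive"],
        check=True,
    )

When QA signs off (or a new round is needed), the lock is lifted, fixes go in, and a new label is applied for the next round.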
Related
My question is a continuation of this post:
Close work items automatically on Release to specific environment
This accepted answer will work perfectly, but only if I can make sure that a build shows just the delta since the last build as associated work items, instead of all work items from history. Sometimes I see all items in history listed as associated work items on a build.
Builds are happening for several environments (Dev, QA, UAT, Prod). How do I make sure that when I run a new build, it only shows the delta since the last build in that same environment, so that I am only looking at new changes coming in with the new build?
Update:
I think I get what you mean. Please check whether my understanding is accurate: your master branch has many PRs linked to many work items. You create release1 from master, and the first time a build runs on release1 the API lists the work items associated with all of master's commits; on subsequent runs it correctly shows only the increment since the previous run. Later, you create release2 from master, and the first build on release2 again lists the work items of all of master's commits (which is not what you want), while subsequent runs again show only the increment. What you want is for the first build on release2 to show only the incremental work items since the last build on release1?
If so, this API cannot meet that requirement. As I said in the answer, the increment this API fetches is relative to previous builds of the same branch; it does not apply across different branches.
But you can still get the "increment" you want; check out this API:
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/get-work-items-between-builds?view=azure-devops-rest-6.0
You just need to compare the two builds yourself.
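For illustration, a minimal sketch of calling that endpoint (the organization, project, PAT and build IDs are placeholders you would substitute; it uses the requests library):

    import requests

    ORG = "myorg"                       # placeholder
    PROJECT = "myproject"               # placeholder
    PAT = "<personal-access-token>"     # placeholder
    FROM_BUILD_ID = 100                 # e.g. the last build on release1
    TO_BUILD_ID = 105                   # e.g. the first build on release2

    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/workitems"
        f"?fromBuildId={FROM_BUILD_ID}&toBuildId={TO_BUILD_ID}&api-version=6.0"
    )

    # The REST API accepts a PAT via basic auth with an empty username.
    resp = requests.get(url, auth=("", PAT))
    resp.raise_for_status()

    for ref in resp.json().get("value", []):
        # Each entry is a work item reference: an id plus a URL to the item.
        print(ref["id"], ref["url"])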
Original Answer:
but only if I can make sure that a build shows just the delta since the last build as associated work items, instead of all work items from history. Sometimes I see all items in history as associated work items in a build.
Why do you say 'make sure'?
The API should only get the delta work items (under the same branch).
I think the changes should not contain all of the work items linked to all of the previous commits (since it is only 'changes').
Do you mean the behavior on your side is unstable, i.e. a build sometimes contains only the delta changes but sometimes contains all of the changes from the branch's lifecycle?
If not, then your understanding may be a little off.
If yes, I think you need to report this issue to the Developer Community and provide the URL of the problematic build there. Stack Overflow is an open forum, so it's not suitable for handling private information.
Builds are happening for several environments (Dev, QA, UAT, Prod). How do I make sure that when I run a new build, it only shows the delta since the last build in that same environment, so that I am only looking at new changes coming in with the new build?
I suggest you put each environment in a different branch; then, when you run the pipeline on the related branch, it should pick up only the delta changes of that branch, and the API should return only the delta work items since the last pipeline run on that branch.
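For illustration, a sketch of checking that per-branch behaviour (organization, project, PAT and build ID are placeholders) using the build work items endpoint:

    import requests

    ORG = "myorg"                       # placeholder
    PROJECT = "myproject"               # placeholder
    PAT = "<personal-access-token>"     # placeholder
    BUILD_ID = 200                      # e.g. the latest build on the QA branch

    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds/{BUILD_ID}"
        f"/workitems?api-version=6.0"
    )

    resp = requests.get(url, auth=("", PAT))
    resp.raise_for_status()

    # If each environment lives on its own branch, this list should contain only
    # the work items tied to changes since the previous build on that branch.
    for ref in resp.json().get("value", []):
        print(ref["id"], ref["url"])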
The project contains a test folder that is ignored by npm but is not ignored by GitHub. When a change occurs in a file under the test folder, should it also be published to npm in order to keep the versions matching? In that case, the semantic version would be increased even though nothing changed from npm's point of view.
Assume that there is a repo on GitHub which has a test folder that is ignored by npm. It also has a package.json file which tracks the version number inside the repo.
Q1. When a change occurs in a file under the test folder, should the patch version number be increased?
Q2. If (as in Q1, though there might be other similar cases) a minor version increase happens in package.json but nothing changes in the files published to npm, what should be done?
Last edit first:
The short answer to your question is no. The repo will march on ahead of whatever is published in your feeds until you cut a new release.
So the problem here is that you apparently track version numbers either in a file in your repo, or in commit labels/tags. Don't do that; it's pointless. I know it's a very common practice (I've made that mistake myself), but it is born of lazy thinking. Repos are not the right kind of database to track this sort of thing. Only published packages need to have any kind of version label on them. The data flow should be from repo => build system => package repository. The arrows should never be reversed.
When you apply that rule, your question is moot. Your test content is a separate package feed from your release content (NPM). It already has repo hashes, which are unique, unambiguous and immutable. Those hashes should flow to the build system, then to the package feed/repository system.
There is always a difference in the release date/time stamps between test and product development. Product releases always lag test releases. The purpose of a particular version of the test suite is to validate a product release. So you should always see test suite X.Y.Z+repoHashN precede product version X.Y.Z+repoHashN. Note that the values of X, Y and Z for test and product need not be the same (a product patch might be the result of fixing a known bug surfaced by the test suite), but there should always be a 'repoHashN' that uniquely matches a test version triple to a product version triple.
You produce two products, an app and a test suite, from the same repo, and most of the tooling out there tracks a version per branch that is applied to all packages produced from that branch. I can't directly address any specific NPM behavioral issues, since I don't use it enough, but I am fairly sure you are not encountering a bug; rather, you are most likely using it incorrectly.
Because your two products develop on separate cadences, you should consider maintaining a test branch for versioning test code. There's a seemingly endless variety of possibilities for your workflows here.
I recommend using a master or main branch for the tip of all development. Always cut prerelease (-<#>+master.<repoHash>) versions from this branch, and you might as well only ever bump either the patch number or the prerelease tag number (NPM supports either scenario). Then, when you are ready to cut a release, you fork master or main into a release branch (named after the target major.minor version) and cut only release packages out of that branch. Because your test code version is not relevant to the release package version, you don't need to track it specifically; it's always updated when you merge from main/master or test/dev branches to cut the next patch-level release. The patch level of the release branch only gets bumped when you are ready to release it to the public.
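For illustration, a minimal sketch (assuming git is on the PATH; the base version and prerelease counter are hypothetical and would come from your build system, not from the repo) of stamping a prerelease version with the repo hash so the data flows from repo to build system to package feed:

    import subprocess

    BASE_VERSION = "1.2.0"   # hypothetical next release target
    PRERELEASE_NO = 7        # bumped by the build system, not stored in the repo

    def git(*args):
        return subprocess.run(
            ["git", *args], capture_output=True, text=True, check=True
        ).stdout.strip()

    branch = git("rev-parse", "--abbrev-ref", "HEAD")   # e.g. 'master'
    repo_hash = git("rev-parse", "--short", "HEAD")     # e.g. 'ab12cd3'

    # Prerelease cut from the tip-of-development branch,
    # e.g. 1.2.0-7+master.ab12cd3
    version = f"{BASE_VERSION}-{PRERELEASE_NO}+{branch}.{repo_hash}"
    print(version)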
Do test development in a test branch to hide the churn that doesn't need to show up in the master/main history. A dozen "BUG:### WIP" entries aren't needed there, so you squash-merge from test to master or main. The same holds for product code development: do that in a dev branch. Test and dev branches should only ever cut packages with something like a -<#>+<devName>.<repoHash> tag on them, and those should only be published to private feeds.
The purpose of the master/main branch is to provide frequent (at least daily) build and test cycles that include merged content from test and dev branches. This is where you maintain ground truth. At any given time you should be able to safely fork a new branch from here and count on it to build a product that meets your quality specs. This allows you the freedom to cut release branches at whatever cadence you need, independent of the current state of the test or dev branches. Test and dev should attempt to merge their work into master or main at least daily, and immediately fix any merge conflicts or build failures.
In the case where test == dev, you can combine them into a single branch. Generally, each developer will have many branches in progress, for independent work on various tasks. Having them merge directly to main or master is a valid workflow and even considered a best practice in many shops. Sometimes, however, too many cooks in the kitchen can cause problems; when you find your devs are spending too much time resolving merge conflicts with master or main, you'll want separate branches for different lines of effort, for them to stage their work into before taking it to master. Then the flow into master or main isn't quite so random, and will be easier to manage.
My team recently switched to all three technologies in the past several months and has worked hard to get everything up and running. The next step is automating our changelogs. We have JIRA set up to look for tags (e.g. TAG-123) in GitHub commit messages. Jenkins monitors the GitHub commits on a 5-minute timer, pulls, builds, etc.
What I would like to see is a changelog generated automatically when a build is marked as "Promoted to Production." I would like to see it do something akin to the following:
Query Jenkins for the previous build marked as a production release and get the corresponding git commit SHA1.
Run a diff between the current Git commit and the previous one
Find all JIRA tickets that are referenced
Compile a list of JIRA titles
Export the list to a text file and place it in the build drop (bonus if it can also be accessed directly through Jenkins); a rough sketch of these steps follows below
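For what it's worth, here is a minimal sketch of steps 2-5 (it assumes the two commit SHAs are already known, e.g. from tags Jenkins applies on promotion; the JIRA URL, credentials and file name are placeholders):

    import re
    import subprocess

    import requests

    JIRA_URL = "https://jira.example.com"      # placeholder
    JIRA_AUTH = ("build-bot", "<api-token>")   # placeholder credentials
    PREV_SHA = "<sha1-of-previous-production-build>"
    CURR_SHA = "<sha1-of-current-build>"

    # Steps 2-3: collect commit subjects between the two builds and pull out
    # JIRA keys such as TAG-123.
    log = subprocess.run(
        ["git", "log", "--pretty=%s", f"{PREV_SHA}..{CURR_SHA}"],
        capture_output=True, text=True, check=True,
    ).stdout
    keys = sorted(set(re.findall(r"\b[A-Z][A-Z0-9]+-\d+\b", log)))

    # Step 4: look up each ticket's title via the JIRA REST API.
    lines = []
    for key in keys:
        resp = requests.get(f"{JIRA_URL}/rest/api/2/issue/{key}", auth=JIRA_AUTH)
        if resp.ok:
            lines.append(f"{key}: {resp.json()['fields']['summary']}")

    # Step 5: write the changelog into the build drop.
    with open("changelog.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")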
Whether this flow is followed as written or not is irrelevant; I'm after the end result and am not looking to reinvent the wheel. Surely somebody has done something like this before?
As far as reinvention goes, I was able to find https://wiki.jenkins-ci.org/display/JENKINS/Promoted+Builds+Plugin which allows somebody to piggy-back on the Promote to Production action and run a separate script. It would then be a matter of gathering functionality to accomplish the above. (I also noticed Jenkins can tag the current GitHub commit, which my team would likely do in addition.)
Anything closer to accomplishing this would be greatly appreciated.
Thank you!
Since nobody on our team had excess time to devote to this, we ended up throwing together a quick solution.
The Process
Install and setup the All Changes plugin for Jenkins.
When we release, we use the "build promotion" system, which puts stars next to promoted builds, so we can easily see the build # of the previous release by looking at the history.
Copy and paste the relevant output from All Changes into something like notepad++ (human-diff'ing ftw!)
Run a regex find/replace: search on the regex string and replace with an empty string. (Below there's the big-bang option, plus the same thing broken up for understandability; a scripted version follows the patterns.)
Manually organize and release in whatever form is the currently agreed-upon standard.
Everything at once
(\s*\(commit:\s[a-z0-9]{40}.\s..detail)|([\r][\n]#.*\B[\r][\n][\r][\n])|(^[ \t]*)
Remove commit hash: \s*\(commit:\s[a-z0-9]{40}.\s..detail
Remove time and surrounding line breaks: [\r][\n]#.*\B[\r][\n][\r][\n]
Remove leading whitespace: ^[ \t]*
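As a rough sketch (the input/output file names are hypothetical), the same clean-up can be scripted instead of done by hand in Notepad++:

    import re

    # The three patterns from above: commit hash, time line with surrounding
    # breaks, and leading whitespace.
    PATTERNS = [
        r"\s*\(commit:\s[a-z0-9]{40}.\s..detail",
        r"[\r][\n]#.*\B[\r][\n][\r][\n]",
        r"^[ \t]*",
    ]

    # newline="" keeps the original \r\n line endings so the second pattern
    # still matches.
    with open("all_changes.txt", "r", encoding="utf-8", newline="") as f:
        text = f.read()

    for pattern in PATTERNS:
        text = re.sub(pattern, "", text, flags=re.MULTILINE)

    with open("changelog_draft.txt", "w", encoding="utf-8") as f:
        f.write(text)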
The Analysis
Pros:
Effective overall
Relatively quick to implement
Cons:
Not fully automated.
Need to revert to commit id if you release from multiple Jenkins jobs.
All Changes history only appears to go back as far as the Jenkins job (I could be mistaken about the specifics of this; I just remember minor grievances with something like this at one point).
As a whole, the cons are somewhat "the nature of the beast." I'd love to read some other solutions. (For when we have that elusive thing called Time, of course!)
An alternative to depending on issue trackers could be to use the pull requests themselves. For us they had enough context to generate release notes, and we used labels for categorisation. I created PullRequestReleaseNotes, which you can try. It supports GitHub, GitLab, BitBucket and TFS, and it can generate release notes in markdown from merged pull requests and their labels, optionally posting them to an Atlassian Confluence page and to a Slack channel. It can be run as part of continuous integration. Here is a sample:
1.2.1 (MASTER) - XX XXX 2016
  Enhancements
    Category A
      - Awesome new feature #1854
  Fixes
    Category Z
      - Fixed problem with widget #1792
    Category Y
      - Fixed problem with widget #1792
      - Fixed exception with view layout #1848
We recently decided to move to TFS 2010. We would also like to improve our source control structure and project structure.
Here is the structure the team agreed on:
|OurCompanyName (or common root name)
|
+--Windows
+----Applications
+------App1
+------App2
+----Services
+------WindowsService1
+------WindowsService2
|
+--Web
+----Applications
+------WebApp1
+------WebApp2
+----Services
+------WebService1
+------WebService2
|
+--Common
+----ThirdParty
+----Libs
+------DataAccessLib
+------BusinessLogicLib
|
+--Tests
+----TestProject1
+----TestProject2
The Common folder holds third-party and in-house libraries which are used all over (App1, App2, WebApp1, etc.).
We need to achieve the following:
Release versions must depend on the latest production release of Libs.
If tests fail, dependent projects shouldn't build and the team should be notified.
Simple branching: development, production, versioned releases, and how we can structure them accordingly.
I have already read the Visual Studio TFS Branching Guide 2010, but it only addresses the branching part of this.
You aren't really asking a question from what I can tell. But I can give some feedback/discussion on your goals.
Release versions must depend on the latest production release of Libs.
A release version should depend on whatever it used while it was being developed, not whatever the current version is. You may want to go into more depth on what this requirement is and why you think you need it.
If tests fail, dependent projects shouldn't build and the team should be notified.
TFS doesn't support chaining builds out of the box; you can modify the build template to add support, but it's not a particularly clean solution (IMO).
You can subscribe yourself to failing builds using the built-in TFS alert subscriptions; however, it is up to each developer to do so (unless you subscribe a mailing group or create a custom event mailer).
Again, why are you automatically updating dependencies in other projects? Surely you'd be better off pulling updates rather than pushing them, and using a technology like NuGet to handle your references.
Simple branching: development, production, versioned releases, and how we can structure them accordingly.
That sounds like simply branching each time you do a release, which is very simple.
If, however, you knew which changeset you released, you wouldn't have to branch up front and could branch only when you needed to (e.g. to fix a production bug). It takes a lot more work, as you either need to manually label your code on release (at which point you gain nothing over branching) or have an automated release process which does it for you.
Other notes
You don't want to use multiple Team Project Collections - this becomes a nightmare when it comes to managing build servers.
You may want to update your diagram to show what is a Team Project, Branch, and what is just a standard folder.
Having used TFS for a while, I would like to offer a word of caution:
You look at things from the developer's side, as we did when we started thinking about how best to deploy the projects. However, you should also take into account project management requirements.
Having different TFS projects means different reporting data for the manager.
Thus if App1 and WebApp1 are, to the person who runs your projects, part of the same overall project, then having them in different TFS projects makes questions of the form 'How many hours did my team spend on this project?' difficult to answer.
I would seriously look into this issue before deciding on the project structure.
Now regarding your questions:
Release versions must depend on the latest production release of Libs
As Betty (above) mentions, this is not good practice. What will happen if development took place against production release v1.0 of a Lib and, sometime during stabilization, the Lib changed to v2.0?
If tests fail, dependent projects shouldn't build and the team should be notified.
I believe this is a matter of your build script, not of your layout.
Simple branching
We try to implement a simple MAIN-line-based approach, where we have one or more development branches (it really depends on your specific requirements).
Once in a while, when dev code is considered 'stable', i.e. it has passed basic unit tests, it is merged onto the MAIN line. Developers carry on on their development branches, whereas code on the MAIN line goes through more extensive testing. Bugs found are reported and fixed initially on the DEV branches and merged back onto the MAIN line. Once code on the MAIN line is good enough, stabilization starts on a RELEASE branch. After that point, bug fixes take place on the RELEASE branch and are merged back into the MAIN line. Note that 'stable' and 'good enough' mean different things to different organizations.
We have many projects that use a common base of shared components (DLLs).
Currently the development build for each project links against DLLs built from the trunk of the components (i.e. trunk builds use the DLLs from other trunk builds).
When we do a release build, we have a script that goes through the project files and replaces the trunk references with specific numbered versions of the components (which are built from a tagged branch).
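A rough sketch of what such a script can look like (the project-file glob, path layout and version mapping here are hypothetical; a real script has to match your own reference format):

    import glob
    import re

    # Hypothetical mapping of each shared component to the tagged version the
    # release build should link against.
    RELEASE_VERSIONS = {
        "ComponentA": "2.3.1",
        "ComponentB": "1.0.5",
    }

    for proj in glob.glob("**/*.csproj", recursive=True):
        with open(proj, "r", encoding="utf-8") as f:
            text = f.read()

        # Rewrite references like ..\ComponentA\trunk\.. to ..\ComponentA\tags\2.3.1\..
        for name, version in RELEASE_VERSIONS.items():
            text = re.sub(
                rf"{name}\\trunk",
                rf"{name}\\tags\\{version}",
                text,
            )

        with open(proj, "w", encoding="utf-8") as f:
            f.write(text)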
I think this weakens the testing that we do during development, because the project that I am actually working on is using different DLLs from what the release build will be using. I would like to always develop against the numbered versions of the components and only ever update them when there is a specific need.
However, others in the team argue that unless we develop against trunk (and update to the newer versions of the components with each release) we will have the problem that (a) our products will hardly ever update to the newer versions of the components, and then (b) when we do need to update it will be a huge task because the component source/interfaces will have changed so much.
What practices do you follow, and why?
Edit: Sorry all, I have just realised I have confused things by mentioning that there are several main products sharing components - although they share the components, they don't run on the same PCs. My concern relates to the fact that, because the components are likely to change with each release of a product (even though there was no specific requirement to update the component), testing could miss some subtle change made in a component that is unrelated to the specific work being done on the product.
Hmm, I may be in a minority here, but this comes down to release management.
Developing against the trunk of a set of shared components means, by definition, that the components are a "moving target": a developer using those shared components won't necessarily know if a newly found defect or failure is due to the project code or the shared components, which leads to a loss of productivity, IMNSHO.
The "shared components" have a release cycle all their own. Give your other developers a break and fix the version of the shared components that the projects are going to use, using tags, labels or branches to identify the shared-component release. On the next iteration for the projects, bump up to the latest "stable" or "production" build of the shared components.
There's another "smell" here, if you'll pardon the expression. Having "shared components" whose "source/interfaces will have changed so much" between project releases sounds like the components aren't so solid or shouldn't necessarily be shared.
See also the answer to this question Shared components throughout all projects, is there a better alternative than svn:externals?
You should have strong interfaces that rarely change, so changing versions shouldn't be that hard.
Separating the versions and working against specific versions will increase overhead when you need to change, but it should also encourage fewer interface changes overall, which will help in the long term.
We develop against multiple branches and trunk simultaneously and we have chosen to build and test each branch with the code we'll be pushing out to production. I don't think it is safe any other way.
Basically, if a developer is working on trunk, all they have to do is worry about building from trunk and committing code to trunk.
Any developer working on a branch needs to build and test off that branch (there are multiple projects, all branched/tagged for the same build/release). When they commit changes to that branch, they must also merge those individual changes into trunk.
We expect all developers to be familiar with SCM (SVN) and to be capable of maintaining multiple branches of code. As a team we handle major framework shifts or huge code changes to minimize troublesome merging.
Two things here. First, I think you're right; you want to build against the most current development versions, not against the old versions. If you haven't already, you will see a situation in which the build-for-release blows up and you have to do an all-nighter cleaning up the diffs.
I'm personally a fan of the "commit to trunk, release from branch" model anyway. All commits go to the trunk, overnight builds or CI builds are against the trunk, and people create branches freely. When you have a trunk that meets acceptance criteria, tag a release candidate, BUT KEEP MAKING UPDATES TO THE TRUNK. If you have a long release cycle, then you might have changes for release n+1 being added to the trunk, but ideally you should just shorten your release cycle instead. If there are changes to the trunk that shouldn't be in the released version, AND you have a problem that requires changes, create a branch against the tagged version, and make sure you merge any changes back to the trunk once you have an actual release.
We are using the SCons build system, and have our own file in the root directory which specifies what version of each library we're going to use when building the application.
That reduces the need to change version names in several locations, as you mentioned.
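A minimal sketch of that idea (library names, versions and the path layout are hypothetical; a real SConstruct will differ), pinning every shared library to an explicit version in one place:

    # SConstruct -- executed by SCons, so Environment/Glob are available as globals.
    LIB_VERSIONS = {
        "DataAccessLib": "1.4.2",
        "BusinessLogicLib": "2.0.0",
    }

    env = Environment()

    # Point the linker at the pinned version of each shared component.
    env.Append(LIBPATH=[f"components/{name}/{version}/lib"
                        for name, version in LIB_VERSIONS.items()])
    env.Append(LIBS=list(LIB_VERSIONS.keys()))

    env.Program("app", Glob("src/*.c"))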
Whether (b) is a valid argument depends on how often your shared components change and by how much. If they change often in your workplace, it might be valid that you are "forced" to develop off the newest version. Whether that in itself is a problem is a valid question.
However, from your side of things, I don't see how you can push code into production without it being tested against the shared components being used in production. Do you do a second test cycle against the release build? Do you just pray that nothing breaks? Frankly, (b) can be reversed in these cases to support your point of view: if the trunk is different enough from the most recent tagged branch, then effort has to be made to ensure your app works properly with it.
If your shared components are tagged often enough, then your colleagues are probably right, and it's easier to manage the incremental changes from the most recent tag to the trunk than it is to manage the change from arbitrary version X (determined at the last build) to arbitrary version Y (determined when you choose to upgrade).