I am currently in a weird situation. We have one repository with multiple branches (a CI branch and an official branch). Our current job structure has two jobs per platform we support, one for each branch. Our official build has different steps than our development build, all of which can be run using a script. My question: can we set up one job per platform that watches both branches and runs different steps depending on which branch changed? I am aware of the environment variables that the GitHub plugin sets. Is that something I would need to use here?
Thanks!
At work, we're now using GitHub and, with that, GitHub flow. My understanding of GitHub flow is that there is a master branch and feature branches. Unlike git flow, there is no develop branch.
This works quite well on projects that we've done, and simplifies things.
However, for our products, we have a development and a production environment. For the production environment we use the master branch, but for the development environment we're not sure what to use.
The only idea I can think of (sketched below) is:
When a branch is merged with master, redeploy master using GitHub actions.
When any branch other than master is pushed, set up a GitHub Action to deploy it to the development environment.
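Roughly, I picture it as a single workflow (a sketch only; deploy.sh stands in for whatever deployment step we'd actually run):

```yaml
# Hypothetical workflow for the idea above: master redeploys production,
# any other branch deploys to development. deploy.sh is a placeholder.
name: deploy
on:
  push:
    branches: ['**']
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy master to production
        if: github.ref == 'refs/heads/master'
        run: ./deploy.sh production
      - name: Deploy any other branch to development
        if: github.ref != 'refs/heads/master'
        run: ./deploy.sh development
```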
Currently, for projects that require a development environment, we're essentially using git flow (features -> develop -> master).
Do you think my idea is sensible, and if not what would you recommend?
Edit:
Just to clarify, I'm asking the best way to implement development with GitHub Flow and not git flow.
In my experience, GitHub Flow with multiple environments works like this: merging to master does not automatically deploy to production. Instead, merging to master creates a build artifact that can be promoted through environments using ChatOps tooling.
For example, pushing to master creates a build artifact named something like my-service-47cbd6c, a combination of the service name and the short commit hash. This is pushed to an artifact repository of some kind. The artifact can then be deployed to various environments using tooling such as ChatOps-style slash commands to trigger the deploy. This tooling could also have checks to make sure test environments are not skipped, for example. Finally, the artifact is promoted to production.
So for your use case with GitHub Actions, what I would suggest is this (a minimal workflow sketch follows the list):
Pushing to master creates the build artifact and automatically deploys it to the development environment.
Test in development
Promote the artifact by deploying to production using a slash command. The action slash-command-dispatch would help you with this.
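A minimal sketch of steps 1 and 2, assuming hypothetical build.sh, push-artifact.sh and deploy.sh scripts standing in for your real build, artifact-publishing and deployment tooling:

```yaml
# Hypothetical workflow: every push to master builds an artifact named
# after the service and short commit hash, publishes it, and deploys it
# to development. All the *.sh scripts are placeholders.
name: build-and-deploy-dev
on:
  push:
    branches: [master]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Derive the artifact name from the short commit hash
        run: echo "ARTIFACT=my-service-${GITHUB_SHA::7}" >> "$GITHUB_ENV"
      - name: Build
        run: ./build.sh "$ARTIFACT"
      - name: Push to the artifact repository
        run: ./push-artifact.sh "$ARTIFACT"
      - name: Deploy to development
        run: ./deploy.sh development "$ARTIFACT"
```

Step 3 would then live in a separate workflow triggered by the dispatched slash command, deploying the same artifact to production without rebuilding it.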
You might also consider the notion of environments (as illustrated here)
Recently (Feb. 2021), you can:
## Limit which branches can deploy to an environment
You can now limit which branches can deploy to an environment using Environment protection rules.
When a job tries to deploy to an environment with Deployment branches configured, Actions will check the value of github.ref against the configuration; if it does not match, the job will fail and the run will stop.
The Deployment branches rule can be configured to allow:
All branches – Any branch in the repository can deploy
Protected branches – Only branches with protection rules
Selected branches – Branches matching a set of name patterns
That means you can define a job that deploys to the dev environment, and that job will only run if it was triggered by a commit pushed to a given branch (master in your case).
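As a sketch, the deployment job only needs to reference the environment; the branch restriction itself lives in the environment's protection rules rather than in the workflow file (deploy.sh is a placeholder):

```yaml
# Hypothetical workflow: the job targets the "development" environment.
# If that environment's "Deployment branches" rule only allows master,
# runs triggered from any other branch fail at this job.
name: deploy-dev
on: push
jobs:
  deploy-dev:
    runs-on: ubuntu-latest
    environment: development
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh development   # placeholder deploy step
```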
For anyone facing the same question or wanting to simplify their process away from gitflow, I'd recommend taking a look at this article. Whilst it doesn't talk about GitHub flow explicitly, it does effectively provide one solution to the OP.
Purists may consider this not strictly Gitflow, but to my mind it's a simple tweak that makes the deployment and CI/CD strategy more explicit in git. I prefer this approach to adding some magic to the tooling, which can make a process harder for devs to follow and understand.
I think the Gitflow intro is written fairly pragmatically as well:
Different teams may have different deployment strategies. For some, it may be best to deploy to a specially provisioned testing environment. For others, deploying directly to production may be the better choice...
The diagram in the article sums it up well:
So here we have Master == Gitflow main, and the useful addition is the temporary release branch from which you can deploy to other environments such as development. What is worth considering is what you choose to call this temporary branch: in the above it's considered a release; in your process it may be a test branch, etc.
You can take or leave the squashing and tagging and the tooling will change between teams. Equally you may or may not care about actual version numbers.
This isn't a million miles away from VonC's answer, the difference is the process is more tightly defined and it's more towards having multiple developers merge into a single branch & apply fixes in order to get a new version ready for production. It may well be that you configure the deployment of this temporary branch via a naming convention as in his answer.
The way I've implemented this flow is using PRs. I did it with Azure DevOps, but I'd say that the same can be achieved with GitHub Actions.
When you have a branch that you intend to test and eventually merge to master and release to production, you create a PR from that branch to master. The PR will trigger a pipeline, which will run your build, static analysis and tests. If those pass, the PR is deployed to a test environment where further automated and manual testing can happen. The PR can be reviewed and approved by other developers and, if you need to, by QA after manual testing. You can configure GitHub PR rules to enforce the approvals. Once approved, you can merge the PR to master.
What happens once in master is independent of the workflow above, but most likely a new pipeline will be triggered, which will build a release candidate and run the whole path to production (with or without manual intervention).
One of the tricks is how the PR pipeline decides which environment to deploy the PR to. I can think of three options (option 1 is sketched at the end of this answer):
Create an environment on the fly which will be killed once the PR is merged or closed. This is the most advanced and flexible option. This would require the system to publish the environment location to the PR.
Have a pool of environments and have the automation figure out which are free and automatically choose one. The environments could be stopped, so you find an environment that is stopped, start it up and deploy there. Once the PR is closed/merged, stop the environment again. You can publish the environment location to the PR.
Add a label to the PR indicating the environment (e.g. env-1, env-2, etc.). This is the simplest option, but it requires developers to look at the open PRs to see which environments are already in use by other PRs, to avoid overwriting other people's code.
With all these options, once the PR is created, you can just push new commits to the branch and the environment will be updated.
You also need to decide what you want to do when a new commit is pushed to master. You most likely want to trigger a new PR build to update the environments with the latest master, but you can do this automatically or manually, depending on how busy your master is.
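As a sketch of option 1 in GitHub Actions terms, a PR-triggered workflow can derive the environment name from the PR number, so each PR gets its own environment and is redeployed on every new commit (the URL and the *.sh scripts are placeholders):

```yaml
# Hypothetical sketch of option 1: each PR deploys to an environment
# named after the PR number. build-and-test.sh and deploy.sh are placeholders.
name: pr-environment
on:
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  deploy-preview:
    runs-on: ubuntu-latest
    environment:
      name: pr-${{ github.event.number }}
      url: https://pr-${{ github.event.number }}.example.test
    steps:
      - uses: actions/checkout@v4
      - run: ./build-and-test.sh                          # build, static analysis, tests
      - run: ./deploy.sh "pr-${{ github.event.number }}"  # deploy to the PR environment
```

A companion workflow listening for the PR closed event would tear the environment down again.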
Nathan, adding a development branch is a good idea: you can work on changes in a new branch, test them in the dev environment, and merge them into the master branch after getting sign-off to move to the production environment.
Don't forget to perform regression testing on the merged master branch, to check that both old and new features work before releasing your code for installation in production.
Seeking advice on the following problem.
We have a stack of microservices written in NodeJS and running on a Kubernetes cluster. We have a separate GitHub repository for each of them and currently use CircleCI for our CI/CD process. As of now we have about 25-30 repos, but their number will increase. The problem we face is that we need a CircleCI config YAML in each repository; if we need to change something globally in our CI/CD pipeline, we have to update it in every repository, which is obviously a pretty painful process, and CircleCI doesn't support one config file for multiple repos.
I believe our situation/setup in terms of multiple repos is not unique. Does anybody have experience with, or ideas about, a CI tool that supports the described scenario of having one config file for multiple repos?
Below are 2 approaches I considered when I had to deal with a similar situation. You'd need to define for yourself what you want to optimize for, and make a decision based on that.
Optimizing for flexibility and isolation. In this scenario, instead of making all repos use the same config file, you keep the file in each repo and automate how you manage it.
For example: you'll have to create a CLI tool or a script to automate copying the Circle config file and committing it to the appropriate repos (whenever a change needs to happen).
PROS: isolation. All repos have their own configuration; if you're ever going to have a Golang microservice, or a different config in one of your NodeJS services, modifying its CI pipeline won't be an issue.
CONS: a bit of extra work to write automation around managing this config separately
Optimizing for easier maintainability. Figure out how to share a single pipeline configuration across your repos.
For example: use git submodules to keep the circle.yml file, or use a separate npm package containing it. Another alternative is a CI tool that supports templating: define a pipeline template and re-use it for each individual pipeline (one CI tool that supports this is TeamCity).
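If you stay on CircleCI, one mechanism worth naming for this second approach is orbs: the shared pipeline logic lives in a published orb, and each repo's config shrinks to a thin reference. A sketch, assuming a hypothetical orb called myorg/shared-pipeline that defines a build-and-deploy job:

```yaml
# Hypothetical per-repo .circleci/config.yml: the real pipeline lives in
# the shared orb, so a global change means publishing a new orb version
# instead of editing 30 repos.
version: 2.1
orbs:
  shared: myorg/shared-pipeline@1.0.0   # hypothetical published orb
workflows:
  main:
    jobs:
      - shared/build-and-deploy         # job defined once, inside the orb
```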
I personally picked approach #1 in a similar situation. IMHO, this is a price one has to pay when one decides to go with microservices, so as not to end up with a platform that is rather a distributed monolith :) Also, I really like it when all repos are descriptive and self-contained, and CI pipeline-as-code is one of the ways to help achieve that.
In my mind you have 2 options: you could have a single CI job/config that can deploy any single/multiple services (if all the services are the same), or, if every service is different, you need a separate job/config for each. If it's somewhere in the middle, it's a question of whether you want a single job with a bunch of if/then statements, e.g. "if repo = user then do this special thing" (sketched below). The if/then approach worked fine for me up to a point, but eventually there were too many special cases and it was easier to just go with a unique config for each service.
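For illustration, the if/then flavor can look like this in a shared CircleCI config, keyed off the built-in CIRCLE_PROJECT_REPONAME variable (the deploy script and the special case are placeholders):

```yaml
# Hypothetical shared config: one deploy job with a per-repo special case.
version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: Deploy, special-casing the "user" repo
          command: |
            if [ "$CIRCLE_PROJECT_REPONAME" = "user" ]; then
              ./deploy.sh --run-migrations   # placeholder special case
            else
              ./deploy.sh                    # placeholder default deploy
            fi
workflows:
  main:
    jobs:
      - deploy
```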
I solved the issue of it "being hard to make a 1-line change across 30 git repos" by having a git superuser. Basically, normal users can only merge using PRs, but the superuser can commit directly. Since I'm only changing things like config files, there are rarely merge conflicts or broken test cases, so it works. Here's some sample code:
#!/usr/bin/env bash
# Walk every repo cloned under /temp, apply the same edit, commit and push.
for dir in /temp/*/
do
  cd "$dir" || continue
  git pull
  # Edit the file in place; the original redirected to a new file and never staged it
  sed -i 's/Nick/John/g' report.txt
  git add report.txt
  git commit -m "CI change" && git push
  cd ..
done
We used to have a big project that had SonarQube analysis run on it for every pull-request on GitHub. Everything worked fine.
Then we did some refactoring, and split the code into separate projects. Since the code is related, the repo is still the same. But, instead of running just one build+analysis we run multiple ones per pull-request.
Everything else works fine, except that the SonarQube GitHub plugin writes the problems found in the first build, then removes them in the second build and so on. So I get an email about problems in the first build, but when I go and look at the PR in GitHub, it's all green and no messages anywhere.
Ideally I would like to tell the SonarQube GitHub plugin that these builds should be handled as separate builds in the PR, but I haven't found a way to do that yet.
What you are trying to achieve is not possible with the SonarQube GitHub plugin. If you want PR analysis back, you have 2 ways:
Either you gather those projects under the same umbrella, making them modules of a top project
Or you extract them in different repositories
The best solution depends on how your "new" projects are coupled to each other. If they have the same lifecycle (~ the same versioning scheme), then it's best to gather them under a top project. If not (i.e. they can be released independently with different versions), then moving them to dedicated repositories would be the best approach.
It is possible, but requires a complex setup:
- A SonarQube project for each language.
- A GitHub user for each language
- In each SonarQube project, under General Settings -> Pull Requests, set a different access token to post back to GitHub for each project.
Now you will have 2 different commenters, one for each project.
I'm using Visual Studio Team Services to build my project which is stored in GitHub (here). The master branch contains multiple projects which make up the solution. Amongst those are a WebAPI project and a Cordova project. I need to build those using two separate build definitions in VSTS.
Previously I had set up my build definition and used the branch filters to filter on what had been pushed to the repo. For instance:
master/src/API
This worked, but it doesn't anymore. It seems as if the underlying code has changed. A filter of 'master' still works, and I understand that this feature is probably meant to filter specifically on branches, not on folders within a branch.
It's not a huge problem, but at this time all of my builds trigger with every check-in, even if nothing changed for that source code in the meantime. So I'm now wondering what a good solution for this issue would be:
Put every project in its own branch. Seems like a workaround.
Some other filter option or maybe another syntax or something?
Leave it as is and don't worry about the extra builds (but that itches, you know...)
Anyone running a similar set-up?
Path filters are not supported for VSTS GitHub CI builds; they are only available for Git CI builds on VSTS. You can vote for this User Voice suggestion: https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/15140571-enable-continuous-integration-path-filters-for-git
The workaround is, as you said, to put every project in its own branch.
I have been researching and am trying to figure out the best branching and deployment strategy to accomplish the requirements below. Maybe I’m missing something but it is more complicated than it seems. Ideally, we’d just have one permanent branch, ‘master’, that could have specific commits tagged to mark releases to production.
Our current strategy is based on Git Flow and has permanent branches ‘master’ (only has releases to production) and ‘develop’. The primary thing that complicates using a multiple permanent-branches model is the concept of “promoting” the same build from the staging environment to production. Currently, this needs to be done in a separate source code branch (deployments to staging come from ‘develop’, deployments to prod come from ‘master’).
Tools: Git (VSTS), TeamCity, Octopus Deploy
Requirements (feature and hotfix lifecycles):
All code is reviewed via pull requests (enforced via branch policies)
All code gets deployed to a staging environment for testing
We can quickly go back to any snapshot of code that was deployed previously
If testing is successful, then the same build can be “promoted” from our staging environment to production (no need to build again)
Features accumulate over time before pushing out to production as a single release. Hotfixes have to be able to go through without getting caught up in the "all or nothing" next regular release.
I like the idea of having one permanent branch with tags (re: "The master/develop split is redundant", http://endoflineblog.com/gitflow-considered-harmful), but having additional permanent branches may better facilitate deploying different lifecycles/versions (feature and hotfix) to Octopus.
I have been wrestling with how best to pull this off and I may be over complicating things. Any feedback is appreciated.
It seems you have a number of questions and they are quite broad... I'll add some comments to each of your requirements as a conversation starter, but this whole thread might get blocked by moderators as it is definitely not the style of questions SO was made for.
All code is reviewed via pull requests (enforced via branch policies)
I haven't looked at VSTS for ages, but I'd expect it already supports branch policies and pull requests, so I'm not sure there's anything you need here other than configuring settings in your repositories.
In case VSTS does not support that, you might consider moving to a tool that does e.g. BitBucket, GitHub, etc. Both of these have an on-premises version in case you can't (or don't want to) use the cloud hosted version.
All code gets deployed to a staging environment for testing
You achieve that by setting up lifecycles in Octopus Deploy, to make sure deployments/promotions follow the sequence you want.
We can quickly go back to any snapshot of code that was deployed previously
You already have source control, so all you need now is traceability from the code that is deployed in an environment, to the deployment version in Octopus Deploy, the build job in TeamCity, the branch and exact commit in your source control.
There's a few things that you can do, to achieve that:
Define a versioning scheme that works for you. I like to use semantic versioning: "major" and "minor" are defined by the developers, and the "patch" is the auto-incremented build number from TeamCity (%build.number%). Every git push builds the code and generates a unique build version (%major%.%minor%.%build.number%).
As part of the build steps in TeamCity, before you compile the code, make sure your source files are patched with the version number assigned by each build, the commit hash from your source control, and the branch name. E.g. if you are using .NET, make sure all the AssemblyInfo.cs files are updated with that version, so that it is embedded in the binaries. This allows anyone to query the version by looking at the properties of the binary files, and also allows you to display the app version in the app itself (e.g. status bar, footer, caption, about box, etc.)
Have TeamCity tag your source control with the version number of every build, so you can see it quickly in your source control history. You probably only want to do that for the master branch, though, which is the one you care about.
Have Octopus tag your source control with the deployment version number and the environment name, so that you can quickly see (from your source control) what got deployed where.
Steps 1 and 2 are the most important ones, really. 3 and 4 are just nice-to-have. Most of the time you'll just open the app in the environment, check the commit hash in the "About", and do a git checkout to that commit hash...
If testing is successful, then the same build can be "promoted" from our staging environment to production (no need to build again)
Again, Octopus Deploy lifecycles, and make sure anything different in each environment is defined in the configuration file of the application, which is updated during the Octopus deployment, using environment-specific variables.
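For instance, the application's config file can carry Octopus-style #{...} placeholders that get substituted with environment-specific variable values at deployment time (a sketch; the keys and variable names are made up):

```yaml
# Hypothetical app config template; Octopus replaces the #{...}
# placeholders per environment during deployment.
database:
  connectionString: "#{DatabaseConnectionString}"
api:
  baseUrl: "#{ApiBaseUrl}"
logging:
  level: "#{LogLevel}"
```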
In terms of branch workflow, this last requirement makes it mandatory to merge changes into master (or whatever your "production" branch is) before the deployment lifecycle can begin.