Microservices and version control: how to handle deployment

I am currently trying to figure out how to handle version control with microservices.
From what I have read, the best strategy is to have a separate Git repository for each microservice.
However, when it comes to deployment, having to push multiple Git repositories seems pretty complex.
Specifically, I am scratching my head as to how I would deploy an update where multiple microservices require changes that depend on each other, and how I would roll back to the previous versions should there be an issue with a production deployment.
This seems like a headache that most developers who use microservices have had to deal with.
Any advice would be greatly appreciated, especially if this could be done with an existing library rather than building something from scratch.
Thanks,
Simon

There is no easy answer or library that will solve this problem for you; however, there are strategies that can help. I have outlined a few below.
Backward compatibility of services - Whenever you release, make sure that your API (REST or otherwise) still works for previous consumers. This can be done by providing default values for the newer attributes (see the sketch after this list).
Versioning of the API - When the changes you are making are large or breaking, introduce a new version of the API so that older consumers can continue to work with the previous version.
Canary deployment - When you deploy a new version of a microservice, route only a small percentage of calls to the new version and the rest to the previous version. Observe the behavior and roll back if required.
Blue-green deployment - Have two production environments: blue, which is proven and working, and green, which stages the latest release. When testing on the green environment is done and you have enough confidence, route all the calls to green.
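As a minimal sketch of the first two points, assuming an Express-style HTTP service in TypeScript (the route paths, field names, and default value here are hypothetical):

```typescript
import express from "express";

const app = express();

// v1 consumers know nothing about the newer "currency" attribute.
interface PriceV1 {
  amount: number;
}

// v2 adds an attribute. Older data is upgraded with a default value,
// so existing consumers keep working unchanged.
interface PriceV2 extends PriceV1 {
  currency: string;
}

function toV2(price: PriceV1): PriceV2 {
  // Backward compatibility: provide a default for the newer attribute.
  return { ...price, currency: "USD" };
}

// Keep the old version mounted so existing consumers are unaffected...
app.get("/v1/price", (_req, res) => {
  res.json({ amount: 42 });
});

// ...and introduce larger or breaking changes only under a new version prefix.
app.get("/v2/price", (_req, res) => {
  res.json(toV2({ amount: 42 }));
});

app.listen(3000);
```

The same idea applies in any framework: old consumers hit /v1 and never see the new attribute, /v2 consumers get the richer shape, and the default value is what keeps shared data readable by both.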
References
Microservices versioning
Canary deployment
Blue green deployment

Here's a plugin I wrote using some references: https://github.com/simrankadept/serverless-ssm-version-tracker
NPM package: https://www.npmjs.com/package/serverless-ssm-version-tracker
The version format supported is YYYY.MM.DD.REVISION

How to avoid microservice dependency without slowing down your release process
We are deploying a microservice-based project, but since we have many small services going through parallel development, we are struggling with deployment.
Example:
Admin build v1.1.1 depends on some other build v.x.x.x,
so until build x is fully tested we cannot release admin v1.1.1.
Is there any solution to make the deployment process smoother?
Two pointers I can give you:
Always keep your APIs backward compatible - this way you lower the dependencies between one version and the other, creating higher isolation.
Use feature flags - when you change behaviour, you can deploy the code without it having to be active at that exact time. Wrap your changes in feature flags and switch them on when you are ready (a minimal sketch follows below).
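A minimal sketch of the feature-flag pointer in TypeScript, with the flag read from an environment variable (in practice you would likely use a flag service; the flag and function names are hypothetical):

```typescript
// Deployment and activation are decoupled: the new code ships dark
// and is switched on later by flipping the flag, not by redeploying.
function isEnabled(flag: string): boolean {
  return process.env[`FEATURE_${flag.toUpperCase()}`] === "true";
}

function calculateDiscount(total: number): number {
  if (isEnabled("new_pricing")) {
    // New behaviour: deployed, but inactive until the flag is flipped.
    return total * 0.15;
  }
  // Existing behaviour stays the default, so nothing changes for users.
  return total * 0.1;
}

console.log(calculateDiscount(100)); // 10 until FEATURE_NEW_PRICING=true
```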

Packaging a Kubernetes-based application

We have multiple (20+) services running inside Docker containers, managed using Kubernetes. These services include databases, streaming pipelines, and custom applications. We want to make this product available as an on-premises solution so that it can be easily installed (a one-click installation sort of thing), hiding all the complexity of the infrastructure.
What would be the best way of doing this? Currently we have scripts managing this, but as we move into production there will be frequent upgrades, and it will become more and more complex to manage all the dependencies.
I am currently looking into Helm and am wondering if I am exploring in the right direction. Any guidance will be really helpful to me. Thanks.
Helm seems like the way to go, but what you need to think about, in my opinion, is how you will deliver updates to your software. For example, will you provide a single 'version' of your whole stack that translates into a particular composition of infrastructure setup and microservice versions, or will you allow your customers to upgrade individual microservices as they are released? You can have one huge Helm chart for everything, or you can use, as I do in most cases, an "umbrella" chart that contains subcharts for all the microservices.
My usual setup contains a subchart for every service; service names are then correctly namespaced, so they can be referenced within the release as .Release.Name-subchart[-optional]. When I need to upgrade, I just upgrade the whole chart with something like --reuse-values --set subchart.image.tag=v1.x.x, which gives granular control over each service's version. I also gate each subchart's resources with if .Values.enabled so I can individually enable/disable each subchart's resources (a sketch follows below).
The ugly side of this is that if you want to release a single-service upgrade, you still need to run the whole umbrella chart, leaving more surface for some kind of error. On the other hand, it gives you the ability to deploy the whole solution in one command (the default tags are :latest, so a clean install will always install the latest published versions, which then get updated with tagged releases).
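For illustration, a minimal sketch of what such an umbrella chart's Chart.yaml might look like (Helm 3 layout; the service names are hypothetical):

```yaml
# Chart.yaml of the umbrella chart: one dependency per microservice subchart.
apiVersion: v2
name: my-product
version: 1.0.0
dependencies:
  - name: admin                        # hypothetical service subchart
    version: ">=0.1.0"
    repository: "file://charts/admin"
    condition: admin.enabled           # gate so the subchart can be disabled
  - name: billing                      # hypothetical service subchart
    version: ">=0.1.0"
    repository: "file://charts/billing"
    condition: billing.enabled
```

A single-service upgrade is then something like helm upgrade my-product . --reuse-values --set billing.image.tag=v1.2.3, which leaves every other subchart's values untouched.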

How do we control retention of application type versions?

We want to execute external integration tests and manually call a rollback if something is wrong.
We're using the 'Service Fabric Application Deployment' task in Team Services (VSTS), and it seems to keep only the latest version in the cluster.
Under Cluster --> Applications --> [Application], and then under Essentials, only one row item is listed, which shows the latest version.
Also, attempting Start-ServiceFabricApplicationUpgrade results in 'Application type and version not found.'
How do we alter the retention behaviour for previous versions of application types? (And what is the default?)
I don't have the answer to your question, but I do offer this thought:
While I understand there may be a valid use case out there for trying to do this, I think a more accepted approach is to set up a test environment that matches production very closely. Deploy to test and test the heck out of it before approving the deployment to production.
One of the main selling points of Service Fabric is its ability to be so redundant, yet with your proposed workflow you are deploying code you're not entirely confident in to that environment. I think that really goes against what Service Fabric offers you.
Since you will be testing so thoroughly in the test environment, hopefully anything you end up finding in production is small enough to be fixed through a patch a few hours later, or however fast you can fix it.

Behavior difference between Actor and Service projects in Azure Service Fabric

In an Actor project, the AssemblyVersionAttribute value is used to update the ServiceManifest version, along with the code and config versions. There is no such behavior for Service projects.
This updated version is also used to update the ServiceManifestRef's ServiceManifestVersion reference in the ApplicationManifest. While the ApplicationManifest is modified on every build, it doesn't appear that a manually set version within the Service project's ServiceManifest is reflected in the ApplicationManifest either.
Is this planned or intended behavior for Service projects?
I'm running Visual Studio 2015 RC, the first preview of the Service Fabric SDK, and 4.0.95-preview1 of the NuGet packages.
Short answer: This behavior difference is temporary as we improve our tooling support for versioning and upgrade.
Slightly longer answer: Part of the original goal of the Service Fabric actor framework was to abstract away the details of manipulating the application and service manifests so that you can truly focus on your business logic. Hence, the SDK includes a tool (called FabActUtil) which is responsible for doing some of that manipulation on your behalf as a post-build step. There is currently no such tool for reliable services projects. We are considering options for reconciling this difference as part of adding upgrade support to Visual Studio. We need to strike a balance between keeping you in control of your versioning scheme and taking care of the chore of cascading your version changes throughout the application as required.

Solutions for automated deployment in developer environments?

I am setting up an automated deployment environment for a number of decoupled services that are in active development. While I am comfortable with the automated deployment/configuration management aspect, I am looking for strategies on how best to structure the deployment environment to make things a bit easier for developers. Some things to take into consideration:
Developers are generally building web applications, web services, and daemons -- all of which talk to one another over HTTP, sockets, etc.
The developers may not have all of the services running on their local machines, but they still need to be able to quickly do end-to-end testing by pointing their machine at the environment.
My biggest concern with continuous deployment is that we have a large team, and I do not want to be constantly restarting services while developers are working locally against those remote servers. On the flip side, delaying deployments to this development environment makes integration testing much more difficult.
Can you recommend a strategy that you have used in this situation in the past that worked well?
Continuous integration doesn't have to mean continuous deployment. You can compile/unit test/etc. the code "continuously" throughout the day without deploying it, and only deploy at night. This is often a good idea anyway - deploying at night or on demand - since people may be integration testing during the day and wouldn't want the codebase to change out from under them.
Consider how much of the software developers can test locally. If a lot, they shouldn't need the environment constantly. If not a lot, it would be good to set up mocks/stubs so that much more can be tested on a local server (a minimal sketch follows below). Then the deployed environment is only needed for true integration testing and doesn't need to be updated constantly throughout the day.
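As a minimal sketch of the mock/stub idea, in TypeScript using only Node's built-in http module: a local stand-in for a remote dependency returns canned responses, so a developer can point their service at localhost instead of the shared environment (the endpoint and payload are hypothetical):

```typescript
import * as http from "http";

// Canned responses for the remote service this developer depends on.
const stubbedResponses: Record<string, unknown> = {
  "/users/42": { id: 42, name: "Test User" },
};

// A tiny stub server: point the service under development at
// http://localhost:4000 instead of the shared development environment.
http
  .createServer((req, res) => {
    const body = stubbedResponses[req.url ?? ""];
    if (body !== undefined) {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(body));
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(4000);
```

With stubs like this covering the noisy dependencies, the shared environment only needs a nightly or on-demand deployment for true integration testing.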
I'd suggest setting up a CI server (Hudson?) and using it to control all deployments to both your QA and production servers. This forces you to automate all aspects of deployment and ensures that there are no ad hoc restarts of the system by developers.
I'd further suggest that you consider publishing your build output to a repository manager like Nexus, Artifactory, or Archiva. That way, deployment scripts can retrieve any version of a previous build. The use of a repository manager would also enable your QA team to certify a release prior to its deployment to production.
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet, and ControlTier can be used to further version-control the configuration of your infrastructure.
I agree with Mark's suggestion of using Hudson for build automation. We have seen successful continuous deployment projects that use Nolio ASAP (http://www.noliosoft.com) to automatically deploy the application once the build is ready. As stated, Chef, Puppet, and the like are good for middleware installation and configuration, but when you need to continuously release new application versions, a platform such as Nolio ASAP, which is application-centric, is better suited.
You should have your best IT operations folks create and approve the application release processes, and then provide an interface for the developers to run these processes on approved environments.