Managing dependency versions of services in a microservice architecture - deployment

In my current team we are running close to 70-80 microservices in the cloud. The challenge we face during each release is managing the dependency build versions of each service. Not all microservices
have changes in every release, so we cannot blindly bump the versions of every service.
It takes a considerable amount of time to understand the current scope of the release and change the dependency versions for only the affected services, leaving the rest unchanged. Is there a better way of managing dependencies across releases? I am looking mainly for an open source tool or process that our team can use.

Related

How to avoid microservice dependency without slowing down your release process?
We are deploying services in a microservice-based project, but since many small services are going through parallel development, we are struggling with deployment.
Example -
Admin build - v1.1.1
This has a dependency on some other build, v.x.x.x,
so until build x is fully tested we cannot release admin v1.1.1.
Is there any solution to make the deployment process smoother?
Two pointers I can give you:
Always keep your APIs backwards compatible - this way you reduce the dependency on any one particular version, creating higher isolation.
Use feature flags - when you change behaviour you can deploy the code without it having to be active at that exact moment. Wrap your changes in feature flags and switch them on when you are ready (see the sketch below).
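A minimal sketch of the feature-flag idea in TypeScript; the flag name, prices, and in-memory flag store are invented for illustration - real setups usually read flags from a config service or database so they can be toggled without redeploying:

    // Hypothetical in-memory flag store; a real one would live outside the code
    // so it can be toggled at runtime without a new deployment.
    const flags: Record<string, boolean> = {
      useNewPricing: false, // new behaviour ships dark and is switched on when ready
    };

    function isEnabled(flag: string): boolean {
      return flags[flag] ?? false;
    }

    // Both code paths are deployed together; the flag decides which one runs.
    function calculatePrice(basePrice: number): number {
      if (isEnabled("useNewPricing")) {
        return basePrice * 1.2; // new behaviour, active only once the flag is flipped
      }
      return basePrice * 1.1; // existing behaviour remains the default
    }

Because the new path stays inert until the flag is flipped, the service can be deployed independently of whether its consumers or its own dependencies are ready for the new behaviour.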

Microservices and version control: how to handle deployment

I am currently trying to figure out how to handle version control with microservices.
From what I have read, the best strategy is to have a separate git repository for each microservice.
However, when it comes to deployment, having to upload multiple git repositories seems pretty complex.
Specifically, I am scratching my head over how I would deploy an update where multiple microservices require changes that depend on each other, and how to roll back to the previous versions should there be an issue with a production deployment.
This seems like a headache that most developers who use microservices have had to deal with.
Any advice would be greatly appreciated, especially if this could be done with an existing library rather than building something from scratch.
Thanks,
Simon
There is no easy answer or library that can solve the problem, but there are strategies that help. I have outlined a few below (a small sketch of the first two follows the list).
Backward compatibility of a service - whenever you release, make sure that your API (REST or otherwise) still works for previous consumers; this can be done by providing default values for the newer attributes.
Versioning of the API - when the changes you are making are large or breaking, introduce a new version of the API so that older consumers can continue to work with the previous version.
Canary deployment - when you deploy a new version of a microservice, route only a small percentage of calls to the new version and the rest to the previous version. Observe the behaviour and roll back if required.
Blue-green deployment - have two production environments: blue, which is proven and working, and green, which is staging and contains the latest release. When testing of the green environment is done and you have enough confidence, route all calls to green.
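A minimal TypeScript sketch of the first two strategies; the order shape, field names, and route paths are invented for illustration:

    // Hypothetical order service: v2 adds a "currency" field.
    interface OrderV1 { id: string; total: number; }
    interface OrderV2 { id: string; total: number; currency: string; }

    // Backward compatibility: data written by an older producer gets a default
    // value for the newer attribute instead of breaking the consumer.
    function toOrderV2(raw: OrderV1 & Partial<OrderV2>): OrderV2 {
      return { ...raw, currency: raw.currency ?? "USD" };
    }

    // API versioning: expose both versions side by side so old consumers keep
    // calling /v1 while new consumers move to /v2.
    function handleOrder(path: string, order: OrderV2): OrderV1 | OrderV2 {
      if (path.startsWith("/v1/orders")) {
        const { currency, ...v1Shape } = order; // strip fields v1 clients don't expect
        return v1Shape;
      }
      return order; // /v2/orders returns the full shape
    }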
References
Micro-services versioning
Canary deployment
Blue green deployment
Here's a plugin I wrote using some references: https://github.com/simrankadept/serverless-ssm-version-tracker
NPM package:
https://www.npmjs.com/package/serverless-ssm-version-tracker
The version format supported is YYYY.MM.DD.REVISION
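For illustration only (this is not the plugin's actual code), a version string in that format could be produced like this; the revision counter would normally be read from and incremented in a store such as SSM:

    // Builds a YYYY.MM.DD.REVISION string, e.g. buildVersion(3) on 5 March 2024 -> "2024.03.05.3".
    function buildVersion(revision: number, now: Date = new Date()): string {
      const pad = (n: number) => String(n).padStart(2, "0");
      return `${now.getFullYear()}.${pad(now.getMonth() + 1)}.${pad(now.getDate())}.${revision}`;
    }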

Best strategy for deploying test and dev app versions to Google Cloud Platform

Google Cloud Platform
I've got an Angular (2) app and a Node.js middleware (Loopback) running as services in App Engine within a project.
For the database, we have a Compute Engine instance running PostgreSQL in that same project.
What we want
The testing has gone well, and we now want to have a test version (for ongoing upgrade testing/demo/etc) and a release deployment that is more stable for our initial internal clients.
We are going to use a different database in psql for the release version, but could use the same server for our test and deployed apps.
Should we....?
create another GCP project and another gcloud setup on my local box to deploy to that new project for our release deployment,
or is it better to deploy multiple versions of the services to the single project with different prefixes - and how do I do that?
Cost is a big concern for our little nonprofit. :)
My recommendation is the following:
Create two projects, one for each database instance. You can mess around all you want in the test project, and don't have to worry about messing up your prod deployment. You would need to store your database credentials securely somewhere. A possible solution is to use Google Cloud Project Metadata, so your code can stay the same between projects.
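As an illustration of the project-metadata idea, code running in either project can read per-project values from the metadata server, so the same code picks up different credentials in each project. The attribute name db-connection-string below is invented; this is a sketch, not the only way to do it:

    // Reads a custom project attribute from the GCP metadata server (uses Node 18+ global fetch).
    async function getProjectAttribute(key: string): Promise<string> {
      const res = await fetch(
        `http://metadata.google.internal/computeMetadata/v1/project/attributes/${key}`,
        { headers: { "Metadata-Flavor": "Google" } } // header required by the metadata server
      );
      if (!res.ok) throw new Error(`Metadata lookup failed: ${res.status}`);
      return res.text();
    }

    // Usage (hypothetical attribute): const dsn = await getProjectAttribute("db-connection-string");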
When you are ready to deploy to production, I would recommend deploying a new version of your App Engine app in the production project, but not promoting it to the default.
gcloud app deploy --no-promote
This means customers will still go to the old version, but the new version will be deployed so you can make sure everything is working. After that, you can slowly (or quickly) move traffic over to the new version.
At about 8:45 into this video, traffic splitting is demoed:
https://vimeo.com/180426390
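Traffic can also be shifted gradually from the command line; the service name and version ids below are placeholders:

    gcloud app services set-traffic default --splits old-version=0.9,new-version=0.1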
Also, I would recommend aggressively shutting down unused App Engine Flexible deployments to save costs.

Behavior difference between Actor and Service projects in Azure Service Fabric

In an Actor project, the AssemblyVersionAttribute value is used to update the ServiceManifest version, along with the code and config version. There is no such behavior for Service projects.
This updated version is also used to update the ServiceManifestRef's ServiceManifestVersion reference in the ApplicationManifest. While the ApplicationManifest is modified on every build, it doesn't appear that a manually set version within the Service project's ServiceManifest is updated in the ApplicationManifest either.
Is this planned or intended behavior for Service projects?
I'm running Visual Studio 2015 RC, the first preview of the Service Fabric SDK, and 4.0.95-preview1 of the NuGet packages.
Short answer: This behavior difference is temporary as we improve our tooling support for versioning and upgrade.
Slightly longer answer: Part of the original goal of the Service Fabric actor framework was to abstract away the details of manipulating the application and service manifests so that you can truly focus on your business logic. Hence, the SDK includes a tool (called FabActUtil) which is responsible for doing some of that manipulation on your behalf as a post-build step. There is currently no such tool for reliable services projects. We are considering options for reconciling this difference as part of adding upgrade support to Visual Studio. We need to strike a balance between keeping you in control of your versioning scheme and taking care of the chore of cascading your version changes throughout the application as required.

Solutions for automated deployment in developer environments?

I am setting up an automated deployment environment for a number of decoupled services that are in active development. While I am comfortable with the automated deployment/configuration management aspect, I am looking for strategies on how best to structure the deployment environment to make things a bit easier for developers. Some things to take into consideration:
Developers are generally building web applications, web services, and daemons -- all of which talk to one another over HTTP, sockets, etc.
The developers may not have everything running on their local machine, but still need to be able to quickly do end-to-end testing by pointing their machine at the environment.
My biggest concern with continuous deployment is that we have a large team and I do not want to be constantly restarting services while developers are working locally against those remote servers. On the flip side, delaying deployments to this development environment makes integration testing much more difficult.
Can you recommend a strategy that you have used in this situation in the past that worked well?
Continuous integration doesn't have to mean continuous deployment. You can compile/unit test/etc. the code "continuously" throughout the day without deploying it and only deploy at night. This is often a good idea anyway - to deploy at night or on demand - since people may be integration testing during the day and wouldn't want the codebase to change out from under them.
Consider how much of the software developers can test locally. If a lot, they shouldn't need the environment constantly. If not a lot, it would be good to set up mocks/stubs so that much more can be tested on a local server. Then the deployed environment is only needed for true integration testing and doesn't need to be updated constantly throughout the day.
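A minimal sketch of the mock/stub idea in TypeScript; the inventory service, its interface, and the canned responses are invented for illustration:

    // The consumer depends on an interface, not on the remote service directly.
    interface InventoryClient {
      getStock(sku: string): Promise<number>;
    }

    // Real implementation calls the shared environment over HTTP.
    class HttpInventoryClient implements InventoryClient {
      constructor(private baseUrl: string) {}
      async getStock(sku: string): Promise<number> {
        const res = await fetch(`${this.baseUrl}/stock/${sku}`);
        return (await res.json()).quantity;
      }
    }

    // Stub used for local testing: no remote environment needed, answers are canned.
    class StubInventoryClient implements InventoryClient {
      async getStock(sku: string): Promise<number> {
        return sku === "out-of-stock-sku" ? 0 : 42;
      }
    }

    // Local runs wire up the stub; the deployed environment uses the real client.
    const inventory: InventoryClient = process.env.USE_STUBS
      ? new StubInventoryClient()
      : new HttpInventoryClient("http://inventory.dev.internal");

With this split, most day-to-day development runs against stubs, and the shared environment is only needed for true integration tests.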
I'd suggest setting up a CI server (Hudson?) and using it to control all deployments to both your QA and production servers. This forces you to automate all aspects of deployment and ensures that there are no ad-hoc restarts of the system by developers.
I'd further suggest that you consider publishing your build output to a repository manager like Nexus, Artifactory or Archiva. That way deployment scripts can retrieve any version of a previous build. The use of a repository manager would also enable your QA team to certify a release prior to its deployment to production.
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet and ControlTier can be used to further version control the configuration of your infrastructure.
I agree with Mark's suggestion of using Hudson for build automation. We have seen successful continuous deployment projects that use Nolio ASAP (http://www.noliosoft.com) to automatically deploy the application once the build is ready. As stated, Chef, Puppet and the like are good for middleware installation and configuration, but when you need to continuously release new application versions, a platform such as Nolio ASAP, which is application-centric, is better suited.
You should have your best IT operations folks create and approve the application release processes, and then provide an interface for developers to run these processes on approved environments.