How do you track your current deployments?

Imagine there is an application consisting of a bunch of microservices. All of these microservices can be developed and deployed completely independently of each other. Each microservice can be "described" with several attributes - e.g. current API version, release version, commit hash, etc. Along with that, there are several environments used in the development process - e.g. a Testing environment (often called Sandbox), a Staging environment, a Pre-Release environment and, obviously, a Production environment.
Is there a convenient tool/way/approach to track, basically, which attribute is currently deployed to which environment? For instance, to get quick access to information like "what is the current version of the RESTful API in the Pre-Release environment"? Or a more complex one - "what was this version two months ago"? And, of course, to see the "global picture" as well?

There's no ready-to-use solution on the market yet, to my knowledge.
Some teams use GitOps (https://www.twistlock.com/2018/08/06/gitops-101-gitops-use/) to get ahead of the chaos that a lot of different microservices usually ship with.
Another technology in a somewhat different, yet related, direction is the service mesh, with Istio (https://istio.io/) being one example.
There are also testing approaches like contract testing or heavy integration tests, which are more expensive but also provide more confidence.
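Absent such a tool, a common stopgap is a small, append-only deployment log kept under version control - essentially the GitOps idea applied to the registry itself. A minimal sketch (the file layout and field names here are my own assumptions, not an existing tool):

```python
import datetime
import json
from pathlib import Path

# Hypothetical registry file; committing it to Git gives you history "for free".
LOG = Path("deployments.jsonl")

def record_deployment(service, environment, version, commit):
    """Append an immutable record of what was deployed where, and when."""
    entry = {
        "ts": datetime.datetime.utcnow().isoformat(),
        "service": service,
        "environment": environment,
        "version": version,
        "commit": commit,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def current(service, environment, as_of=None):
    """Latest record for a service/environment pair, optionally 'as of' a past time."""
    if not LOG.exists():
        return None
    latest = None
    for line in LOG.read_text().splitlines():
        e = json.loads(line)
        if e["service"] == service and e["environment"] == environment:
            if as_of is None or e["ts"] <= as_of:  # ISO timestamps sort lexically
                latest = e
    return latest

record_deployment("restful-api", "pre-release", "2.3.1", "9f3c2ab")
print(current("restful-api", "pre-release"))                      # current version
print(current("restful-api", "pre-release", as_of="2018-06-01"))  # "two months ago"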

Related

Staging slot and vip-swap

Coming from the classic Cloud Service model, after having used it for 5 years now, we are very used to the concept of a staging slot and the VIP-swap capability. Yes, this upgrade model has many warts, but also many benefits.
Clearly Service Fabric (SF) doesn't expose this model. So I wonder: was it just not a popular model in Cloud Services, or does it really not make sense 6 years later?
Is this one of those paradigm changes where I just have to rethink how we deploy and forge ahead with the newly prescribed model (rolling upgrades)? Or are there known techniques for setting up something like staging slots with SF?
Looking for advice...
VIP swaps don't make sense for stateful compute, and Service Fabric is largely a stateful compute platform (even if you only use stateless services, the system services themselves are stateful). If your services have your data in them, you have to do a rolling upgrade if you want to keep your data and keep it consistent.
So yeah, it's a paradigm change, but a good one. It encourages continuous delivery and frequent upgrades because upgrades are integrated right into the platform and don't cost you anything extra. You don't need to pay for staging VMs, which can get expensive for large deployments, and that might even discourage continuous delivery.
Now, you can do something similar to a staging deployment for stateless services. In Service Fabric, your "deployments" are applications, not VMs. So you can create an instance of a new application version side-by-side with an instance of the previous application version and route your traffic however you want, whether that's gradually moving users to the instance of the new version, or just flipping a switch and sending all your traffic to the new version at once. This of course doesn't work for stateful services, because all of your data is still in the previous version's application instance.
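To make the routing part concrete, here is a minimal sketch of the side-by-side idea for stateless services - two application instances behind a tiny weighted router (the application names, endpoints, and weight are invented for illustration):

```python
import random

# Hypothetical endpoints of the two side-by-side application instances.
BACKENDS = {
    "fabric:/MyApp_v1": "http://10.0.0.4:8080",
    "fabric:/MyApp_v2": "http://10.0.0.5:8080",
}

# Fraction of traffic sent to the new version: 0.0 = all old, 1.0 = all new.
NEW_VERSION_WEIGHT = 0.1

def pick_backend():
    """Route a request to the old or new version according to the current weight."""
    if random.random() < NEW_VERSION_WEIGHT:
        return BACKENDS["fabric:/MyApp_v2"]
    return BACKENDS["fabric:/MyApp_v1"]
```

Moving NEW_VERSION_WEIGHT gradually towards 1.0 is the "move users over" case; setting it straight to 1.0 is the "flip a switch" case.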

Should actors/services be split into multiple projects?

I'm testing out Azure Service Fabric and started adding a lot of actors and services to the same project - is this okay to do, or will I lose any of Service Fabric's features, such as failover, scalability, etc.?
My preference here is clearly 1 actor/1 service = 1 project. The big win with a platform like this is that it allows you to write proper microservice-oriented applications at close to no cost, at least compared to the implementation overhead you have when doing similar implementations on other, somewhat similar platforms.
I think it defeats the point of an architecture like this to build services or actors that span multiple concerns. It makes sense (to me at least) to use these imaginary constraints to force you to keep the area of responsibility of these services as small as possible - and rather depend on/call other services in order to provide functionality outside of the responsibility of the project you are currently implementing.
Regarding scaling, it seems you'll still be able to scale your services/actors independently even though they are part of the same project - at least that's implied by the application manifest format. What you will not be able to do, though, is independently update services/actors within your project. As an example: if your project has two different actors and you make a change to one of them, you will still need to deploy an update to both of them, since they are part of the same code package and share a version number.

Canary release strategy vs. Blue/Green

My understanding of a canary release is that it's a partial release to a subset of production nodes with sticky sessions turned on. That way you can control and minimize the number of users/customers that get impacted if you end up releasing a bad bug.
My understanding of a blue/green release is that you have 2 mirrored production environments ("blue" and "green"), and you push changes out to all the nodes of either blue or green at once, and then use networking magic to control which environment users are routed to via DNS.
So, before I begin, if anything I have said so far is incorrect, please begin by correcting me!
Assuming I'm more or less on track, then a couple of questions about the two strategies:
Are there scenarios where canary is preferred over blue/green, and vice versa?
Are there scenarios where a deployment model can implement both strategies at the same time?
I have written a detailed essay on this topic here: http://blog.itaysk.com/2017/11/20/deployment-strategies-defined
In my opinion, the difference is whether or not the new "green" version is exposed to real users. If it is, then I'd call it Canary. A common way to implement Canary is regular Blue/Green with the addition of smart routing of specific users to the new version. Read the post for a detailed comparison.
(The linked post includes diagrams of both the Blue/Green and Canary flows.)
Blue-green releasing is simpler and faster.
You can do a blue-green release if you've tested the new version in a testing environment and are very certain that the new version will function correctly in production. Always using feature toggles is a good way to increase your confidence in a new version, since the new version functions exactly like the old until someone flips a feature toggle. Breaking your application into small, independently releasable services is another, since there is less to test and less that can break.
You need to do a canary release if you're not completely certain that the new version will function correctly in production. Even if you are a thorough tester, the Internet is a large and complex place and is always coming up with unexpected challenges. Even if you use feature toggles, one might be implemented incorrectly.
Deployment automation takes effort, so most organizations will plan to use one strategy or the other every time.
So do blue-green deployment if you're committed to practices that allow you to be confident in doing so. Otherwise, send out the canary.
The essence of blue-green is deploying all at once and the essence of canary deployment is deploying incrementally, so given a single pool of users I can't think of a process that I would describe as doing both at the same time. If you had multiple independent pools of users, e.g. using different regional data centers, you could do blue-green within each data center and canary across data centers. Although if you didn't need canary deployment within a data center, you probably wouldn't need it across data centers.
Although both of these terms look quite close to each other, they have subtle differences. One puts confidence in the functionality you release, and the other puts confidence in the way you release it.
Canary
The canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure.
It is about getting an idea of how the new version will perform (how it integrates with other apps, CPU, memory, disk usage, etc.).
Blue/Green:
It is more about a predictable release with zero-downtime deployment, easy rollbacks in case of failure, and a completely automated deployment process.
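As a concrete illustration of "predictable release with easy rollback": the entire Blue/Green cutover reduces to moving one pointer. A sketch, with the router state held in a plain dict rather than real DNS or load-balancer configuration (all names invented):

```python
environments = {
    "blue":  {"version": "1.4.2", "url": "http://blue.internal:8080"},
    "green": {"version": "1.5.0", "url": "http://green.internal:8080"},
}
live = "blue"  # all production traffic currently goes here

def cut_over():
    """Switch 100% of traffic to the idle environment in one atomic step."""
    global live
    live = "green" if live == "blue" else "blue"

def rollback():
    """Rolling back is the same operation: point traffic at the other colour."""
    cut_over()

cut_over()   # green (1.5.0) is now live
rollback()   # blue (1.4.2) is live again; green stays deployed for a retry
```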
Here are some inline definitions:
Blue-Green Deployment - When deploying a new version of an application, a second environment is created. Once the new environment is tested, it takes over from the old version. The old environment can then be turned off.
A/B Testing - Two versions of an application are running at the same time. A portion of requests goes to each. Developers can then compare the versions.
Canary Release - A new version of a microservice is started along with the old versions. That new version can then take a portion of the requests, and the team can test how this new version interacts with the overall system.
A good starting set of definitions.
I think it also helps in deciding on your strategy if you split your "release" definition into "deploy" and "release (functionality)".
Deploy (binaries)
The action of binary deployment of your product to a (production) system.
Release (functionality)
The action of managing availability of functionality to (groups of) users.
Why? You typically have two concerns when "releasing":
1) Bugs / backwards compatibility / etc.
2) Verifying the validity/usability of new features
Then ask yourselves, before choosing Canary, Blue/Green, or whatever gray/mixed-mode strategy: what concern(s) do we have when releasing/deploying the new version? Only once you know your concerns should you choose your strategy.
Additionally, it is possible to do more complex deploy/release strategies.
E.g., in some clouds/infrastructures it is possible to have multiple production servers, relay load in different proportions to different servers and versions of your product, and monitor soundness before scaling a release/deploy up to all users.
Feature flagging
The action of "configuring" (cold, or even hot) which functionality is (not) available for which (groups of) users.
If you also do something like feature flagging, you can deploy first, measure the soundness of your release from a backwards-compatibility/bug perspective, and then release new functionality gradually to different users - or vice versa (scale down, or even roll back, functionality and/or binaries).
Feature flagging separates the availability of functionality from the deployment of binaries, and gives much more fine-grained decision making than just "deploy/rollback".
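A minimal sketch of that split - the deployment ships both code paths, and the flag decides which users see the new one (flag names, groups, and percentages are invented for the example):

```python
import hashlib

# Hypothetical flag store; in practice this would be hot-reloaded from config.
FLAGS = {
    "new-checkout": {"enabled": True, "percent": 10, "allow_groups": {"qa-team"}},
}

def bucket(user_id):
    """Stable 0-99 bucket per user, so each user keeps the same experience."""
    return int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100

def is_enabled(flag, user_id, group=None):
    f = FLAGS.get(flag)
    if not f or not f["enabled"]:
        return False
    if group in f["allow_groups"]:          # release to internal testers first
        return True
    return bucket(user_id) < f["percent"]   # then to a percentage of all users

# Deployed binaries carry both paths; the flag decides who sees the new one.
if is_enabled("new-checkout", user_id="u-42"):
    pass  # new functionality
else:
    pass  # old functionality (percent = 0 is an instant functional rollback)
```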
May 2022 update:
The difference between Blue-Green Deployment (Blue-Green Release) and Canary Deployment (Canary Release) is:
Blue-Green Deployment is quick.
Canary Deployment is gradual.
Blue-Green Deployment:
There are two environments: the Blue environment, which is "old" and contains one or more applications (instances or containers), and the Green environment, which is "new" and contains one or more applications (instances or containers).
Then, 100% of the traffic is quickly switched from the Blue environment to the Green environment at once, so you can say Blue-Green Deployment is the quick way of doing Canary Deployment. (The original diagram of this, created by Encora, is at https://www.encora.com/insights/zero-downtime-deployment-techniques-blue-green-deployments.)
Canary Deployment:
Again, there are two environments: the Blue environment ("old") and the Green environment ("new"), each containing one or more applications (instances or containers).
Then, 100% of the traffic is gradually switched from the Blue environment to the Green environment, taking longer (30 minutes, hours, or days) than a Blue-Green Deployment, so you can say Canary Deployment is the gradual way of doing Blue-Green Deployment. (The original diagram of this, created by Encora, is at https://www.encora.com/insights/zero-downtime-deployment-techniques-canary-deployments.)
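The "gradual" part is typically just a loop that raises the Green traffic share in steps and aborts on bad health. A sketch with invented step sizes and placeholder health checks:

```python
import time

def set_green_traffic(percent):
    """Placeholder: would reconfigure DNS weights, a load balancer, or a mesh."""
    print(f"routing {percent}% of traffic to Green")

def healthy():
    """Placeholder: would check error rate, latency, CPU/memory of Green."""
    return True

def canary_rollout(steps=(5, 25, 50, 100), pause_s=1800):
    for pct in steps:
        set_green_traffic(pct)
        time.sleep(pause_s)        # let real traffic exercise the new version
        if not healthy():
            set_green_traffic(0)   # instant rollback: everything back to Blue
            raise RuntimeError(f"canary aborted at {pct}%")

# In these terms, a Blue-Green cutover is effectively canary_rollout(steps=(100,)).
```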

Typical best-practice ClearCase project structure

During a development project, the delivered code can go through different stages and different environments before it reaches production (e.g. a Development environment for testing deployment processes, Internal Testing for QC, Pre-Production, and finally Production).
This development effort produces many candidate releases, of which a certain release can be nominated to move upwards in the development process until it reaches production. Also, there might be cases where the code deployed in production requires hot-fixes in parallel to the current internal development lines (i.e. parallel development).
For a certain UCM project maintained by IBM Rational ClearCase (CC), what is the recommended project structure to be created on "Project Explorer" to accommodate for the following:
The developers should mainly connect to and deliver their work on the internal development line (or, in CC terminology, the development stream).
Once the delivered code to this development stream is considered acceptable, the Technical Team Lead (TTL) can create a baseline. This baseline can be later retrieved by the Deployment Engineer to be deployed on the local Development Environment.
If this baseline was found acceptable, it can be delivered as a whole to the Internal Testing stream to be deployed for further Quality Control (QC) testing.
If that baseline was also found acceptable, it can be delivered as a whole to Pre-Production, and so forth to Production, similar to what was described above.
Of course, if any of these baselines is not accepted by its receiving party, it can be rejected, and the receiving party will wait for another baseline to be recommended for their stream.
Note: The Deployment Engineer will always use a dedicated stream for each environment to get his/her files required to carry out the build/deployment activities.
My apologies to everybody here, since I understand that answering this can be long, but my question concentrates on the exact types of streams and/or views that need to be created in "Project Explorer" to meet the above objectives.
I am really trying to come up with a best-practice approach for release management using CC, and how it can best be used for this purpose.
I would appreciate your help, guys, and many thanks to all in advance...
The rule of thumb is simple:
The fewer branches, the better.
I mean, if you have ever done a deliver and rebase with ClearCase, you know:
how painful it is
how poorly it scales with the number of files (merging 1,000 files is awfully long; merging 5,000 files is murder)
So the real rule of thumb is:
if you don't have to modify any file for a given development stage, don't create a branch.
For instance, for promoting code to QA, where you will only read it (and run some tests, in order to accept that code if they pass, or reject it if they fail), don't create a QA Stream to deliver the code to: it takes too long for a non-existent added value.
Use baseline promotion level whenever you can, and recommend your promoted baselines.
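With stock UCM commands, that flow looks roughly like the following (stream, baseline, and PVOB names are invented; I'm assuming the default promotion levels and a view attached to the integration stream):

```
# Create a baseline on the integration stream after an accepted delivery
cleartool mkbl -full BL_1.2

# Promote it as it passes each stage (default levels include BUILT, TESTED, RELEASED)
cleartool chbl -level TESTED baseline:BL_1.2@/vobs/my_pvob

# Recommend the promoted baseline so rebasing streams pick it up
cleartool chstream -recommended baseline:BL_1.2@/vobs/my_pvob stream:Integration@/vobs/my_pvob
```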
The Deployment Engineer will always use a dedicated stream for each environment to get his/her files required to carry out the build/deployment activities.
Err... no, not if you don't have any changes to make.
The Deployment Engineer doesn't care at all where the baseline is coming from, only if the code deploys and runs successfully.

Who's responsible for deployment? [closed]

I work as an in-house developer for a manufacturing company. We make software for the manufacturing process, not really control software, more like process flow.
We are using a Scrum process to develop the software, albeit tailored to fit with our team and environment, and this is working quite well. We're coming to the end of a sprint and the software is at a stage where the product owner wants to deploy it.
Previously, i.e. before Scrum, we would have deployed the software. Now I feel that we have developed the software, passed all the user-defined/agreed release tests, and demonstrated the software to the PO with a simulator - we have achieved our goals. We are ready to provide deployment support, but I don't think it should be our responsibility to deploy.
What are other people's experiences? Should the dev team do the deployment, or should we just hand the completed software over to the PO and provide support?
Summing up
A lot of great responses, thanks. The question may seem like I'm trying to squirm out of work or responsibility; maybe I am, a little ;o) What I'm more interested in is other people's processes. The problem we face here is that if the dev team deploys the software, then we end up providing 24/7 production support for it. No problem, except there are only two of us. So, to allow us to get back to developing software rather than providing support all the time, I thought it might be helpful to get the "IT" team involved in the development process. Hopefully this will get their buy-in and then allow them to deploy and provide first-level support. We also have a plant in Mexico, and it's difficult for the dev team to go and deploy there; it makes more sense for the local support to do it, with guidance/advice from the developers.
Just to let you know, the IT engineers did deploy the software, with guidance/advice from dev. It's gone quite well, the customer is happy - he's got increased value from his software, and isn't that what it's all about?
I don't think Scrum as a methodology addresses deployment responsibility. I've worked for large organizations with a deployment team completely independent of the development team. I've also worked at small organizations where the development team handles deployment. Ideally, the deployment team is separate from development, but it would almost never be the PO (which I assume means product owner). The PO usually signs off, but is not usually the best choice for handling deployment.
Who gets the call at 3am when the software isn't working or a system died? If it's the dev team then by all means expect to own the deployment (since you own production).
Best practice, for organizations that can support it, is to provide the Operations group with deployment instructions and good wishes. Bottles of scotch help too.
If your production controls are lax, then tighten them up. A book like "Visible Ops" is a great guide to getting things under appropriate levels of control in the appropriate hands.
I'm a dev mgr with responsibility over multiple products. I have my dev teams produce builds of deployment artifacts, such as .war files, that can simply be deployed to a Tomcat web server using its manager interface or web service API. The configuration for the app is all set and self-contained within the .war file. Hence it is straightforward for the person doing deployment to just take it and "drop it in", so to speak.
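For Tomcat specifically, that "drop it in" step can be a one-liner against the manager's text API (host, credentials, and paths here are placeholders):

```
# Deploy (or redeploy, with update=true) a self-contained .war
curl -u deployer:secret -T myapp.war \
  "http://tomcat.internal:8080/manager/text/deploy?path=/myapp&update=true"
```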
If we don't get this level of ease of deployment to where deployment can be completely decoupled from the development team, then I view that as a failure on the dev team's part to adequately do their job.
The person doing deployment proceeds to release a given product to numerous customer sites - it's not productive for me to have developers doing that; they have products to design and create, as that is their specialization and expertise.
In our organization the deployment responsibility also overlaps with the first tier production support responsibility.
We practice some Scrum methodology, but I've never viewed this issue as tied to the software development process methodology, per se.
the software process is not completed until the working software is in the hands of the users that need it - otherwise it's just "shelfware"
if there is no one else to be responsible for deployment and configuration management, then you're it ;-)
I would think that, as an "in-house" developer, it would be your responsibility (unless there is a specialised deployment team) to deploy the new software, whereas if you were from an external company then it would be up to the client to deploy it themselves, unless specified in the contract.
Depends on the project and what "deployment" means for you. Since I am a web developer, deploying mostly .NET applications with a SQL Server database, I always prefer the deployment to be done by a release manager or deployment manager. Why? Because separation of duties ensures that problems are caught when they need to be.
The developer's job should be to provide the required objects or instructions on how to deploy; then someone else deploys to a staging environment. If something goes wrong during deployment to staging, the deployment instructions are corrected until the deployment to staging works flawlessly. That way, there will hopefully be no mistakes when the same deployment script is used to move the code to production. In other words, not only must you test your code but also the deployment script.
Of course, in the real world this doesn't always happen because of personnel issues, but this would be my ideal.
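One way to make "the same deployment script" literal is a single entry point parameterised only by environment, so the staging run genuinely rehearses the production run. A sketch (hosts, artifact names, and steps are invented):

```python
import subprocess
import sys

# Hypothetical targets: staging is deliberately shaped like production.
TARGETS = {
    "staging":    {"db": "db-staging.internal", "web": "web-staging.internal"},
    "production": {"db": "db-prod.internal",    "web": "web-prod.internal"},
}

def deploy(env):
    """Identical steps for every environment; only the target hosts differ."""
    t = TARGETS[env]
    subprocess.run(["sqlcmd", "-S", t["db"], "-i", "migrate.sql"], check=True)
    subprocess.run(["scp", "site.zip", f"{t['web']}:/var/www/"], check=True)

if __name__ == "__main__":
    deploy(sys.argv[1])  # rehearse with `deploy.py staging`, repeat with `production`
```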
Seems fairly simple to me - if not you, then who? Would actual deployment responsibility have fallen to some other team before you started using Scrum? If not, then I don't see why Scrum would change that.
If the dev team was doing the deployment before Scrum, they should continue to do so, unless management has specifically said that someone else should do it. If management hasn't said anything, then they haven't really thought about it and just expected that it would happen magically, like it always has.
If you don't like that, bring it up with management, but do the work until told differently.
I think you have to man up and deploy the software, unless you are working in an organization that has some kind of serious data-security or SOX issues with allowing the unclean to deal with the production end of things.
I agree with the first comment - Scrum has nothing to do with it. In fact, I would think it's far better for you to be deploying, as you'll know first-hand how well things are working and be right there to get feedback from those users.