The shop I work for is using Jenkins for continuous integration, with its Promoted Builds plugin to deploy build artifacts. However, we're having trouble managing this setup as the number of configurations grows. So my question is:
How can I set up a handy CI system from which I can deploy various artifacts in various configurations without manually scripting every possible combination?
Some more details:
Let's say I have build configurations (i.e. branches) A, B and C. There are three deployment targets I, J and K (say for various clients or consumers). Finally, each deployed instance has various services X, Y and Z (e.g. web-site, background tasks and data-service). The various services are usually promoted together; but sometimes, particularly to get hotfixes out, they're not.
Currently, we have promotions for each of these combinations. So to install a typical build I'd need to run promotions J/X, J/Y and J/Z on config C. The number of services is unfortunately rising, and getting all those configurations into Jenkins without error, and ensuring that no component is forgotten or mixed up when deployment comes around, is getting tricky. And of course, there are more than three build configs and more than three targets, so it's all getting out of hand.
Some options that don't quite work:
Parameterized promotions to disable various components. Jenkins allows parameterized promotions, but the values are fixed the first time you promote. I could remove a degree of freedom by just promoting J and setting some parameters, but if a later version breaks, I can't roll back only the component that broke; I'd have to roll back the entire deployment.
Dependent, parameterized builds. Jenkins doesn't seem to support parameters that choose which build to depend on, and if you hand-code the options, then of course the "run" selection parameter can't work.
What I'd really want:
After a build is manually accepted as ready for deployment, it should be marked as such, with an argument for which target and arguments for which components.
The installation history is logged per-component, per-target, not (only) per-build.
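Something along these lines can be approximated outside of Jenkins with a small append-only deployment ledger. Here's a minimal sketch in Python; the file format and function names are made up for illustration, not an existing Jenkins feature:

```python
import json
import time
from pathlib import Path

# Hypothetical append-only ledger: one JSON line per component deployed
# to a target, so history and rollback are queryable per component.
LEDGER = Path("deployments.jsonl")

def record_promotion(target, build_config, build_number, components):
    """Append one entry per promoted component (e.g. X, Y, Z)."""
    with LEDGER.open("a") as f:
        for component in components:
            f.write(json.dumps({
                "target": target,              # e.g. "J"
                "component": component,        # e.g. "X"
                "build_config": build_config,  # e.g. "C"
                "build_number": build_number,
                "timestamp": time.time(),
            }) + "\n")

def history(target, component):
    """All deployments of one component to one target, oldest first."""
    if not LEDGER.exists():
        return []
    entries = [json.loads(line) for line in LEDGER.read_text().splitlines()]
    return [e for e in entries
            if e["target"] == target and e["component"] == component]

def rollback_candidate(target, component):
    """The previous build of a single component, enabling per-component
    rollback instead of rolling back an entire deployment."""
    past = history(target, component)
    return past[-2] if len(past) >= 2 else None
```

Installing a typical build on target J then becomes one call, record_promotion("J", "C", 123, ["X", "Y", "Z"]), and history("J", "Y") answers the per-component, per-target question directly.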
There may be some plugins to help, but you may also be approaching the point where looking at commercial tools is appropriate. I work for a build/deploy vendor (Urbancode), so by all means take this with a giant grain of salt.
We generally see people have different build types (or branches) for a single project and group those as a single 'project' with multiple 'build workflows' using the same basic configuration with some per-workflow parameterization. Really simple reuse of process.
> The number of services is unfortunately rising, and getting all those configurations into Jenkins without error, and ensuring that no component is forgotten or mixed up when deployment comes around, is getting tricky. And of course, there are more than three build configs and more than three targets, so it's all getting out of hand.
If the challenge here is that you have multiple web services, and promotions (especially to production) involve pushing lots of stuff, at specific versions, in a coordinated manner, you're hitting the standard use case for our application release automation tool, uDeploy. It's conveniently integrated with Jenkins. It also has really nice tracking of which version of what went to which deployment target, and who ran that process.
Related
Imagine there is an application consisting of a bunch of microservices. All of these microservices can be developed and deployed completely independently of each other. Each microservice can be "described" with several attributes - e.g. current API version, release version, commit hash, etc. Along with that, there are several environments used in the development process - e.g. a Testing environment (often called Sandbox), a Staging environment, a Pre-Release environment and obviously the Production environment.
Is there a convenient tool/way/approach to track, basically, which attribute is currently deployed to which environment? For instance, to get quick access to information like "what is the current version of the RESTful API in the Pre-Release environment"? Or a more complex one: "what was this version two months ago"? And of course to see the "global picture" as well?
There's no ready-to-use solution on the market yet, to my knowledge.
Some teams are using GitOps (https://www.twistlock.com/2018/08/06/gitops-101-gitops-use/) to get ahead of the chaos challenge that a lot of different microservices usually ship with.
Another technology in a somewhat different, yet related direction is microservice meshes, Istio (https://istio.io/) being one of them.
There are also testing approaches like contract testing or heavier integration tests, which are more expensive but also provide more confidence.
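In the absence of a ready-made tool, one common stopgap is to have every service expose its own version metadata and poll it per environment. A minimal sketch in Python; the /version endpoint, environment URLs and service names are assumptions, not a standard:

```python
import json
import urllib.request

# Hypothetical base URLs per environment; adjust to your setup.
ENVIRONMENTS = {
    "sandbox": "https://sandbox.example.com",
    "staging": "https://staging.example.com",
    "pre-release": "https://pre.example.com",
    "production": "https://example.com",
}

SERVICES = ["users", "billing", "search"]  # example service names

def fetch_version(base_url, service):
    """Assumes each service exposes GET /<service>/version returning JSON
    like {"api_version": "v2", "release": "1.4.0", "commit": "abc123"}."""
    with urllib.request.urlopen(f"{base_url}/{service}/version") as resp:
        return json.load(resp)

def global_picture():
    """Snapshot of every service's attributes in every environment."""
    return {
        env: {svc: fetch_version(url, svc) for svc in SERVICES}
        for env, url in ENVIRONMENTS.items()
    }

if __name__ == "__main__":
    print(json.dumps(global_picture(), indent=2))
```

Answering "what was this two months ago" then just means snapshotting this output on a schedule and committing it to version control, which is essentially the GitOps idea in miniature.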
I'm testing out Azure Service Fabric and have started adding a lot of actors and services to the same project. Is this okay to do, or will I lose any Service Fabric features such as failover, scalability, etc.?
My preference here is clearly 1 actor/1 service = 1 project. The big win with a platform like this is that it allows you to write proper microservice-oriented applications at close to no cost, at least compared to the implementation overhead of doing something similar on other, comparable platforms.
I think it defeats the purpose of an architecture like this to build services or actors that span multiple concerns. It makes sense (to me at least) to use these imaginary constraints to force you to keep the area of responsibility of each service as small as possible, and rather depend on/call other services to provide functionality outside the responsibility of the project you are currently implementing.
Regarding scaling, it seems you'll still be able to scale your services/actors independently even though they are part of the same project - at least that's implied by the application manifest format. What you will not be able to do, though, is independently update services/actors within your project. As an example: if your project has two different actors and you make a change to one of them, you will still need to deploy an update to both of them, since they are part of the same code package and share a version number.
I've only got a small amount of experience with A/B testing, but from what I've seen, the standard approach to an A/B test is to introduce some conditional logic in the application's code. This can be tricky to implement properly (depending on the complexity of the test) and requires extra work both for setup and cleanup.
It got me wondering: are there any frameworks or approaches to A/B testing that simplify matters using, e.g., Git branches? I'm envisioning something at the load-balancer level, which directs half of the traffic to a server where "master" or "default" has been deployed, and the other half to a server with "experiment" deployed. This way the code itself could always be completely agnostic of any A/B tests going on, and presumably the act of choosing either A or B for full deployment would be a simple flip of a switch.
I'm sure this would not be trivial to set up properly. But still I wonder if it's possible, and if in fact it's already been done.
It's relatively easy to build and definitely doable. You need to implement a deployment system where you deploy all branches matching some pattern, for example "ab_*", into different folders on your servers. Then at some point in your code you can decide which folder should be included in the actual user session, based on your actual test. It's not really a "framework"; it's a simple architectural design pattern you have to add to your own system. I was doing the same in production before.
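To make the dispatch step concrete, here is a minimal sketch in Python; the folder layout and names are assumptions building on the answer above, not a standard:

```python
import hashlib

# Hypothetical layout: each deployed branch lives in its own folder,
# e.g. the master branch and an "ab_new_checkout" experiment branch.
DEPLOYMENTS = {
    "A": "/srv/app/master",
    "B": "/srv/app/ab_new_checkout",
}

def pick_variant(user_id: str) -> str:
    """Deterministically assign a user to A or B, so the assignment is
    sticky across requests without any server-side session state."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return "B" if digest[0] % 2 else "A"

def code_root_for(user_id: str) -> str:
    """The folder whose deployed code should serve this user's session."""
    return DEPLOYMENTS[pick_variant(user_id)]
```

The same hash-based assignment works one level up at the load balancer (e.g. hashing a cookie to choose a backend pool), which keeps the application code itself completely unaware of the experiment, as the question envisions.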
I have decided to finally nail down my team's deployment processes, soup-to-nuts. The last remaining pain point for us is managing database and runtime data migration/management. Here are two examples, though many exist:
If releasing a new "Upload" feature, automatically create the upload directory and configure permissions. In later releases, verify existence/permissions - forever, automatically.
If a value in the database (let's say an Account Status of "Signup") is no longer valid, automatically migrate data in the database to proper values, given some set of business rules.
I am interested in implementing a framework that allows developers to manage and deploy these changes with the same ease that we manage and deploy our code.
So the first question is: 1. What tools/frameworks are out there that provide this capability?
In general, this seems to be an issue in any given language and platform. In my specific case, I am deploying a .NET MVC2 application which uses Fluent NHibernate for database abstraction. I already have in my deployment process a tool which triggers NHibernate's SchemaUpdate - which is awesome.
What I have built up to address this issue in my own way is a tool that scans target assemblies for classes which inherit from a certain abstract class (Deployment). That abstract class exposes hooks which you can override to implement your own arbitrary deployment code, in the context of your application's codebase. The Deployment class also provides a versioning mechanism, and the tool manages the current "deployment version" of a given running app. Then, a custom NAnt task glues this together with the NAnt deployment script, triggering the hooks at the appropriate times.
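For comparison, the pattern itself is small enough to sketch. A rough Python equivalent of the idea described above; the class, hook and path names are made up to show the shape, not the actual .NET tool:

```python
import abc
import os

class Deployment(abc.ABC):
    """One versioned deployment step. Subclasses implement the hooks; a
    runner discovers them, orders them by version, and applies the ones
    newer than the app's recorded deployment version."""

    version: int  # each subclass declares its target version

    @abc.abstractmethod
    def upgrade(self):
        """Arbitrary deployment code, run once when moving up to this version."""

    def verify(self):
        """Re-run on every deployment to assert invariants (dirs, permissions)."""

class CreateUploadDir(Deployment):
    version = 12

    def upgrade(self):
        os.makedirs("/var/app/uploads", exist_ok=True)
        os.chmod("/var/app/uploads", 0o770)

    def verify(self):
        assert os.path.isdir("/var/app/uploads")

def run_deployments(current_version):
    """Apply pending steps in version order, then verify all of them;
    returns the new deployment version to persist."""
    steps = sorted((cls() for cls in Deployment.__subclasses__()),
                   key=lambda s: s.version)
    for step in steps:
        if step.version > current_version:
            step.upgrade()
        step.verify()
    return max((s.version for s in steps), default=current_version)
```

The returned version is the "current deployment version" bookkeeping that the custom NAnt task handles in the setup described above.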
This seems to work well and does meet my goals - but here's my beef, which leads to my second question: 2. Surely what I just wrote has to already exist. If so, can you point me to it? And 3. Has anyone started down this path and gained insight into problems with this approach?
Lastly, if something like this exists, but not on the .NET platform, please still let me know - as I would be more interested in porting a known solution than starting from zero on my own solution.
Thanks everyone, I really appreciate your feedback!
For each major release, have a script that creates the environment with the exact requirements you need.
For minor releases, have a script that is split into the various releases and incrementally alters the environment. There are some big benefits to this (a sketch of such a runner follows the list below):
You can look at the changes to the environment over time by reading the script and matching it with release notes and change logs.
You can create a brand new environment by running the latest major script and then the latest minor scripts.
You can create a brand new environment at a previous version (perhaps for testing purposes) by telling the script to stop at a certain minor release.
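A minimal sketch of such a runner in Python; the directory layout and naming scheme are assumptions for illustration:

```python
import subprocess
from pathlib import Path

# Assumed layout:
#   scripts/major/2.0.sh        creates the environment from scratch
#   scripts/minor/2.0/001.sh    incremental change on top of the major
#   scripts/minor/2.0/002.sh    ...
SCRIPTS = Path("scripts")

def build_environment(major, stop_at=None):
    """Create a fresh environment at a given major release, then apply
    minor scripts in order, optionally stopping early so you can
    reproduce an older version for testing."""
    subprocess.run(["bash", str(SCRIPTS / "major" / f"{major}.sh")],
                   check=True)
    for script in sorted((SCRIPTS / "minor" / major).glob("*.sh")):
        if stop_at is not None and script.stem > stop_at:
            break  # stop at a certain minor release
        subprocess.run(["bash", str(script)], check=True)

# Latest environment:             build_environment("2.0")
# Environment as of minor 003:    build_environment("2.0", stop_at="003")
```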
Is writing deployment friendly code considered a good virtue on the part of a programmer?
If yes, then what are the general considerations to be kept in mind when coding so that deployment of the same code later does not become a nightmare?
The biggest improvement to deployment is to minimize manual intervention and manual steps. If you have to type in configuration values or manually navigate through configuration screens, there will be errors in your deployment.
If your code needs to "call home", make sure that the user understands why, and can turn the functionality off if necessary. This might only be a big deal if you are writing off-the-shelf software to be deployed on corporate networks.
It's also nice to not have your program be dependent on too many environmental things to run properly. To combat this, I like to define a directory structure with my own bin, etc and other folders so that everything can be self-contained.
The whole deployment process should be automated to minimize human error. The software should not be affected by the environment. Any new deployment should be easy to roll back in case a problem occurs. While coding, you should not hard-code configuration values that may differ between environments. Configuration should be handled in such a way that it can be easily automated depending on the environment.
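As a concrete illustration of keeping configuration out of the code, a minimal sketch in Python; the variable and environment-variable names are arbitrary examples:

```python
import os

# Read per-environment values from the environment (or from a config
# file deployed alongside the code) instead of hard-coding them.
DB_URL = os.environ.get("APP_DB_URL", "postgres://localhost/dev")
LOG_DIR = os.environ.get("APP_LOG_DIR", "./logs")
UPLOADS_ENABLED = os.environ.get("APP_UPLOADS_ENABLED", "0") == "1"

def connect():
    # The same code runs unchanged in dev, staging and production; only
    # the injected environment differs, so deployment (and rollback)
    # can be fully scripted without touching the code.
    print(f"connecting to {DB_URL}, logging to {LOG_DIR}")
```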
Client or server?
In general, deployment friendly means that you complete and validate deployment as you complete a small story / unit of work. It comes from continual QA more than style. If you wait till the last minute to build and validate deployment, the cleanest code will never be friendly.
Everything else in deployment, desktop or server, follows from early validation. You can add all the wacky dependencies you want, if you solve the delivery of those dependencies early. Some very convenient desktop deployment mechanisms result in sandboxed / partially trusted applications. Better to discover early that you can't do something (e.g. write your log to c:\log.txt) than to find out late that your customers can't install.
I'm not entirely sure what you mean by "deployment friendly code." What are you deploying? What do you mean by "deploying"?
If you mean that your code should be transferable between computers, I guess the best things to do would be to minimize unnecessary (for a given definition of "unnecessary") dependencies on external libraries, and to document well the libraries that you do depend on.