Naming conventions in blue/green deployment with different environments

In our deployment we have three environments: testing, staging, and production.
We recently introduced a blue-green setup, so we now have blue-production, blue-staging, green-production, green-staging, and testing.
We now wonder what to call these "colors". Intuitively I'd go for "blue/green environment", but the term "environment" is already in use.
Is there a standard or common naming scheme for this setup? The best I came up with is to label "testing, staging, and production" as "stages" and "blue/green" as "environments".
Example usage: "what is the status of the production stage in the blue environment?".
Is there a better alternative?

The use of Blue and Green can be ambiguous for users unfamiliar with this naming convention.
Instead of using Blue/Green I'd suggest using Live/Idle to be more explicit (as documented here).
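Whichever pair of terms you settle on, it helps to encode the scheme in exactly one place so names stay consistent across tooling. A minimal Python sketch (the stage/color split and all names below are taken from the question, not from any standard):

```python
STAGES = ("testing", "staging", "production")
COLORS = ("blue", "green")  # or ("live", "idle")

def deployment_name(stage, color=None):
    """Compose a unique deployment name; testing has no blue/green split."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    return stage if color is None else f"{color}-{stage}"

print(deployment_name("production", "blue"))  # blue-production
print(deployment_name("testing"))             # testing
```

Centralizing the naming function means a later rename (e.g. blue/green to live/idle) touches one line instead of every script.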

Related

kubectl imperative object configuration use case

According to the Kubernetes docs, the kubectl tool supports "three kinds of object management":
imperative commands
imperative object configuration
declarative object configuration
While the use cases of the first and the last options are more or less clear, the second one really confuses me.
Moreover, in the concepts section there is a clear distinction of use cases:
use imperative commands for quick creation of (simple)
single-container resources
use declarative commands for managing (more complex) sets of resources
Also, the imperative style is recommended for the CKA certification, so it seems to be preferred for day-to-day cluster management activities.
But, once again, what is the best use case / practice for the "imperative object configuration" option, and what is the root idea behind it?
There are two basic ways to deploy to Kubernetes: imperatively, with kubectl commands, or declaratively, by writing manifests and using kubectl apply. A Kubernetes object should be managed using only one technique; mixing techniques for the same object results in undefined behavior.
Imperative commands operate on live objects
Imperative object configuration operates on individual files
Declarative object configuration operates on directories of files
Imperative object configuration creates, updates, and deletes objects using configuration files, which contain fully defined object definitions. You can store object configuration files in source control systems and audit changes more easily than with imperative commands.
You can run kubectl apply, delete, and replace operations with configuration files or directories containing configuration files.
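As a rough illustration of the three styles (the file and directory names are placeholders, and these commands of course require a live cluster):

```shell
# Imperative commands: operate on live objects directly, no files involved
kubectl create deployment nginx --image=nginx

# Imperative object configuration: you choose the verb, the file holds the spec
kubectl create -f nginx.yaml
kubectl replace -f nginx.yaml
kubectl delete -f nginx.yaml

# Declarative object configuration: kubectl compares the directory with the
# live state and decides itself what to create, patch, or delete
kubectl apply -f configs/
```

The middle style keeps the auditability of files while leaving the choice of operation to you, which is exactly what distinguishes it from apply.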
Please refer to the official documentation, where everything is fully described with examples. I hope this is helpful.

How to move code between similar versions targeting different environments?

I'm developing a script that performs a certain core task, and using versions of that script in two different environments where some settings and steps along the way need to be different. What I am looking for is whether there exists an elegant way to handle the small differences between the two versions of the script. I'm sure developers face similar problems when developing software to be deployed on multiple platforms, but I don't have a specific name to pin on it.
What I do now is to open up the second script and manually replace the lines that need to be different. This is cumbersome, time-consuming, and a bit of a headache whenever I inevitably forget to comment out a line or change a string.
Example
[...]
path_to_something = "this/is/different"
use_something(path_to_something)
[...]
do_thing_A() # Only in environment A.
[...]
do_thing_B() # Only in environment B.
[...]
The omitted [...] parts are identical in both versions, and when I make a change to them, I have to either copy and paste each changed line, or if the changes are significant, copy the whole thing, and manually change the A and B parts.
Some ideas for possible solutions that I've come up with:
Write a script that automates the steps I manually take when moving the code back and forth. This exactly replicates the necessary steps, and it's quick and easy to add or remove steps as necessary.
Is this a use case for gitattributes?
Factor all the code that is identical between versions into separate files, so that the files containing the heterogeneous code don't need to change at all, and thus don't need to be version-controlled, per se.
Some other tool or best practice that I don't know about to handle this type of workflow.
Looking around, I've found a question with a similar premise of maintaining different versions of code that does the same thing:
Proper way to maintain a project that meets two versions of a platform?
Solutions offered to that question:
Get rid of all the differences, then there is no problem to solve. This may or may not be possible in my specific case, and certainly won't be possible in every case for everyone in the future. So maybe there is a more general solution.
Maintain two different branches of the code, even though they are nearly identical. This is similar to what I do now, but I end up having to do a lot of copying and pasting back and forth between branches. Is that just inherent to software development?
Perform platform detection and wrap the differences in conditionals. This adds a lot of ugly stuff in the code, but if I could successfully detect the environment and implement all the necessary differences conditionally, I would not have to make any changes to the code before sending it to the different environments.
How do developers move code back and forth between similar, but different, parallel branches of a project?
Language- and SCM-agnostic
Use one (or two, or three) common class(es), but different interfaces (one interface per environment)
Move everything env-specific out of hardcoded values into external configuration, stored separately
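The two points above can be sketched in Python: keep every env-specific value and step in one table (or an external file), select the environment once, and leave the common core untouched. All names here (SCRIPT_ENV, do_thing_A, etc.) are placeholders based on the question's example:

```python
import os

def do_thing_A():
    return "did thing A"  # step that exists only in environment A

def do_thing_B():
    return "did thing B"  # step that exists only in environment B

# Everything env-specific lives here (this could equally be a JSON/INI file).
ENVIRONMENTS = {
    "A": {"path_to_something": "path/for/A", "extra_step": do_thing_A},
    "B": {"path_to_something": "path/for/B", "extra_step": do_thing_B},
}

def core(env_name):
    cfg = ENVIRONMENTS[env_name]
    # ... common code, identical in both versions, using
    # cfg["path_to_something"] ...
    return cfg["extra_step"]()  # dispatch the env-specific step

# Select the environment once, e.g. from an environment variable:
print(core(os.environ.get("SCRIPT_ENV", "A")))
```

With this shape there is a single script to version-control, and "moving code between environments" reduces to setting one variable.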
SCM-agnostic
Separate the tasks, i.e.:
Get a clean common core
Get the changes on top of the core for each environment
Store them in $ENVS+1 branches in any SCM (Core+...+EnvN)
The changed workflow becomes:
Commit common changes to Core
Merge Core into each env-branch
Test the env-branches; fix env-specific changes if needed
Private and personal, preferred for me and my habits
A variation of the branch-tree solution, done the pure Mercurial way, because I'm too lazy to maintain env-specific branches:
Mercurial, one branch + an MQ queue with a set of patches (one MQ patch per environment; it may also be a set of queues, one queue per environment, with one patch per queue)
The common code is stored in immutable changesets; any changes that convert the plain core into environment-specific products are stored in patches and applied on demand (and edited when/if needed).
Small advantages over the branch way: a single branch, no merges, cleaner history.
Disadvantage: pure Mercurial, while Git is now trendy (the git-boys are crying).

What would be a good/correct term to designate a group of microservices? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I am opening this topic looking for advice on the following problem:
I am currently working with other people on a project with a microservices architecture, using cloud computing.
There are 6 different microservices, and some pairs of microservices are not compatible and therefore cannot be instantiated within the same machine.
Each microservice has a version number.
In order to launch one or more new instances of any microservice, we have to define which microservices will run on this new machine, via a static configuration.
This static configuration, that we call so far as a "deploy" contains the microservices that are being deployed, and the version of each microservice. (ex: (XY,[(X,v1),(Y,v2)]) - X and Y are microservices, and the XY deploy instantiates version 1 of X and version 2 of Y)
Those "deploys" also have their own version number. Altering the version number of a microservice within a deploy requires altering the version of any "deploy" containing the microservice. (ex: (XY,v1,[(X,v1),(Y,v2)]) and (XY,v2,[(X,v1),(Y,v3)]))
The question is: what would be a correct, or at least, a good term to refer to this entity that I have previously called a "deploy"?
Many developers are writing programs around our architecture and using different names for this entity, which causes syntactic and semantic incompatibility inside our team.
Of those different names, all have pros and cons:
deploy: makes sense because you are deploying all the microservices in the list. However, the term deploy already designates another part of our process, and there could be some overuse of the same term. (Deploying the XY deploy will deploy microservices X and Y in a machine)
cluster: good name for a group of things, but you can deploy multiple machines from a configuration, and the term cluster already applies to this group of machines.
service: a service would be a group of microservices. Makes sense, but many pieces of code refer to a microservice as 'service', and that could lead to confusion. (def get_version(service) - is that a service or a microservice?)
Could any of you give us an opinion or some enlightenment on the question?
Thanks!
You might take a hint from the 12-factor App, and call them releases (http://12factor.net/build-release-run)
You then deploy a versioned release.
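Whatever term wins, pinning it down as a shared data structure helps keep the team's code consistent. A hedged Python sketch of the "release" idea, using the (XY, v1, [(X, v1), (Y, v2)]) notation from the question (all names are illustrative):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Release:
    """A versioned, static configuration of co-deployed microservices."""
    name: str
    version: int
    services: Tuple[Tuple[str, int], ...]  # (microservice, version) pairs

# (XY, v1, [(X, v1), (Y, v2)])
xy_v1 = Release("XY", 1, (("X", 1), ("Y", 2)))
# Bumping a contained microservice's version yields a new release version:
xy_v2 = Release("XY", 2, (("X", 1), ("Y", 3)))
```

Making the structure immutable (frozen) matches the rule in the question: changing a contained microservice's version means creating a new release, not mutating the old one.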
It sounds like you want a suitable collective noun. I suggest you Google "collective nouns", to find numerous lists. Read some of the lists and pick a noun that you think is appropriate.
Alternatively, the term cooperative (or co-op for short) might be suitable if one of the defining characteristics of an instantiated collection of microservices is that they complement, or cooperate with, each other.
I have used the term "complex" (as in the "mortgage risk" complex vs the "compliance" complex). It seemed unambiguous.
People also used the term within a project for deployed sets of microservices (e.g the production complex vs the test complex).

SpringXD Job split flow steps running in separate containers in distributed mode

I am aware of the nested job support (XD-1972) work and looking forward to that. A question regarding split flow support. Is there a plan to support running parallel steps, as defined in split flows, in separate containers?
Would it be as simple as providing a custom implementation of a proper taskExecutor, or is it something more involved?
I'm not aware of support for splits to be executed across multiple containers being on the roadmap currently. Given the orchestration needs of something like that, I'd probably recommend a more composed approach anyway.
A custom `TaskExecutor` could be used to farm out the work, but it would be pretty specialized. Each step within the flows in a split is executed within the scope of a job. That scope (and all its rights and responsibilities) would need to be carried over to the "child" containers.

UML Deployment Diagram for IaaS and PaaS Cloud Systems

I would like to model the following situation using a UML deployment diagram.
A small command and control machine instance is spawned on an Infrastructure as a Service cloud platform such as Amazon EC2. This instance is in turn responsible for spawning additional instances and providing them with a control script NumberCruncher.py either via something like S3 or directly as a start up script parameter if the program is small enough to fit into that field. My attempt to model the situation using UML deployment diagrams under the working assumption that a Machine Instance is a Node is unsatisfying for the following reasons.
The diagram seems to suggest that there will be exactly three number-cruncher nodes. Is it possible to illustrate a multiplicity of Nodes in a deployment diagram, like one would illustrate a multiplicity of object instances using a multi-object? If this is not possible for Nodes, then this seems to be a long-standing issue.
Is there any way to show the equivalent of deployment regions / data-centres in the deployment diagram?
Lastly:
What about Platform as a Service? The whole Machine Instance is a Node idea completely breaks down at that point. What on earth do you do in that case? Treat the entire PaaS provider as a single node and forget about the details?
Regarding your first question:
Is there any way to show the equivalent of deployment regions / data-centres in the deployment diagram?
I generally use Notes for that.
And your second question:
What about Platform as a Service? The whole Machine Instance is a Node
idea completely breaks down at that point. What on earth do you do in
that case? Treat the entire PaaS provider as a single node and forget
about the details?
I would say yes to your last question. And I suppose you could take more details from the definition of the deployment model and its elements, especially at the end of this paragraph:
They [Nodes] can be nested and can be connected into systems of arbitrary
complexity using communication paths. Typically, Nodes represent
either hardware devices or software execution environments.
and
ExecutionEnvironments represent standard software systems that
application components may require at execution time.
source: http://www.omg.org/spec/UML/2.5/Beta1/