How to achieve Spring Cloud Config Server canary - spring-cloud

Trying to wrap my mind around a canary rollout of a config change. All the app instances are using the same profile & label.
Essentially I want to apply a dynamic change, but not to all app instances at once.
Any thoughts? I want to avoid writing any consumer-side logic for this.

Related

How to create a programmable Kubernetes cron service?

I have read about Kubernetes CronJobs, but I'm looking for a more flexible scheduling solution (I'm using GKE). In particular, I have a web app where, when a user sets a checkbox on a dashboard, I want to trigger some service every X minutes. If the user clears the checkbox, the trigger will stop. I was hoping there are ready-made services for this. What's the best approach here?
I want to trigger some service every X minutes. If the user clears the checkbox, the trigger will stop
The simplest way to do this would be to have your web app create a CronJob via the Kubernetes API when the user enables this, and delete that object when they disable it.
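For illustration, a rough sketch of the equivalent imperative commands the web app could drive through the API (the job name, image and schedule are placeholders):

kubectl create cronjob user-123-task --image=myorg/task-runner:latest --schedule="*/5 * * * *"
kubectl delete cronjob user-123-task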
Though I'm not sure something like this would scale very well; it depends on your app. Going with Kubernetes CronJobs, each job would create a pod: allocate resources, pull the image, start the container, run the work, terminate. There's some overhead that could be avoided -- depending on what you're doing, this may or may not make sense. Another way to do this would be to implement a job queue in your application.
E.g., in Node.js I would use something like bee-queue, bull or kue. A single "worker" could then process jobs from multiple users, in parallel and/or with some concurrency limit. A timer (e.g., node-schedule) could trigger the jobs. The web frontend deals with enabling or disabling timers on behalf of users; the user selection may be kept in whatever SQL/NoSQL database you have available, or even in a ConfigMap (mind the size limit on its data!).
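If you went the ConfigMap route, a minimal sketch could look like this (the name, key and value are made up for illustration):

kubectl create configmap user-schedules --from-literal=user-123=every-5-minutes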
With a couple of workers (running as Deployments or a StatefulSet) and some master/slave Redis setup, I should be able to deal with lots of different jobs. Maybe add a HorizontalPodAutoscaler, allowing workers to be added or removed depending on their CPU or memory usage.
Whereas if I were to create a Kubernetes CronJob for each user requesting something, that could make for a lot of Pods to schedule, potentially wasting resources or testing my cluster's limits.
Triggering schedules is a typical use case for Google Cloud Functions, i.e. the serverless approach.
I think it's also more cost-effective than GKE.
Look at these docs:
https://cloud.google.com/scheduler/docs/tut-pub-sub
You might use a Cloud Function to invoke a GKE CronJob, or to create a Kubernetes ReplicaSet with 1 replica using an image for the scheduled job. It could be a Spring Boot microservice with @Scheduled and the actual schedule loaded from parameters. To disable the schedule you scale the pod down to 0 replicas.
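As a rough sketch (assuming the scheduled job runs as a Deployment named scheduled-job, which is only a placeholder), toggling it could look like:

kubectl scale deployment scheduled-job --replicas=0   # disable the schedule
kubectl scale deployment scheduled-job --replicas=1   # enable it again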
Remember that in order to reach the VPC of the GKE nodes you need Serverless VPC Access, because Cloud Functions are serverless.
Anyway, you can see that GKE is a cumbersome and costly approach for this.

In a Blue / Green deployment system, what is each one called?

I am working on setting up Blue/Green deployments for my Kubernetes system. I need a variable that indicates which one I am currently on (Blue or Green).
But I don't know what a single one of them is called. Channel, pipeline, side, part, state, ... ?
What is one side of the Blue/Green deployment system called?
Or is there no generally accepted name for this? (maybe I need to call my variable CurrentBlueGreenStatus)
I think what you are basically trying to do here is "reinvent the wheel". Blue/Green deployment is just a release concept or model, so to me the name is in fact deployment, or possibly deploymentType. In some cases it's called environment, app or even server.
In software engineering, blue-green deployment is a method of installing changes to a web, app, or database server by swapping alternating production and staging servers.

Is it good to put a complete application in one Kubernetes pod?

I have an application consisting of a frontend, a backend and a database.
At the moment the application is running on a Kubernetes cluster.
The frontend, backend and database each run in their own Pod, communicating via Services.
My idea is to put all these application parts (frontend, backend and DB) into one Pod, so I can make a Helm chart of it and for every new customer I only have to change the values.
The question is whether this is a good solution or not to be recommended.
No, it is a bad idea, and this is why:
First, the DB is a stateful container. When you update any of the components, you have to take down all the containers in the Pod; even a small frontend update takes everything down, and the application becomes unavailable.
Now let's say you have multiple replicas of this pod to avoid the issue mentioned above. This makes it extremely hard to scale the application, because every container gets scaled together when you most likely only need to scale the frontend or the backend. Also, creating multiple replicas of a database, depending on how it replicates the data, will make it slower. You also have to consider backup and restore of the data in case of failures.
In the same example, multiple replicas will make the Pods consume more resources than you actually need.
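To illustrate the scaling point: with the frontend and backend as separate Deployments (the names here are just placeholders), you scale only the part that needs it:

kubectl scale deployment frontend --replicas=5
kubectl scale deployment backend --replicas=2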
If you just want to deploy the resources without much customization, you could deploy each customer into a separate namespace, add network policies to prevent the namespaces from talking to each other, and deploy the raw YAML there, taking care to use ConfigMaps to load the different configuration for each.
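A rough sketch of that per-customer layout (the namespace, file names and paths are placeholders):

kubectl create namespace customer-a
kubectl create configmap app-config -n customer-a --from-file=config-customer-a.properties
kubectl apply -n customer-a -f ./manifests/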
If you want just a simple templating and deployment solution, you can use kustomize.
If you want the more complex setup and management provided by Helm, you could define all the pods in the chart; an example is the Prometheus chart.
You can create a Helm chart consisting of multiple pods or deployments, so you do not need to put them all in one pod just for that purpose. I would also not recommend that; for example, the database would most likely fit better in a StatefulSet.
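As a sketch of what such a chart could look like (file names are only illustrative), with one values file per customer:

mychart/
  Chart.yaml
  values.yaml
  templates/
    frontend-deployment.yaml
    backend-deployment.yaml
    database-statefulset.yaml

helm install customer-a ./mychart -f values-customer-a.yaml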

Why should I store kubernetes deployment configuration into source control if kubernetes already keeps track of it?

One of the documented best practices for Kubernetes is to store the configuration in version control. It is mentioned in the official best practices and also summed up in this Stack Overflow question. The reason is that this is supposed to speed up rollbacks if necessary.
My question is: why do we need to store this configuration if it is already stored by Kubernetes, and there are ways to easily go back to a previous version of the configuration using, for example, kubectl? An example is a command like:
kubectl rollout history deployment/nginx-deployment
Isn't storing the configuration an unnecessary duplication of a piece of information that we will then have to keep synchronized?
The reason I am asking this is that we are building a configuration service on top of Kubernetes. The user will interact with it to configure multiple deployments, and I was wondering if we should keep a history of the Kubernetes configuration and the content of ConfigMaps in a database for possible rollbacks, or if we should just rely on Kubernetes to retrieve the current configuration and roll back to previous versions.
You can use Kubernetes as your store of configuration, to your point; it's just that you probably shouldn't want to. By storing configuration as code, you get several benefits:
Configuration changes get regular code reviews.
They get versioned, are diffable, etc.
They can be tested, linted, and whatever else you desired.
They can be refactored, share code, and be documented.
And all this happens before actually being pushed to Kubernetes.
That may seem bad ("but then my configuration is out of date!"), but keep in mind that configuration is never fully in sync with reality anyway - just because you told Kubernetes you want 3 replicas running doesn't mean there are 3, or that 1 isn't temporarily down right now, and so on.
Configuration expresses intent. It takes a different process to actually notice when your intent changes or doesn't match reality, and make it so. For Kubernetes, that storage is etcd and it's up to the master to, in a loop forever, ensure the stored intent matches reality. For you, the storage is source control and whatever process you want, automated or not, can, in a loop forever, ensure your code eventually becomes reflected in Kubernetes.
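For example (the manifest path is hypothetical, the deployment name is taken from the question), that "make it so" step can be as simple as applying whatever is currently in source control:

kubectl apply -f k8s/nginx-deployment.yaml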
The rollback command, then, is just a very fast shortcut for "please do this right now!". It's for when your configuration intent was wrong and you don't have time to fix it properly. As soon as you roll back, you should go back to your configuration and update it there as well. In a sense this is indeed duplication, but it's a rare event compared to the normal flow, and the overall benefits outweigh this downside.
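A hedged sketch of that flow, reusing the deployment name from the question:

kubectl rollout undo deployment/nginx-deployment
# afterwards, edit the manifest in source control so the stored intent matches reality again, then commit, e.g.
git commit -am "roll back nginx-deployment to the previous image"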
A Kubernetes cluster doesn't store your configuration; it runs it, just as your server runs your application code.

Bluemix Auto Scaling API

Is there a way for me to programmatically get notified when Bluemix auto scaling has scaled up or down?
I'm reading streaming data from a queue and would like to make sure the number of instances I have is balanced and the data is partitioned correctly.
At present this kind of notification service is not available; all you can do is query the instance scaling history in the Web UI. I think this requirement is interesting and should be considered as something to provide to developers in the future.
This kind of alert isn't available yet, but you can write a simple script monitoring the output of
cf app (appname)
It returns the number of running instances and the state of each one; with the right combination of awk and grep (or a Perl script, for example) you could have your own alerter while waiting for this kind of functionality.
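A minimal sketch of such a script (the app name and polling interval are placeholders, and the grep may need adjusting to your cf CLI's exact output format):

#!/bin/sh
# Naive alerter: poll `cf app` and report whenever the running-instance count changes.
prev=""
while true; do
  count=$(cf app myapp | grep -c running)
  if [ "$count" != "$prev" ]; then
    echo "$(date) myapp now has $count running instance(s)"
    prev="$count"
  fi
  sleep 60
done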