Why should I store kubernetes deployment configuration into source control if kubernetes already keeps track of it? - kubernetes

One of the documented best practices for Kubernetes is to store the configuration in version control. It is mentioned in the official best practices and also summed up in this Stack Overflow question. The reason given is that this is supposed to speed up rollbacks if necessary.
My question is: why do we need to store this configuration if it is already stored by Kubernetes, and there are ways to easily go back to a previous version of the configuration using, for example, kubectl? An example is a command like:
kubectl rollout history deployment/nginx-deployment
Isn't storing the configuration an unnecessary duplication of a piece of information that we will then have to keep synchronized?
The reason I am asking this is that we are building a configuration service on top of Kubernetes. The user will interact with it to configure multiple deployments. I was wondering if we should keep a history of the Kubernetes configuration and the content of ConfigMaps in a database for possible rollbacks, or if we should just rely on Kubernetes to retrieve the current configuration and roll back to previous versions of the configuration.

You can use Kubernetes as your store of configuration, to your point, it's just that you probably shouldn't want to. By storing configuration as code, you get several benefits:
Configuration changes get regular code reviews.
They get versioned, are diffable, etc.
They can be tested, linted, and whatever else you desire.
They can be refactored, share code, and be documented.
And all this happens before actually being pushed to Kubernetes.
That may seem bad ("but then my configuration is out of date!"), but keep in mind that configuration never exactly matches reality anyway - just because you told Kubernetes you want 3 replicas running doesn't mean there are 3, or that one of them isn't temporarily down right now, and so on.
Configuration expresses intent. It takes a different process to actually notice when your intent changes or doesn't match reality, and make it so. For Kubernetes, that storage is etcd and it's up to the master to, in a loop forever, ensure the stored intent matches reality. For you, the storage is source control and whatever process you want, automated or not, can, in a loop forever, ensure your code eventually becomes reflected in Kubernetes.
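The loop described above can be sketched in a few lines. This is only an illustration, not Kubernetes code: `desired`, `observed`, `converge`, and `reconcile_once` are hypothetical names, and the dictionary update stands in for real API calls against the cluster.

```python
# Sketch of a reconciliation loop: stored intent ("desired") is compared
# against reality ("observed") and only the differences are acted on.

def converge(desired: dict, observed: dict) -> dict:
    """Return the changes needed to move `observed` toward `desired`."""
    actions = {}
    for key, want in desired.items():
        if observed.get(key) != want:
            actions[key] = want  # e.g. "set replicas to 3"
    return actions

def reconcile_once(desired: dict, observed: dict) -> dict:
    """One pass of the loop: compute the diff and 'apply' it."""
    actions = converge(desired, observed)
    observed.update(actions)  # stand-in for calling the cluster API
    return observed
```

Running this forever against etcd is, in essence, what the Kubernetes control plane does; running it against source control is what a deployment pipeline does.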
The rollback command, then, is just a very fast shortcut for "please do this right now!" - for example, kubectl rollout undo deployment/nginx-deployment. It's for when your configuration intent was wrong and you don't have time to fix it properly. As soon as you roll back, you should go back to your configuration in source control and update it there as well. In a sense, this is indeed duplication, but it's a rare event compared to the normal flow, and the overall benefits outweigh this downside.

A Kubernetes cluster doesn't store your configuration, it runs it, just as your server runs your application code.

Related

In a Kubernetes Deployment when should I use deployment strategy Recreate

Kubernetes provides us with two deployment strategies. One is Rolling Update and the other is Recreate. I should use Rolling Update when I don't want to go off air. But when should I be using Recreate?
There are basically two reasons why one would want/need to use Recreate:
Resource issues. Some clusters simply do not have enough resources to schedule additional Pods, which results in those Pods being stuck - and the update procedure with them. This happens especially with local development clusters and/or applications that consume a large amount of resources.
Bad applications. Some applications (especially legacy or monolithic setups) simply cannot handle it when new Pods - that do the exact same thing as they do - spin up. There are too many reasons as to why this may happen to cover all of them here but essentially it means that an application is not suitable for scaling.
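For completeness, selecting the Recreate strategy is a small change in the Deployment spec. A minimal sketch (the name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app            # hypothetical name
spec:
  replicas: 3
  strategy:
    type: Recreate            # all old Pods are terminated before new ones start
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      containers:
        - name: legacy-app
          image: example/legacy-app:2.0   # hypothetical image
```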
+1 to F1ko's answer, however let me also add a few more details and some real world examples to what was already said.
In a perfect world, where every application could be easily updated with no downtime, we would be fully satisfied with the Rolling Update strategy alone.
But as the world isn't a perfect place and things don't always go as smoothly as we would wish, in certain situations there is also a need for the Recreate strategy.
Suppose we have a stateful application running in a cluster, where individual instances need to communicate with each other. Imagine our application has recently undergone a major refactoring and the new version can't talk to instances running the old version any more. Moreover, we may not even want them to be able to form a cluster together, as we can expect that this would cause some unpredictable mess, so that neither the old instances nor the new ones work properly when both are available at the same time. So sometimes it's in our best interest to be able to first shut down every old replica, and only once we make sure none of them is running, spawn a replica that runs the new version.
This can also be the case when there is a major migration, let's say a major change in the database structure, and we want to make sure that no pod running the old version of our app is able to write any new data to the db during the migration.
So I would say that in the majority of cases it is a very application-specific, individual scenario, involving major migrations, legacy applications etc., which would require accepting a certain downtime and recreating all the pods at once, rather than updating them one-by-one as in the Rolling Update strategy.
Another example comes to my mind. Let's say you have an extremely old version of MongoDB running in a replica set consisting of 3 members and you need to migrate it to a modern, currently supported version. As far as I remember, individual members of the replica set can form a cluster only if there is at most 1 major version difference between them. So, if the difference is 2 or more major versions, old and new instances won't be able to keep running in the same cluster anyway. Imagine that you have enough resources to run only 4 replicas at the same time, so a rolling update won't help you much in such a case. To have a quorum, so that a master can be elected, you need at least 2 members out of the 3 available. If the new one can't form a cluster with the old replicas, it's much better to schedule a maintenance window, shut down all the old replicas, and have enough resources to start 3 replicas with the new version once the old ones are removed.

Kubernetes dynamic pod provisioning

I have an app I'm building on Kubernetes which needs to dynamically add and remove worker pods (which can't be known at initial deployment time). These pods are not interchangeable (so increasing the replica count wouldn't make sense). My question is: what is the right way to do this?
One possible solution would be to call the Kubernetes API to dynamically start and stop these worker pods as needed. However, I've heard that this might be a bad way to go since, if those dynamically-created pods are not in a replica set or deployment, then if they die, nothing is around to restart them (I have not yet verified for certain if this is true or not).
Alternatively, I could use the Kubernetes API to dynamically spin up a higher-level abstraction (like a replica set or deployment). Is this a better solution? Or is there some other more preferable alternative?
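If you go the route of one Deployment per worker, the manifest can be generated from a small template. This is a sketch only: `worker_deployment`, its parameters, and the image name are all hypothetical, and the resulting dict would still need to be submitted to the API server (for example via the official client library), which is not shown here.

```python
# Build a per-worker Deployment manifest as a plain dict. Because each
# worker gets its own Deployment (replicas: 1), the controller restarts
# its pod if it dies, which addresses the "nothing restarts bare pods"
# concern in the question.

def worker_deployment(worker_id: str, image: str) -> dict:
    name = f"worker-{worker_id}"
    labels = {"app": "worker", "worker-id": worker_id}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": 1,  # one non-interchangeable pod per worker
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "worker", "image": image}]},
            },
        },
    }
```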
If I understand you correctly you need ConfigMaps.
From the official documentation:
The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to Secrets, but provides a means of working with strings that don't contain sensitive information. Users and system components alike can store configuration data in ConfigMap.
Here you can find some examples of how to set it up.
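For illustration, a minimal ConfigMap and a Pod consuming it as environment variables might look like this (the names and image are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical name
data:
  LOG_LEVEL: "info"
  WORKERS: "4"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:1.0    # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config    # each key becomes an environment variable
```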
Please try it and let me know if that helped.

Retaining and Migrating Actor / Service State

I've been looking at using Service Fabric as a platform for a new solution that we are building, and I am getting hung up on data / state management. I really like the concept of reliable services and the actor model, and as we have started to prototype some things out, it seems to be working well.
With that being said, I am getting hung up on state management and how I would use it in a 'real' project. I am also a little concerned that the data feels like a black box that I can't interrogate or manipulate directly if needed. A couple of scenarios I've thought about are:
How would I share state between two developers on a project? I have an Actor, and as long as I am debugging the actor, my state is maintained, replicated, etc. However, when I shut it down, the state is all lost. More importantly, someone else on my team would need to set up the same data as I do; this is fine for transactional data, but certain 'master' data should just be constant.
Likewise, I am curious about how I would migrate data changes between environments. We periodically pull production data down from our SQL Azure instance today to keep our test environment fresh, and we also push changes up from time to time depending on the requirements of the release.
I have looked at the backup and restore process, but it feels cumbersome, especially in the development scenario. Asking someone to restore (or scripting the restore of) every partition of every stateful service seems like quite a bit of work.
I think that the answer to both of these questions is that I can use the stateful services, but I need to rely on an external data store for anything that I want to retain. The service would check for state when it was activated and use the stateful service almost as a write-through cache. I'm not suggesting that this needs to be a uniform design choice, more on a service by service basis - depending on the service needs.
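The write-through idea in the paragraph above can be sketched as follows. This is illustrative only, not a Service Fabric API: `ExternalStore` is a stand-in for a durable database such as SQL Azure, and all names are invented for the example.

```python
# Sketch of "stateful service as a write-through cache": every write goes
# to the external store first, reads are served from memory, and a cold
# start hydrates from the store - so the durable copy always wins.

class ExternalStore:
    """Stand-in for a durable external database."""
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class WriteThroughState:
    def __init__(self, store: ExternalStore):
        self._store = store
        self._cache = {}
    def set(self, key, value):
        self._store.save(key, value)   # durable write first
        self._cache[key] = value       # then the in-memory copy
    def get(self, key):
        if key not in self._cache:     # cold start: hydrate from the store
            self._cache[key] = self._store.load(key)
        return self._cache[key]
```

A second `WriteThroughState` built on the same store (simulating a service re-activation, or a teammate's machine) sees the data written by the first, which is exactly the sharing behavior the question asks about.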
Does that sound right, am I overthinking this, missing something, etc?
Thanks
Joe
If you want to share Actor state between developers, you can use a shared cluster (in Azure or on-premises). Make sure you always do upgrade-style deployments, so state will survive. State is persisted if you configure the Actor to do so.
You can migrate data by doing a backup of all replicas of your service and restoring them on a different cluster (have the service running and trigger data loss). It's cumbersome, yes, but at this time it's the only way (or store state externally).
Note that state is safe in the cluster; it's stored on disk and replicated. There's no need to have an external store, provided you do regular state backups and keep them outside the cluster. Stateful services can be more than just caches.

How to manage state in microservices?

First of all, this is a question regarding my thesis for school. I have done some research about this, it seems like a problem that hasn't been tackled yet (might not be that common).
Before jumping right into the problem, I'll give a brief example of my use case.
I have multiple namespaces containing microservices that depend on a state X. To manage this, the microservices are put in a namespace named after the state (so namespaces state_A, state_B, ...).
Important to know is that each microservice needs this state at startup. It will download necessary files, ... according to the state. After launching with state A version 1, it is very likely that the state gets updated every month. When this happens, it is important that all the microservices that depend on state A upgrade whatever is necessary (databases, in-memory state, ...).
My current approach to this problem is simply using events: the microservices that need updates when the state changes can subscribe to the event and migrate/upgrade accordingly. The only problem I'm facing is that while a service is upgrading, it should still work. So somehow I should duplicate the service first, let the duplicate upgrade, and when the upgrade is successful, shut down the original. Because of this, the orchestration service used would have to be able to create duplicates (including duplicating the state).
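One common way to implement this duplicate-then-switch idea in Kubernetes is a blue/green setup: run the upgraded copy alongside the old one and flip a Service selector once the new copy is healthy. A minimal sketch (names, labels, and ports are hypothetical):

```yaml
# Two Deployments run side by side, distinguished by a version label;
# the Service selector decides which one receives traffic.
apiVersion: v1
kind: Service
metadata:
  name: state-a-service       # hypothetical name
spec:
  selector:
    app: worker
    state-version: "2"        # flip from "1" to "2" once the upgraded copy is healthy
  ports:
    - port: 80
      targetPort: 8080
```

The old Deployment (labeled state-version "1") is only deleted after the switch, so the service keeps working throughout the upgrade.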
My question is, are there already solutions for my problem (and if yes, which ones)? I have looked into Netflix Conductor (which seemed promising with its workflows and events), Amazon SWF, Marathon and Kubernetes, but none of them covers my problem.
Ideally, the solution should not be bound to a specific platform (Azure, GCE, ...).
For uninterrupted upgrades you should use clusters of nodes providing your service and perform a rolling update, which takes out a single node at a time, upgrading it and leaving the rest of the nodes to continue serving. I recommend looking at the concept of virtual services (e.g. in Kubernetes) and rolling updates.
For inducing state I would recommend looking into container initialization mechanisms. For example, in Docker you can use entrypoint scripts, and in Kubernetes there is the concept of init containers. You should note, though, that today there is a trend to decouple services and state, meaning the state is kept in a DB that is separate from the service deployment, allowing you to view the service as a stateless component that can be replaced without losing state (given that the interface between the service and the required state did not change). This is good in scenarios where the service changes more frequently and the DB design less frequently.
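As a sketch of the init container approach, assuming a hypothetical state-service endpoint and hypothetical image names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stateful-worker            # hypothetical name
spec:
  initContainers:
    - name: fetch-state
      image: curlimages/curl:8.7.1 # any image with curl would do
      command: ["sh", "-c", "curl -fsSL http://state-service/state-a > /state/current"]
      volumeMounts:
        - name: state
          mountPath: /state
  containers:
    - name: worker
      image: example/worker:1.0    # hypothetical image
      volumeMounts:
        - name: state
          mountPath: /state        # main container starts only after the state is fetched
  volumes:
    - name: state
      emptyDir: {}
```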
Another note - I am not sure that representing state in a namespace is a good idea. Typically a namespace is a static construct for organization (of code, services, etc.) that aims for stability.

Reverting mechanisms in Salt

I'm using Salt for my project deployment. What if the deployment is unsuccessful? Does any mechanism for reverting exist in Salt?
By reverting, I mean that I want to be able to revert my database and my system to the most recent working state.
As far as I can tell, Salt does not provide virtualization and/or snapshot capabilities out of the box.
However, Salt state definitions are idempotent, meaning they describe a final state and are safe to apply multiple times. Note that this helps only in case something broke in the middle; then the fix is just to retry the state, e.g.
salt minion state.sls mystate
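For illustration, a minimal idempotent state definition (mystate.sls) might look like this; the package and service names are examples:

```yaml
# Re-applying this state is safe: Salt only acts when the described
# state differs from reality (installs the package only if missing,
# starts the service only if stopped).
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx
```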
Having said that, if you seek some kind of recovery/failover guarantee, there are several ways to skin the rabbit.
I have no experience in this particular area, so I might be wrong; however, I believe SaltStack integrates with Docker and other containerization technologies. So one way would be to build your failover procedure on top of that.
If your target platform is a cloud (AWS, DigitalOcean etc.), you might adjust your topology so that you spin up new instances from snapshots, perform Salt state updates, perform smoke/sanity tests, register instances in the LoadBalancer, etc. If any of those steps fails - no big deal, restart later.