I have an app I'm building on Kubernetes which needs to dynamically add and remove worker pods (which can't be known at initial deployment time). These pods are not interchangeable (so increasing the replica count wouldn't make sense). My question is: what is the right way to do this?
One possible solution would be to call the Kubernetes API to dynamically start and stop these worker pods as needed. However, I've heard this might be a bad approach: if those dynamically created pods aren't managed by a ReplicaSet or Deployment, nothing will restart them when they die (I haven't yet verified whether that's actually true).
Alternatively, I could use the Kubernetes API to dynamically spin up a higher-level abstraction (like a ReplicaSet or Deployment). Is this a better solution, or is there some other preferable alternative?
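For illustration, by "dynamically spin up" I mean something along these lines (the names and image here are made up):

    kubectl create deployment worker-task-123 --image=myorg/worker:latest
    # ...and later, when that particular worker is no longer needed:
    kubectl delete deployment worker-task-123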
If I understand you correctly, you need ConfigMaps.
From the official documentation:
The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to Secrets, but provides a means of working with strings that don’t contain sensitive information. Users and system components alike can store configuration data in ConfigMap.
Here you can find some examples of how to set it up.
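For instance, a rough sketch (the names, keys, and image below are all made up) could look like this:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: worker-config              # hypothetical name
    data:
      WORKER_MODE: "batch"
      WORKER_QUEUE: "jobs"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: worker                     # hypothetical pod
    spec:
      containers:
        - name: worker
          image: myorg/worker:latest   # placeholder image
          envFrom:
            - configMapRef:
                name: worker-config    # exposes the keys above as environment variables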
Please try it and let me know if that helped.
I have my application deployed on a Kubernetes cluster, managed by operator (A). I am mounting secrets with SSL key material into the deployment, so the application can access their content.
I have a separate operator (B) deployment, which is responsible for creating those secrets with the SSL key material. Now I have a use case where my secrets are recreated by operator (B), and it deletes/restarts the pods that are managed by operator (A).
I am trying to understand: is it common practice to allow a separately deployed operator to delete pods?
My perception was that an operator should work only with the resources it manages, nothing more.
Community wiki to summarise the topic.
If it is as you say and both operators are proprietary, it is impossible to give a definite yes or no answer. Everything will depend on what is really going on there, and we are not able to check and evaluate it.
Take a look at the helpful comments provided by David Maze:
That sort of seems like a bug...but also, Pods are pretty disposable and the usual expectation is that a ReplicaSet or another controller will recreate them...?
Note that the essence of the Kubernetes controller model is that the controller looks at the current state of the Kubernetes configuration store (not changes or events, just which objects exist and which don't) and tries to make the things it manages match that, so if the controller believes it should manage some external resource and there's not a matching Kubernetes object, it could delete it.
I have an application consisting of a frontend, a backend, and a database.
At the moment the application is running on a Kubernetes cluster.
The frontend, backend, and database each run in their own Pod and communicate via Services.
My idea is to put all these application parts (frontend, backend, and DB) into one Pod, so I can make a Helm chart of it, and for every new customer I only have to change the values.
The question is whether this is a good solution or whether it should be avoided.
No, it is a bad idea. Here is why:
First, the DB is a stateful container. When you update any of the components you have to take down every container in the Pod, so even a small frontend update takes everything down and makes the application unavailable.
Say you run multiple replicas of this Pod to avoid the issue above. That makes it extremely hard to scale the application, because every container is scaled together when you most likely only need the frontend or the backend to scale. It also means running multiple replicas of the database which, depending on how it replicates the data, can make it slower; and you still have to consider backup and restore of the data in case of failures.
In the same example, multiple replicas make the Pods consume far more resources than you actually need.
If you just want to deploy the resources without much customization, you could deploy the raw YAML into a separate namespace per customer, add network policies so the namespaces cannot talk to each other, and use ConfigMaps to load the different configuration for each one.
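As a sketch of the isolation part (the namespace name is a placeholder, and this assumes your CNI plugin actually enforces NetworkPolicy), a policy that only allows traffic from within the same namespace could look like:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace-only
      namespace: customer-a            # hypothetical per-customer namespace
    spec:
      podSelector: {}                  # applies to every pod in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}          # only pods from this same namespace may connect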
If you want just a simple templating and deployment solution, you can use kustomize.
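For example, a minimal sketch (the paths, namespace, and keys are made up) with a shared base and one overlay per customer:

    # overlays/customer-a/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../../base                     # the shared frontend/backend/DB manifests
    namespace: customer-a
    configMapGenerator:
      - name: app-config
        literals:
          - DB_NAME=customer_a

Each customer would then be deployed with something like kubectl apply -k overlays/customer-a.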
If you want the more complex setup and management provided by Helm, you could define all the pods in one chart; the Prometheus chart is an example of this.
You can create a Helm chart consisting of multiple pods or deployments, so you do not need to put them into one Pod just for that purpose. I would also not recommend doing that, since the database, for example, would most likely be a better fit for a StatefulSet.
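As a rough illustration of that last point (the image, names, and sizes are placeholders; the frontend and backend would remain ordinary Deployments in the same chart):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db
      replicas: 1
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
            - name: postgres
              image: postgres:15       # placeholder image
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:            # gives each replica its own persistent volume
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi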
I have a few kubefiles defining Kubernetes services and deployments. When I create a cluster of 4 nodes on GCP (never changes), all the small kube-system pods are spread across the nodes instead of filling one at a time. Same with the pods created when I apply my kubefiles.
The problem is sometimes I have plenty of available total CPU for a deployment, but its pods can't be provisioned because no single node has that much free. It's fragmented, and it would obviously fit if the kube-system pods all went into one node instead of being spread out.
I can avoid problems by using bigger/fewer nodes, but I feel like I shouldn't have to do that. I'd also rather not deal with pod affinity settings for such a basic testing setup. Is there a solution to this, maybe a setting to have it prefer filling nodes in order? Like using an already opened carton of milk instead of opening a fresh one each time.
Haven't tested this, but the order I apply files in probably matters, meaning applying the biggest CPU users first could help. But that seems like a hack.
I know there's some discussion on rescheduling that gets complicated because they're dealing with a dynamic node pool, and it seems like they don't have it ready, so I'm guessing there's no way to have it rearrange my pods dynamically.
You can write your own scheduler. Almost all components in k8s are replaceable.
I know you won't. If you don't want to deal with affinity, you definitely won't write your own scheduler. But know that you have that option.
With GCP's native scheduler, try to have resource requests and limits set on all your pods.
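As a minimal sketch (the values and image are placeholders), every container would carry something like:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app                # placeholder name
    spec:
      containers:
        - name: app
          image: nginx                 # placeholder image
          resources:
            requests:                  # what the scheduler uses when placing the pod
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi

With requests set on every pod, the scheduler's placement at least becomes predictable; whether it then spreads pods out or packs them onto as few nodes as possible is a scheduler policy decision.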
One of the documented best practices for Kubernetes is to store the configuration in version control. It is mentioned in the official best practices and also summed up in this Stack Overflow question. The reason is that this is supposed to speed-up rollbacks if necessary.
My question is, why do we need to store this configuration if this is already stored by Kubernetes and there are ways with which we can easily go back to a previous version of the configuration using for example kubectl? An example is a command like:
    kubectl rollout history deployment/nginx-deployment
Isn't storing the configuration an unnecessary duplication of a piece of information that we will then have to keep synchronized?
The reason I am asking is that we are building a configuration service on top of Kubernetes. The user will interact with it to configure multiple deployments. I was wondering whether we should keep a history of the Kubernetes configuration and the content of ConfigMaps in a database for possible rollbacks, or whether we should just rely on Kubernetes to retrieve the current configuration and roll back to previous versions.
You can use Kubernetes as your store of configuration, to your point; it's just that you probably shouldn't want to. By storing configuration as code, you get several benefits:
Configuration changes get regular code reviews.
They get versioned, are diffable, etc.
They can be tested, linted, and whatever else you desire.
They can be refactored, share code, and be documented.
And all this happens before actually being pushed to Kubernetes.
That may seem bad ("but then my configuration is out of date!"), but keep in mind that configuration never exactly matches reality anyway: just because you told Kubernetes you want 3 replicas running doesn't mean there are 3, or that 1 of them isn't temporarily down right now, and so on.
Configuration expresses intent. It takes a different process to actually notice when your intent changes or doesn't match reality, and make it so. For Kubernetes, that storage is etcd and it's up to the master to, in a loop forever, ensure the stored intent matches reality. For you, the storage is source control and whatever process you want, automated or not, can, in a loop forever, ensure your code eventually becomes reflected in Kubernetes.
The rollback command, then, is just a very fast shortcut to "please do this right now!". It's for when your configuration intent was wrong and you don't have time to fix it. As soon as you roll back, you should chase your configuration and update it there as well. In a sense, this is indeed duplication, but it's a rare event compared to the normal flow, and the overall benefits outweigh this downside.
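For example (reusing the deployment from the question; the manifest path is hypothetical):

    # immediate fix in the cluster:
    kubectl rollout undo deployment/nginx-deployment
    # then fix the intent at its source: edit the manifest in version control,
    # get it reviewed, and re-apply it
    kubectl apply -f deploy/nginx-deployment.yaml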
A Kubernetes cluster doesn't store your configuration, it runs it, just as your server runs your application code.
I have seen that an HPA can scale based on CPU usage. That is super cool.
However, my scenario is that the stateful app (a container in a pod) maps one-to-one to the downstream API results. For example, the downstream API returns the maximum and expected capacity, something like {response: 10}. I would like a ReplicaSet, StatefulSet, or some other Kubernetes controller to obtain this value and auto-scale the pods to 10. Unfortunately, the number of pod replicas is hardcoded in the YAML file.
If I were doing this manually, I think I could do it by running a scheduler whose job is to watch the API and run the kubectl scale command based on the downstream API results. But that can be error prone, and it is another system I would need to maintain. I guess this logic should belong in a Kubernetes controller?
May I ask whether someone has done this before, and what the way to configure it is?
Thanks in advance.
Unfortunately, it is not possible to use an HPA in that mode, but your idea of how to scale is right.
The HPA is designed to analyze metrics and decide how many pods need to be spawned based on those metrics. It uses scaling rules and can only spawn pods one by one based on the result of its decision.
Moreover, it uses the standard Kubernetes API to scale pods.
Because the scaling logic is already in your application, you can use the same API to scale your pods. By the way, kubectl scale interacts with the cluster in exactly the same way.
So you can use, for example, a CronJob with a small application that calls your application's API every 5 minutes and runs kubectl scale with the proper name of the deployment to scale your app.
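A rough sketch of that CronJob (the names, image, URL, and JSON parsing are all assumptions; the image is assumed to bundle kubectl, curl, and jq):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: capacity-scaler                      # hypothetical name
    spec:
      schedule: "*/5 * * * *"                    # every 5 minutes, as discussed below
      jobTemplate:
        spec:
          template:
            spec:
              # needs a ServiceAccount with RBAC rights to the workload's scale subresource
              serviceAccountName: scaler
              restartPolicy: OnFailure
              containers:
                - name: scaler
                  image: myorg/kubectl-curl-jq:latest   # hypothetical tooling image
                  command:
                    - /bin/sh
                    - -c
                    - |
                      # ask the downstream API for the expected capacity, e.g. {"response": 10}
                      REPLICAS=$(curl -s http://downstream-api/capacity | jq -r '.response')
                      kubectl scale deployment/my-app --replicas="$REPLICAS"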
But please keep in mind that you need to somehow control the frequency of scaling pods up and down; that will make your application more stable. That's why I think scaling no more often than once every 5 minutes is OK, while trying to do it every minute is generally not the best idea.
And of course, you could create a daemon and run it with a Deployment instead, but I think the CronJob solution is easier and faster to implement.