Deploy several WordPress instances with Kubernetes

I am missing a feature in Kubernetes for horizontally scaling "standalone" applications (where one application instance is assigned to one user).
I would like to deploy several instances of WordPress with Kubernetes.
Each instance should have its own environment and data and be assigned to one user.
At first I thought about a StatefulSet, but the problem is that when you delete pods, you have to delete them in reverse order or all in parallel.
What if the 3rd user cancels the service and you want to delete only the third pod?
Deployments? Then I have to create one unique Pod for each user, each with its own volume. Where is the added value of Kubernetes there?
Do you have any better idea?
Regards,

I got an answer from a Slack user: a StatefulSet should not be used in this case.
It's better to use a separate Deployment for each user (wordpress-…).
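To make that concrete, here is a minimal sketch of what such a per-user Deployment could look like, assuming one PersistentVolumeClaim per user as well; all names, labels and the image tag are illustrative, not from the thread:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-user3              # one Deployment per user, e.g. wordpress-<user>
  labels:
    app: wordpress
    user: user3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
      user: user3
  template:
    metadata:
      labels:
        app: wordpress
        user: user3
    spec:
      containers:
        - name: wordpress
          image: wordpress:6.4-apache            # illustrative tag
          volumeMounts:
            - name: data
              mountPath: /var/www/html
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: wordpress-user3-data      # one PVC per user
Cancelling the third user then means deleting only that user's Deployment (and PVC), without touching anyone else's instance.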

Related

Any way we can add an ENV to a pod or a new pod in kubernetes?

For example, I want to add HTTP_PROXY to many pods, and to the new pods that Kubeflow 1.4 will generate, so that these pods can access the internet.
I searched and found that Istio can maybe do that, but it's too complex for me.
Second, there are too many YAML files in Kubeflow, so I cannot modify them one by one to use a ConfigMap or add the ENV to each of them.
So does anyone have a good, simple way to do this? Something like doing it in the Kubernetes configuration.
Use "PodPreset" object to inject common environment variables and other params to all the matching pods.
Please follow below article
https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/
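For reference, a minimal sketch of such a PodPreset (settings.k8s.io/v1alpha1, an alpha API that also requires the PodPreset admission plugin to be enabled); the label selector and proxy address below are placeholders:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: proxy-env
  namespace: kubeflow
spec:
  selector:
    matchLabels:
      inject-proxy: enabled          # only pods carrying this label get the variables
  env:
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"     # placeholder proxy address
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.svc,.cluster.local"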
PodPreset was indeed removed in v1.20, so on newer clusters you need a mutating admission webhook instead.
You will have to run an additional service in your cluster that will change the configuration of the pods.
Here is an example on which I based my own webhook for changing the configuration of pods in the cluster. In this example the developer adds a sidecar to the pod, but you can adapt the logic to inject the required ENV instead:
https://github.com/morvencao/kube-mutating-webhook-tutorial/blob/master/medium-article.md
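For orientation, here is a hedged sketch of the registration object for such a webhook, assuming you already run a webhook service that patches incoming Pods with the proxy variables; the service name, namespace, path and label below are made up for illustration:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: env-injector
webhooks:
  - name: env-injector.webhooks.svc
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                  # do not block pod creation if the webhook is down
    clientConfig:
      service:
        name: env-injector                 # your webhook Deployment's Service
        namespace: webhooks
        path: /mutate
      caBundle: "<base64-encoded CA certificate>"   # CA that signed the webhook's TLS cert
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    namespaceSelector:
      matchLabels:
        env-injection: enabled             # mutate only pods in opted-in namespaces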

See Every Configuration Field (a Schema?) for Kubernetes REST Objects

I'm new to Kubernetes (K8s). It's my understanding that in order to "do things" in a Kubernetes cluster, we interact with a Kubernetes REST API endpoint and create/update/delete objects. When these objects are created/updated/deleted, K8s sees those changes and takes steps to bring the system in line with the state of your objects.
In other words, you tell K8s you want a "deployment object" with container image foo/bar and 10 replicas, and K8s will create 10 running pods with the foo/bar image. If you update the deployment to say you want 20 replicas, K8s will start more pods.
My question: Is there a canonical description of all the possible configuration fields for these objects? That is, tutorials like this one do a good job of describing the simplest possible configuration to get an object like a Deployment working, but now I'm curious what else it is possible to do with Deployments beyond these hello-world examples.
Is there a canonical description of all the possible configuration fields for these objects?
Yes, there is the Kubernetes API reference e.g. for Deployment.
But when developing, the easiest way is to use kubectl explain <resource> and navigate deeper, e.g.:
kubectl explain Deployment.spec
and then go deeper, e.g.:
kubectl explain Deployment.spec.template
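As a taste of what the API reference and kubectl explain will surface beyond the hello-world tutorials, here is an illustrative (not exhaustive) Deployment that uses a few of those extra fields; the names and values are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 10
  revisionHistoryLimit: 5              # how many old ReplicaSets to keep for rollbacks
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2                      # allow 2 extra pods during an update
      maxUnavailable: 0                # never drop below the desired replica count
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: foo/bar               # the image from the question
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
          readinessProbe:              # gate traffic on the app being ready
            httpGet:
              path: /healthz
              port: 8080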

Include Pod creation time in Kubernetes Pod name

Is it possible to specify the Pod creation time as part of the k8s Pod Name?
Scenario:
I have many pods with the same name prefix (and a uniquely generated tail end to the name), and these names are all used as names of log groups.
I wish to distinguish between log groups by creation time.
Unfortunately AWS CloudWatch Logs console does not sort by log group creation time.
No, not with a Deployment at least; a StatefulSet would work, but you should really be using labels here.
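As a hedged sketch of that label-based approach: put a label on the pod template and group or filter on the label instead of encoding a timestamp in the pod name (the label key and values below are made up):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        log-group: my-app-2024-06      # bump this value whenever you want a new log group
    spec:
      containers:
        - name: app
          image: my-app:1.0            # placeholder image
Your log agent (or a query on that label) can then distinguish groups without relying on the pod name.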

How does k8s know which pod to update?

I'm currently getting started with Kubernetes, and so far, I have a question that I could not find answered anywhere.
Until now, I have learned what containers, pods, and replica sets are. I basically understand these things, but one thing I did not get is: if I update the manifest of a pod (or of a replica set) and re-POST it to k8s, how does k8s know which existing manifest this refers to?
Is this matching done by the manifest's name, i.e. by the name of the pod or the replica set? Or …?
In other words: If I update a manifest, how does k8s know that it is an updated one, and how does it detect which one is the one with the previous version?
You are right, k8s uses metadata.name for identifying resources. That name is unique per resource type (Pod/ReplicaSet/...) and namespace.
Well, for starters, let's get things straight. When you update a manifest, it is obvious what to update in the first place: the object you updated, i.e. the Deployment or ReplicaSet. When that is updated, the RollingUpdate kicks in, and this is what I assume you wonder about, as well as how ownership of a Pod is established in general. If you run kubectl get pod -o yaml you will find keys like ownerReferences, pod-template-hash and kubernetes.io/created-by, whose content should be rather self-explanatory. In the other direction (so not from the Pod but from the Deployment) you have a selector field, which defines which labels are used to filter Pods and find the right ones.
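For illustration, this is roughly the ownership metadata you would see on such a pod with kubectl get pod -o yaml; names, hash and UID are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: my-app-5d4f8c7b9d-x7k2q
  labels:
    app: my-app
    pod-template-hash: 5d4f8c7b9d            # ties the pod to one ReplicaSet revision
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: my-app-5d4f8c7b9d                # the ReplicaSet that owns this pod
      uid: 00000000-0000-0000-0000-000000000000   # placeholder UID
      controller: true
spec:
  containers:
    - name: app
      image: my-app:1.0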

Does Kubernetes provide a colocated Job container?

I wonder how one would implement a colocated auxiliary container in a Pod within a Deployment which does not provide a service but rather a job/batch workload.
Background of my question is that I want to deploy a scalable service in which each instance needs configuration after its start. This configuration is done via an HTTP POST to its local colocated service instance. I've implemented an auxiliary container for this in order to benefit from colocation, so the auxiliary container always knows which instance needs to be configured.
The problem is that the restartPolicy needs to be defined at the Pod level. I am looking for something like restart policy Always for the service and a different restart policy OnFailure for the configuration job.
I know that k8s provides the Job resource for such workloads. But is there an option to colocate those jobs with Pods?
Furthermore, I've stumbled across the so-called init containers, which might be defined via annotations. But these have the drawback that k8s ensures the actual Pod is only started after the init container has run. So for my scenario this seems unsuitable.
As I understand it, you need your service running in order to configure it.
Your solution is workable: you can set restartPolicy: Always, you just need a way to tell your one-off configuration container that it already ran. You could create and attach an emptyDir volume to your configuration container, create a file on it to mark a successful configuration, and check for this file from your process. After the initialization you enter a sleep loop. The downside is that some resources will be taken up by that container too.
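A minimal sketch of that first option, assuming the stamp-file guard described above; all names and images are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  restartPolicy: Always
  volumes:
    - name: guard-vol
      emptyDir: {}
  containers:
    - name: service
      image: my-service:1.0            # the actual service that needs configuring
    - name: configurator
      image: my-configurator:1.0       # POSTs the config to localhost, then sleeps in a loop
      volumeMounts:
        - name: guard-vol
          mountPath: /mnt/guard-vol    # the stamp file lives here, so a restart skips re-configuration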
Or you can just add an extra process in the same container and do the configuration (maybe with the file mentioned above as a guard to avoid configuring twice). So write a simple shell script like this and run it instead of your main process:
#!/bin/sh
# Run the one-off configuration in the background, guarded by a stamp file
# so that a container restart does not configure the instance twice.
(
  [ -f /mnt/guard-vol/stamp ] && exit 0
  /opt/my-config-process parameters && touch /mnt/guard-vol/stamp
) &
# Replace the shell with the main process, passing all arguments through.
exec /opt/my-main-process "$@"
Alternatively, you could implement a separate pod that queries the Kubernetes API for pods of your service with the label configured=false, configures them, and then changes the label via the API. You should also modify your Service to select only configured=true pods.
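A hedged sketch of the Service for that last approach, selecting only pods the external configurator has relabelled; label keys and values are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
    configured: "true"                 # the configurator flips this label after a successful POST
  ports:
    - port: 80
      targetPort: 8080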