I am new to Kubernetes and I have some functionality that I need to implement.
I need to set an env variable for only one Docker container in a service.
For example: if I have 3 user containers, then 1 of them needs to have an env variable named master.
I did this with Nomad. Nomad sets an env variable named NOMAD_ALLOC_INDEX that gives me the index of the container; that way I could check that if the container index was 0, then it is the master.
I tried to find out whether Kubernetes has a similar variable, but didn't find one anywhere.
I also tried to find an alternative solution on Google, but ended up with nothing.
Any ideas how I can achieve this?
If you want sequential indexes, a StatefulSet is your solution. Otherwise, look up Kubernetes leader election; there are ways to solve it with e.g. a sidecar container performing leader election and exposing its status via an HTTP endpoint, so you can curl localhost:port and see whether the pod is the master or not.
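For illustration, here is a minimal sketch of the sidecar approach, assuming a leader-elector sidecar that serves the current leader's name on a local port (the port 4040, the "name" JSON field and the /app/run binary are placeholders, not anything Kubernetes provides by itself):

#!/bin/sh
# Ask the colocated leader-elector sidecar who the current leader is.
# Port 4040 and the "name" field in the JSON response are assumptions of this sketch.
LEADER=$(curl -s http://localhost:4040/ | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
if [ "$LEADER" = "$(hostname)" ]; then
    echo "this pod is the master"
    exec /app/run --master
else
    exec /app/run
fi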
Is there any way to add an ENV to a pod, or to the new pods it creates, in Kubernetes?
For example, I want to add HTTP_PROXY to many pods and to the new pods that Kubeflow 1.4 will generate, so that these pods can access the internet.
I searched and found that Istio might do that, but it's too complex for me.
Secondly, there are too many YAMLs in Kubeflow, so I cannot modify them one by one to use a ConfigMap or add an ENV in each of them.
So does anyone have a good, simple way to do this? Like doing it in the Kubernetes configuration.
Use "PodPreset" object to inject common environment variables and other params to all the matching pods.
Please follow below article
https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/
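For older clusters where PodPreset still exists (it is an alpha API and was removed in v1.20), a rough sketch could look like this; the label selector and proxy address are placeholders, and the matching pods would need to carry the inject-proxy=enabled label:

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-proxy
  namespace: kubeflow
spec:
  selector:
    matchLabels:
      inject-proxy: enabled            # placeholder label your pods would need to carry
  env:
    - name: HTTP_PROXY
      value: "http://proxy.example.com:3128"   # placeholder proxy address
    - name: HTTPS_PROXY
      value: "http://proxy.example.com:3128"   # placeholder proxy address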
If PodPreset is indeed removed from v1.20, then you seem to need a webhook.
You will have to run an additional service in your cluster that will change the configuration of the pods.
Here is an example on the basis of which I created my webhook, which changed the configuration of the pods in the cluster. In this example the developer adds a sidecar to the pod, but you can use your own logic to inject the required ENV:
https://github.com/morvencao/kube-mutating-webhook-tutorial/blob/master/medium-article.md
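For reference, the mutation the webhook returns in its AdmissionReview response is just a JSONPatch; a sketch of a patch that injects an env var into a pod's first container (assuming that container declares no env list yet) might look like this:

[
  {
    "op": "add",
    "path": "/spec/containers/0/env",
    "value": [
      { "name": "HTTP_PROXY", "value": "http://proxy.example.com:3128" }
    ]
  }
]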
I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same Deployment to have different args used with the starting command.
I have an application which runs Spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and Spark app name.
I've read about StatefulSets and searched the documentation, but I didn't find a solution to my problem. Since every Pod needs to use a different port, I need that port to be declared in a Service, if I understood correctly, and also passed directly as an argument to the pod command in the args.
Is there a way to obtain this without using multiple Deployments, one for each pod I need to create? Because that is the only solution I can think of, but it can't be scaled after being deployed.
I'm using Helm to deploy the application, so I can easily create as many Deployments and/or Services as needed, but I would like to find a solution which can scale at runtime, if possible.
I don't think you can have a Deployment which creates Pods from different specs. You can't have it in Kubernetes, and Helm won't help here (since Helm is just a template manager over Kubernetes configurations).
What you can do is specify each Pod as a separate configuration (for a single Pod you don't necessarily need a Deployment) and let Helm manage it.
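Since Helm is already in use, a hedged sketch of "each Pod as a separate configuration" as a Helm template; the values (podCount, image) and the port scheme are assumptions of this sketch, and scaling still means re-running helm upgrade with a different podCount:

{{- range $i := until (int $.Values.podCount) }}
---
apiVersion: v1
kind: Pod
metadata:
  name: spark-app-{{ $i }}
spec:
  containers:
    - name: spark-app
      image: {{ $.Values.image }}
      # each rendered Pod gets its own port and Spark app name
      args: ["--port", "{{ add 7000 $i }}", "--app-name", "spark-app-{{ $i }}"]
{{- end }}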
Posting the solution I used since it could be useful for other people searching around.
In the end I found a good configuration to solve my problem. I used a StatefulSet to declare the deployment of the Spark application. Associated with the StatefulSet is a headless Service which exposes each pod on a specific port.
A StatefulSet can declare a property spec.serviceName which can be set to the name of a headless Service to create a unique network identity for each Pod, something like <pod_name>.<service_name>.
Additionally, each Pod has a unique and unchanging name, which is created from the application name plus an ordinal starting from 0 for each replica Pod.
Using a startup script in the Docker image and injecting the pod name from the metadata into the environment of each Pod, I was able to use a different configuration for each pod: even with the same deployment, each pod has its own unique metadata name, and I can use the StatefulSet's Service to obtain what I needed.
This way, the StatefulSet is scalable at runtime and works as expected.
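A hedged sketch of that setup (names, image and port are placeholders); the pod name is injected via the downward API, so the startup script can derive its own port and Spark app name from it:

apiVersion: v1
kind: Service
metadata:
  name: spark-app
spec:
  clusterIP: None                      # headless Service: stable DNS name per pod
  selector:
    app: spark-app
  ports:
    - name: driver
      port: 7077
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-app
spec:
  serviceName: spark-app               # ties the pods to the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: spark-app
  template:
    metadata:
      labels:
        app: spark-app
    spec:
      containers:
        - name: spark-app
          image: my-spark-app:latest   # placeholder image
          command: ["sh", "/opt/start.sh"]   # startup script reads POD_NAME
          env:
            - name: POD_NAME           # resolves to spark-app-0, spark-app-1, ...
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name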
Hey, I am not sure if this will exactly match your scenario, but I think it is something you can try: use a sidecar container to run the replica instances. A sidecar is a container which runs along with the main container, shares the same namespace, and can share volumes with the other containers.
Now, to pass different arguments to each container or sidecar, you will have to tweak the Dockerfile, or rather tweak the way your container starts.
Create a start.sh script which accepts the arguments and starts the container with them. The trick here is to take the arguments from environment variables, thus allowing you to configure them later from ConfigMaps or the pod env.
Here is an example of a PHP/Laravel application running the same code and starting with different arguments. The start.sh file looks like this:
#!/bin/sh
if [ "${CONTAINER_ROLE}" = "queue" ]; then
    echo "Running the queue..."
    php artisan queue:work --queue=${QUEUENAME}
    echo "Queue Started"
else
    echo "Running Iceberg."
    exec apache2-foreground
fi
And a sample Dockerfile looks like this:
FROM php:7.1.24-apache
COPY . /srv/app
...
...
RUN chown -R www-data:www-data /srv/app \
    && a2enmod remoteip && a2enmod rewrite
WORKDIR /srv/app
RUN chmod +x .docker/start.sh
CMD ["sh", ".docker/start.sh"]
Let me know how it goes.
I wonder how one would implement a colocated auxiliary container in a Pod within a Deployment which does not provide a service but rather a job/batch workload?
The background of my question is that I want to deploy a scalable service in which each instance needs configuration after its start. This configuration is done via an HTTP POST to its local, colocated service instance. I've implemented an auxiliary container for this in order to benefit from colocation, so the auxiliary container always knows which instance needs to be configured.
The problem is that the restartPolicy needs to be defined at the Pod level. I am looking for something like restart policy Always for the service and a different restart policy OnFailure for the configuration job.
I know that k8s provides the Job resource for such workloads. But is there an option to colocate those jobs with Pods?
Furthermore, I've stumbled across the so-called init containers, which might be defined via annotations. But they have the drawback that k8s ensures the actual Pod is only started after the init container has run, so for my scenario they seem unsuitable.
As I understand it, you need your service running in order to configure it.
Your solution is workable and you can set restartPolicy: Always; you just need a way to tell your one-off configuration container that it already ran. You could create and attach an emptyDir volume to your configuration container, create a file on it to mark the configuration as successful, and check for this file from your process. After the initialization you enter a sleep loop. The downside is that some resources will still be taken up by that container.
Or you can just add an extra process in the same container and do the configuration there (maybe with the file mentioned above as a guard to avoid configuring twice). So write a simple shell script like this and run it instead of your main process:
#!/bin/sh
(
    # skip if the stamp file shows that configuration already happened
    [ -f /mnt/guard-vol/stamp ] && exit 0
    /opt/my-config-process parameters && touch /mnt/guard-vol/stamp
) &
exec /opt/my-main-process "$@"
Alternatively, you could implement a separate pod that queries the Kubernetes API for pods of your service with the label configured=false, configures them, and flips the label via the API. You should also modify your Service to select configured=true pods.
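A hedged sketch of that external configurator loop using kubectl; the labels, port and path are assumptions:

#!/bin/sh
# find pods of the service that have not been configured yet
for POD in $(kubectl get pods -l app=my-service,configured=false -o name); do
    IP=$(kubectl get "$POD" -o jsonpath='{.status.podIP}')
    # POST the configuration to the colocated service instance (port/path assumed)
    curl -sf -X POST "http://$IP:8080/configure" -d @config.json || continue
    # flip the label so the Service starts selecting this pod
    kubectl label "$POD" configured=true --overwrite
done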
A new Kubernetes Deployment with 2+ replicas for high availability.
I want to be able to execute a command on the first pod only, let's say create a DB, and let the other replicas wait for the first one to complete.
To implement this, I just want to know in the pod if this is replica #1 or not.
So in the pod's entry point I can test:
if [ $REPLICA_ID -eq 1 ]; then
    CreateDB
else
    WaitForDB
fi
Can this be done in Kubernetes?
In Kubernetes, a Deployment is considered stateless and therefore doesn't provide the feature you're looking for. You should rather look into StatefulSets and their features.
A StatefulSet, for example, supports ordered creation, and combined with the generally available readinessProbe for your pods you could create the desired behaviour. Also, the pod name is stable within a StatefulSet, so your test could be done against the hostname of the Pod.
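For example, a hedged sketch of an entrypoint that derives the replica ordinal from the StatefulSet hostname, assuming a StatefulSet named db-app so hostnames look like db-app-0, db-app-1, ... (note the ordinal starts at 0, not 1):

#!/bin/sh
# the hostname is <statefulset-name>-<ordinal>; take everything after the last dash
ORDINAL="${HOSTNAME##*-}"
if [ "$ORDINAL" -eq 0 ]; then
    CreateDB
else
    WaitForDB
fi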
Instead of the accepted answer, wouldn't an init container fit your problem description better?
Add some kind of semaphore system (if needed) to ensure it is executed correctly?
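A hedged sketch of that idea: an init container that creates the DB idempotently before the main container starts, so every replica can run it safely and no replica numbering is needed (image and script names are placeholders):

spec:
  initContainers:
    - name: init-db
      image: myapp:latest                          # placeholder image
      # the script must be a no-op if the DB already exists,
      # which acts as the "semaphore" mentioned above
      command: ["sh", "-c", "create-db-if-not-exists.sh"]
  containers:
    - name: app
      image: myapp:latest                          # placeholder image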
I am following guide [1] to create a multi-node K8s cluster which has 1 master and 2 nodes. Also, a label needs to be set on each node respectively.
Node 1 - label name=orders
Node 2 - label name=payment
I know that the above could be achieved by running kubectl commands:
kubectl get nodes
kubectl label nodes <node-name> <label-key>=<label-value>
But I would like to know how to set the label when creating a node. Node creation guidance is in [2].
Appreciate your input.
[1] https://coreos.com/kubernetes/docs/latest/getting-started.html
[2] https://coreos.com/kubernetes/docs/latest/deploy-workers.html
In fact, there has been a trivial way to achieve that since 1.3 or thereabouts.
What is responsible for registering your node is the kubelet process launched on it; all you need to do is pass it a flag like --node-labels 'role=kubemaster'. This is how I differentiate nodes between different autoscaling groups in my AWS k8s cluster.
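For the labels from the question, a hedged sketch of what that could look like as a systemd drop-in for the kubelet; the drop-in path and the KUBELET_EXTRA_ARGS variable are conventions of kubeadm-style installs and vary by distribution, so on other setups you would append --node-labels directly to the kubelet invocation:

# /etc/systemd/system/kubelet.service.d/20-labels.conf   (path is an assumption)
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-labels=name=orders"

On the second worker you would pass --node-labels=name=payment instead.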
This answer is now incorrect (and has been for several versions of Kubernetes). Please see the correct answer by Radek 'Goblin' Pieczonka
There are a few options available to you. The easiest, IMHO, would be to use a systemd unit to install and configure kubectl, then run the kubectl label command. Alternatively, you could use curl to update the labels in the node's metadata directly.
That being said, while I don't know your exact use case, the way you are using labels on the nodes seems to be an effort to bypass some of Kubernetes' key features, like dynamic scheduling of components across nodes. Rather than working on labeling the nodes automatically, I would suggest addressing why you need to identify the nodes in the first place.
I know this isn't creation time, but the following is pretty easy (labels follow the pattern key=value):
k label node minikube gpu.nvidia.com/model=Quadro_RTX_4000 node.coreweave.cloud/cpu=intel-xeon-v2
node/minikube labeled