I want to create a specific version of redis to be used as a cache. Task:
Pod must run in web namespace
Pod name should be cache
Image name is lfccncf/redis with the 4.0-alpine tag
Expose port 6379
The pod needs to be running after completion
These are my steps:
k create ns web
k -n web run cache --image=lfccncf/redis:4.0-alpine --port=6379 --dry-run=client -o yaml > pod1.yaml
vi pod1.yaml
The pod looks like this:
k create -f pod1.yaml
When the expose service name is not defined, is this the right command to fully complete the task?
k expose pod cache --port=6379 --target-port=6379
Is it the best way to keep the pod running to use a command like this: command: ["/bin/sh", "-ec", "sleep 1000"] ?
You should not use sleep to keep a redis pod running. As long as the redis process runs in the container the pod will be running.
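For reference, a kubectl run dry-run like the one above typically produces a manifest roughly like this (a sketch only; exact fields may differ):
apiVersion: v1
kind: Pod
metadata:
  name: cache
  namespace: web        # added so the pod lands in the web namespace
  labels:
    run: cache
spec:
  containers:
    - name: cache
      image: lfccncf/redis:4.0-alpine
      ports:
        - containerPort: 6379
  restartPolicy: Always
# No command/args override is needed: the image's default entrypoint starts redis-server,
# which keeps the container (and therefore the pod) running.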
The best way to go about it is to take a stable helm chart from https://hub.helm.sh/charts/stable/redis-ha. Do a helm pull and modify the values as you need.
Redis should be defined as a StatefulSet for various reasons. You could also do a
mkdir my-redis
helm fetch --untar --untardir . 'stable/redis' #makes a directory called redis
helm template --output-dir './my-redis' './redis' #redis dir (local helm chart), export to my-redis dir
then use Kustomize if you like.
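If you go the Kustomize route, a minimal kustomization.yaml over the rendered output might look like this (a sketch; the actual file names depend on what the chart renders into my-redis/):
# kustomization.yaml (sketch; file names are illustrative)
namespace: web
resources:
  - my-redis/redis/templates/redis-statefulset.yaml
  - my-redis/redis/templates/redis-svc.yaml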
You will notice that a redis deployment definition is not so trivial when you see how much code there is in the stable chart.
You can then expose it in various ways, but normally you need the access only within the cluster.
If you need a fast way to test from outside the cluster or use it as a development environment check the official ways to do that.
Right now I use docker-compose for development. It is a great tool that comes in handy on simple projects with a maximum of 3-6 active services, but when it comes to 6-8 and more it becomes hard to manage.
So I've started to learn k8s on minikube and now I have a few questions:
How to make "two-way" binding for volumes? for example if i have folder named "my-frontend" and i want to sync specific folder in deployment, how to "link" them using PV and PVC ?
Very often it comes handy to make some service with specific environment like node:12.0.0 and then use it as command executor like this: docker-compose run workspace npm install
how to achieve something like this using k8s?
How to make "two-way" binding for volumes? for example if i have folder named "my-frontend" and i want to sync specific folder in deployment, how to "link" them using PV and PVC ?
You need to create a PersistentVolume which in your case will use a specific directory on the host; the official Kubernetes documentation has an example with this same use case.
Then create a PersistentVolumeClaim to request some space from this volume (also an example in the previous documentation) and then mount the PVC on the pod/deployment where you need it.
volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-pvc
containers:
  - ....
    volumeMounts:
      - mountPath: "/mount/path/in/pod"
        name: my-volume
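For completeness, a sketch of the PV and PVC referenced above for the minikube case (names, sizes and the host path are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-frontend-pv
spec:
  storageClassName: manual     # keeps the PVC from binding to the default StorageClass
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/my-frontend    # directory on the minikube node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                 # matches the claimName used above
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi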
Very often it comes in handy to make a service with a specific environment like node:12.0.0 and then use it as a command executor, like this: docker-compose run workspace npm install
How to achieve something like this using k8s?
You need to use kubectl; it has very similar functionality to the docker CLI and also supports a run command with very similar parameters. Alternatively, you can start your pod once and then run commands multiple times in the same instance using kubectl exec.
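For example, a rough equivalent of docker-compose run workspace npm install might be (illustrative names):
# One-off command in a throwaway pod (deleted when it exits)
kubectl run workspace --image=node:12.0.0 --rm -it --restart=Never -- npm install

# Or keep a long-lived pod around and exec into it as often as you like
kubectl run workspace --image=node:12.0.0 --restart=Never -- sleep infinity
kubectl exec -it workspace -- npm install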
I am setting up Kubernetes for an application with 8 microservices, ActiveMQ, Postgres, Redis and MongoDB.
After the entire configuration of pods and deployments, is there any way to create a single master deployment YAML file which will create the entire set of services, replicas etc. for the entire application?
Note: I will be using multiple deployment YAML files, StatefulSets etc. for all the above-mentioned services.
You can use this script:
NAMESPACE="your_namespace"
RESOURCES="configmap secret daemonset deployment service hpa"
for resource in ${RESOURCES}; do
  rsrcs=$(kubectl -n ${NAMESPACE} get -o json ${resource} | jq '.items[].metadata.name' | sed "s/\"//g")
  for r in ${rsrcs}; do
    dir="${NAMESPACE}/${resource}"
    mkdir -p "${dir}"
    kubectl -n ${NAMESPACE} get -o yaml ${resource} ${r} > "${dir}/${r}.yaml"
  done
done
Remember to specify what resources you want exported in the script.
Is there any way to create a single master deployment YAML file which will create the entire set of services, replicas etc. for the entire application?
Since you already mentioned kubernetes-helm, why don't you actually use it for that exact purpose? In short, Helm is a sort of package manager for Kubernetes, some say similar to yum or apt. It deploys charts, which you can think of as a packaged application: a pack of all your pre-configured applications which can be deployed as one unit. It's not entirely one file but rather a collection of files that build a so-called Helm chart.
What are Helm charts?
Well, they are basically K8s YAML manifests combined into a single package that can be installed to your cluster. And installing the package is as simple as running a single command such as helm install. Once done, the charts are highly reusable, which reduces the time for creating dev, test and prod environments.
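As a rough illustration, a chart is just a directory of metadata, default values and templated manifests, installed as one unit (layout and names are illustrative):
myapp/
  Chart.yaml            # chart name and version
  values.yaml           # default configuration values
  templates/
    deployment.yaml     # templated Kubernetes manifests
    service.yaml
    configmap.yaml

# install the whole application with one command
helm install myapp ./myapp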
As an example of a complex Helm chart deploying multiple resources, you may want to check StackStorm.
Basically, once deployed without any custom config, this chart will deploy 2 replicas for each component of StackStorm as well as backends like RabbitMQ, MongoDB and Redis.
I'm running a GKE cluster and there is a deployment that uses an image which I push to Container Registry on GCP. The issue is that even though I build the image and push it with the latest tag, the deployment keeps creating new pods with the old one cached. Is there a way to update it without re-deploying (aka without destroying it first)?
There is a known issue with Kubernetes where even if you change ConfigMaps the old config remains, and you can either redeploy or work around it with
kubectl patch deployment $deployment -n $ns -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
Is there something similar for cached images?
I think you're looking for kubectl set or kubectl patch, which I found in the Kubernetes documentation.
To update the image of a deployment you can use kubectl set:
kubectl set image deployment/name_of_deployment name_of_container=new_image:tag
To update the image of your pod you can use kubectl patch:
kubectl patch pod name_of_pod -p '{"spec":{"containers":[{"name":"name_of_container_from_yaml","image":"name_of_image"}]}}'
You can always use kubectl edit, which allows you to directly edit any API resource you can retrieve via the command-line tool.
kubectl edit deployment name_of_deployment
Let me know if you have any more questions.
1) You should change your way of thinking. Destroying a pod is not bad; application downtime is what is bad. You should always plan your deployments in such a way that they can tolerate one pod dying. Use multiple replicas for stateless apps and use clusters for stateful apps. Use the Kubernetes rolling update for any changes to your deployments. Rolling updates have many extremely important settings which directly influence the uptime of your apps. Read about them carefully.
2) The reason why Kubernetes launches the old image is that by default it uses
imagePullPolicy: IfNotPresent. Use imagePullPolicy: Always and it will always try to pull the latest version on redeploy.
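A sketch of where that goes in the deployment's pod template (image name is illustrative):
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:latest
          imagePullPolicy: Always   # always pull on pod (re)start instead of using the cached image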
I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same deployment to have different args used with the starting command.
I have this application which runs spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and spark app name.
I've read about StatefulSets and searched the documentation but I didn't find a solution to my problem. Since every Pod needs to use a different port, I need that port to be declared in a Service if I understood correctly, and also passed directly as an argument to the pod command in the args.
Is there a way to obtain this without using multiple deployments, one for each pod I need to create? This is the only solution I can think of, but it can't be scaled after being deployed.
I'm using Helm to deploy the application, so I can easily create as many deployments and / or services as needed, but I would like to find a solution which can scale at runtime, if possible.
I don't think you can have a Deployment which creates Pods from different specs. You can't have it in Kubernetes, and Helm won't help here (since Helm is just a template manager over Kubernetes configurations).
What you can do is specify each Pod as a separate configuration (if it's a single Pod, you don't necessarily need a Deployment) and let Helm manage it.
Posting the solution I used since it could be useful for other people searching around.
In the end I found a great configuration to solve my problem. I used a StatefulSet to declare the deployment of the Spark application, and associated with the StatefulSet a headless Service which exposes each pod on a specific port.
A StatefulSet can declare a spec.serviceName property, which can be set to the name of a headless Service to create a unique network name for each Pod, something like <pod_name>.<service_name>.
Additionally, each Pod has a unique and stable name, created from the application name plus an ordinal starting from 0 for each replica Pod.
Using a start script in the docker image and injecting the pod name from the metadata into the environment of each Pod, I was able to use different configurations for each pod: even within the same deployment, each pod has its own unique metadata name, and I can use the StatefulSet's Service to obtain what I needed.
This way, the StatefulSet is scalable at runtime and works as expected.
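A condensed sketch of that setup (names, image and port are illustrative; the pod name is injected via the Downward API):
apiVersion: v1
kind: Service
metadata:
  name: spark-app
spec:
  clusterIP: None            # headless: gives each pod a DNS name <pod_name>.spark-app
  selector:
    app: spark-app
  ports:
    - port: 7077
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-app
spec:
  serviceName: spark-app     # must match the headless Service name
  replicas: 2
  selector:
    matchLabels:
      app: spark-app
  template:
    metadata:
      labels:
        app: spark-app
    spec:
      containers:
        - name: driver
          image: my-spark-image:latest
          env:
            - name: POD_NAME          # e.g. spark-app-0, spark-app-1, ...
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          # a start script can derive the port and Spark app name from POD_NAME's ordinal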
Hey, I am not sure if this will exactly match your scenario, but I think this is what you can try: use a sidecar container to run the replica instances. A sidecar is a container which runs along with the main container, shares the same namespace and can share volumes across containers.
Now, to pass different arguments to each container or sidecar, you will have to tweak the Dockerfile, or rather tweak the way your container starts.
Create a start.sh script which accepts arguments and starts the container with them; the trick here is to take the arguments from environment variables, allowing you to configure them later from ConfigMaps or the pod env (see the sketch after the Dockerfile below).
So here is an example of a PHP/Laravel application running the same code and starting with different arguments. The start.sh file looks like this:
#!/bin/sh
if [ "${CONTAINER_ROLE}" = "queue" ]; then
    echo "Running the queue..."
    php artisan queue:work --queue=${QUEUENAME}
    echo "Queue Started"
else
    echo "Running Iceberg."
    exec apache2-foreground
fi
So a sample Dockerfile looks like this:
FROM php:7.1.24-apache
COPY . /srv/app
...
...
RUN chown -R www-data:www-data /srv/app \
    && a2enmod remoteip && a2enmod rewrite
WORKDIR /srv/app
RUN chmod +x .docker/start.sh
CMD ["sh", ".docker/start.sh"]
Let me know how it goes.
I have a running pod and I want to change one of its container's environment variables and have it take effect immediately. Can I achieve that? If so, how?
Simply put, and in kube terms, you cannot.
The environment of a Linux process is established at process startup, and there are certainly no kube tools that can achieve such a goal.
For example, if you make a change to your Deployment (I assume you use it to create pods) it will roll the underlying pods.
Now, that said, there is a really hacky solution reported under "Is there a way to change the environment variables of another process in Unix?" that involves using GDB.
Also, remember that even if you could do that, there is still application logic that would need to watch for such changes instead of, as is usually the case now, just evaluating configuration from envs during startup.
This worked for me:
kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N
Check the official documentation for kubectl set env.
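For example (illustrative names; this works on resources with a pod template, such as Deployments, and rolls new pods with the updated value):
kubectl set env deployment/my-deployment LOG_LEVEL=debug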
Another approach for running pods: you can get into the Pod's command line and change the variables at runtime.
kubectl exec -it <pod_name> -- /bin/bash
Then run:
export VAR1=VAL1 && export VAR2=VAL2 && your_cmd
I'm not aware of any way to do it, and I can't think of a real-world scenario where this makes too much sense.
Usually you have to restart a process for it to notice the changed environment variables, and the easiest way to do that is to restart the pod.
The solution closest to what you seem to want is to create a deployment and then use kubectl edit (kubectl edit deploy/name) to modify its environment variables. A new pod is started and the old one is terminated after you save.
Kubernetes is designed in such a way that any changes to the pod should be redeployed through the config. If you go messing with pods that have already been deployed you can end up with weird clusters that are hard to debug.
If you really want to you can run additional commands in your running pod using kubectl exec, but this is only recommended for debug purposes.
kubectl exec -it <pod_name> -- sh -c 'export VARIABLENAME=<thing>'
If you are using Helm 3+, according to the documentation:
Automatically Roll Deployments
Often times ConfigMaps or Secrets are injected as configuration files in containers or there are other external dependencies changes that require rolling pods. Depending on the application a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change the application keeps running with the old configuration resulting in an inconsistent deployment.
The sha256sum function can be used to ensure a deployment's annotation section is updated if another file changes:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
In the event you always want to roll your deployment, you can use a similar annotation step as above, instead replacing with a random string so it always changes and causes the deployment to roll:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
[...]
Both of these methods allow your Deployment to leverage the built in update strategy logic to avoid taking downtime.
NOTE: In the past we recommended using the --recreate-pods flag as another option. This flag has been marked as deprecated in Helm 3 in favor of the more declarative method above.
It is hard to change from outside, but it is easy to change from inside: your app running in the pod can change it. Just expose an API to change the environment variable.
You can use a ConfigMap mounted as a volume to update configuration on the go.
Refer: https://itnext.io/how-to-automatically-update-your-kubernetes-app-configuration-d750e0ca79ab
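A rough sketch of that approach (illustrative names); the mounted files are refreshed in the running pod when the ConfigMap changes, though the application still has to re-read them:
volumes:
  - name: app-config
    configMap:
      name: app-config        # illustrative ConfigMap holding the configuration files
containers:
  - name: app
    image: my-app:latest
    volumeMounts:
      - name: app-config
        mountPath: /etc/app   # files here are updated in place (not when using subPath)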