Single Container Pod yaml - kubernetes

Forgive my ignorance, but I can't seem to find a way of using a YAML file to deploy a single-container pod (read: kind: Pod). It appears the only way to do it is to use a deployment YAML file (read: kind: Deployment) with a replica count of 1.
Is there really no way?
The reason I ask is that it would be nice to put everything in source control, including the one-offs like databases.
It would be awesome if there was a site listing all the available options you can use in a YAML file (like Vagrant's Vagrantfile). There isn't one, right?
Thanks!

You should be able to find pod yaml files easily. For example, the documentation has an example of a Pod being created.
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec: # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    command: ["/bin/echo", "hello", "world"]
One thing to note is that if a Deployment or a ReplicaSet created a resource on your behalf, there is no reason why you couldn't write (or recover) that resource's YAML yourself.
kubectl get pod <pod-name> -o yaml should give you the YAML spec of a created pod.
There is also Kubernetes Charts, a repository of configuration for complex applications managed with the Helm package manager. It would serve you well for deploying more complex applications.
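For instance (the chart repository and chart name here are only illustrative, not something from the question), a one-off database could be installed from a public chart, or its rendered manifests committed to source control:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-database bitnami/postgresql
# Or render the manifests without installing, so they can be committed:
helm template my-database bitnami/postgresql > my-database.yaml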

Never mind, figured it out. It's possible. You just use the multi-container yaml file (example found here: https://kubernetes.io/docs/user-guide/pods/multi-container/) but only specify one container.
I'd tried it before but had inadvertently mistyped the yaml formatting.
Thanks rubber ducky!

Related

Kubernetes - Reconfiguring a Service to point to a new Deployment (blue/green)

I'm following along with a video explaining blue/green Deployments in Kubernetes. They have a simple example with a Deployment named blue-nginx and another named green-nginx.
The blue Deployment is exposed via a Service named bgnginx. To transfer traffic from the blue deployment to the green deployment, the Service is deleted and the green deployment is exposed via a Service with the same name. This is done with the following one-liner:
kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
Obviously, this works successfully. However, I'm wondering why they don't just use kubectl edit to change the labels in the Service instead of deleting and recreating it. If I edit bgnginx and set .metadata.labels.app & .spec.selector.app to green-nginx it achieves the same thing.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
Yes, you can use kubectl edit svc and edit the labels and selector there.
That works fine; however, keeping a YAML file (or another declarative option) is suggested because kubectl edit is an error-prone approach and you might face indentation issues.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
It's more about following best practices: you have a declarative YAML file handy, which you can manage under version control.
The problem with kubectl edit is that it requires a human to operate a text editor. This is a little inefficient and things do occasionally go wrong.
I suspect the reason your writeup wants you to kubectl delete the Service first is that the kubectl expose command will fail if it already exists. But as @HarshManvar suggests in their answer, a better approach is to have an actual YAML file checked into source control:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: myapp
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: blue
You should be able to kubectl apply -f service.yaml to deploy it into the cluster, or a tool can do that automatically.
The problem here is that you still have to edit the YAML file (or in principle you can do it with sed) and swapping the deployment would result in an extra commit. You can use a tool like Helm that supports an extra templating layer
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: {{ .Values.color }}
In Helm I might set this up with three separate Helm releases: the "blue" and "green" copies of your application, plus a separate top-level release that just contained the Service.
helm install myapp-blue ./myapp
# do some isolated validation
helm upgrade myapp-router ./router --set color=blue
# do some more validation
helm uninstall myapp-green
You can do similar things with other templating tools like ytt or overlay layers like Kustomize. The Service's selectors: don't have to match its own metadata, and you could create a Service that matched both copies of the application, maybe for a canary pattern rather than a blue/green deployment.
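As a rough sketch of that last idea (the Service name here is made up; the label keys are reused from the example above), a selector that leaves out the example.com/deployment label matches both the blue and the green Pods at once:
apiVersion: v1
kind: Service
metadata:
  name: myapp-all          # illustrative name
spec:
  selector:
    app.kubernetes.io/name: myapp   # no example.com/deployment key, so both colors match
  ports:
  - port: 80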

Kubectl get deployments, no resources

I've just started learning Kubernetes. In every tutorial the writer generally uses "kubectl ... deployments" to control the newly created deployments. Now, with those commands (e.g. kubectl get deployments) I always get the response No resources found in default namespace., and I have to use "pods" instead of "deployments" to make things work (which works fine).
My question is: what is causing this to happen, and what is the difference between using a deployment and a pod? I set the docker driver when starting minikube; does it have something to do with this?
First, let's brush up on some terminology.
Pod - It's the basic building block for Kubernetes. It groups one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.
Deployment - It is a controller which wraps Pod(s) and manages their life cycle, reconciling actual state to desired state. There is one more layer in between Deployment and Pod, which is the ReplicaSet: a ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
Below is the visualization (diagram not reproduced here): a Deployment manages a ReplicaSet, which in turn manages the Pods. (Source: I drew it!)
In your case, what might have happened is one of the following:
Either you created a Pod, not a Deployment. Therefore, when you run kubectl get deployment you don't see any resources. Note that when you create a Deployment, it in turn creates a ReplicaSet for you, which then creates the defined Pods.
Or maybe you created your Deployment in a different namespace. If that's the case, type this command to find your Deployment in that namespace: kubectl get deploy NAME_OF_DEPLOYMENT -n NAME_OF_NAMESPACE
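If it helps to see the hierarchy on a live cluster, a couple of read-only commands (the app=nginx label matches the example manifest below) show which object owns which:
# List the Deployment, its ReplicaSet, and the Pods it manages in one shot
kubectl get deployment,replicaset,pod -l app=nginx
# "Controlled By:" in the describe output names the owner of a given Pod
kubectl describe pod <pod-name> | grep "Controlled By"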
More information to clarify the concepts:
The section under spec.template is essentially the Pod manifest you would write if you were creating the Pod manually rather than taking the Deployment route. In simple terms, a Deployment is a wrapper around your Pods: anything outside the spec.template path is configuration for how you want to manage (scaling, affinity, etc.) those Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Deployment is a controller providing higher level abstraction on top of pods and ReplicaSets. A Deployment provides declarative updates for Pods and ReplicaSets. Deployments internally creates ReplicaSets within which pods are created.
Use cases of Deployments are documented here
One reason for No resources found in default namespace could be that you created the deployment in a specific namespace and not in default namespace.
You can see deployments in a specific namespace or in all namespaces via
kubectl get deploy -n namespacename
kubectl get deploy -A
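If the deployments do turn out to live in another namespace, you can also make that namespace the default for your current kubectl context (namespacename is a placeholder):
kubectl config set-context --current --namespace=namespacename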

kubernetes generate\create resource configuration

I was looking for a straightforward way to create Kubernetes resource templates (such as pod, deployment, service, etc.), though I couldn't find any good tool that does it. The tools I came across are no longer maintained, and some have a stiff learning curve (such as kustomize).
Some time ago, kubectl introduced generators, which can be used as follows to produce resource configuration. For instance:
$ kubectl run helloworld --image=helloworld \
--dry-run --output=yaml --generator=run-pod/v1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: helloworld
  name: helloworld
spec:
  containers:
  - image: helloworld
    name: helloworld
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
But kubectl generators should not be used, since most of them have been deprecated.
One might issue kubectl get on all the resources, commit them to source control, and use that to restore the Kubernetes cluster, but that implies the resources were created already, whereas my interest is in generating these resource configurations in the first place.
Please recommend your favorite tool for generating/creating Kubernetes resources, or explain the best practice for handling my use case.
Creating YAML files and using git to version them is the current best practice. This approach can also be automated with a GitOps workflow and Flagger. Another way is to create a Helm chart and customize it using its template feature.
You should definitely run away from generating resources from the command line either way. You should have a single source of truth, and CLI generation will not give you that.
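That said, if all you want is a one-off scaffold to commit to git as the source of truth, recent kubectl versions can still emit a manifest without the deprecated generators. A small sketch (names and image are placeholders):
# Pod manifest, without creating anything on the cluster
kubectl run helloworld --image=helloworld --dry-run=client -o yaml > pod.yaml
# Deployment and Service manifests the same way
kubectl create deployment helloworld --image=helloworld --dry-run=client -o yaml > deployment.yaml
kubectl create service clusterip helloworld --tcp=80:8080 --dry-run=client -o yaml > service.yaml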

Restart pods when configmap updates in Kubernetes?

How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?
I know there's been talk about the ability to automatically restart pods when a ConfigMap changes, but to my knowledge this is not yet available in Kubernetes 1.2.
So what (I think) I'd like to do is a "rolling restart" of the deployment resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?
The current best solution to this problem (referenced deep in https://github.com/kubernetes/kubernetes/issues/22368 linked in the sibling answer) is to use Deployments, and consider your ConfigMaps to be immutable.
When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.
Not quite as quick as just editing the ConfigMap in place, but much safer.
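A minimal sketch of that pattern, with placeholder names (the immutable: true field is optional and only exists in Kubernetes versions much newer than the 1.2 mentioned in the question):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2        # new name; never edited in place
immutable: true              # optional: the API server then rejects edits outright
data:
  some-key: new-value
---
# In the Deployment's pod template, point at the new name and apply:
#   volumes:
#   - name: config
#     configMap:
#       name: app-config-v2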
Signalling a pod on config map update is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).
You can always write a custom pid1 that notices the configmap has changed and restarts your app.
You can also eg: mount the same config map in 2 containers, expose a http health check in the second container that fails if the hash of config map contents changes, and shove that as the liveness probe of the first container (because containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.
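A very rough sketch of that second trick; every name, port, and the checker image here is hypothetical (you would have to build the hash-checking sidecar yourself):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watch
spec:
  volumes:
  - name: app-config
    configMap:
      name: my-app-config            # hypothetical ConfigMap name
  containers:
  - name: app
    image: my-app:latest             # hypothetical application image
    volumeMounts:
    - name: app-config
      mountPath: /etc/my-app
    livenessProbe:
      httpGet:
        path: /confighash            # served by the sidecar below; same network namespace
        port: 8888                   # returns non-200 once the mounted config no longer matches its startup hash
  - name: config-hash-checker
    image: config-hash-checker:latest   # hypothetical sidecar you would write
    volumeMounts:
    - name: app-config
      mountPath: /etc/my-app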
Of course if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.
The best way I've found to do it is to run Reloader.
It allows you to define ConfigMaps or Secrets to watch; when they get updated, a rolling update of your deployment is performed. Here's an example:
You have a deployment foo and a ConfigMap called foo-configmap. You want to roll the pods of the deployment every time the configmap is changed. You need to run Reloader with:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
Then specify this annotation in your deployment:
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
  name: foo
...
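If you would rather not list ConfigMaps by name, Reloader also documents a blanket opt-in annotation (check the project README for the exact annotations supported by your version):
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  name: foo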
From the Helm 3 documentation:
Often times configmaps or secrets are injected as configuration files in containers. Depending on the application a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change the application keeps running with the old configuration resulting in an inconsistent deployment.
The sha256sum function can be used together with the include function to ensure a deployments template section is updated if another spec changes:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]
In my case, for some reason, $.Template.BasePath didn't work but $.Chart.Name did:
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-app
      annotations:
        checksum/config: {{ include (print $.Chart.Name "/templates/" $.Chart.Name "-configmap.yaml") . | sha256sum }}
You can update a metadata annotation that is not otherwise relevant to your deployment; it will trigger a rolling update.
For example:
spec:
  template:
    metadata:
      annotations:
        configmap-version: "1"
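One way to bump that annotation without opening an editor, for example from a CI job (the deployment name is a placeholder; any changed value triggers the rollout):
kubectl patch deployment <deployment-name> \
  -p '{"spec":{"template":{"metadata":{"annotations":{"configmap-version":"2"}}}}}'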
If your Kubernetes version is > 1.15, doing a rollout restart worked best for me as part of CI/CD, with the app's configuration path hooked up to a volume mount. A Reloader plugin or setting restartPolicy: Always in the deployment manifest YAML did not work for me. No application code changes were needed; it worked for both static assets and a microservice.
kubectl rollout restart deployment/<deploymentName> -n <namespace>
Had this problem where the Deployment was in a sub-chart and the values controlling it were in the parent chart's values file. This is what we used to trigger restart:
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ tpl (toYaml .Values) . | sha256sum }}
Obviously this will trigger restart on any value change but it works for our situation. What was originally in the child chart would only work if the config.yaml in the child chart itself changed:
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
Consider using kustomize (or kubectl apply -k) and then leveraging its powerful configMapGenerator feature. For example, from: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
# Just one example of many...
- name: my-app-config
  literals:
  - JAVA_HOME=/opt/java/jdk
  - JAVA_TOOL_OPTIONS=-agentlib:hprof
  # Explanation below...
  - SECRETS_VERSION=1
Then simply reference my-app-config in your deployments. When building with kustomize, it'll automatically find and update references to my-app-config with an updated suffix, e.g. my-app-config-f7mm6mhf59.
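Rendering and applying the kustomization is then just (the directory path here is an assumption):
# Render to stdout to see the generated name suffix, e.g. my-app-config-f7mm6mhf59
kubectl kustomize ./my-app-overlay
# Build and apply in one step
kubectl apply -k ./my-app-overlay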
Bonus, updating secrets: I also use this technique for forcing a reload of secrets (since they're affected in the same way). While I personally manage my secrets completely separately (using Mozilla sops), you can bundle a config map alongside your secrets, so for example in your deployment:
# ...
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: my-app:tag
        envFrom:
        # For any NON-secret environment variables. Name is automatically updated by Kustomize
        - configMapRef:
            name: my-app-config
        # Defined separately OUTSIDE of Kustomize. Just modify SECRETS_VERSION=[number] in the my-app-config ConfigMap
        # to trigger an update in both the config as well as the secrets (since the pod will get restarted).
        - secretRef:
            name: my-app-secrets
Then, just add a variable like SECRETS_VERSION into your ConfigMap like I did above. Each time you change my-app-secrets, just increment the value of SECRETS_VERSION, which serves no purpose other than to trigger a change in the kustomize'd ConfigMap name, which in turn results in a restart of your pod.
I also banged my head around this problem for some time and wished to solve this in an elegant but quick way.
Here are my 20 cents:
The answer using labels mentioned here won't work if you are updating labels, but it would work if you always add new labels. More details here.
The answer mentioned here is, in my view, the most elegant way to do this quickly, but it had the problem of handling deletes. I am adding on to that answer:
Solution
I am doing this in a Kubernetes Operator where only a single task is performed in one reconciliation loop.
1. Compute the hash of the ConfigMap data. Say it comes out as v2.
2. Create ConfigMap cm-v2 with labels version: v2 and product: prime if it does not exist, and RETURN. If it exists, GO BELOW.
3. Find all Deployments which have the label product: prime but do not have version: v2. If such Deployments are found, DELETE them and RETURN. ELSE GO BELOW.
4. Delete all ConfigMaps which have the label product: prime but do not have version: v2. ELSE GO BELOW.
5. Create Deployment deployment-v2 with labels product: prime and version: v2, with ConfigMap cm-v2 attached, and RETURN. ELSE do nothing.
That's it! It looks long, but this could be the fastest implementation, and it is in line with the principle of treating infrastructure as cattle (immutability).
Also, the above solution works when your Kubernetes Deployment has the Recreate update strategy. The logic may require small tweaks for other scenarios.
How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?
If you are consuming the ConfigMap as environment variables, you have to use an external option, such as:
Reloader
Kube watcher
Configurator
Kubernetes auto-reloads the ConfigMap if it's mounted as a volume (if subPath is used, it won't work).
When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the ConfigMap. The type of the cache is configurable using the ConfigMapAndSecretChangeDetectionStrategy field in the KubeletConfiguration struct. A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting all requests directly to the API server. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero correspondingly).
Official documentation: https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
Simple example ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: default
data:
  foo: bar
Pod config:
spec:
  containers:
  - name: configmaptestapp
    image: <Image>
    volumeMounts:
    - mountPath: /config
      name: configmap-data-volume
    ports:
    - containerPort: 8080
  volumes:
  - name: configmap-data-volume
    configMap:
      name: config
Example: https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388
Adding the immutable property to the ConfigMap avoids the problem entirely. Using config hashing helps with a seamless rolling update, but it does not help with a rollback. You can take a look at this open-source project, 'Configurator': https://github.com/gopaddle-io/configurator.git. 'Configurator' works as follows, using custom resources:
Configurator ties the deployment lifecycle with the configMap. When the config map is updated, a new version is created for that configMap. All the deployments that were attached to the configMap get a rolling update with the latest configMap version tied to it. When you roll back the deployment to an older version, it bounces to the configMap version it had before doing the rolling update.
This way you can maintain versions of the config map and facilitate rolling updates and rollbacks of your deployment along with the config map.
Another way is to stick it into the command section of the Deployment:
...
command: [ "echo", "
option = value\n
other_option = value\n
" ]
...
Alternatively, to make it more ConfigMap-like, use an additional Deployment that just hosts that config in its command section, run kubectl create on it while adding a unique 'version' to its name (for example a hash of the content), and modify all the deployments that use that config:
...
command: [ "/usr/sbin/kubectl-apply-config.sh", "
option = value\n
other_option = value\n
" ]
...
I'll probably post kubectl-apply-config.sh if it ends up working.
(don't do that; it looks too bad)

How can I edit a Deployment without modifying the file manually?

I have defined a Deployment for my app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:2.0
        ports:
        - containerPort: 8080
Now, if I want to update my app's image from 2.0 to 3.0, I do this:
$ kubectl edit deployment/myapp-deployment
vim is open. I change the image version from 2.0 to 3.0 and save.
How can it be automated? Is there a way to do it just running a command? Something like:
$ kubectl edit deployment/myapp-deployment --image=172.20.34.206:5000/myapp_img:3.0
I thought using Kubernetes API REST but I don't understand the documentation.
You could do it via the REST API using the PATCH verb. However, an easier way is to use kubectl patch. The following command updates your app's tag:
kubectl patch deployment myapp-deployment -p \
'{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"172.20.34.206:5000/myapp:img:3.0"}]}}}}'
According to the documentation, YAML format should be accepted as well. See Kubernetes issue #458 though (and in particular this comment) which may hint at a problem.
There is a set image command which may be useful in simple cases
Update existing container image(s) of resources.
Possible resources include (case insensitive):
pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs)
kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
http://kubernetes.io/docs/user-guide/kubectl/kubectl_set_image/
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated
http://kubernetes.io/docs/user-guide/deployments/
(I would have posted this as a comment if I had enough reputation)
Yes, as per http://kubernetes.io/docs/user-guide/kubectl/kubectl_patch/ both JSON and YAML formats are accepted.
But I see that all the examples there are using JSON format.
Filed https://github.com/kubernetes/kubernetes.github.io/issues/458 to add a YAML format example.
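For reference, here is a sketch of what the patch from the kubectl patch answer above looks like in YAML form (kubectl patch accepts YAML as well as JSON for the patch string); the image is the same hypothetical one from the question:
kubectl patch deployment myapp-deployment --patch '
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:3.0
'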
I have recently built a tool to automate deployment updates when new images are available; it works with Kubernetes and Helm:
https://github.com/rusenask/keel
You only have to label your deployments with Keel policy like keel.sh/policy=major to enable major version updates, more info in the readme. Works similarly with Helm, no additional CLI/UI required.