Disable Kubernetes enableServiceLinks globally?

Is there a way to disable service links globally? There's a field in the Pod spec:
enableServiceLinks: false
but it's true by default. I couldn't find anything in the kubelet to kill it. Or is there some cool admission webhook toolchain I could use?

You can use the Kubernetes-native policy engine called Kyverno. Kyverno policies can validate, mutate (see: Mutate Resources), and generate Kubernetes resources.
A Kyverno policy is a collection of rules that can be applied to the entire cluster (ClusterPolicy) or to a specific namespace (Policy).
I will create an example to illustrate how it may work.
First we need to install Kyverno, you have the option of installing Kyverno directly from the latest release manifest, or using Helm (see: Quick Start guide):
$ kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
After successful installation, we can create a simple ClusterPolicy:
$ cat strategic-merge-patch.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: strategic-merge-patch
spec:
  rules:
  - name: enableServiceLinks_false_globally
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          enableServiceLinks: false
$ kubectl apply -f strategic-merge-patch.yaml
clusterpolicy.kyverno.io/strategic-merge-patch created
$ kubectl get clusterpolicy
NAME                    BACKGROUND   ACTION   READY
strategic-merge-patch   true         audit    true
This policy adds enableServiceLinks: false to every newly created Pod.
Let's create a Pod and check if it works as expected:
$ kubectl run app-1 --image=nginx
pod/app-1 created
$ kubectl get pod app-1 -oyaml | grep "enableServiceLinks:"
enableServiceLinks: false
It also works with Deployments, StatefulSets, DaemonSets etc.:
$ kubectl create deployment deploy-1 --image=nginx
deployment.apps/deploy-1 created
$ kubectl get pod deploy-1-7cfc5d6879-kfdlh -oyaml | grep "enableServiceLinks:"
enableServiceLinks: false
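If you only need this behaviour in selected namespaces rather than cluster-wide, the same rule can be expressed as a namespaced Policy instead of a ClusterPolicy. A minimal sketch, assuming a namespace called team-a (the namespace and policy name are placeholders):
# Hypothetical namespaced variant: only mutates Pods created in team-a
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: disable-service-links
  namespace: team-a
spec:
  rules:
  - name: enableServiceLinks_false
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          enableServiceLinks: false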
More examples with detailed explanations can be found in the Kyverno Writing Policies documentation.

Related

How to add a label to an existing namespace with Helm

I have a project to create a mutating webhook in the kube-system namespace, which needs to exclude the webhook server's own deployment namespace.
But the kube-system namespace already exists. How do I attach the required labels to it using Helm?
Helmfile offers hooks which are pretty neat for that:
releases:
  - name: istio-ingress
    namespace: istio-ingress
    chart: istio/gateway
    wait: true
    hooks:
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl create namespace istio-ingress --dry-run=client -o yaml | kubectl apply -f -"
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl label --dry-run=client -o yaml --overwrite namespace istio-ingress istio-injection=enabled | kubectl apply -f -"
Since the kube-system namespace is a core part of Kubernetes (every cluster has it preinstalled and some core components run there), Helm can't manage it.
Some possible things you could do instead:
Make the per-namespace labels opt-in, not opt-out; only apply the webhook in namespaces where the label is present, rather than in every namespace except flagged ones. (Istio's sidecar injector works this way; see the namespaceSelector sketch after this list.)
Exclude kube-system as a special case in the code.
Manually run kubectl label namespace outside of Helm.
Make your larger-scale deployment pipeline run the kubectl command (for example, if you have a Jenkins build that installs the webhook, also make it set the label).
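As a rough sketch of the opt-in approach from the first item (the webhook name, service reference and label key are placeholders, not taken from the question):
# Hypothetical MutatingWebhookConfiguration: the webhook only fires in namespaces
# that carry the opt-in label, so kube-system is left alone by default.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-webhook
webhooks:
  - name: example-webhook.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: example-webhook
        namespace: webhook-system
        path: /mutate
    namespaceSelector:
      matchLabels:
        example.com/webhook-enabled: "true"   # opt-in label
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]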

When do I use `apply` when to use `rollout`?

I am new to Kubernetes and a bit confused about the apply and rollout commands. If I update the Kubernetes configuration file, should I use kubectl apply -f or kubectl rollout?
If I update the Kubernetes configuration and run kubectl apply -f, it terminates the running Pod and creates a new one.
But rollout also has a restart command which is used to restart the Pods, so when should I use rollout restart?
kubectl apply -f is used to apply the Kubernetes configuration file (where you describe the application you want to deploy).
kubectl rollout is then used to check on the application you deployed above.
Example
Suppose your Deployment configuration file looks like the one below and you saved it as nginx.yaml. To deploy the nginx app from that YAML file, use kubectl apply -f nginx.yaml; to check whether your application rolled out successfully, use kubectl rollout status deployment/nginx.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
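Putting the two commands together (assuming the Deployment name nginx from the manifest above), the workflow is roughly:
# Apply the desired state, then watch the rollout until it completes
$ kubectl apply -f nginx.yaml
$ kubectl rollout status deployment/nginx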
And if you updated the YAML locally and want to replace the existing object with it, use kubectl replace -f nginx.yaml.
One major difference I can think of is that kubectl apply can be used for all Kubernetes objects (Pod, Deployment, ConfigMap, Secret, etc.), whereas kubectl rollout applies specifically to workload objects such as Deployments, StatefulSets, etc.
Further, kubectl rollout restart is useful to restart the Pods without requiring any change to the spec fields, which is not possible with kubectl apply. If we run kubectl apply with no changes in the spec fields, the Pods will not get updated, as there is no change to roll out.
Consider the scenario where some configuration (say, an external certificate) is mounted into the Pods from a ConfigMap, and a change in the ConfigMap does not cause the Pods to get updated automatically. kubectl rollout restart is useful in such scenarios to create new Pods, which can then read the updated configuration from the ConfigMap.
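A minimal sketch of that scenario (the ConfigMap, file and Deployment names are placeholders):
# Update the ConfigMap in place; the running Pods are not restarted automatically
$ kubectl create configmap app-config --from-file=cert.pem --dry-run=client -o yaml | kubectl apply -f -
# Force a fresh set of Pods so they pick up the updated ConfigMap; no spec change needed
$ kubectl rollout restart deployment/my-app
$ kubectl rollout status deployment/my-app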
Also, an important note from the docs:
Note: A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.

Helm 3 Deployment Order of Kubernetes Service Catalog Resources

I am using Helm v3.3.0, with a Kubernetes 1.16.
The cluster has the Kubernetes Service Catalog installed, so external services implementing the Open Service Broker API spec can be instantiated as K8S resources - as ServiceInstances and ServiceBindings.
ServiceBindings reflect as K8S Secrets and contain the binding information of the created external service. These secrets are usually mapped into the Docker containers as environment variables or volumes in a K8S Deployment.
Now I am using Helm to deploy my Kubernetes resources, and I read here that...
The [Helm] install order of Kubernetes types is given by the enumeration InstallOrder in kind_sorter.go
In that file, the order mentions neither ServiceInstance nor ServiceBinding as resources, which would mean that Helm installs these resource types after everything in its InstallOrder list, in particular after Deployments. That seems to match the output of helm install --dry-run --debug on my chart, where the order indicates that the K8S Service Catalog resources are applied last.
Question: What I cannot understand is why my Deployment does not fail to install with Helm.
After all, my Deployment resource seems to be deployed before the ServiceBinding is. And it is the Secret generated out of the ServiceBinding that my Deployment references. I would expect it to fail, since the Secret is not there yet when the Deployment is getting installed. But that is not the case.
Is that just a timing glitch / lucky coincidence, or is this something I can rely on, and why?
Thanks!
As said in the comment I posted:
In fact your Deployment is failing at the start with Status: CreateContainerConfigError. Your Deployment is created before the Secret from the ServiceBinding. It only works because the Pod was recreated once the Secret from the ServiceBinding became available.
I wanted to give more insight with example of why the Deployment didn't fail.
What is happening (simplified, in order):
Deployment -> created and spawns a Pod
Pod -> fails with status: CreateContainerConfigError for lack of the Secret
ServiceBinding -> creates the Secret in the background
Pod gets the required Secret and starts
The previously mentioned InstallOrder leaves ServiceInstance and ServiceBinding until last, per the comment on line 147 of kind_sorter.go.
Example
Assuming that:
There is a working Kubernetes cluster
Helm3 installed and ready to use
Following guides:
Kubernetes.io: Install Service Catalog using Helm
Magalix.com: Blog: Kubernetes Service Catalog
There is a Helm chart with following files in templates/ directory:
ServiceInstance
ServiceBinding
Deployment
Files:
ServiceInstance.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-instance
spec:
  clusterServiceClassExternalName: redis
  clusterServicePlanExternalName: 5-0-4
ServiceBinding.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-binding
spec:
  instanceRef:
    name: example-instance
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
        # part below responsible for getting secret as env variable
        env:
        - name: DATA
          valueFrom:
            secretKeyRef:
              name: example-binding
              key: host
Applying the above resources and checking what happens can be done in two ways:
The first method is to use the timestamps from $ kubectl get RESOURCE -o yaml
The second method is to use $ kubectl get RESOURCE --watch-only=true
First method
As said previously the Pod from the Deployment couldn't start as Secret was not available when the Pod tried to spawn. After the Secret was available to use, the Pod started.
The statuses this Pod had were the following:
Pending
ContainerCreating
CreateContainerConfigError
Running
This is a table with timestamps of Pod and Secret:
| Pod | Secret |
|-------------------------------------------|-------------------------------------------|
| creationTimestamp: "2020-08-23T19:54:47Z" | - |
| - | creationTimestamp: "2020-08-23T19:54:55Z" |
| startedAt: "2020-08-23T19:55:08Z" | - |
You can get these timestamps by invoking the commands below:
$ kubectl get pod pod_name -n namespace -o yaml
$ kubectl get secret secret_name -n namespace -o yaml
You can also get additional information with:
$ kubectl get event -n namespace
$ kubectl describe pod pod_name -n namespace
Second method
This method requires preparation before running the Helm chart. Open additional terminal windows (two for this particular case) and run:
$ kubectl get pod -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
$ kubectl get secret -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
After that apply your Helm chart.
Disclaimer!
The above commands will watch for changes in resources and display them with a timestamp from the OS. Please remember that these commands are only for example purposes.
The output for Pod:
21:54:47:534823000 NAME READY STATUS RESTARTS AGE
21:54:47:542107000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:553799000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:655593000 ubuntu-65976bb789-l48wz 0/1 ContainerCreating 0 0s
-> 21:54:52:001347000 ubuntu-65976bb789-l48wz 0/1 CreateContainerConfigError 0 4s
21:55:09:205265000 ubuntu-65976bb789-l48wz 1/1 Running 0 22s
The output for Secret:
21:54:47:385714000 NAME TYPE DATA AGE
21:54:47:393145000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:47:719864000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:51:182609000 understood-squid-redis Opaque 1 0s
21:54:52:001031000 understood-squid-redis Opaque 1 0s
-> 21:54:55:686461000 example-binding Opaque 6 0s
Additional resources:
Stackoverflow.com: Answer: Helm install in certain order
Alibabacloud.com: Helm charts and templates hooks and tests part 3
So to answer my own question (and thanks to @dawid-kruk and the folks in the Service Catalog SIG on Slack):
In fact, the initial start of my Pods (the ones referencing the Secret created from the ServiceBinding) fails! It fails because the Secret is not there yet at the moment K8S tries to start the Pods.
Kubernetes has a self-healing mechanism, in the sense that it tries (and retries) to reach the target state of the cluster as described by the various deployed resources.
Because Kubernetes keeps retrying to get the Pods running, eventually (once the Secret is finally there) all conditions are satisfied for the Pods to start up nicely. Therefore, eventually, everything is running as it should.
How could this be streamlined? One possibility would be for Helm to include the custom resources ServiceBinding and ServiceInstance into its ordered list of installable resources and install them early in the installation phase.
But even without that, Kubernetes actually deals with it just fine. The order of installation (in this case) really does not matter. And that is a good thing!
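That said, if you do want the Service Catalog resources applied before the rest of the release, one possible workaround (a sketch, not part of the original answer) is to mark them as Helm pre-install hooks, which Helm applies before the release's ordinary manifests:
# Hypothetical annotation on ServiceBinding.yaml (and similarly on ServiceInstance.yaml).
# Pre-install hooks are created before the regular resources of the release,
# so the Secret gets a head start before the Deployment is created.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-binding
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
spec:
  instanceRef:
    name: example-instance
Keep in mind that hook resources are not tracked as part of the release itself, so they are not removed on helm uninstall unless you also set a helm.sh/hook-delete-policy.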

Taint a node in kubernetes live cluster

How can I achieve the same result with a YAML file, so that I can do a kubectl apply -f? The command below works and it taints, but I can't figure out how to do it via a manifest file.
$ kubectl taint nodes \
    172.4.5.2-3a1d4eeb \
    kops.k8s.io/instancegroup=loadbalancer:NoSchedule
Use the -o yaml option and save the resulting YAML, making sure to remove the status and some extra fields. This will apply the taint, but also give you the YAML that you can later use with kubectl apply -f and keep in version control. (Even if you create the resource from the command line and later get the YAML and apply it, it will not re-create the resource, so it is perfectly fine.)
Note: most commands support --dry-run, which just generates the YAML without creating the resource, but in this case I could not make it work with --dry-run; maybe this command does not support that flag.
C02W84XMHTD5:~ iahmad$ kubectl taint node minikube dedicated=foo:PreferNoSchedule -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-10-16T21:44:03Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: minikube
    node-role.kubernetes.io/master: ""
  name: minikube
  resourceVersion: "291136"
  selfLink: /api/v1/nodes/minikube
  uid: 99a1a304-d18c-11e8-9334-f2cf3c1f0864
spec:
  externalID: minikube
  taints:
  - effect: PreferNoSchedule
    key: dedicated
    value: foo
then use the yaml with kubectl apply:
apiVersion: v1
kind: Node
metadata:
  name: minikube
spec:
  taints:
  - effect: PreferNoSchedule
    key: dedicated
    value: foo
I have two nodes in my cluster; please look at the labels:
kubectl get nodes --show-labels
NAME          STATUS   ROLES   AGE    VERSION   LABELS
172.16.2.53   Ready    node    7d4h   v1.19.7   type=primary
172.16.2.89   Ready    node    33m    v1.19.7   type=secondary
Let's say I want to taint the node named 172.16.2.89:
kubectl taint node 172.16.2.89 type=secondary:NoSchedule
node/172.16.2.89 tainted
Example -
kubectl taint node <node-name> <label-key>=<value>:NoSchedule
NoExecute means the pod will be evicted from the node.
NoSchedule means the scheduler will not place the pod onto the node.
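For completeness, a Pod can still be scheduled onto a tainted node if it declares a matching toleration. A minimal sketch (the Pod name and image are placeholders), reusing the type=secondary:NoSchedule taint from above:
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
  - name: app
    image: nginx
  # Toleration matching the taint applied with kubectl taint above
  tolerations:
  - key: "type"
    operator: "Equal"
    value: "secondary"
    effect: "NoSchedule"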

Create Daemonset using kubectl?

I took the CKA exam and I needed to work with DaemonSets for quite a while there. Since it is much faster to do everything with kubectl instead of writing YAML manifests for K8s resources, I was wondering if it is possible to create DaemonSet resources using kubectl.
I know that it is NOT possible to create one with a plain kubectl create daemonset, at least for now, and there is no mention of it in the documentation. But maybe there is some other way to do it?
The best thing I could do right now is to create a Deployment first with kubectl create deployment and edit its output manifest. Any options here?
The fastest hack is to create a deployment file using
kubectl create deploy nginx --image=nginx --dry-run -o yaml > nginx-ds.yaml
Now replace the line kind: Deployment with kind: DaemonSet in nginx-ds.yaml and remove the line replicas: 1
However, the following command will give a clean DaemonSet manifest, assuming that apps/v1 is the API version used for DaemonSet in your cluster:
kubectl create deploy nginx --image=nginx --dry-run -o yaml | \
sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' > nginx-ds.yaml
You have your nginx DaemonSet.
CKA allows access to K8S documentation. So, it should be possible to get a sample YAML for different resources from there. Here is the one for the Daemonset from K8S documentation.
Also, not sure if the certification environment has access to resources in the kube-system namespace. If yes, then use the below command to get a sample yaml for Daemonset.
kubectl get daemonsets kube-flannel-ds-amd64 -o yaml -n=kube-system > daemonset.yaml
It's not possible, at least as of Kubernetes 1.12. The only option is to get a sample DaemonSet YAML file and go from there.
The fastest way to create one:
kubectl create deploy nginx --image=nginx --dry-run -o yaml > nginx-ds.yaml
Now replace the line kind: Deployment with kind: DaemonSet in nginx-ds.yaml and remove the lines replicas: 1, strategy: {} and status: {} as well.
Otherwise it shows errors for some required fields, like this:
error: error validating "nginx-ds.yaml": error validating data: [ValidationError(DaemonSet.spec): unknown field "strategy" in io.k8s.api.apps.v1.DaemonSetSpec, ValidationError(DaemonSet.status): missing required field "currentNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus,ValidationError(DaemonSet.status): missing required field "numberMisscheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "desiredNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.apps.v1.DaemonSetStatus]; if you choose to ignore these errors, turn validation off with --validate=false
There is no option to create a DaemonSet using kubectl directly. But you can still prepare a YAML file with a basic configuration for a DaemonSet, e.g. daemon-set-basic.yaml, and create it using kubectl create -f daemon-set-basic.yaml.
You can edit the new DaemonSet using kubectl edit daemonset <name-of-the-daemon-set>, or modify the YAML file and apply the changes with kubectl apply -f daemon-set-basic.yaml. Note: if you want to update the configuration later by modifying the file and using the apply command, it is better to use apply instead of create when you first create the DaemonSet.
Here is the example of a simple DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
You could take advantage of the Kubernetes architecture to obtain the definition of a DaemonSet from an existing cluster. Have a look at kube-proxy, which is a network component that runs on each node in your cluster.
kube-proxy is deployed as a DaemonSet, so you can extract its definition with the command below.
$ kubectl get ds kube-proxy -n kube-system -o yaml > kube-proxy.ds.yaml
Warning!
By extracting the definition of the DaemonSet from kube-proxy, be aware that:
You will have to do plenty of cleanup (see the sketch after this list)!
You will have to change apiVersion from extensions/v1beta1 to apps/v1
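As a rough sketch of that cleanup (the exact run-time fields depend on what your dump actually contains), you would strip at least the server-generated metadata and the status section before re-using the manifest:
# Dump the DaemonSet and drop run-time fields that the API server filled in:
# creationTimestamp, resourceVersion, selfLink, uid, generation, and the status block.
$ kubectl get ds kube-proxy -n kube-system -o yaml \
    | grep -v "creationTimestamp\|resourceVersion\|selfLink\|uid\|generation" \
    | sed '/^status:/,$d' > kube-proxy.ds.yaml
# Then edit kube-proxy.ds.yaml by hand: rename it, adjust apiVersion if needed,
# and review the remaining metadata and spec.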
I used the following commands:
Either create a ReplicaSet or Deployment from a Kubernetes imperative command:
kubectl create deployment <daemonset_name> --image= --dry-run -o yaml > file.txt
Edit the file: replace the kind with DaemonSet and remove the replicas and strategy fields.
kubectl apply -f file.txt
During the CKA examination you are allowed to access the Kubernetes documentation for DaemonSets. You could use that link and get examples of DaemonSet YAML files. However, you could also use the way you mentioned: change a Deployment specification into a DaemonSet specification. You need to change the kind to DaemonSet and remove the strategy, replicas and status fields. That would do.
By creating a Deployment with an imperative command and modifying its output, one can create a DaemonSet very quickly.
Below is a one-line command to create a DaemonSet manifest:
kubectl create deployment elasticsearch --namespace=kube-system --image=k8s.gcr.io/fluentd-elasticsearch:1.20 --dry-run -o yaml | grep -v "creationTimestamp\|status" | awk '{gsub(/Deployment/, "DaemonSet"); print }'