I have a project that creates a mutating webhook in the kube-system namespace, and the webhook needs to exclude the namespaces where the webhook server itself is deployed.
But the kube-system namespace already exists. How do I attach the required labels to it using Helm?
Helmfile offers hooks which are pretty neat for that:
releases:
  - name: istio-ingress
    namespace: istio-ingress
    chart: istio/gateway
    wait: true
    hooks:
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl create namespace istio-ingress --dry-run=client -o yaml | kubectl apply -f -"
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl label --dry-run=client -o yaml --overwrite namespace istio-ingress istio-injection=enabled | kubectl apply -f -"
Since the kube-system namespace is a core part of Kubernetes (every cluster has it preinstalled and some core components run there), Helm can't manage it.
Some possible things you could do instead:
Make the per-namespace labels opt-in, not opt-out; only apply the webhook in namespaces where the label is present, rather than in every namespace except flagged ones. (Istio's sidecar injector works this way.)
Exclude kube-system as a special case in the code.
Manually run kubectl label namespace outside of Helm (see the sketch after this list).
Make your larger-scale deployment pipeline run the kubectl command (for example, if you have a Jenkins build that installs the webhook, also make it set the label).
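For the third option, a one-off command like the following is usually enough. The label key and value here are placeholders; they must match whatever your webhook's namespaceSelector actually checks:
# Placeholder label key/value; align them with the namespaceSelector
# in your MutatingWebhookConfiguration.
kubectl label namespace kube-system webhook-server-exclude=true --overwrite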
Is there a way to disable service links globally? There's a field in podSpec:
enableServiceLinks: false
but it's true by default. I couldn't find anything in kubelet to kill it. Or is there some cool admission webhook toolchain I could use?
You can use the Kubernetes-native policy engine called Kyverno. Kyverno policies can validate, mutate (see: Mutate Resources), and generate Kubernetes resources.
A Kyverno policy is a collection of rules that can be applied to the entire cluster (ClusterPolicy) or to a specific namespace (Policy).
I will create an example to illustrate how it may work.
First we need to install Kyverno. You have the option of installing Kyverno directly from the latest release manifest or using Helm (see: Quick Start guide):
$ kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
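If you prefer Helm, a minimal sketch based on the Kyverno quick-start (repo URL and chart name as documented there; adjust the namespace if you like):
$ helm repo add kyverno https://kyverno.github.io/kyverno/
$ helm repo update
$ helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace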
After successful installation, we can create a simple ClusterPolicy:
$ cat strategic-merge-patch.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: strategic-merge-patch
spec:
  rules:
    - name: enableServiceLinks_false_globally
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          spec:
            enableServiceLinks: false
$ kubectl apply -f strategic-merge-patch.yaml
clusterpolicy.kyverno.io/strategic-merge-patch created
$ kubectl get clusterpolicy
NAME BACKGROUND ACTION READY
strategic-merge-patch true audit true
This policy adds enableServiceLinks: false to the newly created Pod.
Let's create a Pod and check if it works as expected:
$ kubectl run app-1 --image=nginx
pod/app-1 created
$ kubectl get pod app-1 -oyaml | grep "enableServiceLinks:"
enableServiceLinks: false
It also works with Deployments, StatefulSets, DaemonSets etc.:
$ kubectl create deployment deploy-1 --image=nginx
deployment.apps/deploy-1 created
$ kubectl get pod deploy-1-7cfc5d6879-kfdlh -oyaml | grep "enableServiceLinks:"
enableServiceLinks: false
More examples with detailed explanations can be found in the Kyverno Writing Policies documentation.
I am trying to set up Argo CD on Google Kubernetes Engine Autopilot and each pod/container is defaulting to the default resource request (0.5 vCPU and 2 GB RAM per container). This is way more than the pods need and is going to be too expensive (13GB of memory reserved in my cluster just for Argo CD). I am following the Getting Started guide for Argo CD and am running the following command to add Argo CD to my cluster:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
How do I specify the resources for each pod when I am using someone else's yaml template? The only way I have found to set resource requests is with my own yaml file like this:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
    - name: memory-demo-ctr
      image: polinux/stress
      resources:
        limits:
          memory: "200Mi"
        requests:
          memory: "100Mi"
But I don't understand how to apply this type of configuration to Argo CD.
Thanks!
So right now you are just using kubectl with the manifest from GitHub and you cannot edit it. What you need to do is:
1. Download the file with wget:
wget https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
2. Use an editor like nano or vim to edit the file, adding the requests as explained in my comments above (a sketch of the result is shown below).
3. Then use kubectl apply -f newfile.yaml
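For illustration, the edited container spec of one of the Deployments in install.yaml could end up looking roughly like this (the container name and the values are assumptions; check the actual manifest and your own sizing):
# Excerpt from the argocd-repo-server Deployment after editing (illustrative values)
containers:
  - name: argocd-repo-server
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi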
You can dump the YAML of the Argo CD workloads, then customize the resource requests, and then apply the modified YAML.
$ kubectl get deployment -n argocd -o yaml > argocd_deployment.yaml
$ kubectl get sts -n argocd -o yaml > argocd_statefulset.yaml
$ # modify resource
$ vim argocd_deployment.yaml
$ vim argocd_statefulset.yaml
$ kubectl apply -f argocd_deployment.yaml
$ kubectl apply -f argocd_statefulset.yaml
Or modify the Deployments and StatefulSets directly with kubectl edit:
$ kubectl edit deployment -n argocd
$ kubectl edit sts -n argocd
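If you want something scriptable rather than interactive, kubectl patch works too. A sketch for a single deployment (the container name and the request values are assumptions; verify them against your cluster):
$ kubectl -n argocd patch deployment argocd-repo-server \
    -p '{"spec":{"template":{"spec":{"containers":[{"name":"argocd-repo-server","resources":{"requests":{"cpu":"100m","memory":"128Mi"}}}]}}}}'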
I am using Helm v3.3.0 with Kubernetes 1.16.
The cluster has the Kubernetes Service Catalog installed, so external services implementing the Open Service Broker API spec can be instantiated as K8S resources - as ServiceInstances and ServiceBindings.
ServiceBindings reflect as K8S Secrets and contain the binding information of the created external service. These secrets are usually mapped into the Docker containers as environment variables or volumes in a K8S Deployment.
Now I am using Helm to deploy my Kubernetes resources, and I read here that...
The [Helm] install order of Kubernetes types is given by the enumeration InstallOrder in kind_sorter.go
In that file, the order mentions neither ServiceInstance nor ServiceBinding as resources, and that would mean that Helm installs these resource types after everything in its InstallOrder list - in particular after Deployments. That seems to match the output of helm install --dry-run --debug run on my chart, where the order indicates that the K8S Service Catalog resources are applied last.
Question: What I cannot understand is why my Deployment does not fail to install with Helm.
After all, my Deployment resource seems to be deployed before the ServiceBinding is. And it is the Secret generated from the ServiceBinding that my Deployment references. I would expect it to fail, since the Secret is not there yet when the Deployment is getting installed. But that is not the case.
Is that just a timing glitch / lucky coincidence, or is this something I can rely on, and why?
Thanks!
As said in the comment I posted:
In fact your Deployment is failing at the start with Status: CreateContainerConfigError. Your Deployment is created before the Secret from the ServiceBinding. It's only working because it was recreated once the Secret from the ServiceBinding was available.
I wanted to give more insight with an example of why the Deployment didn't fail.
What is happening (simplified in order):
Deployment -> created and spawned a Pod
Pod -> fails with status: CreateContainerConfigError because the Secret is missing
ServiceBinding -> creates the Secret in the background
Pod gets the required Secret and starts
The previously mentioned InstallOrder leaves ServiceInstance and ServiceBinding for last, per the comment on line 147.
Example
Assuming that:
There is a working Kubernetes cluster
Helm3 installed and ready to use
Following guides:
Kubernetes.io: Install Service Catalog using Helm
Magalix.com: Blog: Kubernetes Service Catalog
There is a Helm chart with the following files in the templates/ directory:
ServiceInstance
ServiceBinding
Deployment
Files:
ServiceInstance.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-instance
spec:
  clusterServiceClassExternalName: redis
  clusterServicePlanExternalName: 5-0-4
ServiceBinding.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-binding
spec:
  instanceRef:
    name: example-instance
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu
          command:
            - sleep
            - "infinity"
          # part below responsible for getting secret as env variable
          env:
            - name: DATA
              valueFrom:
                secretKeyRef:
                  name: example-binding
                  key: host
Applying the above resources to check what is happening can be done in 2 ways:
The first method is to use the timestamps from $ kubectl get RESOURCE -o yaml
The second method is to use $ kubectl get RESOURCE --watch-only=true
First method
As said previously, the Pod from the Deployment couldn't start because the Secret was not available when the Pod tried to spawn. After the Secret became available, the Pod started.
The statuses this Pod had were the following:
Pending
ContainerCreating
CreateContainerConfigError
Running
This is a table with timestamps of Pod and Secret:
| Pod | Secret |
|-------------------------------------------|-------------------------------------------|
| creationTimestamp: "2020-08-23T19:54:47Z" | - |
| - | creationTimestamp: "2020-08-23T19:54:55Z" |
| startedAt: "2020-08-23T19:55:08Z" | - |
You can get these timestamps by invoking the commands below:
$ kubectl get pod pod_name -n namespace -o yaml
$ kubectl get secret secret_name -n namespace -o yaml
You can also get additional information with:
$ kubectl get event -n namespace
$ kubectl describe pod pod_name -n namespace
Second method
This method requires preparation before running the Helm chart. Open additional terminal windows (for this particular case, 2 of them) and run:
$ kubectl get pod -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
$ kubectl get secret -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
After that, apply your Helm chart.
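For instance (the release name example matches the sh.helm.release.v1.example.v1 Secret seen in the output below; the chart is assumed to live in the current directory):
$ helm install example .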
Disclaimer!
The above commands will watch for changes in resources and display them with a timestamp from the OS. Please remember that these commands are only for example purposes.
The output for Pod:
21:54:47:534823000 NAME READY STATUS RESTARTS AGE
21:54:47:542107000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:553799000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:655593000 ubuntu-65976bb789-l48wz 0/1 ContainerCreating 0 0s
-> 21:54:52:001347000 ubuntu-65976bb789-l48wz 0/1 CreateContainerConfigError 0 4s
21:55:09:205265000 ubuntu-65976bb789-l48wz 1/1 Running 0 22s
The output for Secret:
21:54:47:385714000 NAME TYPE DATA AGE
21:54:47:393145000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:47:719864000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:51:182609000 understood-squid-redis Opaque 1 0s
21:54:52:001031000 understood-squid-redis Opaque 1 0s
-> 21:54:55:686461000 example-binding Opaque 6 0s
Additional resources:
Stackoverflow.com: Answer: Helm install in certain order
Alibabacloud.com: Helm charts and templates hooks and tests part 3
So to answer my own question (and thanks to #dawid-kruk and the folks in the Service Catalog SIG on Slack):
In fact, the initial start of my Pods (the ones referencing the Secret created out of the ServiceBinding) fails! It fails because the Secret is actually not there the moment K8S tries to start the pods.
Kubernetes has a self-healing mechanism, in the sense that it tries (and retries) to reach the target state of the cluster as described by the various deployed resources.
By Kubernetes retrying to get the pods running, eventually (when the Secret is finally there) all conditions will be satisfied to make the pods start up nicely. Therefore, eventually, everything is running as it should.
How could this be streamlined? One possibility would be for Helm to include the custom resources ServiceBinding and ServiceInstance into its ordered list of installable resources and install them early in the installation phase.
But even without that, Kubernetes actually deals with it just fine. The order of installation (in this case) really does not matter. And that is a good thing!
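If you did want to streamline it within the chart itself, one sketch (an assumption on my part, not something the chart requires) would be to mark the Service Catalog resources as Helm pre-install hooks, so they are applied before the rest of the release. Note the trade-off: hook resources are not managed as part of the release in the usual way, and Helm only waits for the hook object to be created, not for the resulting Secret, so a Pod retry can still happen:
# ServiceBinding.yaml, annotated as a pre-install hook (sketch)
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-binding
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "5"   # run after the ServiceInstance hook (give that one a lower weight)
spec:
  instanceRef:
    name: example-instance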
I am trying to define the namespace when executing the kubectl create deployment command.
This is what I tried:
kubectl create deployment test --image=banu/image1 namespace=test
and this doesn't work.
And I want to expose this deployment using a ClusterIP service within the cluster itself for that given namespace. How can I do that using the kubectl command line?
You can specify either -n or --namespace options.
Run kubectl create deployment test --image=nginx --namespace default --dry-run -o yaml and see the resulting deployment YAML.
Using kubectl run
kubectl run test --namespace test --image nginx --port 9090 --dry-run -o yaml
You need to create a namespace first, like this:
kubectl create ns test
ns stands for namespace, so with kubectl you say you want to create a namespace named test.
Then, while creating the deployment, you add the namespace you want:
kubectl create deployment test --image=banu/image1 -n test
The -n flag stands for namespace; that way you tell Kubernetes that all resources related to that deployment will be under the test namespace.
In order to see all the resources under a specific namespace:
kubectl get all -n test
--namespace and -n are the same thing.
Use -n test instead of namespace=test
Sample with nginx image:
$ kubectl create deployment nginx --image=nginx -n test
deployment.apps/nginx created
$ kubectl get deploy -n test
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 8s
For the second part, you need to create a Service and use the labels from the Deployment as its selector.
You can find the correct labels by running something like:
kubectl -n test describe deploy test |grep Labels:
and apply a Service like:
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: test
spec:
  ports:
    - name: test
      port: 80 # Change this port
      protocol: TCP
  type: ClusterIP
  selector:
    # Here you need to define output from previous step
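Since kubectl create deployment test normally labels the Pods with app: test, the selector here would be app: test. Alternatively, kubectl can generate the ClusterIP Service for you:
# Adjust --port (and add --target-port) to match your container
$ kubectl expose deployment test -n test --type=ClusterIP --port=80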
I am trying to pass an environment variable to a Kubernetes container.
What have I done so far?
Create a deployment
kubectl create deployment foo --image=foo:v1
Create a NODEPORT service and expose the port
kubectl expose deployment/foo --type=NodePort --port=9000
See the pods
kubectl get pods
Dump the configurations (so as to add the environment variable)
kubectl get deployments -o yaml > dev/deployment.yaml
kubectl get svc -o yaml > dev/services.yaml
kubectl get pods -o yaml > dev/pods.yaml
Add env variable to the pods
env:
  - name: FOO_KEY
    value: "Hellooooo"
Delete the svc, pods, deployments
kubectl delete -f dev/ --recursive
Apply the configuration
kubectl apply -f dev/ --recursive
Verify env parameters
kubectl describe pods
Something weird
If I manually change the meta information of the pod YAML and hard-code the name of the pod, it gets the env variable. However, this time two pods come up: one with the hard-coded name and the other with a hash appended to it. For example, if the name I hardcoded was "foo", two pods, namely foo and foo-12314faf (example), would appear in "kubectl get pods". Can you explain why?
Question
Why does the verification step not show the environment variable?
As the issue is resolved in the comment section:
If you want to set env vars on pods, I would suggest you use the set subcommand.
kubectl set env --help will provide more detail, such as how to list the env vars and create new ones.
Examples:
# Update deployment 'registry' with a new environment variable
kubectl set env deployment/registry STORAGE_DIR=/local
# List the environment variables defined on a deployments 'sample-build'
kubectl set env deployment/sample-build --list
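Applied to the deployment from the question (the deployment name foo and the value come from the question itself), that would look like:
# Set the variable on the existing deployment
kubectl set env deployment/foo FOO_KEY=Hellooooo
# List the variables to verify
kubectl set env deployment/foo --list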
A Deployment enables declarative updates for Pods and ReplicaSets. Pods are not typically launched directly on a cluster. Instead, pods are usually managed by a ReplicaSet, which in turn is managed by a Deployment.
The following thread discusses what-is-the-difference-between-a-pod-and-a-deployment.
You can add any number of env vars in your deployment file:
spec:
  containers:
    - name: auth
      image: lord/auth
      env:
        - name: MONGO_URI
          value: "mongodb://auth-mongo-srv:27017/auth"
process.env.MONGO_URI
Or you can create a secret first and then use the newly created secret in any number of deployment files to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
spec:
  containers:
    - name: auth
      image: lord/auth
      env:
        - name: MONGO_URI
          value: "mongodb://auth-mongo-srv:27017/auth"
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
process.env.MONGO_URI
process.env.JWT_KEY
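To check that the variables actually reach the container, you can exec into one of the pods (the pod name below is a placeholder):
kubectl exec <auth-pod-name> -- env | grep -E 'MONGO_URI|JWT_KEY'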