How to get an annotation from a namespace in the sidecar injector - Kubernetes

I am working with Istio. There are certain annotations that we add to our Kubernetes namespace. One of these namespace annotations also needs to be applied to the pods that are created with the sidecar-enabled=true label. For this purpose, I looked at using the Istio sidecar injector webhook, but I am not able to find a way to reference the namespace's annotations.
Is there a way to do this?

You can find all the namespace annotations you need in the Annotations: section of the output of the command below:
kubectl describe namespaces
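If you need the annotations of a single namespace in machine-readable form, you can also read them with jsonpath (my-namespace below is just a placeholder):
kubectl get namespace my-namespace -o jsonpath='{.metadata.annotations}'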
EDIT:
Your initial question was not entirely clear. As far as I understand your question and the additional clarification, you want to get the annotations that are applied to a namespace from a ConfigMap.
Official Istio Sidecar Injection Documentation says that
Manual and automatic injection both use the configuration from the
istio-sidecar-injector and istio ConfigMaps in the istio-system
namespace.
Based on this, you can dump the ConfigMap in the Istio cluster you are interested in with the following command:
$ kubectl describe configmap --namespace=istio-system istio-sidecar-injector
This will show you references for pod annotations, global values, etc.
Example:
[[ annotation .ObjectMeta `traffic.sidecar.istio.io/includeOutboundIPRanges` "*" ]]
The above queries the traffic.sidecar.istio.io/includeOutboundIPRanges annotation on the pod and defaults to "*" if it is not present.
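For example, to override that default for a particular pod, you set the annotation in the pod's metadata yourself; a minimal sketch (the CIDR value below is only illustrative):
metadata:
  annotations:
    traffic.sidecar.istio.io/includeOutboundIPRanges: "10.0.0.0/8"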

Related

The ALB listeners are not auto removed when I delete the corresponding ingress from EKS

I have deployed an AWS ALB Controller and I create listeners with ingress resources in a EKS cluster.
The steps I followed are the following:
I had an ingress for a service named first-test-api and all was fine.
I deployed a new Helm release [first], just renaming the chart from test-api to main-api. So now it is first-main-api.
Nothing seems to break in terms of k8s resources, but...
the test-api.mydomain.com listener in the AWS ALB is stuck pointing to the old service.
Has anyone encountered such a thing before?
I could delete the listener manually, but I don't want to. I'd like to know what is happening and why it didn't happen automatically :)
EDIT:
The ingress had an ALB annotation that enabled the deletion protection.
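For anyone hitting the same thing: deletion protection is typically switched on through the load-balancer-attributes annotation, along these lines (the attribute key comes from the AWS Load Balancer Controller docs; treat this as a sketch rather than the exact annotation in my ingress):
alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true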
I will provide some generic advice on things I would look at, but it might be better if you add a small, detailed example to the question.
Yes, the ALB controller should automatically manage changes to the backend.
I would suggest ignoring the helm chart and looking into the actual objects:
kubectl get ing -n <namespace> shows the ingress you are expecting?
kubectl get ing -n <ns> <name of ingress> -o yaml points to the correct/new service?
kubectl get svc -n <ns> <name of new svc> shows the new service?
kubectl get endpoints -n <ns> <name of new svc> shows the pod you are expecting?
And then some gut-feeling checks:
Check that the labels in your new service are different from the labels in the old service if you expect the two services to serve different things.
Get the logs of the ALB controller. You will see registering/deregistering activity, and sometimes errors, especially if the role of the node/service account doesn't have the proper IAM permissions.
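For example (the deployment name and namespace below assume the usual Helm installation of the controller and may differ in your cluster):
kubectl logs -n kube-system deployment/aws-load-balancer-controller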
Happy to modify the answer if you expand the question with more details.

linkerd Inject with helm or namespace?

I can't seem to find a simple answer to my question:
how do I use the linkerd inject command / options when using Helm to install a package, e.g. postgres?
I have done it with another package, but that was by adding the annotation to a values file and supplying that when running the helm install command.
With Istio, all I have to do is add a label to the namespace and it works.
So I started to look into adding the annotation to the namespaces I am working with, using the kubectl create namespace command.
However, I can't seem to find a way to add any annotation at the point of creating a namespace, unless I use a file.
So I either need a way to add this annotation to the namespace with the create command, or a way to do it when installing packages with helm.
Thanks,
I think there are a couple of ways to do this. It all depends on what you are trying to achieve and how you'd like to manage your underlying infrastructure.
I assume you want to automate the installation of Helm charts. If you are going to create the namespace using kubectl create namespace then you might be able to follow that up with kubectl annotate namespace <created-namespace> linkerd.io/inject=enabled.
Alternatively, you can make use of the Linkerd CLI and use the inject command provided -- the workflow here would involve a combination of kubectl and linkerd commands so I'm not sure it's what you're looking for. Nonetheless, you can do something like kubectl create namespace <my-namespace> -o yaml | linkerd inject - | kubectl apply -f -.
Last but not least, instead of kubectl create namespace you might be able to pipe the namespace manifest to kubectl directly and call it a day. You can use something similar to the snippet below:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: foo
  annotations:
    linkerd.io/inject: enabled
EOF
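Once the namespace carries the annotation, the Helm release just needs to be installed into it. A sketch with the Bitnami PostgreSQL chart (the repo, chart and release names are only examples):
helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create namespace postgres
kubectl annotate namespace postgres linkerd.io/inject=enabled
helm install my-postgres bitnami/postgresql --namespace postgres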

What is kind in Kubernetes YAML meant for?

I'm very new to Kubernetes. What is meant by kind: Deployment? What are the different kinds? Is there any documentation available for this?
kind specifies the type of Kubernetes object to be created from the YAML file.
kind: Deployment represents the Kubernetes Deployment object.
You can use the following command to view different supported objects:
kubectl api-resources
You can also review the API reference for a detailed overview of the Kubernetes objects.
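As a minimal illustration, here is a small Deployment manifest; kind tells the API server which object type the rest of the file describes (the names and image are just examples):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25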

Read Kubernetes deployment annotation

How can I get a particular Kubernetes annotation from a deployment resource using kubectl? I know I can dynamically set an annotation on a deployment using:
kubectl annotate deployment api imageTag=dev-ac5ba48.k1dp9
Is there a single kubectl command to then read this deployment's imageTag annotation?
You can use the following command to get the imageTag annotation (given that annotation exists):
kubectl get deploy DEPLOY_NAME -o jsonpath='{.metadata.annotations.imageTag}'
You can use jsonpath for that:
kubectl get deployment api -o=jsonpath='{.metadata.annotations}'
The command above will give you all the annotations of your deployment api.
For reference you can take a look at this doc page as it may help.
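Putting the two together, a quick round trip could look like this (the deployment name api and the tag value are taken from the question):
kubectl annotate deployment api imageTag=dev-ac5ba48.k1dp9 --overwrite
kubectl get deployment api -o jsonpath='{.metadata.annotations.imageTag}'
# prints: dev-ac5ba48.k1dp9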

Schedule pod on specific node without modifying PodSpec

As a k8s cluster administrator, I want to specify on which nodes (using labels) pods will be scheduled, but without modifying any PodSpec section.
So, nodeSelector, affinity and taints can't be options.
Is there any other solution?
PS: the reason I can't modify the PodSpec is that the deployed applications are available as Helm charts and I don't have control over those files. Moreover, if I change the PodSpec, the change will be lost on the next release upgrade.
You can use the PodNodeSelector admission controller for this:
This admission controller has the following behavior:
If the Namespace has an annotation with the key scheduler.alpha.kubernetes.io/node-selector, use its value as the node selector.
If the namespace lacks such an annotation, use the clusterDefaultNodeSelector defined in the PodNodeSelector plugin configuration file as the node selector.
Evaluate the pod’s node selector against the namespace node selector for conflicts. Conflicts result in rejection.
Evaluate the pod's node selector against the namespace-specific whitelist defined in the plugin configuration file. Conflicts result in rejection.
First of all, you will need to enable this admission controller. The way to enable it depends on your environment, but it is done via the kube-apiserver flag --enable-admission-plugins=PodNodeSelector.
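If you also want a cluster-wide default or per-namespace defaults, the plugin reads an admission configuration file passed to the API server via --admission-control-config-file. A sketch of that file, following the format in the Kubernetes docs (the selector values are placeholders):
podNodeSelectorPluginConfig:
  clusterDefaultNodeSelector: name-of-node-selector
  namespace1: name-of-node-selector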
Then create a namespace and annotate it with whatever node label you want all Pods in that namespace to have:
kubectl create ns node-selector-test
kubectl annotate ns node-selector-test \
scheduler.alpha.kubernetes.io/node-selector=mynodelabel=mynodelabelvalue
To test it you could do something like this:
kubectl run busybox \
-n node-selector-test -it --restart=Never --attach=false --image=busybox
kubectl get pod busybox -n node-selector-test -o yaml
It should output something like this:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  ....
spec:
  ...
  nodeSelector:
    mynodelabel: mynodelabelvalue
Now, unless that label exists on some nodes, this Pod will never be scheduled, so put this label on a node to see it scheduled:
kubectl label node myfavoritenode mynodelabel=mynodelabelvalue
No, there is no other way to specify on which nodes a pod will be scheduled; only labels and selectors.
I think the problem with Helm is related to that issue.
For now, the only way for you is to change the spec, remove the release, and deploy a new one with updated specs.
UPD
@Janos Lenart provides a way to manage scheduling per namespace. That is a good idea if your releases are already split across namespaces and if you don't want to spawn different pods on different nodes within a single release. Otherwise, you will have to create new releases in new namespaces, and in that case I highly recommend using selectors in the Pod spec.