We have Istio installed without sidecar injection enabled globally, and I want to enable it for a specific service in a new namespace.
I've added the following to my deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gow
  labels:
    app: gow
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: gow
        tier: service
      annotations:
        sidecar.istio.io/inject: "true"
While running
kubectl get namespace -L istio-injection
I don't see anything enabled; everything is empty.
How can I verify that the sidecar is created? I don't see anything new.
You can use istioctl kube-inject to do that:
kubectl create namespace asdd
istioctl kube-inject -f nginx.yaml | kubectl apply -f -
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: asdd
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "True"
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Result:
nginx-deployment-55b6fb474b-77788 2/2 Running 0 5m36s
nginx-deployment-55b6fb474b-jrkqk 2/2 Running 0 5m36s
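If you want to double-check that the second container in each pod really is the Envoy sidecar, a quick way (a sketch; the jsonpath simply prints each pod's container names) is:

# Print each pod in the namespace together with its container names;
# every line should list nginx alongside istio-proxy if injection worked.
kubectl get pods -n asdd -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'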
Let me know if you have any more questions.
You can describe your pod to see the list of containers; one of them should be the sidecar container. Look for a container called istio-proxy.
kubectl describe pod <pod-name>
It should look something like the output below.
$ kubectl describe pod demo-red-pod-8b5df99cc-pgnl7
SNIPPET from the output:
Name:               demo-red-pod-8b5df99cc-pgnl7
Namespace:          default
.....
Labels:             app=demo-red
                    pod-template-hash=8b5df99cc
                    version=version-red
Annotations:        sidecar.istio.io/status={"version":"3c0b8d11844e85232bc77ad85365487638ee3134c91edda28def191c086dc23e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status:             Running
IP:                 10.32.0.6
Controlled By:      ReplicaSet/demo-red-pod-8b5df99cc
Init Containers:
  istio-init:
    Container ID:  docker://bef731eae1eb3b6c9d926cacb497bb39a7d9796db49cd14a63014fc1a177d95b
    Image:         docker.io/istio/proxy_init:1.0.2
    Image ID:      docker-pullable://docker.io/istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
    .....
    State:         Terminated
      Reason:      Completed
    .....
    Ready:         True
Containers:
  demo-red:
    Container ID:  docker://8cd9957955ff7e534376eb6f28b56462099af6dfb8b9bc37aaf06e516175495e
    Image:         chugtum/blue-green-image:v3
    Image ID:      docker-pullable://docker.io/chugtum/blue-green-image@sha256:274756dbc215a6b2bd089c10de24fcece296f4c940067ac1a9b4aea67cf815db
    State:         Running
      Started:     Sun, 09 Dec 2018 18:12:31 -0800
    Ready:         True
  istio-proxy:
    Container ID:  docker://ca5d690be8cd6557419cc19ec4e76163c14aed2336eaad7ebf17dd46ca188b4a
    Image:         docker.io/istio/proxyv2:1.0.2
    Image ID:      docker-pullable://docker.io/istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
    Args:
      proxy
      sidecar
    .....
    State:         Running
      Started:     Sun, 09 Dec 2018 18:12:31 -0800
    Ready:         True
    .....
You need to have the mutating admission webhook in place for automatic sidecar injection. You can check its namespace selector with:
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5
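Depending on your Istio version, the namespaceSelector of that webhook typically keys off the istio-injection label, roughly like this (the exact output may differ in your installation):

namespaceSelector:
  matchLabels:
    istio-injection: enabled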
There may be many reasons for sidecar injection failures, as described in the Istio injection documentation, which also contains a table showing the final injection status for different combinations of namespace label, injection policy, and pod annotation.
The key takeaway from that table is that the namespace must be labeled with istio-injection=enabled:
kubectl label namespace default istio-injection=enabled --overwrite
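After labeling the namespace, you can verify the label and restart your workload so the pods are recreated with the sidecar injected (a sketch; substitute your own namespace, using the gow deployment from the question):

# The ISTIO-INJECTION column should now show "enabled" for your namespace
kubectl get namespace -L istio-injection

# Recreate the pods so the webhook can inject the istio-proxy container
kubectl rollout restart deployment gow -n <your-namespace>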
Say I have a pod YAML such as:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
And a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
Now I first create the Pod:
$ kubectl apply -f pod.yaml
And only then the Deployment:
$ kubectl apply -f deployment.yaml
I thought that, since the pod.yaml metadata includes an app: nginx label that matches the Deployment's selector, the Deployment controller would only create 2 nginx:1.17.1 pods, but I see that all 3 are created. Why is that?
In addition to the app: nginx label, the Deployment controller also adds a pod-template-hash label to each pod it creates.
If we check the labels of the running pods, we can see the pod-template-hash=5d5dd5dd49 label on the my-deployment pods:
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
my-deployment-5d5dd5dd49-9tbcx 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-b88f4 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-x7n8q 1/1 Running 0 55s app=nginx,pod-template-hash=5d5dd5dd49
nginx 1/1 Running 0 62s app=nginx
According to the official documentation:
The pod-template-hash label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
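You can also confirm that the hash is part of the ReplicaSet's selector (a quick check, using the hash value from the listing above):

# Show the selector of the ReplicaSet owned by my-deployment;
# it contains both app=nginx and the generated pod-template-hash.
kubectl get rs my-deployment-5d5dd5dd49 -o jsonpath='{.spec.selector.matchLabels}'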
This is why the Deployment didn't adopt the standalone nginx pod that carries only the app: nginx label, and still created all 3 replicas of its own.
I have a namespace called test containing 3 pods: frontend, backend and database.
This is the manifest of the pods:
kind: Pod
apiVersion: v1
metadata:
  name: frontend
  namespace: test
  labels:
    app: todo
    tier: frontend
spec:
  containers:
  - name: frontend
    image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: backend
  namespace: test
  labels:
    app: todo
    tier: backend
spec:
  containers:
  - name: backend
    image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: database
  namespace: test
  labels:
    app: todo
    tier: database
spec:
  containers:
  - name: database
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: example
I would like to implement a network policy that only allows incoming traffic from the backend to the database and disallows incoming traffic from the frontend.
This is my network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-allow
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: todo
      tier: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: todo
          tier: backend
    ports:
    - protocol: TCP
      port: 3306
    - protocol: UDP
      port: 3306
This is the output of kubectl get pods -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
backend 1/1 Running 0 28m 172.17.0.5 minikube <none> <none>
database 1/1 Running 0 28m 172.17.0.4 minikube <none> <none>
frontend 1/1 Running 0 28m 172.17.0.3 minikube <none> <none>
This is the output of kubectl get networkpolicy -n test -o wide
NAME POD-SELECTOR AGE
app-allow app=todo,tier=database 21m
When I execute telnet <ip-of-mysql-pod> 3306 from the frontend pod, the connection gets established, so the network policy does not seem to be working:
kubectl exec -it pod/frontend bash -n test
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@frontend:/# telnet 172.17.0.4 3306
Trying 172.17.0.4...
Connected to 172.17.0.4.
Escape character is '^]'.
J
8.0.25 k{%J\�#(t%~qI%7caching_sha2_password
Is there something I am missing?
Thanks
It seems that you forgot to add a "default deny" policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
The default behavior of NetworkPolicy is to allow all connections between pods unless they are explicitly restricted.
More details here: https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic
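After applying both policies you can repeat the test from the frontend pod; assuming your CNI plugin actually enforces NetworkPolicy, the connection attempt should now hang and time out instead of connecting (a sketch reusing the database pod IP from your listing):

# Retry the connection from the frontend pod with a short timeout;
# with the default-deny plus the app-allow policy in place this should fail.
kubectl exec -it frontend -n test -- timeout 5 telnet 172.17.0.4 3306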
I'm practicing with Kubernetes taints. I have tainted my node and then made a deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
      tolerations:
      - key: "test"
        operator: "Equal"
        value: "blue"
        effect: "NoSchedule"
kubectl describe nodes knode2:
Name:               knode2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=knode2
                    kubernetes.io/os=linux
                    testing=test
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: **********
                    projectcalico.org/IPv4IPIPTunnelAddr: ********
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 27 Oct 2020 17:23:47 +0200
Taints:             test=blue:NoSchedule
But when I deploy this YAML file, the pods do not go only to that tainted node.
Why is that?
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes; a toleration only allows a pod to land on a tainted node, it does not force it there. That's exactly the opposite of what you intend to do.
To constrain a Pod to run only on particular node(s), or to prefer particular nodes, use nodeSelector or node affinity.
NodeSelector example
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
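For the selector above to match, the target node needs the corresponding label. disktype=ssd is just the example label from the docs; on your node you could add it like this (or reuse the existing testing=test label instead):

# Label the node from the question so the nodeSelector above can match it
kubectl label nodes knode2 disktype=ssd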
Node affinity is conceptually similar to nodeSelector -- it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.
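As a sketch, here is the same kind of constraint expressed with node affinity, reusing the testing=test label that is already on knode2 (the pod name nginx-affinity is just an example; adapt the key and values to your own labels):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: only schedule on nodes carrying testing=test
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: testing
            operator: In
            values:
            - test
  containers:
  - name: nginx
    image: nginx

Combined with the taint and the matching toleration, this makes the pods land only on knode2 while keeping pods without the toleration away from it.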
I have an existing Kubernetes deployment which is running fine. Now I want to edit it with some new environment variables which I will use in the pod.
Will editing the deployment delete and create a new pod, or will it update the existing pod?
My requirement is that a new pod should be created whenever I edit/update the deployment.
Kubernetes is always going to recreate your pods when you change or add env vars.
Let's check this together by creating a deployment without any env vars on it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Let's check and note these pod names so we can compare later:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-56db997f77-9mpjx 1/1 Running 0 8s
nginx-deployment-56db997f77-mgdv9 1/1 Running 0 8s
nginx-deployment-56db997f77-zg96f 1/1 Running 0 8s
Now let's edit this deployment and include one env var making the manifest look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: STACK_GREETING
          value: "Hello from the MARS"
        ports:
        - containerPort: 80
After we finish editing, let's check our pod names and see if they changed:
$ kubectl get pod
nginx-deployment-5b4b68cb55-9ll7p 1/1 Running 0 25s
nginx-deployment-5b4b68cb55-ds9kb 1/1 Running 0 23s
nginx-deployment-5b4b68cb55-wlqgz 1/1 Running 0 21s
As we can see, all pod names changed. Let's check if our env var got applied:
$ kubectl exec -ti nginx-deployment-5b4b68cb55-9ll7p -- sh -c 'echo $STACK_GREETING'
Hello from the MARS
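The name change happens because the edit created a new ReplicaSet with a new pod-template-hash. You can see the old ReplicaSet scaled down next to the new one (a quick check; the exact names will differ in your cluster):

# The previous ReplicaSet is kept at 0 replicas, the new one owns the current pods
kubectl get replicasets -l app=nginx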
The same behavior occurs if you change the value of the var or remove it: all pods need to be removed and created again for the change to take effect.
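If you prefer not to edit the manifest by hand, the same rollout can be triggered with kubectl set env (the new value here is just an example):

# Changing, adding or removing an env var on the pod template triggers a new rollout
kubectl set env deployment/nginx-deployment STACK_GREETING="Hello from Earth"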
If you would like to create an additional pod alongside the existing ones, then you need to create a new deployment for it. By design, deployments manage the replicas of the pods that belong to them.
I created a deployment like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: scs-db-sink
spec:
  selector:
    matchLabels:
      app: scs-db-sink
  replicas: 1
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
      - name: scs-db-sink
        image: 'IMAGE_NAME'
        imagePullPolicy: Always
        ports:
        - containerPort: 1068
kubectl get pods shows me that the pod is running:
scs-db-sink-74c4b6cd6b-tchm9 1/1 Running 0 16m
Question:
How can I set up the pod name to be scs-db-sink-0 and have it increase to scs-db-sink-1 when I scale up?
Thanks
A Deployment's pods are named <replicaset-name>-<random-suffix>, where the ReplicaSet name is <deployment-name>-<random-suffix>, and the ReplicaSet is created automatically by the Deployment. So you can't achieve your expected names with a Deployment.
However, you can use a StatefulSet in this case. A StatefulSet's pods get stable, ordinal names (<statefulset-name>-0, <statefulset-name>-1, ...), which is exactly the pattern you want. Check the StatefulSet documentation for details.
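As a sketch, here is the same workload as a StatefulSet; the headless Service is required because a StatefulSet needs a serviceName, and its name scs-db-sink is an assumption here:

apiVersion: v1
kind: Service
metadata:
  name: scs-db-sink
spec:
  clusterIP: None           # headless Service used by the StatefulSet for stable identities
  selector:
    app: scs-db-sink
  ports:
  - port: 1068
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scs-db-sink
spec:
  serviceName: scs-db-sink  # must reference the headless Service above
  replicas: 1
  selector:
    matchLabels:
      app: scs-db-sink
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
      - name: scs-db-sink
        image: 'IMAGE_NAME'
        imagePullPolicy: Always
        ports:
        - containerPort: 1068

With this, the pods are named scs-db-sink-0, scs-db-sink-1, ... as you scale up.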