What is the difference between CNI calico and calico tigera? [closed] - kubernetes

I am unsure what the difference between "plain calico"
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
and the "calico tigera" (operator) is.
helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico projectcalico/tigera-operator --version v3.24.1 \
--create-namespace -f values.yaml --namespace tigera-operator
I only really need a CNI, ideally the least convoluted one.
My impression is that Tigera is somehow a "new, extended version", and it makes me sad to suddenly see a much fuller K8s cluster because of it (it seems as if the Calico devs mainly wanted to get funding and needed to blow up the complexity to promote their product, but I might be wrong, hence the question).
root@cp:~# kubectl get all -A | grep -e 'NAMESPACE\|calico'
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver pod/calico-apiserver-8665d9fcfb-6z7sv 1/1 Running 0 7m30s
calico-apiserver pod/calico-apiserver-8665d9fcfb-95rlh 1/1 Running 0 7m30s
calico-system pod/calico-kube-controllers-78687bb75f-ns5nj 1/1 Running 0 8m3s
calico-system pod/calico-node-2q8h9 1/1 Running 0 7m43s
calico-system pod/calico-typha-6d48dfd49d-p5p47 1/1 Running 0 7m47s
calico-system pod/csi-node-driver-9gjc4 2/2 Running 0 8m4s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
calico-apiserver service/calico-api ClusterIP 10.105.6.52 <none> 443/TCP 7m30s
calico-system service/calico-kube-controllers-metrics ClusterIP 10.105.39.117 <none> 9094/TCP 8m3s
calico-system service/calico-typha ClusterIP 10.102.152.6 <none> 5473/TCP 8m5s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 8m4s
calico-system daemonset.apps/csi-node-driver 1 1 1 1 1 kubernetes.io/os=linux 8m4s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
calico-apiserver deployment.apps/calico-apiserver 2/2 2 2 7m30s
calico-system deployment.apps/calico-kube-controllers 1/1 1 1 8m3s
calico-system deployment.apps/calico-typha 1/1 1 1 8m4s
NAMESPACE NAME DESIRED CURRENT READY AGE
calico-apiserver replicaset.apps/calico-apiserver-8665d9fcfb 2 2 2 7m30s
calico-system replicaset.apps/calico-kube-controllers-78687bb75f 1 1 1 8m3s
calico-system replicaset.apps/calico-typha-588b4ff644 0 0 0 8m4s
calico-system replicaset.apps/calico-typha-6d48dfd49d 1 1 1 7m47s

Tigera is the company behind Calico; its commercial product is a Cloud-Native Application Protection Platform (CNAPP).
For your purposes, you just want the first option, the plain Calico CNI.

A CNI is a small network plugin used for allocating IP addresses, whereas Calico installed via the Tigera operator takes care of the whole Kubernetes networking layer, connecting nodes and services.
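To be concrete about what the operator route adds: the tigera-operator installs and manages the same Calico components you see above (calico-node, typha, the apiserver) from an Installation custom resource instead of a long static manifest. A minimal sketch of such a resource, assuming the operator.tigera.io/v1 API and an example pod CIDR, might look like this:

# Hypothetical minimal Installation resource consumed by the tigera-operator.
# The operator watches this object and creates calico-node, calico-typha, etc. from it.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 192.168.0.0/16          # assumed example; must match your cluster's pod CIDR
        encapsulation: VXLANCrossSubnet

The plain calico.yaml manifest skips the operator and the apiserver entirely and only deploys the calico-node DaemonSet plus kube-controllers into kube-system, which is closer to "just a CNI".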

Related

How to expose Kubernetes Dashboard with Nginx Ingress? [closed]

The whole cluster consists of 3 nodes and everything seems to run correctly:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default ingress-nginx-controller-5c8d66c76d-wk26n 1/1 Running 0 12h
ingress-nginx-2 ingress-nginx-2-controller-6bfb65b8-9zcjm 1/1 Running 0 12h
kube-system calico-kube-controllers-684bcfdc59-2p72w 1/1 Running 1 (7d11h ago) 7d11h
kube-system calico-node-4zdwr 1/1 Running 2 (5d10h ago) 7d11h
kube-system calico-node-g5zt7 1/1 Running 0 7d11h
kube-system calico-node-x4whm 1/1 Running 0 7d11h
kube-system coredns-8474476ff8-jcj96 1/1 Running 0 5d10h
kube-system coredns-8474476ff8-v5rvz 1/1 Running 0 5d10h
kube-system dns-autoscaler-5ffdc7f89d-9s7rl 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-apiserver-node1 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-controller-manager-node1 1/1 Running 3 (5d10h ago) 7d11h
kube-system kube-proxy-2x8fg 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-proxy-pqqv7 1/1 Running 0 7d11h
kube-system kube-proxy-wdb45 1/1 Running 0 7d11h
kube-system kube-scheduler-node1 1/1 Running 3 (5d10h ago) 7d11h
kube-system nginx-proxy-node2 1/1 Running 0 7d11h
kube-system nginx-proxy-node3 1/1 Running 0 7d11h
kube-system nodelocaldns-6mrqv 1/1 Running 2 (5d10h ago) 7d11h
kube-system nodelocaldns-lsv8x 1/1 Running 0 7d11h
kube-system nodelocaldns-pq6xl 1/1 Running 0 7d11h
kubernetes-dashboard dashboard-metrics-scraper-856586f554-6s52r 1/1 Running 0 4d11h
kubernetes-dashboard kubernetes-dashboard-67484c44f6-gp8r5 1/1 Running 0 4d11h
The Dashboard service works fine as well:
$ kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.233.20.30 <none> 8000/TCP 4d11h
kubernetes-dashboard ClusterIP 10.233.62.70 <none> 443/TCP 4d11h
What I did recently, was creating an Ingress to expose the Dashboard to be available globally:
$ cat ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
spec:
  defaultBackend:
    service:
      name: kubernetes-dashboard
      port:
        number: 443
After applying the configuration above, it looks like it works correctly:
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
dashboard <none> * 80 10h
However, trying to access the Dashboard via any of the URLs below, over both http and https, returns a Connection Refused error:
https://10.11.12.13/api/v1/namespaces/kube-system/services/kube-dns/proxy
https://10.11.12.13/api/v1/
https://10.11.12.13/
What did I miss in this configuration? Additional comment: I don't want to assign any domain to the Dashboard; at the moment it's fine to access it by IP address.
Ingress is a namespaced resource, and the kubernetes-dashboard pod is located in the "kubernetes-dashboard" namespace, so you need to move the Ingress to the "kubernetes-dashboard" namespace (a sketch follows the command below).
To list all namespaced K8s resources:
kubectl api-resources --namespaced=true
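For illustration, the same manifest from the question with only the namespace added (nothing else changed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
spec:
  defaultBackend:
    service:
      name: kubernetes-dashboard
      port:
        number: 443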
Are you running MetalLB or a similar load balancer for nginx, or are you using a NodePort for the ingress endpoint?
You need to access the Ingress either via a load balancer IP or an nginx NodePort, and as far as I know you will need a hostname/DNS entry for the Ingress.
If you just want to access the dashboard without a hostname, you don't need an Ingress but a LoadBalancer or NodePort service pointing at the dashboard pods (see the sketch below).
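If a NodePort is acceptable, one possible shortcut (a sketch, not from the original answer, using standard kubectl only) is to patch the existing dashboard Service instead of creating an Ingress:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard    # note the high port mapped to 443

The dashboard should then be reachable at https://<any-node-ip>:<node-port>, still without any DNS entry.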

Old ReplicaSet not getting replaced by new ReplicaSet after a kubectl edit

I am creating a deployment using this yaml file (a hypothetical reconstruction is sketched below). It creates 4 busybox pod replicas. All fine till here.
But when I edit this deployment using the command kubectl edit deployment my-dep2, only changing the version of the busybox image to 1.31 (a downgrade, but still an update from the K8s point of view), the old ReplicaSet is not completely replaced.
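The linked yaml is not reproduced in the question; a hypothetical deployment of the same shape (name and labels taken from the output below, image tag assumed, no command set) would be something like:

# Hypothetical reconstruction, not the asker's actual file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep2
  labels:
    app: my-dep2
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-dep2
  template:
    metadata:
      labels:
        app: my-dep2
    spec:
      containers:
        - name: busybox
          image: busybox:1.32     # assumed original tag, later edited to 1.31
          # no command: busybox exits right away, which explains the CrashLoopBackOff below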
The output of kubectl get all --selector app=my-dep2 post the edit is:
NAME READY STATUS RESTARTS AGE
pod/my-dep2-55f67b974-5k7t9 0/1 ErrImagePull 2 5m26s
pod/my-dep2-55f67b974-wjwfv 0/1 CrashLoopBackOff 2 5m26s
pod/my-dep2-dcf7978b7-22khz 0/1 CrashLoopBackOff 6 12m
pod/my-dep2-dcf7978b7-2q5lw 0/1 CrashLoopBackOff 6 12m
pod/my-dep2-dcf7978b7-8mmvb 0/1 CrashLoopBackOff 6 12m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-dep2 0/4 2 0 12m
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-dep2-55f67b974 2 2 0 5m27s
replicaset.apps/my-dep2-dcf7978b7 3 3 0 12m
As you can see from the output above, there are 2 ReplicaSets existing in parallel. I expect the old ReplicaSet to be completely replaced by the new one (containing version 1.31 of busybox), but this is not happening. What am I missing here?
You are ignoring the errors ErrImagePull and CrashLoopBackOff. They are telling you that the new containers cannot be run (the image was not found in the Docker registry), so the old ones are kept to ensure the service keeps running (the default rolling update behaviour).
Edit
Also, your busybox containers start, run nothing (as far as I can remember) and then finish, which causes Kubernetes to restart them, so they never reach a stable running state. Maybe you'd better add something like sleep 300 as their entrypoint? (see the sketch below)
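In manifest terms, that suggestion amounts to something like the following self-contained test Pod (a sketch; any long-running command works):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep-test          # hypothetical name, just for trying the idea in isolation
spec:
  containers:
    - name: busybox
      image: busybox:1.31
      command: ["sleep", "300"]     # keeps the container alive so the pod can become Ready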
This is a totally normal, expected result, related to the rolling update mechanism in Kubernetes.
Take a quick look at the following working example, in which I used a sample nginx Deployment. Once it was deployed, I ran:
kubectl edit deployments.apps nginx-deployment
and removed the image tag, which is effectively an update to nginx:latest. Immediately after applying the change you can see the following:
$ kubectl get all --selector=app=nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-574b87c764-bvmln 0/1 Terminating 0 2m6s
pod/nginx-deployment-574b87c764-zfzmh 1/1 Running 0 2m6s
pod/nginx-deployment-574b87c764-zskkk 1/1 Running 0 2m7s
pod/nginx-deployment-6fcf476c4-88fdm 0/1 ContainerCreating 0 1s
pod/nginx-deployment-6fcf476c4-btvgv 1/1 Running 0 3s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-deployment ClusterIP 10.3.247.159 <none> 80/TCP 6d4h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 2 3 2m7s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-574b87c764 2 2 2 2m7s
replicaset.apps/nginx-deployment-6fcf476c4 2 2 1 3s
As you can see, at a certain point in time there are running pods in both ReplicaSets. This is because of the mentioned rolling update mechanism, which ensures your app stays available while it is being updated.
When the update process has ended, the replica count of the old ReplicaSet is reduced to 0, so no running pods are managed by it anymore once the new one has reached its desired state:
$ kubectl get all --selector=app=nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-6fcf476c4-88fdm 1/1 Running 0 10s
pod/nginx-deployment-6fcf476c4-btvgv 1/1 Running 0 12s
pod/nginx-deployment-6fcf476c4-db5z7 1/1 Running 0 8s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-deployment ClusterIP 10.3.247.159 <none> 80/TCP 6d4h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 2m16s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-574b87c764 0 0 0 2m16s
replicaset.apps/nginx-deployment-6fcf476c4 3 3 3 12s
You may ask yourself: why is it still there? Why is it not deleted immediately after the new one becomes ready? Try the following:
$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>
As you can see, there are 2 revisions of our rollout for this deployment. So now we may want to simply undo this recent change:
$ kubectl rollout undo deployment nginx-deployment
deployment.apps/nginx-deployment rolled back
Now, when we look at our ReplicaSets we can observe the reverse process:
$ kubectl get all --selector=app=nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-574b87c764-6j7l5 0/1 ContainerCreating 0 1s
pod/nginx-deployment-574b87c764-m7956 1/1 Running 0 4s
pod/nginx-deployment-574b87c764-v2r75 1/1 Running 0 3s
pod/nginx-deployment-6fcf476c4-88fdm 0/1 Terminating 0 3m25s
pod/nginx-deployment-6fcf476c4-btvgv 1/1 Running 0 3m27s
pod/nginx-deployment-6fcf476c4-db5z7 0/1 Terminating 0 3m23s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-deployment ClusterIP 10.3.247.159 <none> 80/TCP 6d4h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 5m31s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-574b87c764 3 3 2 5m31s
replicaset.apps/nginx-deployment-6fcf476c4 1 1 1 3m27s
Note that there is no need to create a third ReplicaSet, as the old one is still there and can be used to undo our recent change. The final result looks as follows:
$ kubectl get all --selector=app=nginx
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-574b87c764-6j7l5 1/1 Running 0 40s
pod/nginx-deployment-574b87c764-m7956 1/1 Running 0 43s
pod/nginx-deployment-574b87c764-v2r75 1/1 Running 0 42s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-deployment ClusterIP 10.3.247.159 <none> 80/TCP 6d4h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 6m10s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-574b87c764 3 3 3 6m10s
replicaset.apps/nginx-deployment-6fcf476c4 0 0 0 4m6s
I hope the above example helped you realize why the old ReplicaSet isn't immediately removed and what it can still be useful for.
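The knobs that control this behaviour live in the Deployment spec; the relevant portion of a manifest, with the Kubernetes default values spelled out, looks roughly like this:

spec:
  revisionHistoryLimit: 10          # how many old ReplicaSets are kept around for rollbacks
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%                 # extra pods allowed above the desired count during an update
      maxUnavailable: 25%           # pods that may be unavailable during an update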
As @emi said, busybox, alpine, etc. don't do anything unless you give them an explicit command. Kubernetes tries to keep them running, but the default container does not perform any action, and in the end Kubernetes says okay, something is wrong, no need to keep restarting the container again and again. For test purposes, it might look as below.
kind: Pod
apiVersion: v1
metadata:
  name: my-test-pod
spec:
  containers:
    - image: nginx
      name: enginx
    - image: alpine
      name: alpine
      command: ["sleep", "3600"]

What is 'AVAILABLE' column in kubernetes daemonsets

I may have a stupid question, but could someone explain what "Available" actually represents in DaemonSets? I checked the answer to "What is the difference between current and available pod replicas in kubernetes deployment?", but there are no readiness errors.
In the cluster I see the status below:
$ kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
kube-proxy 6 6 5 6 5 beta.kubernetes.io/os=linux
Why is it showing 5 instead of 6?
All pods are running perfectly fine, without any readiness errors or restarts:
$ kubectl get pods -n kube-system | grep kube-proxy
kube-proxy-cv7vv 1/1 Running 0 20d
kube-proxy-kcd67 1/1 Running 0 20d
kube-proxy-l4nfk 1/1 Running 0 20d
kube-proxy-mkvjd 1/1 Running 0 87d
kube-proxy-qb7nz 1/1 Running 0 36d
kube-proxy-x8l87 1/1 Running 0 87d
Could someone tell me what can be checked further?
The Available field shows the number of pods that are ready to accept traffic and have passed all the criteria, such as readiness or liveness probes, or any other condition that verifies your application is ready to serve user requests.
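To find out which of the six kube-proxy pods is not counted as available, a couple of standard commands usually narrow it down (a sketch):

kubectl describe ds kube-proxy -n kube-system                 # check "Pods Status" and the Events section
kubectl get pods -n kube-system -o wide | grep kube-proxy     # see which node's pod is pending or flapping
kubectl get nodes                                             # a NotReady node also lowers AVAILABLE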

Cannot delete kubernetes service with no deployment [closed]

I cannot force delete a Kubernetes service. However, I don't have any deployments at the moment.
~$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/etcd-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-apiserver-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-controller-manager-kubernetes-master 1/1 Running 0 26m
kube-system pod/kube-flannel-ds-amd64-5h46j 0/1 CrashLoopBackOff 9 26m
kube-system pod/kube-proxy-ltz4v 1/1 Running 0 26m
kube-system pod/kube-scheduler-kubernetes-master 1/1 Running 0 26m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 48m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-flannel-ds-amd64 1 1 0 1 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-arm 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-arm64 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-ppc64le 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-flannel-ds-s390x 0 0 0 0 0 <none> 47m
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 48m
~$ kubectl get deployments --all-namespaces
No resources found
Please help me stop and delete the Kubernetes service.
It's not mandatory to have a Deployment for pods. Generally, the system pods running in the kube-system namespace are created directly as static pods.
You can delete a pod via kubectl delete po podname, a DaemonSet via kubectl delete ds daemonsetname, and a service via kubectl delete svc servicename.
The services kubernetes (in the default namespace) and kube-dns (in kube-system) are managed by the Kubernetes control plane and will be recreated automatically upon removal. Also, I don't think you have a reason to delete them.
The list you posted contains only core Kubernetes services; these should not be deleted. If you used kubeadm to create the cluster, you can run kubeadm reset to destroy it.
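A rough teardown sequence in that case (a sketch; kubeadm reset itself prints similar cleanup hints) would be:

sudo kubeadm reset                 # tears down the control-plane/node components on this machine
sudo rm -rf /etc/cni/net.d         # kubeadm reset does not remove the CNI configuration (flannel here)
rm -rf $HOME/.kube/config          # drop the now-stale kubeconfig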

Google Kubernetes Engine Stackdriver logging/monitoring is gone at gke version 1.15

I have been using GKE for more than a year and never had any problems with Stackdriver logging/monitoring. But when I created a new cluster with version 1.15.9-gke.26 I don't see any logs in Stackdriver (nor any metrics). It also didn't work with a new cluster on version 1.14, although it does work for an older cluster that was updated to 1.14 from 1.13.
Some settings:
gke version = 1.15.9-gke.26
Stackdriver Kubernetes Engine Monitoring = System and workload logging and monitoring
VPC-native (alias IP) = Enabled
Workload Identity = Disabled
Weird things:
The following DaemonSets have 0/0 pods (DaemonSet has no nodes selected):
- metadata-proxy-v0.1
- nvidia-gpu-device-plugin (doesn't sound useful)
I'm not sure exactly how Stackdriver works or how to debug it... I would appreciate any tips.
Deployments and DaemonSets currently running in the cluster:
kubectl get daemonsets,deployments --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.extensions/fluentd-gcp-v3.1.1 3 3 3 3 3 beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux 16h
kube-system daemonset.extensions/metadata-proxy-v0.1 0 0 0 0 0 beta.kubernetes.io/metadata-proxy-ready=true,beta.kubernetes.io/os=linux 16h
kube-system daemonset.extensions/nvidia-gpu-device-plugin 0 0 0 0 0 <none> 16h
kube-system daemonset.extensions/prometheus-to-sd 3 3 3 3 3 beta.kubernetes.io/os=linux 16h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.extensions/event-exporter-v0.3.0 1/1 1 1 16h
kube-system deployment.extensions/fluentd-gcp-scaler 1/1 1 1 16h
kube-system deployment.extensions/heapster-gke 1/1 1 1 16h
kube-system deployment.extensions/kube-dns 2/2 2 2 16h
kube-system deployment.extensions/kube-dns-autoscaler 1/1 1 1 16h
kube-system deployment.extensions/l7-default-backend 1/1 1 1 16h
kube-system deployment.extensions/metrics-server-v0.3.3 1/1 1 1 16h
kube-system deployment.extensions/stackdriver-metadata-agent-cluster-level 1/1 1 1 16h
Per the documentation, and as @Darshan Naik mentioned:
If you are using Legacy Logging and Monitoring, then you must switch to Kubernetes Engine Monitoring before support for Legacy Logging and Monitoring is removed. Legacy Logging and Monitoring will no longer be supported as of GKE 1.15.
https://cloud.google.com/monitoring/kubernetes-engine#select
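If the cluster was created with the legacy option, switching it over should be a single gcloud call of roughly this shape (a sketch; CLUSTER_NAME and ZONE are placeholders, and the flag name is from the gcloud releases of that period, so double-check gcloud container clusters update --help):

gcloud container clusters update CLUSTER_NAME \
    --zone ZONE \
    --enable-stackdriver-kubernetes    # enables Kubernetes Engine Monitoring (system and workload)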