ExternalIP for service is always pending - kubernetes

I have a local multi-machine Vagrant Kubernetes cluster which was created using the code here.
I have created a Kubernetes replication controller using kubia-rc.yaml.
vagrant@k8s-head:~$ kubectl get rc
NAME DESIRED CURRENT READY AGE
kubia 3 3 3 26h
vagrant@k8s-head:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-l28dv 1/1 Running 1 26h
kubia-vd7jf 1/1 Running 1 26h
kubia-wsv42 1/1 Running 1 26h
Then I created a service of type LoadBalancer using the YAML here.
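For reference, the manifest is roughly of this shape (a sketch reconstructed from the output below; the selector and targetPort are assumptions, the real definition is the linked YAML):
apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: kubia          # assumed label on the kubia pods
  ports:
  - port: 80            # matches the 80/TCP shown below
    targetPort: 8080    # assumed container port of the kubia app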
The command succeeds and reports that the service was created.
vagrant@k8s-head:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubia ClusterIP 10.103.199.175 <none> 80/TCP 26h
kubia-loadbalancer LoadBalancer 10.107.166.22 <pending> 80:30865/TCP 25h
The EXTERNAL-IP of kubia-loadbalancer is always <pending> and I don't know what the issue could be.
What is wrong with my setup?

Related

kube-apiserver: constantly 5 to 10% CPU, although there are no requests

I installed kind to play around with Kubernetes.
If I use top and sort by CPU usage (key C), then I see that kube-apiserver is constantly consuming 5 to 10% CPU.
Why?
I haven't installed anything yet:
guettli@p15:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-558bd4d5db-ntg7c 1/1 Running 0 40h
kube-system coredns-558bd4d5db-sx8w9 1/1 Running 0 40h
kube-system etcd-kind-control-plane 1/1 Running 0 40h
kube-system kindnet-9zkkg 1/1 Running 0 40h
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 40h
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 40h
kube-system kube-proxy-dthwl 1/1 Running 0 40h
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 40h
local-path-storage local-path-provisioner-547f784dff-xntql 1/1 Running 0 40h
guettli@p15:~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 40h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 40h
guettli@p15:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane,master 40h v1.21.1
guettli@p15:~$ kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane,master 40h v1.21.1
I am curious. Where does the CPU usage come from? How can I investigate this?
Even in an empty cluster with just one master node, there are at least 5 components that reach out to the API server on a regular basis:
kubelet for the master node
Controller manager
Scheduler
CoreDNS
Kube proxy
This is because the API server acts as the single entry point through which every component in Kubernetes learns what the cluster state should be and takes action if needed.
If you are interested in the details, you could enable audit logs in the API server and get a very verbose file with all the requests being made.
How to do so is not the goal of this answer, but you can start from the apiserver documentation.
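As a starting point, a minimal audit setup looks roughly like this (a sketch; the file paths are examples, and with kind you would have to mount the policy file and add the flags to the kube-apiserver static pod via the kind cluster configuration):
# audit-policy.yaml - log request metadata for every API call
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
# kube-apiserver flags that reference it (example paths)
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit.log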

I get multiple IPs with my ingress resource and don't know why or how to fix it

Hello guys, I have an ingress controller running and I deployed an ingress for Kafka (deployed through Strimzi), but the ingress is showing me multiple IPs for the address instead of one. I'd like to know why and what I can do to fix it, because from what I've seen in tutorials, when you have an ingress the IP shown in the address is the same as the one on the ingress controller service (in my case it should be 172.24.20.195). Here are the ingress controller components:
root@di-admin-general:/home/lda# kubectl get all -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/default-http-backend-598b7d7dbd-ggghv 1/1 Running 2 6d3h 192.168.129.71 pr-k8s-fe-fastdata-worker-02 <none> <none>
pod/nginx-ingress-controller-4rdxd 1/1 Running 2 6d3h 172.24.20.8 pr-k8s-fe-fastdata-worker-02 <none> <none>
pod/nginx-ingress-controller-g6d2f 1/1 Running 2 6d3h 172.24.20.242 pr-k8s-fe-fastdata-worker-01 <none> <none>
pod/nginx-ingress-controller-r995l 1/1 Running 2 6d3h 172.24.20.38 pr-k8s-fe-fastdata-worker-03 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/default-http-backend ClusterIP 192.168.42.107 <none> 80/TCP 6d3h app=default-http-backend
service/nginx-ingress-controller LoadBalancer 192.168.113.157 172.24.20.195 80:32641/TCP,443:32434/TCP 163m workloadID_nginx-ingress-controller=true
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/nginx-ingress-controller 3 3 3 3 3 <none> 6d3h nginx-ingress-controller rancher/nginx-ingress-controller:nginx-0.35.0-rancher2 app=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/default-http-backend 1/1 1 1 6d3h default-http-backend rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1 app=default-http-backend
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/default-http-backend-598b7d7dbd 1 1 1 6d3h default-http-backend rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1 app=default-http-backend,pod-template-hash=598b7d7dbd
root@di-admin-general:/home/lda#
and here is the Kafka part:
root@di-admin-general:/home/lda# kubectl get ingress -n kafkanew -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
kafka-ludo-kafka-0 <none> broker-0.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-1 <none> broker-1.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-2 <none> broker-2.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-bootstrap <none> bootstrap.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
You can see that we have 3 IPs (172.24.20.242, 172.24.20.38, 172.24.20.8) instead of just one, which I think should be 172.24.20.195. Please, can anyone provide an explanation? The link to the Strimzi YAML used to expose an ingress is here: https://developers.redhat.com/blog/2019/06/12/accessing-apache-kafka-in-strimzi-part-5-ingress/
Thank you for your help.
Your external IP is the one you see in kubectl get svc, in your case 172.24.20.195.
The other IPs you see in kubectl get ingress are ingress controller pod IPs, which are internal to your cluster.
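Since the Strimzi hosts are nip.io names that embed the LoadBalancer IP, clients already resolve to the right address; a quick check with the bootstrap name from your ingress list should come back with 172.24.20.195:
$ nslookup bootstrap.172.24.20.195.nip.io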

I'm facing this error in Kubernetes using minikube

I tried to deploy an nginx server using Kubernetes. I was able to create a deployment and then a service, but when I run the curl command I get an error; I'm not able to curl it or open the nginx webpage in a browser.
Below are the commands I used and the error I got.
kubectl get pods
NAME READY STATUS RESTARTS AGE
curl 1/1 Running 8 15d
curl-deployment-646445496f-59fs9 1/1 Running 7 15d
hello-5d448ffc76-cwzcl 1/1 Running 13 23d
hello-node-7567d9fdc9-ffdkx 1/1 Running 8 20d
my-nginx-5b6fb7fb46-bdzdq 0/1 ContainerCreating 0 15d
mytestwebapp 1/1 Running 10 21d
nginx-6799fc88d8-w76cb 1/1 Running 5 13d
nginx-deployment-66b6c48dd5-9mkh8 1/1 Running 12 23d
nginx-test-795d659f45-d9shx 1/1 Running 4 13d
rss-site-7b6794856f-9586w 2/2 Running 40 15d
rss-site-7b6794856f-z59vn 2/2 Running 78 21d
jit@jit-Vostro-15-3568:~$ kubectl logs webserver
Error from server (NotFound): pods "webserver" not found
jit@jit-Vostro-15-3568:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.104.134.171 <pending> 8080:31733/TCP 13d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d
my-nginx NodePort 10.103.114.92 <none> 8080:32563/TCP,443:32397/TCP 15d
nginx NodePort 10.110.113.60 <none> 80:30985/TCP 13d
nginx-test NodePort 10.109.16.192 <none> 8080:31913/TCP 13d
jit@jit-Vostro-15-3568:~$ curl kube-worker-1:30985
curl: (6) Could not resolve host: kube-worker-1
As you can see, you have a pod called nginx, which indicates that an nginx server is already deployed in a pod on your cluster. You don't have a pod called webserver, which is why you're getting the
Error from server (NotFound): pods "webserver" not found error.
Also, to access the nginx service, curl it via ip:port, either the ClusterIP with the service port or a node IP with the NodePort:
$ curl 10.110.113.60:80
$ curl $(minikube ip):30985
If you point a web browser to http://IP_OF_NODE:ASSIGNED_PORT (where IP_OF_NODE is an IP address of one of your nodes and ASSIGNED_PORT is the port assigned during the create service command), you should see the NGINX Welcome page!
Take a look: nginx-app-kubernetes.
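On minikube you can also have it print a ready-to-use URL for the NodePort service (service name taken from your kubectl get svc output):
$ minikube service nginx --url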
I tried the above scenario locally.
Do a kubectl describe svc <svc-name>
and check whether it has any endpoints.
It probably doesn't have any endpoints.
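A quick way to check (using the nginx service name from the output above); an empty ENDPOINTS column means the service selector matches no ready pods:
$ kubectl get endpoints nginx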

unable to scale down Kubernetes cluster

I have a Cassandra/Kubernetes cluster on GCP
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get statefulsets --all-namespaces
NAMESPACE NAME READY AGE
cass-operator cluster1-dc1-default-sts 3/3 2d9h
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get all -n cass-operator
NAME READY STATUS RESTARTS AGE
pod/cass-operator-5f8cdf99fc-9c5g4 1/1 Running 0 2d9h
pod/cluster1-dc1-default-sts-0 2/2 Running 0 2d9h
pod/cluster1-dc1-default-sts-1 2/2 Running 0 2d9h
pod/cluster1-dc1-default-sts-2 2/2 Running 0 2d9h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cass-operator-metrics ClusterIP 10.51.243.147 <none> 8383/TCP,8686/TCP 2d9h
service/cassandra-loadbalancer LoadBalancer 10.51.240.24 34.91.214.233 9042:30870/TCP 37h
service/cassandradatacenter-webhook-service ClusterIP 10.51.243.86 <none> 443/TCP 2d9h
service/cluster1-dc1-all-pods-service ClusterIP None <none> <none> 2d9h
service/cluster1-dc1-service ClusterIP None <none> 9042/TCP,8080/TCP 2d9h
service/cluster1-seed-service ClusterIP None <none> <none> 2d9h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cass-operator 1/1 1 1 2d9h
NAME DESIRED CURRENT READY AGE
replicaset.apps/cass-operator-5f8cdf99fc 1 1 1 2d9h
NAME READY AGE
statefulset.apps/cluster1-dc1-default-sts 3/3 2d9h
manuchadha25@cloudshell:~ (copper-frame-262317)$
I want to scale it down from 3 nodes to 2 nodes. I tried running the following commands but both failed.
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl scale statefulsets cluster1-dc1-default-sts --replicas=2
Error from server (NotFound): statefulsets.apps "cluster1-dc1-default-sts" not found
What is the right command to scale down the cluster?
Use the -n parameter to specify the correct namespace where the statefulset is deployed. Without the namespace it's looking in the default namespace, where the statefulset cluster1-dc1-default-sts does not exist.
kubectl scale statefulsets cluster1-dc1-default-sts --replicas=2 -n cass-operator
Execute the command in the correct namespace using the -n parameter (-n cass-operator in your case):
kubectl scale statefulsets cluster1-dc1-default-sts --replicas=2 -n cass-operator
You can also change the namespace for all subsequent commands using:
kubectl config set-context --current --namespace=cass-operator
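Afterwards you can confirm that the scale-down took effect; the READY column should eventually show 2/2:
kubectl get statefulsets -n cass-operator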

How to assign an IP to istio-ingressgateway on localhost?

I am using kubespray to run a Kubernetes cluster on my laptop. The cluster is running on 7 VMs and the roles of the VMs are spread as follows:
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 2d22h v1.16.2
k8s-2 Ready master 2d22h v1.16.2
k8s-3 Ready master 2d22h v1.16.2
k8s-4 Ready master 2d22h v1.16.2
k8s-5 Ready <none> 2d22h v1.16.2
k8s-6 Ready <none> 2d22h v1.16.2
k8s-7 Ready <none> 2d22h v1.16.2
I've installed https://istio.io/ to build a microservices environment.
I have 2 services running that I would like to access from outside:
k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.50.109 <none> 3000/TCP 47h
helloweb ClusterIP 10.233.8.207 <none> 3000/TCP 47h
and the running pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 <none> <none>
default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 <none> <none>
default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 <none> <none>
default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 <none> <none>
default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 <none> <none>
default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 <none> <none>
The problem is, I cannot access the services from outside because I do not have an EXTERNAL-IP address (remember, the cluster is running on my laptop).
k get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.61.112 <pending> 15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP 47h
As you can see, the column EXTERNAL-IP the value is <pending>.
The question is: how do I assign an EXTERNAL-IP to the istio-ingressgateway?
First of all, you can't make k8s assign you an external IP address, as the LoadBalancer service type is cloud-provider specific. You could map your router's external IP address to it, I guess, but it is not trivial.
To reach the service, you can do this:
kubectl edit svc istio-ingressgateway -n istio-system
Change the type of the service from LoadBalancer to ClusterIP. You can also use NodePort. Actually, you can skip this step, as a LoadBalancer service already includes a NodePort and a ClusterIP; it is just to get rid of that pending status.
kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system
I don't know which port you want to access from your localhost. Say it's 80; then you can do:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Now port 8080 of your laptop (localhost:8080) will be mapped to the port 80 of istio-ingressgateway service.
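Once a Gateway and VirtualService route something on port 80, you can test it from the laptop with:
$ curl -I http://localhost:8080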
By default, there is no way for Kubernetes to assign an external IP to a LoadBalancer service.
This service type needs infrastructure support, which is provided by cloud offerings like GKE, AKS, EKS etc.
As you are running this cluster on your laptop, deploy the MetalLB load balancer to get an EXTERNAL-IP.
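After installing MetalLB you have to tell it which addresses it may hand out. A minimal Layer 2 configuration sketch (this is the ConfigMap format used by MetalLB releases before v0.13; the address range is an example and must be replaced with free addresses on the network your VMs share):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250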
It's not possible as Suresh explained.
But if you want access from your laptop, you can use type NodePort in your service, which gives you access from outside the cluster.
You should first obtain the IP of one of your nodes, then create your service with something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30000
After that, you can access it from your laptop at http://<node-ip>:30000
There is no need to create an ingress for that.
You should use a port in the range 30000-32767, as stated below:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
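To find a node IP for that URL, the INTERNAL-IP column of the following command is enough for a cluster of local VMs:
kubectl get nodes -o wide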
If you are using minikube, just run:
$ minikube tunnel
$ k get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.111.187.167 127.0.0.1 15021:31949/TCP,80:32215/TCP,443:30585/TCP 9m48s