I am new to Kubernetes and was trying to create a Horizontal Pod Autoscaler. For this I deployed the metrics server, using the official GitHub repository for metrics-server. I can see it running as below:
NAME READY STATUS RESTARTS AGE
pod/metrics-server-766c9b8df-dltgd 1/1 Running 0 13m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/metrics-server ClusterIP 10.106.14.34 <none> 443/TCP 37m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/metrics-server 1/1 1 1 37m
I deployed a pod that runs on the worker nodes, so I can see the pod metrics below:
ubuntu@master:~/metrics-server$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
demo-deploy-d86b8cfcc-2jg9w 2m 289Mi
demo-deploy-d86b8cfcc-5xtww 1m 284Mi
demo-deploy-d86b8cfcc-hk2bq 1m 278Mi
demo-deploy-d86b8cfcc-jkdmc 1m 286Mi
But the issue is:
ubuntu@master:~/metrics-server$ kubectl top nodes
error: metrics not available yet
I searched a lot but unfortunately could not find an answer. Can someone help me understand why this happens?
ubuntu@master:~/metrics-server$ kubectl top nodes
This should show the metrics for the worker nodes, but I am not getting them. Not even a blank status.
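For reference, a common cause of kubectl top nodes failing while pod metrics work is that metrics-server cannot reach or verify the kubelets on the nodes. A minimal troubleshooting sketch, assuming the stock metrics-server manifest was applied to kube-system; the two flags below are standard metrics-server arguments, but skipping kubelet TLS verification is only acceptable for test clusters:
# Look for node scrape errors (x509, timeouts) in the metrics-server logs
kubectl -n kube-system logs deploy/metrics-server
# If such errors appear, the deployment args are often adjusted like this:
kubectl -n kube-system edit deploy metrics-server
#   containers:
#   - args:
#     - --kubelet-insecure-tls
#     - --kubelet-preferred-address-types=InternalIP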
I want to configure Kubernetes Dashboard on a remote server using this guide: https://k21academy.com/docker-kubernetes/kubernetes-dashboard/
I installed it using:
kubernetes@kubernetes1:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
List service:
kubernetes@kubernetes1:~$ kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-64bcc67c9c-q8f7j 1/1 Running 0 71m
pod/kubernetes-dashboard-66c887f759-pq58q 1/1 Running 0 71m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.105.143.75 <none> 8000/TCP 71m
service/kubernetes-dashboard ClusterIP 10.102.209.213 <none> 443/TCP 71m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 71m
deployment.apps/kubernetes-dashboard 1/1 1 1 71m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-64bcc67c9c 1 1 1 71m
replicaset.apps/kubernetes-dashboard-66c887f759 1 1 1 71m
kubernetes@kubernetes1:~$
But when I try to edit the port according to the guide I get:
kubernetes@kubernetes1:~$ kubectl edit service/kubernetes-dashboard
Error from server (NotFound): services "kubernetes-dashboard" not found
kubernetes@kubernetes1:~$
Do you know how I can change the port?
It seems like you are looking in the default (or some other) namespace.
You can try:
kubectl edit service/kubernetes-dashboard -n kubernetes-dashboard
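Once the edit opens, the port change the guide describes usually amounts to switching the Service type from ClusterIP to NodePort. A minimal sketch of the relevant part of the spec, assuming the default ports from recommended.yaml; the 30443 value is just an example and must lie in the 30000-32767 NodePort range:
spec:
  type: NodePort          # changed from ClusterIP
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30443       # example value; pick any free port in 30000-32767
After saving, the dashboard should be reachable on https://<node-ip>:30443.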
A nice tool for namespace switching is kubectl-use:
curl -LO https://github.com/kvaps/kubectl-use/raw/master/kubectl-use
chmod +x ./kubectl-use
sudo mv ./kubectl-use /usr/local/bin/kubectl-use
then
kubectl use kubernetes-dashboard
After this, you do not need to specify the namespace (-n kubernetes-dashboard) in the edit command or in kubectl get pods; kubernetes-dashboard becomes the default namespace for the current context.
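If you would rather not install a plugin, plain kubectl can switch the default namespace of the current context as well:
kubectl config set-context --current --namespace=kubernetes-dashboard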
I installed Istio on my EKS cluster and installed bookinfo from samples.
$ sudo kubectl apply -f /samples/bookinfo/platform/kube/bookinfo.yaml
After installation, I am able to see the services but not the pods for those services
$ sudo kubectl get services
NAME          TYPE
productpage   ClusterIP
ratings       ClusterIP
reviews       ClusterIP
But the pods behind those services are nowhere to be seen:
$ sudo kubectl get pods
No resources found in default namespace.
Any idea why I can see the services but not the pods for those services installed by the bookinfo app?
I've verified the bookinfo app with Istio 1.9.3 and it works correctly.
I went to the Istio 1.9.3 directory with cd istio-1.9.3 and used kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml to install the bookinfo application.
kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-66b6955995-q2nwh 2/2 Running 0 44s
productpage-v1-5d9b4c9849-lhc2b 2/2 Running 0 44s
ratings-v1-fd78f799f-t8gkp 2/2 Running 0 43s
reviews-v1-6549ddccc5-jv2tg 2/2 Running 0 43s
reviews-v2-76c4865449-wjkxx 2/2 Running 0 43s
reviews-v3-6b554c875-9gsnd 2/2 Running 0 42s
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.112.2.127 <none> 9080/TCP 81s
kubernetes ClusterIP 10.112.0.1 <none> 443/TCP 6m41s
productpage ClusterIP 10.112.5.110 <none> 9080/TCP 75s
ratings ClusterIP 10.112.1.157 <none> 9080/TCP 79s
reviews ClusterIP 10.112.1.106 <none> 9080/TCP 78s
As you can see, both the pods and the services were deployed correctly.
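As a side note, the 2/2 READY counts above include the Envoy sidecar that Istio injects next to each application container; injection is typically enabled on the target namespace before deploying, e.g. for the default namespace:
kubectl label namespace default istio-injection=enabled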
I would just recommend redeploying the bookinfo application with the newest version, and it should work.
You can also use raw.githubusercontent.com instead of the local samples directory to deploy it. You can find more about that in the Istio documentation.
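For example, a minimal sketch of deploying straight from the raw GitHub URL; the path below assumes the release-1.9 branch of the istio/istio repository, so adjust it to your Istio version:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.9/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl get pods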
I am trying to get metrics in the Kubernetes dashboard. For that I am running the influxdb and heapster pods in my kube-system namespace. I checked the status of the pods using the command kubectl get pods -n kube-system. Here is the link I followed. But heapster shows the following in its logs:
E1023 13:41:07.915723 1 reflector.go:190] k8s.io/heapster/metrics/util/util.go:30: Failed to list *v1.Node: Get https://kubernetes.default/api/v1/nodes?resourceVersion=0: dial tcp: i/o timeout
Could anybody suggest where I should make changes in my configuration?
It looks like heapster cannot talk to your kube-apiserver through the kubernetes service in the default namespace. A few things you can try:
Check that the service is defined in the default namespace:
$ kubectl get svc kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 92d
Check that all your kube-proxy pods are running ok:
$ kubectl -n kube-system -l=k8s-app=kube-proxy get pods
NAME READY STATUS RESTARTS AGE
kube-proxy-xxxxx 1/1 Running 0 4d18h
...
Check that all your overlay network pods are running. For example, for Calico:
$ kubectl -n kube-system -l=k8s-app=calico-node get pods
NAME READY STATUS RESTARTS AGE
calico-node-88fgd 2/2 Running 3 4d21h
...
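If all of the above look healthy, it is also worth checking DNS resolution and connectivity to the API server from inside a pod, since the error is a timeout when dialing kubernetes.default. A quick sketch using a throwaway busybox pod (the name debug is arbitrary):
$ kubectl run -it --rm debug --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default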
I executed the following command: % kubectl get service
It returned this list of services that were created at one point in time with kubectl:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
car-example-service 10.0.0.129 <nodes> 8025:31564/TCP,1025:31764/TCP 10h
circle-example-service 10.0.0.48 <nodes> 9000:30362/TCP 9h
demo-service 10.0.0.9 <nodes> 8025:30696/TCP,1025:32047/TCP 10h
example-servic 10.0.0.168 <nodes> 8080:30231/TCP 1d
example-service 10.0.0.68 <nodes> 8080:32308/TCP 1d
example-service2 10.0.0.184 <nodes> 9000:32727/TCP 13h
example-webservice 10.0.0.35 <nodes> 9000:32256/TCP 1d
hello-node 10.0.0.224 <pending> 8080:32393/TCP 120d
kubernetes 10.0.0.1 <none> 443/TCP 120d
mouse-example-service 10.0.0.40 <nodes> 9000:30189/TCP 9h
spring-boot-web 10.0.0.171 <nodes> 8080:32311/TCP 9h
spring-boot-web-purple 10.0.0.42 <nodes> 8080:31740/TCP 9h
I no longer want any of these services, because when I list the replica sets:
% kubectl get rs
I expect to see only the spring-boot-web resource listed:
NAME DESIRED CURRENT READY AGE
spring-boot-web-1175758536 1 1 0 18m
Please help clarify why I am still seeing all of these services when kubectl get rs shows only one resource.
Simply run these commands.
1. Get all available services:
kubectl get service -o wide
2. Then you can delete any service like this:
kubectl delete svc <YourServiceName>
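Several services can also be deleted in a single call; for example, using some of the service names listed in the question:
kubectl delete svc car-example-service circle-example-service demo-service example-servic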
Show deployments:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
spring-hello 1 1 1 1 22h
spring-world 1 1 1 1 22h
vfe-hello-wrold 1 1 1 1 14m
Show services:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
spring-hello NodePort 10.103.27.226 <none> 8081:30812/TCP 23h
spring-world NodePort 10.102.21.165 <none> 8082:31557/TCP 23h
vfe-hello-wrold NodePort 10.101.23.36 <none> 8083:31532/TCP 14m
Delete the deployment:
$ kubectl delete deployments vfe-hello-wrold
deployment.extensions "vfe-hello-wrold" deleted
Delete the service:
$ kubectl delete service vfe-hello-wrold
service "vfe-hello-wrold" deleted
Kubernetes objects like Service and Deployment/ReplicaSet/Pod are independent, and deleting one does not cascade to the others (unlike within a Deployment → ReplicaSet → Pod chain, where it does). You need to manage your Services independently of other objects, so you just need to delete the ones that are still lingering behind.
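If the related objects share a label, or were created from a single manifest, they can still be removed together in one command. A sketch, assuming the Deployment and Service both carry an app=spring-boot-web label, and that app.yaml is the hypothetical manifest they were created from:
kubectl delete deployment,service -l app=spring-boot-web
kubectl delete -f app.yaml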
If you want to delete multiple related or unrelated objects at the same time:
kubectl delete <objType>/<objName> <objType>/<objName> <objType>/<objName>
Example
kubectl delete service/myhttpd-clusterip service/myhttpd-nodeport
kubectl delete service/myhttpd-lb deployment/myhttpd
This also works
kubectl delete deploy/httpenv svc/httpenv-np
To delete ALL services in a given namespace, just run:
kubectl delete --all services --namespace=<here-you-enter-namespace>
You may also want to delete the corresponding deployment with:
kubectl delete deployment deployment-name
Note that this removes the deployment's ReplicaSet and Pods, but not the Service, which still has to be deleted separately as shown above.
IMPORTANT: And watch out when you run these commands in production!
Cheers!
First find the service:
kubectl get service -A
Note the namespace of the service you want to delete.
Then delete it using:
kubectl delete service <YourServiceName> --namespace <YourServiceNameSpace>
Also, check carefully if the answer by @Dragomir Ivanov better fits your needs.
If you're having trouble, you probably forgot to specify the namespace:
-n some-namespace
This tripped me up quite a bit.