I executed the following command: % kubectl get service
It returned this list of services, which were created at some point with kubectl:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
car-example-service 10.0.0.129 <nodes> 8025:31564/TCP,1025:31764/TCP 10h
circle-example-service 10.0.0.48 <nodes> 9000:30362/TCP 9h
demo-service 10.0.0.9 <nodes> 8025:30696/TCP,1025:32047/TCP 10h
example-servic 10.0.0.168 <nodes> 8080:30231/TCP 1d
example-service 10.0.0.68 <nodes> 8080:32308/TCP 1d
example-service2 10.0.0.184 <nodes> 9000:32727/TCP 13h
example-webservice 10.0.0.35 <nodes> 9000:32256/TCP 1d
hello-node 10.0.0.224 <pending> 8080:32393/TCP 120d
kubernetes 10.0.0.1 <none> 443/TCP 120d
mouse-example-service 10.0.0.40 <nodes> 9000:30189/TCP 9h
spring-boot-web 10.0.0.171 <nodes> 8080:32311/TCP 9h
spring-boot-web-purple 10.0.0.42 <nodes> 8080:31740/TCP 9h
I no longer want any of these services, because when I list the replica sets:
% kubectl get rs
I expect to see only the spring-boot-web resource listed:
NAME DESIRED CURRENT READY AGE
spring-boot-web-1175758536 1 1 0 18m
Please help clarify why all these services are still listed when kubectl get rs shows only one resource.
Simply run the following commands.
1/ Get all available services:
kubectl get service -o wide
2/ Then you can delete any service like this:
kubectl delete svc <YourServiceName>
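For example, to remove two of the leftover services from the list above in one call (the names are taken from the question's output; adjust them to the services you actually want gone):
kubectl delete svc car-example-service circle-example-service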
show deployment
$ kubectl get deployments;
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
spring-hello 1 1 1 1 22h
spring-world 1 1 1 1 22h
vfe-hello-wrold 1 1 1 1 14m
show services
$ kubectl get services;
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
spring-hello NodePort 10.103.27.226 <none> 8081:30812/TCP 23h
spring-world NodePort 10.102.21.165 <none> 8082:31557/TCP 23h
vfe-hello-wrold NodePort 10.101.23.36 <none> 8083:31532/TCP 14m
delete deployment
$ kubectl delete deployments vfe-hello-wrold
deployment.extensions "vfe-hello-wrold" deleted
delete services
$ kubectl delete service vfe-hello-wrold
service "vfe-hello-wrold" deleted
Kubernetes objects like Service and Deployment/ReplicaSet/Pod are independent, and deleting one does not cascade to the others (the way it does between, say, a Deployment, its ReplicaSets, and its Pods). You need to manage your Services independently of other objects, so you just need to delete the ones that are still lingering behind.
If you want to delete multiple related or unrelated objects at the same time:
kubectl delete <objType>/<objName> <objType>/<objName> <objType>/<objName>
Example
kubectl delete service/myhttpd-clusterip service/myhttpd-nodeport
kubectl delete service/myhttpd-lb deployment/myhttpd
This also works
kubectl delete deploy/httpenv svc/httpenv-np
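If the objects share a common label, you can also delete them by selector instead of listing every name. A minimal sketch, assuming both the Deployment and the Service carry the label app=httpenv (adjust the label to whatever your manifests actually set):
kubectl delete deployment,service -l app=httpenv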
To delete ALL services in a given namespace, just run:
kubectl delete --all services --namespace=<here-you-enter-namespace>
The other option is to delete the deployment with:
kubectl delete deployment deployment-name
Note that this removes the deployment's ReplicaSets and Pods, but the associated Service still has to be deleted separately.
IMPORTANT: And watch out when you run this command in production!
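If in doubt, you can preview what would be deleted with a client-side dry run first (a hedged example; older kubectl versions use plain --dry-run instead of --dry-run=client):
kubectl delete --all services --namespace=default --dry-run=client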
Cheers!
First find the service:
kubectl get service -A
Note the namespace of the service you want to delete.
Then delete using
kubectl delete service <YourServiceName> --namespace <YourServiceNameSpace>
Also, check carefully whether the answer by Dragomir Ivanov better fits your needs.
If you're having trouble, you probably forgot to specify the namespace:
-n some-namespace
This tripped me up quite a bit.
Related
I want to configure Kubernetes Dashboard on a remote server using this guide: https://k21academy.com/docker-kubernetes/kubernetes-dashboard/
I installed it using:
kubernetes#kubernetes1:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
List service:
kubernetes#kubernetes1:~$ kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-64bcc67c9c-q8f7j 1/1 Running 0 71m
pod/kubernetes-dashboard-66c887f759-pq58q 1/1 Running 0 71m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.105.143.75 <none> 8000/TCP 71m
service/kubernetes-dashboard ClusterIP 10.102.209.213 <none> 443/TCP 71m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 71m
deployment.apps/kubernetes-dashboard 1/1 1 1 71m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-64bcc67c9c 1 1 1 71m
replicaset.apps/kubernetes-dashboard-66c887f759 1 1 1 71m
kubernetes#kubernetes1:~$
But when I try to edit the port according to the guide I get:
kubernetes#kubernetes1:~$ kubectl edit service/kubernetes-dashboard
Error from server (NotFound): services "kubernetes-dashboard" not found
kubernetes#kubernetes1:~$
Do you know how I can change the port?
It seems you are looking in the default or some other namespace.
You can try:
kubectl edit service/kubernetes-dashboard -n kubernetes-dashboard
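If the goal is just to expose the dashboard on a node port rather than editing the spec interactively, a non-interactive patch should work as well (a sketch; Kubernetes will then allocate a node port automatically, which you can read back with kubectl get svc -n kubernetes-dashboard):
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'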
A nice tool for namespace switching
curl -LO https://github.com/kvaps/kubectl-use/raw/master/kubectl-use
chmod +x ./kubectl-use
sudo mv ./kubectl-use /usr/local/bin/kubectl-use
then
kubectl use kubernetes-dashboard
After this, you no longer need to specify the namespace (-n kubernetes-dashboard) in the edit command or in kubectl get pods; kubernetes-dashboard becomes the default namespace for the current context.
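If you prefer not to install a plugin, plain kubectl can do the same by changing the default namespace of the current context (a generic example):
kubectl config set-context --current --namespace=kubernetes-dashboard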
I have deployed pihole on my k3s cluster using this helm chart https://github.com/MoJo2600/pihole-kubernetes.
(I used this tutorial)
I now have my services, but they don't have external IPs:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pihole-web ClusterIP 10.43.58.197 <none> 80/TCP,443/TCP 11h
pihole-dns-udp NodePort 10.43.248.252 <none> 53:30451/UDP 11h
pihole-dns-tcp NodePort 10.43.248.144 <none> 53:32260/TCP 11h
pihole-dhcp NodePort 10.43.96.49 <none> 67:30979/UDP 11h
I have tried to assign the IPs manually with this command:
kubectl patch svc pihole-dns-tcp -p '{"spec":{"externalIPs":["192.168.178.210"]}}'
But when executing the command I'm getting this error:
Error from server (NotFound): services "pihole-dns-tcp" not found
Any ideas for a fix?
Thank you in advance :)
Looks like "pihole-dns-tcp" is in a different namespace from the one where the patch command is being run.
As per the article you have shared, it seems the service pihole-dns-tcp is in the pihole namespace, so the command should be:
kubectl patch svc pihole-dns-tcp -n pihole -p '{"spec":{"externalIPs":["192.168.178.210"]}}'
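To sanity-check the name and namespace before patching, and to verify the patch afterwards, something like this should help (assuming the chart was installed into the pihole namespace, as in the article):
kubectl get svc -n pihole
kubectl get svc pihole-dns-tcp -n pihole -o jsonpath='{.spec.externalIPs}'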
I want to remove all the pods from a specific namespace through an Ansible play. Here, I'm trying to delete all the postgres pods from the namespace 'postgres-ns'.
I'm using below ansible play to remove it:
- name: Uninstalling postgres from K8s
  block:
    - name: Removing Statefulsets & Service from "{{postgres_namespace}}"
      action:
        shell kubectl -n "{{postgres_namespace}}" delete statefulsets "{{postgres_release_name}}" && kubectl -n "{{postgres_namespace}}" delete service "{{postgres_release_name}}"-service
      register: postgres_removal_status
    - debug:
        var: postgres_removal_status.stdout_lines
but getting this error:
Error from server (NotFound): statefulsets.apps \"postgres\" not found
This is the result from kc -n postgres-ns get all:
NAME READY STATUS RESTARTS AGE
pod/postgres-postgresql-0 1/1 Running 0 57s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgres-postgresql ClusterIP 10.108.64.70 <none> 5432/TCP 57s
service/postgres-postgresql-headless ClusterIP None <none> 5432/TCP 57s
NAME READY AGE
statefulset.apps/postgres-postgresql 1/1 57s
Can someone help me here?
Thanks in advance.
Error from server (NotFound): statefulsets.apps \"postgres\" not found
This says that you want to delete a StatefulSet whose name is postgres, but from your get all output the name of the StatefulSet is statefulset.apps/postgres-postgresql. You need to update the task to delete statefulsets "{{postgres_release_name}}"-postgresql, or pass the correct value for postgres_release_name.
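As a quick check outside Ansible, the equivalent kubectl commands with the names exactly as they appear in the get all output would be (a sketch; adjust if your release name differs):
kubectl -n postgres-ns delete statefulsets postgres-postgresql
kubectl -n postgres-ns delete service postgres-postgresql postgres-postgresql-headless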
I'm learning about Kubernetes and ingress controllers, but I'm stuck on this error when I try to apply the Kong ingress manifest...
ingress-kong-7dd57556c5-bh687 0/2 Init:0/1 0 29s
kong-migrations-gzlqj 0/1 Init:0/1 0 28s
postgres-0 0/1 Pending 0 28s
Is it possible to run this ingress on my home server without minikube ? If so, how?
Note: I have a FQDN pointing to my home server.
I guess you ran the manifest from GitHub.
Issues with Pods
I have reproduced your case. Since you have 3 pods, you have used the option with a database.
If you describe the pods using
$ kubectl describe pod <podname> -n kong
you will see an error in the events output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7s (x4 over 17s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
You can also check the job in the kong namespace.
Everything works correctly on a fresh Minikube cluster, so I guess you might have applied some changes to the StorageClass.
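To confirm, you can look at the PersistentVolumeClaims and the StorageClasses in the cluster; the unbound claims are what keep the pods Pending:
$ kubectl get pvc -n kong
$ kubectl get storageclass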
Is it possible to run this ingress on my home server without minikube ? If so, how?
You have to use Kubernetes to do it. Since Minikube supports LoadBalancer services, you can use it at home.
You can check this thread about FQDN. As mentioned:
The host machine should be able to resolve the name of that FQDN. You might add a record into the /etc/hosts at the Mac host to achieve that:
10.0.0.2 mydb.mytestdomain
But in your case it should be the IP address of the LoadBalancer, kong-proxy.
Obtain LoadBalancer IP in Minikube
If you deploy everything correctly, you can check your services:
$ kubectl get svc -n kong
You will see the kong-proxy service with type LoadBalancer and a <pending> EXTERNAL-IP.
To obtain an external IP you have to use minikube tunnel.
Please note that you need to keep $ sudo minikube tunnel running in one console the whole time.
Before Minikube tunnel
$ kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy LoadBalancer 10.110.218.74 <pending> 80:31881/TCP,443:31319/TCP 103m
kong-validation-webhook ClusterIP 10.108.204.137 <none> 443/TCP 103m
postgres ClusterIP 10.105.9.54 <none> 5432/TCP 103m
After
$ kubectl get svc -n kong
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy LoadBalancer 10.110.218.74 10.110.218.74 80:31881/TCP,443:31319/TCP 104m
kong-validation-webhook ClusterIP 10.108.204.137 <none> 443/TCP 104m
postgres ClusterIP 10.105.9.54 <none> 5432/TCP 104m
Testing Kong
Here you can find how to get started with Kong. It will show you how to create an Ingress. Later, as I mentioned, you have to edit the Ingress and add a rule (host) similar to the one in the K8s docs.
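As a rough illustration of such a rule (the hostname and the backend service here are placeholders; kubectl v1.19+ can generate the Ingress directly):
$ kubectl create ingress echo --class=kong --rule="kong.example.com/*=echo:80"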
Per the documentation at: https://kubernetes.io/docs/tasks/web-ui-dashboard/
I ran :
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
Then I tried running this to expose the service
cluster/kubectl.sh expose svc/kubernetes
but I keep getting an error:
error: couldn't retrieve selectors via --selector flag or introspection: the service has no pod selector set
See 'kubectl expose -h' for help and examples.
I have looked at the examples but can't understand what I am doing wrong.
kubernetes# cluster/kubectl.sh get all
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes 10.0.0.1 <none> 443/TCP 7h
kubernetes# cluster/kubectl.sh get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-dns-806549836-r6wtk 0/3 Pending 0 7h
kube-system kubernetes-dashboard-2396447444-9675d 0/1 Pending 0 6h
To get access to the dashboard, usually you would just type:
kubectl cluster-info
This gives you all the required URLs for accessing your cluster.
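If no external URL is listed there, the usual fallback is kubectl proxy; with the manifest from that guide the dashboard lives in the kube-system namespace, so the proxied path looks roughly like this (a sketch; the Pending pods also need to become Running first):
kubectl proxy
# then open http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/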