I've applied the YAML for the Kubernetes dashboard.
Now I want to expose this service with the public IP of my server: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/#objectives
But there is no service/deployment on my cluster:
$ sudo kubectl get services kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 63d
$ sudo kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
What did I do wrong?
Thanks for the help
The command that you ran fetches objects in the default namespace.
However, the Dashboard is deployed in the kube-system namespace:
kubectl -n kube-system get services kubernetes
kubectl -n kube-system get deployment
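If you are not sure which namespace an object lives in, you can also search across all namespaces:
kubectl get deployments --all-namespaces
kubectl get services --all-namespaces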
I am giving you this info according to the link you shared (the Kubernetes dashboard), and specifically the YAML file.
Okay thanks, now I get the right name:
sudo kubectl -n kube-system get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
calico-kube-controllers 1/1 1 1 63d
coredns 2/2 2 2 63d
kubernetes-dashboard 1/1 1 1 103m
tiller-deploy 0/1 1 0 63d
But I still can't expose the service
sudo kubectl expose deployment kubernetes-dashboard
Error from server (NotFound): deployments.extensions "kubernetes-dashboard" not found
As mentioned here
So, to reproduce this and show how it works, I spawned a fresh cluster on GKE.
Let's see what we have after applying the dashboard YAML:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
kubectl get deployment kubernetes-dashboard -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard 1 1 1 1 3m22s
kubectl get services kubernetes-dashboard -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.0.6.26 <none> 443/TCP 5m1
kubectl describe service kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.0.6.26
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints: 10.40.1.5:8443
Session Affinity: None
Events: <none>
During this deployment:
1) kubernetes-dashboard deployment has been created. Note that it was created with the k8s-app=kubernetes-dashboard label.
2) kubernetes-dashboard service was created and works using the k8s-app=kubernetes-dashboard [selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
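You can confirm that the selector matches the deployment's pods by listing pods with that label:
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard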
So when you receive such an error, it is expected: without -n kube-system, kubectl expose looks for the deployment in the default namespace (hence NotFound), and kubectl expose deployment kubernetes-dashboard -n kube-system would also fail, because it tries to create a new service named kubernetes-dashboard, which already exists.
Just to play with it, you can easily expose the same deployment under a different service name, for example:
kubectl expose deployment kubernetes-dashboard -n kube-system --name kube-dashboard-service2
service/kube-dashboard-service2 exposed
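You can then verify that the new service exists alongside the original one:
kubectl get service kube-dashboard-service2 -n kube-system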
Note that the default kubernetes-dashboard service is created with the ClusterIP type, so right now you are able to access it:
1) within the cluster
2) using kubectl proxy from your local machine
$ kubectl proxy
In browser: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
If you want to expose it externally, you can use:
1) Ingress
2) Nodeport service type
In short: change the type from ClusterIP to NodePort via kubectl -n kube-system edit service kubernetes-dashboard, then access the dashboard at https://[node_ip]:[port] (see the patch sketch after this list).
A more detailed article is here: How To Access Kubernetes Dashboard Externally
3) LoadBalancer service type. This is a cloud-specific feature, so it will work only with cloud providers.
Traffic from the external load balancer is directed at the backend
Pods. The cloud provider decides how it is load balanced.
Some cloud providers allow you to specify the loadBalancerIP. In those
cases, the load-balancer is created with the user-specified
loadBalancerIP. If the loadBalancerIP field is not specified, the
loadBalancer is set up with an ephemeral IP address. If you specify a
loadBalancerIP but your cloud provider does not support the feature,
the loadBalancerIP field that you set is ignored.
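For option 2, instead of editing the service interactively, you can also patch the type in one line (a sketch; the allocated node port will differ per cluster):
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get service kubernetes-dashboard
# PORT(S) will show the allocated node port, e.g. 443:3XXXX/TCP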
Related
I have deployed pihole on my k3s cluster using this helm chart https://github.com/MoJo2600/pihole-kubernetes.
(I used this tutorial)
I now have my services, but they don't have external IPs:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pihole-web ClusterIP 10.43.58.197 <none> 80/TCP,443/TCP 11h
pihole-dns-udp NodePort 10.43.248.252 <none> 53:30451/UDP 11h
pihole-dns-tcp NodePort 10.43.248.144 <none> 53:32260/TCP 11h
pihole-dhcp NodePort 10.43.96.49 <none> 67:30979/UDP 11h
I have tried to assign the IPs manually with this command:
kubectl patch svc pihole-dns-tcp -p '{"spec":{"externalIPs":["192.168.178.210"]}}'
But when executing the command I'm getting this error:
Error from server (NotFound): services "pihole-dns-tcp" not found
Any Ideas for a fix?
Thank you in advance :)
Looks like pihole-dns-tcp is in a different namespace from the one where the patch command is being run.
As per the article you shared, it seems the service pihole-dns-tcp is in the pihole namespace. So the command should be:
kubectl patch svc pihole-dns-tcp -n pihole -p '{"spec":{"externalIPs":["192.168.178.210"]}}'
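To double-check, you can list services across namespaces first, then verify that the patch took effect:
kubectl get svc --all-namespaces | grep pihole
kubectl get svc pihole-dns-tcp -n pihole
# EXTERNAL-IP should now show 192.168.178.210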
I'm struggling with Kubernetes configuration. All I want is to reach a deployment within the cluster. The cluster is on my dedicated server, and I'm deploying it using kubeadm.
My nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 9d v1.19.3
k8s-worker1 Ready <none> 9d v1.19.3
I have a deployment running (the basic nginx example)
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 29m
I've created a service
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
my-service ClusterIP 10.106.109.94 <none> 80/TCP 20m
The YAML file for my service is the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-deployment
  ports:
    - protocol: TCP
      port: 80
Now I would expect that running curl 10.106.109.94:80 on my k8s-master returns the HTTP answer, but what I get is:
curl: (7) Failed to connect to 10.106.109.94 port 80: Connection refused
I've tried with NodePort as well, and with targetPort and nodePort, but the result is the same.
The cluster IP is not reachable from outside your cluster, which means you will not get any response from the host machine that hosts your k8s cluster: this IP is not part of your machine or any other machine; it is a cluster IP used by your cluster's CNI network, such as Flannel or Weave.
So to make your services accessible from the outside, or at least from the host machine, you have to change the service type to NodePort or LoadBalancer, or use kubectl port-forward.
If you change the service type to NodePort, you will get a response using any of your host machines' IPs and the allocated node port.
For example, if your k8s-master is 192.168.x.x and the nodePort is 33303, then you can get a response with:
curl http://192.168.x.x:33303
or
curl http://worker_node_ip:33303
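You can read the allocated node port from the service listing; in the PORT(S) column, the number after the colon is the node port (the output below is a sketch based on the service from the question):
kubectl get svc my-service
# NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
# my-service   NodePort   10.106.109.94   <none>        80:33303/TCP   20m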
If your cluster is installed locally, you can install MetalLB to get LoadBalancer functionality.
You can also use port-forward to make your service accessible from any host that has the kubectl client and access to the k8s cluster.
kubectl port-forward svc/my-service 80:80
kubectl -n namespace port-forward svc/service_name Port:Port
I want to change the IP address of my LoadBalancer ingress-nginx-controller in Google Cloud. I have now assigned the IP address via the load balancer (see the screenshot: GKE LB IP address change). Unfortunately it is not picked up in GKE. Why? Is that a bug?
I have verified this on my GKE test cluster.
When you reserve a static external IP address, it isn't assigned to any of your VMs. Depending on how you created the cluster and reserved the IP (Standard or Premium network tier), you can get an error like the one below:
Error syncing load balancer: failed to ensure load balancer: failed to create forwarding rule for load balancer (a574130f333b143a2a62281ef47c8dbb(default/nginx-ingress-controller)): googleapi: Error 400: PREMIUM network tier (the project's default network tier) is not supported: The network tier of specified IP address is STANDARD, that of Forwarding Rule must be the same., badRequest
In this scenario I used a cluster based in us-central1-c and reserved an IP with Network Service Tier: Premium and Type: Regional, in the region where my cluster is based (us-central1). My external IP: 34.66.79.1X8
NOTE: The reserved IP must be in the same region as your cluster.
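For reference, reserving and inspecting such an address with gcloud looks like this (the address name nginx-ingress-ip is an arbitrary example):
gcloud compute addresses create nginx-ingress-ip --region us-central1
gcloud compute addresses describe nginx-ingress-ip --region us-central1 --format='value(address)'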
Option 1 - Use Helm chart
Deploy Nginx
helm install nginx-ingress stable/nginx-ingress --set controller.service.loadBalancerIP=34.66.79.1X8,rbac.create=true
Service output:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.8.0.1 <none> 443/TCP 5h49m
nginx-ingress-controller LoadBalancer 10.8.5.158 <pending> 80:31898/TCP,443:30554/TCP 27s
nginx-ingress-default-backend ClusterIP 10.8.13.209 <none> 80/TCP 27s
Service describe output:
$ kubectl describe svc nginx-ingress-controller
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 32s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 5s service-controller Ensured load balancer
Final output:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.8.0.1 <none> 443/TCP 5h49m
nginx-ingress-controller LoadBalancer 10.8.5.158 34.66.79.1X8 80:31898/TCP,443:30554/TCP 35s
nginx-ingress-default-backend ClusterIP 10.8.13.209 <none> 80/TCP 35s
Option 2 - Editing Nginx YAMLs before deploying Nginx
As per docs:
Initialize your user as a cluster-admin with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
Download YAML
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
Edit the LoadBalancer service and add loadBalancerIP: <your-reserved-ip> like below:
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.13.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 34.66.79.1X8  # This line
  externalTrafficPolicy: Local
  ports:
Deploy it with kubectl apply -f deploy.yaml. Service output below:
$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.8.0.1 <none> 443/TCP 6h6m
ingress-nginx ingress-nginx-controller LoadBalancer 10.8.5.165 <pending> 80:31226/TCP,443:31161/TCP 17s
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.8.9.216 <none> 443/TCP 18s
...
Describe output:
$ kubectl describe svc ingress-nginx-controller -n ingress-nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 40s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 2s service-controller Ensured load balancer
Service with reserved IP:
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.8.5.165 34.66.79.1X8 80:31226/TCP,443:31161/TCP 2m22s
ingress-nginx-controller-admission ClusterIP 10.8.9.216 <none> 443/TCP 2m23s
In addition
Also, please keep in mind that you should add the annotation kubernetes.io/ingress.class: nginx in your Ingress resource when you want to force GKE to use NGINX Ingress features, like rewrites.
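A minimal sketch of such an Ingress resource, assuming a backend service named my-service on port 80 (the rewrite annotation is just one example of an NGINX-specific feature):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx            # route through the NGINX controller, not GKE's default
    nginx.ingress.kubernetes.io/rewrite-target: / # NGINX-specific rewrite feature
spec:
  rules:
  - http:
      paths:
      - path: /app
        backend:
          serviceName: my-service
          servicePort: 80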
I am using Kubespray to run a Kubernetes cluster on my laptop. The cluster is running on 7 VMs, and the roles of the VMs are spread as follows:
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 2d22h v1.16.2
k8s-2 Ready master 2d22h v1.16.2
k8s-3 Ready master 2d22h v1.16.2
k8s-4 Ready master 2d22h v1.16.2
k8s-5 Ready <none> 2d22h v1.16.2
k8s-6 Ready <none> 2d22h v1.16.2
k8s-7 Ready <none> 2d22h v1.16.2
I've installed https://istio.io/ to build a microservices environment.
I have 2 services running that I would like to access from outside:
k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.50.109 <none> 3000/TCP 47h
helloweb ClusterIP 10.233.8.207 <none> 3000/TCP 47h
and the running pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 <none> <none>
default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 <none> <none>
default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 <none> <none>
default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 <none> <none>
default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 <none> <none>
default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 <none> <none>
The problem is, I cannot access the services from outside, because I do not have an EXTERNAL-IP address (remember, the cluster is running on my laptop).
k get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.61.112 <pending> 15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP 47h
As you can see, the EXTERNAL-IP column shows <pending>.
The question is: how do I assign an EXTERNAL-IP to the istio-ingressgateway?
First of all, you can't make k8s assign you an external IP address, as the LoadBalancer service type is cloud-provider specific. You could map your router's external IP address to it, I guess, but it is not trivial.
To reach the service, you can do this:
kubectl edit svc istio-ingressgateway -n istio-system
Change the type of the service from LoadBalancer to ClusterIP. You can also use NodePort. Actually, you can skip this step, as a LoadBalancer service already includes a NodePort and a ClusterIP; this is just to get rid of that pending status.
kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system
I don't know which port you want to access from your localhost. Say it's 80; you can do:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Now port 8080 of your laptop (localhost:8080) will be mapped to the port 80 of istio-ingressgateway service.
By default, there is no way for Kubernetes to assign an external IP to a LoadBalancer service.
This service type needs infrastructure support, which exists in cloud offerings like GKE, AKS, EKS, etc.
As you are running this cluster on your laptop, deploy the MetalLB load balancer to get an EXTERNAL-IP; see the configuration sketch below.
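A minimal Layer 2 configuration sketch for the ConfigMap-based MetalLB releases (the address range is an assumption; pick free IPs on your LAN):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed free range on your LAN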
It's not possible, as Suresh explained.
But if you want access from your laptop, you can use type: NodePort in your service, which gives you access from outside the cluster.
You should first obtain the IP of your cluster, then create your service with something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 30000
After that, you can access it from your laptop with: http://<node-ip>:30000
There is no need to create an ingress for that.
You should use a port in the range 30000-32767, as stated below:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
If you are using minikube, just run:
$ minikube tunnel
$ k get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.111.187.167 127.0.0.1 15021:31949/TCP,80:32215/TCP,443:30585/TCP 9m48s
I've created a test k8s cluster using Kubespray (3 nodes, VirtualBox CentOS VM based) and have been trying to follow the guide for setting up the NGINX ingress, but I never seem to get an external address assigned to my service.
I can see that the ingress controller is apparently installed:
[root@k8s-01 ~]# kubectl get pods --all-namespaces -l app=ingress-nginx
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-58c9df5856-v6hml 1/1 Running 0 28m
And following the prerequisites docs, I have set up the http-svc sample service:
[root@k8s-01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
http-svc-794dc89f5-f2vlx 1/1 Running 0 27m
[root@k8s-01 ~]# kubectl get svc http-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc LoadBalancer 10.233.25.131 <pending> 80:30055/TCP 27m
[root@k8s-01 ~]# kubectl describe svc http-svc
Name: http-svc
Namespace: default
Labels: app=http-svc
Annotations: <none>
Selector: app=http-svc
Type: LoadBalancer
IP: 10.233.25.131
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30055/TCP
Endpoints: 10.233.65.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 27m service-controller ClusterIP -> LoadBalancer
As far as I know, I should see a LoadBalancer Ingress entry, but the external IP for the service still appears to be pending. Something isn't working, and I'm at a loss as to how to diagnose what has gone wrong.
Since you are creating your cluster locally, exposing your service as type LoadBalancer will not provision a load balancer for you. Use the LoadBalancer type if you are creating your cluster in a cloud environment such as AWS or GKE. In AWS it will auto-provision a load balancer (ELB) and assign an external IP for the service.
To make your service work with the current settings and environment, change your service type from LoadBalancer to NodePort.
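For example, a one-line patch using the http-svc service from the question:
kubectl patch svc http-svc -p '{"spec":{"type":"NodePort"}}'
kubectl get svc http-svc
# PORT(S) will show the already-allocated node port, e.g. 80:30055/TCP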