Helm and minikube: service IP missing

I am trying examples from the book Learning Helm and I seem to be missing something. The chart installs from the Helm repo, but the service never gets an external IP:
xxxxx:~ $ helm install my-nginx bitnami/nginx
NAME: my-nginx
LAST DEPLOYED: Sat Jan 9 20:26:22 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
NGINX can be accessed through the following DNS name from within your cluster:
my-nginx.default.svc.cluster.local (port 80)
To access NGINX from outside the cluster, follow the steps below:
1. Get the NGINX URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w my-nginx'
export SERVICE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].port}" services my-nginx)
export SERVICE_IP=$(kubectl get svc --namespace default my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${SERVICE_IP}:${SERVICE_PORT}"
xxxxx:~ $ export SERVICE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].port}" services my-nginx)
xxxxx:~ $ export SERVICE_IP=$(kubectl get svc --namespace default my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
xxxxx:~ $ echo "http://${SERVICE_IP}:${SERVICE_PORT}"
http://:80
$ kubectl get svc --namespace default -w my-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx LoadBalancer 10.104.16.177 <pending> 80:30977/TCP 17h
Some more details.

The following line from the NOTES is what extracts the service IP; it reads the external IP from the service object:
export SERVICE_IP=$(kubectl get svc --namespace default my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
With the LoadBalancer service type, Kubernetes tries to obtain an external IP address and expose the target service on it. In a self-hosted Kubernetes cluster, that external IP cannot be provisioned automatically. Most hosted clusters in the public cloud, such as GKE and EKS, integrate with the provider's load balancers, so the IP is assigned automatically once the service is created with type LoadBalancer.
It is still possible to automate this on your own cluster with third-party operators such as MetalLB, but on most self-hosted Kubernetes clusters the suggested approach is to access the service through the NodePort service type.
Please rerun the helm command with the following argument. It changes the service type from LoadBalancer to NodePort, and following the instructions printed to stdout should let you access your service.
> helm install my-nginx bitnami/nginx --set service.type=NodePort
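With a NodePort service, minikube can hand you a reachable URL directly; a quick sketch, assuming the my-nginx release name from above:
minikube service my-nginx --url
# or assemble the URL manually from the node IP and the allocated node port
echo "http://$(minikube ip):$(kubectl get svc my-nginx -o jsonpath='{.spec.ports[0].nodePort}')"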
Alternatively, you can follow the official minikube documentation to set up support for LoadBalancer services.
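On minikube that support comes from minikube tunnel; a minimal sketch:
# In a separate terminal; it must keep running and may ask for sudo
minikube tunnel
# Back in the first terminal, EXTERNAL-IP should move from <pending> to a real address
kubectl get svc --namespace default -w my-nginx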

Related

My app is not accessible, is my service definition wrong? [duplicate]


LoadBalancer EXTERNAL-IP is in pending state after installing Helm charts

I installed Helm charts on EKS, but the LoadBalancer EXTERNAL-IP is stuck in the pending state. I see that EKS does support the service type LoadBalancer now.
Is it something I will have to check at the network outgoing-traffic level? Please share your experience, if any.
Thanks,
The LoadBalancer usually takes a few seconds to a few minutes to provision an IP.
If the IP isn't provisioned after 5 minutes:
- run kubectl get svc <SVC_NAME> -o yaml and check whether any unexpected annotations are set.
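The service's events usually say why provisioning is stuck; a quick check, with <SVC_NAME> as the same placeholder:
kubectl describe svc <SVC_NAME>
# look at the Events: section at the bottom for provisioning errors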
By default services with Type:LoadBalancer are provisioned with Classic Load Balancers automatically. Learn more here.
If you wish to use Network load Balancers you have to use the annotation:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
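For reference, the annotation goes on the Service object itself; a hypothetical manifest sketch (the nginx-nlb name and the app: nginx selector are placeholders):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
EOF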
The process is really automatic, you don't have to check for network traffic.
You can check if there is any issue with the Helm Chart you are deploying by manually creating a service with loadbalancer type and check if it gets provisioned:
$ kubectl run --generator=run-pod/v1 nginx --image=nginx --port=80
pod/nginx created
$ kubectl get pod nginx
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 34s
$ kubectl expose pod nginx --type=LoadBalancer
service/nginx exposed
$ kubectl get svc nginx -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.1.63.178 <pending> 80:32522/TCP 7s
nginx LoadBalancer 10.1.63.178 35.238.146.136 80:32522/TCP 42s
In this example the LoadBalancer took 42s to be provisioned. This way you can verify if the issue is on the Helm Chart or something else.
If Kubernetes is running in an environment that doesn't support LoadBalancer services, the load balancer will not be provisioned, but the service will still behave like a NodePort service; your cloud/Kubernetes engine has to support the LoadBalancer service type for the external IP to appear.
In that case, if you manage to assign an EIP or VIP to your node, you can attach it as the EXTERNAL-IP of your LoadBalancer-type service. For example, to attach the EIP/VIP address 172.16.2.13 of the node:
kubectl patch svc ServiceName -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.16.2.13"]}}'
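Separately, even while the EXTERNAL-IP shows <pending>, the node port Kubernetes already allocated for the LoadBalancer service is usable; a sketch, with ServiceName and <node-ip> as placeholders:
NODE_PORT=$(kubectl get svc ServiceName -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://<node-ip>:${NODE_PORT}"   # <node-ip> is any node's reachable address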

The external IP of istio ingress gateway stay pending

I deployed Istio to Kubernetes and it worked well at first, but after one day I could not access the app via the ingress gateway anymore. I checked the Istio svc status, and it shows the external IP of the Istio ingress gateway as pending.
I checked the logs and events of the service, but there is nothing. What is the most likely cause of this issue?
The external IP stays pending.
Removing the traefik service resolved my issue on k3d on localhost (dev environment):
kubectl get svc -n kube-system
kubectl -n kube-system delete svc traefik
I'm not an expert! This might have some side effects or cause other issues.
This is most likely caused by using a platform that does not provide an external load balancer for the Istio ingress gateway.
According to istio documentation:
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.
Follow these instructions if you have determined that your environment has an external load balancer.
Set the ingress IP and ports:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')
In certain environments, the load balancer may be exposed using a host name, instead of an IP address. In this case, the ingress gateway’s EXTERNAL-IP value will not be an IP address, but rather a host name, and the above command will have failed to set the INGRESS_HOST environment variable. Use the following command to correct the INGRESS_HOST value:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
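If you instead determined that your environment has no external load balancer (the perpetually <pending> case), the same docs describe a node-port variant of these commands; a sketch along those lines:
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')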

Why does the service not have any active Endpoint when installing Grafana from Helm to kubernetes-sigs/kind?

https://github.com/kubernetes-sigs/kind - version 0.4.0
Create kubernetes from kubernetes-sigs/kind
kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.15.0) 🖼
kubectl create serviceaccount
kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
kubectl create clusterrolebinding
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
kubectl patch deploy
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched
helm init
helm install stable/nginx-ingress
helm install --name grafana stable/grafana --set=ingress.enabled=True,ingress.hosts={grafana.domain.com} --namespace demo --set rbac.create=true
kubectl logs loping-wallaby-nginx-ingress-controller-76d574f8b7-5m6n5
W0629 17:13:59.709497 6 controller.go:797] Service "demo/grafana" does not have any active Endpoint.
[29/Jun/2019:17:14:03 +0000] TCP 200 0 0 0.000
I0629 17:14:45.223234 6 status.go:295] updating Ingress demo/grafana status from [] to [{ }]
I0629 17:14:45.226343 6 event.go:209] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"demo", Name:"grafana", UID:"228cde81-cb97-4313-ad86-90a273b2206d", APIVersion:"extensions/v1beta1", ResourceVersion:"1938", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress demo/grafana
kubectl get ingress --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
demo grafana grafana.domain.com 80 3m58s
kubectl get svc --all-namespaces -l app=grafana
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default grafana ClusterIP 10.104.203.243 <none> 80/TCP 24m
kubectl get endpoints
NAME ENDPOINTS AGE
grafana 10.244.0.10:3000 21m
kubernetes 172.17.0.2:6443 56m
loping-wallaby-nginx-ingress-controller 10.244.0.8:80,10.244.0.8:443 48m
loping-wallaby-nginx-ingress-default-backend 10.244.0.7:8080 48m
Thanks!
A few concerns about your current scenario:
Check the installed nginx-ingress Helm chart to find out why the grafana service resides in the default namespace rather than in the demo namespace requested by the --namespace demo deploy parameter.
Since you did not specify the controller.service.type parameter in the helm install command, the Nginx Ingress Controller is deployed with a LoadBalancer service type. The controller then expects to receive an external IP address from a cloud provider's load balancer, and your current Kubernetes provisioner, kubernetes-sigs/kind, is not a good choice for providing outward access to the cluster that way. I would therefore suggest using a NodePort service for the Nginx Ingress controller, exposing ports 80 and 443 on specific ports of the host machine, by installing the chart with the corresponding value set:
helm install stable/nginx-ingress --set controller.service.type=NodePort
The warning you mentioned is mostly harmless and does not significantly affect the Nginx Ingress Controller's functionality: it means that for a short period during the Grafana chart deploy, the liveness probe of the Grafana pod had not yet succeeded and the target endpoint had not been published. You can re-spawn the Nginx Ingress controller pod to verify this assumption.
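For example, a sketch assuming the label selectors of the old stable/nginx-ingress chart:
# Delete the controller pod; its Deployment recreates it, and the warning should not reappear
kubectl delete pod -l app=nginx-ingress,component=controller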
You are using the ClusterIP service type, so you will not get an external IP address.
Change the service type to LoadBalancer and you will get an IP address that can be reached from outside the cluster.
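Note that on kind a LoadBalancer service will normally stay <pending> (as discussed above), so a NodePort is usually the more practical choice there; a minimal sketch, assuming the grafana service shown above:
# Patch the existing grafana Service to NodePort instead of recreating it
kubectl patch svc grafana -n default -p '{"spec": {"type": "NodePort"}}'
kubectl get svc grafana -n default   # note the node port mapped to 80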

Expose port in minikube

In minikube, how do I expose a service using NodePort?
For example, I start a Kubernetes cluster using the following commands and create and expose a port like this:
$ minikube start
$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
$ kubectl expose deployment hello-minikube --type=NodePort
$ curl $(minikube service hello-minikube --url)
CLIENT VALUES:
client_address=192.168.99.1
command=GET
real path=/ ....
Now how do I access the exposed service from the host? I guess the minikube node needs to be configured to expose this port as well.
I am not exactly sure what you are asking, as it seems you already know about the minikube service <SERVICE_NAME> --url command, which will give you a URL where you can access the service. To open the exposed service in a browser, the minikube service <SERVICE_NAME> command can be used:
$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
deployment "hello-minikube" created
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube 10.0.0.102 <nodes> 8080/TCP 7s
kubernetes 10.0.0.1 <none> 443/TCP 13m
$ minikube service hello-minikube
Opening kubernetes service default/hello-minikube in default browser...
This command will open the specified service in your default browser.
There is also a --url option for printing the url of the service which is what gets opened in the browser:
$ minikube service hello-minikube --url
http://192.168.99.100:31167
As minikube exposes access via nodeIP:nodePort and not on localhost:nodePort, you can get the latter working by using kubectl's port-forwarding capability. For example, if you are running a mongodb service:
kubectl port-forward svc/mongo 27017:27017
This exposes the service on localhost:27017. You might also want to run this in the background.
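One way to do that, assuming a POSIX shell and the same mongo service:
# Run the port-forward in the background and remember its PID
kubectl port-forward svc/mongo 27017:27017 >/dev/null 2>&1 &
PF_PID=$!
# ... talk to localhost:27017 ...
kill "$PF_PID"   # stop forwarding when done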
minikube runs on something like 192.168.99.100, so you should be able to access your service at the NodePort you exposed it on. For example, if your NodePort is 30080, your service will be accessible as 192.168.99.100:30080.
To get the minikube IP, run the command minikube ip.
Update Sep 14 2017:
Here's a small example that works with minikube v0.16.0.
1) Run the commands below to create an echoserver deployment listening on 8080 and a NodePort svc forwarding to it:
$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
deployment "hello-minikube" created
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
2) Find the nodeport used by the svc:
$ kubectl get svc hello-minikube
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube 10.0.0.76 <nodes> 8080:30341/TCP 4m
3) Find the minikube ip:
$ minikube ip
192.168.99.100
4) Talk to it with curl:
$ curl 192.168.99.100:30341
CLIENT VALUES:
client_address=172.17.0.1
command=GET
real path=/
...
I ran into a similar issue in 2022. Here are the commands I ran:
kubectl create deployment deploymentName --image=dockerHubUsername/imageTag:imageVersion
kubectl expose deployment deploymentName --type=LoadBalancer --port=8080
minikube tunnel
kubectl get services deploymentName provides the external IP address needed to access the application. With the tunnel running, I access the app at 127.0.0.1:8080.
Source
Just a note for anyone looking for connection-refused answers: if your minikube does not run on "something like 192.168.99.100", you probably started it with another vm-driver, such as "none". In that case, delete your minikube cluster and rebuild it using the default driver. It'll work... ish... I do not seem to be able to get the tunnel working...