minikube - EXTERNAL-IP remains <pending> - kubernetes

My Service definition is as follows:
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  type: LoadBalancer
  #type: NodePort
  ports:
    # the port that this service should serve on
    - targetPort: 80
      port: 80
  selector:
    app: guestbook
    tier: frontend
After applying it, I was expecting to get an external IP as explained here, but instead it remains <pending> and doesn't change.
Can you please help me find out why I'm not getting an EXTERNAL-IP?

Where are you running this minikube? If you are running it locally, the external IP will not appear, as EXTERNAL-IP is specific to external cloud providers.

I think you should try minikube tunnel. After starting minikube, you have to execute this command:
$ minikube tunnel
minikube tunnel runs as a process, creating a network route on the host to the service CIDR of the cluster using the cluster’s IP address as a gateway.
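For example (a rough sketch, assuming the frontend Service from the question and a running minikube cluster), keep the tunnel running in one terminal and check the Service in another:
# terminal 1: keep this running (it may prompt for sudo to create the route)
$ minikube tunnel
# terminal 2: the EXTERNAL-IP column should switch from <pending> to an address
$ kubectl get svc frontend
# once it has an address, the frontend should answer on port 80
$ curl http://<EXTERNAL-IP>/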

Related

How do I access Kubernetes pods through a single IP?

I have a set of pods running based on the following fleet:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: bungee
spec:
replicas: 2
template:
metadata:
labels:
run: bungee
spec:
ports:
- name: default
containerPort: 25565
protocol: TCP
template:
spec:
containers:
- name: bungee
image: a/b:test
I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones.
My goal is to be able to connect to these pods through a single IP, meaning I have to add some sort of load balancer. I tried using this service of type LoadBalancer, but I can't connect to any of the pods with it.
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XX.XX.XXX
  ports:
    - port: 25565
      protocol: TCP
  selector:
    run: bungee
  externalTrafficPolicy: Local
Is a service like this the wrong approach here, and if so what should I use instead? If it is correct, why is it not working?
Edit: the EXTERNAL-IP field says pending when checking the service status. I am running Kubernetes on bare metal.
Edit 2: Attempting to use NodePort as suggested, I see the service has not been given an external IP address. Trying to connect to <node-IP>:<nodePort> does not work. Could it be a problem related to the selector?
LoadBalancer Services only work in clusters that integrate with the API of the cloud provider hosting your Kubernetes nodes (the cloud-controller-manager component). Since this is not your case, you're looking for a NodePort Service.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
    - port: 25565
      protocol: TCP
  selector:
    run: bungee
Having created that service, you can check its description - or yaml/json representation:
# kubectl describe svc xxx
Type:              NodePort
IP:                10.233.24.89          <- ip within SDN
Port:              tcp-8080  8080/TCP    <- ports within SDN
TargetPort:        8080/TCP              <- port on your container
NodePort:          tcp-8080  31655/TCP   <- port exposed on your nodes
Endpoints:         10.233.108.232:8080   <- pod:port ...
Session Affinity:  None
Now I know the port 31655 was allocated to my NodePort Service -- node ports are unique across your cluster and are picked from a range that depends on your cluster configuration.
I can connect to my service by accessing any Kubernetes node IP, on the port that was allocated to my NodePort Service.
curl http://k8s-worker1.example.com:31655/
As a side note: a LoadBalancer Service extends a NodePort Service. While the external IP will never show up in your case, your Service was still allocated its own node port, like any NodePort Service - that port is meant to receive traffic from whichever load balancer would have been configured on behalf of your cluster, on the cloud infrastructure it is integrated with.
And ... I have to say I'm not familiar with Agones. You say "I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones" - are you sure ports are allocated on a per-pod basis and bound to a given node? Or could they already be using a NodePort Service? Give it another look: have you tried connecting to that port on other nodes of your cluster?
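One way to check (a sketch only, with placeholder addresses and ports): list your nodes' addresses and try the same port against each of them.
# list the nodes and their addresses
kubectl get nodes -o wide
# try the port Agones reported against each node (placeholders, replace with real values)
nc -vz <node-1-ip> <allocated-port>
nc -vz <node-2-ip> <allocated-port>
If the port answers on every node, it is being exposed cluster-wide (NodePort-style) rather than bound to a single node.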

How should I use externalIPs on service with EKS?

I was trying to use the Service externalIPs feature on an EKS cluster.
What I did
I created an EKS cluster with eksctl:
eksctl create cluster --name=test --region=eu-north-1 --nodes=1
I opened all security groups to make sure I don't have a firewall issue. The network ACL also allows all traffic.
I took the public IP of the only available worker node and tried to use it with a simple Service + Deployment.
This should be just one Deployment with one ReplicaSet and one nginx Pod, attached to a Service with an external/public IP that everyone can reach.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: app
  labels:
    app: app
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: app
  externalIPs:
    - 13.51.55.82
When I apply it, everything seems to work just fine. I can port-forward my app Service to localhost and see the output (kubectl port-forward svc/app 9999:80, then curl localhost:9999).
But the problem is I cannot reach this service via public IP.
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
app          ClusterIP   10.100.140.38   13.51.55.82   80/TCP    49m
kubernetes   ClusterIP   10.100.0.1      <none>        443/TCP   62m
$ curl 13.51.55.82:80
curl: (7) Failed to connect to 13.51.55.82 port 80: Connection refused
Thoughts
To me it looks like the Service is not connected to the node itself. When I SSH to the node and set up a simple web server on port 80, it responds immediately.
I know I can use NodePort, but in my case I eventually want to use the fixed port 4000, and NodePort only lets me use ports in the range 30000-32767.
Question
I want to be able to curl my service via its public IP on a fixed port below 30000 (so NodePort doesn't apply).
How can I make it work with Kubernetes Service externalIPs on EKS cluster?
Edit I:
FYI: I do not want to use LoadBalancer.

Kubernetes service is getting external ip as pending

I'm running a Kubernetes LoadBalancer service, but the external IP says "pending". It looks like it is trying to get an IP, but I need it to be "localhost" so I can access it in my browser.
What am I missing?
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/dev-pypi     LoadBalancer   10.106.128.15   <pending>     80:30914/TCP     2m7s
service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          47h
service/qa-pypi      LoadBalancer   10.97.62.94     <pending>     8200:30114/TCP   94m
Thanks in advance.
This is my yaml file:
kind: Service
apiVersion: v1
metadata:
  name: qa-pypi
spec:
  type: LoadBalancer
  selector:
    app: pypi-qa
  ports:
    - protocol: TCP
      port: 8200
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qa-pypi
  labels:
    app: pypi-qa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pypi-qa
  template:
    metadata:
      labels:
        app: pypi-qa
    spec:
      containers:
        - name: pypi-qa
          imagePullPolicy: IfNotPresent
          image: myimg2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: storageqa
              mountPath: /app/local
      volumes:
        - name: storageqa
          persistentVolumeClaim:
            claimName: persistvolumeqa
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistvolumeqa
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Thank you!
You already have a lot of advice here on how to use localhost and what you can do with NodePort. Let me fill the gap and explain why you see the Pending state.
Minikube itself does not allocate and provide you with a LoadBalancer IP address. The LoadBalancer service type is widely used in cloud implementations such as EKS, AKS and GKE, because those cloud providers create the load balancer in the background as soon as you choose this type.
If you want to use LoadBalancer with minikube, you have to configure minikube first. What you can do is use the built-in metallb minikube addon.
MetalLB addresses this gap and provides a network LoadBalancer implementation as an addon.
During the MetalLB installation and configuration you can set the range of local IP addresses that will be assigned to LoadBalancer services instead of the Pending state.
If you want to know more about this and see a real configuration example, check the "MetalLB Configuration in Minikube — To enable Kubernetes service of type LoadBalancer" article.
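For reference, a rough sketch of enabling the addon (the IP range is an assumption and must be a free range from your own minikube/host network):
$ minikube addons enable metallb
$ minikube addons configure metallb
# the configure step asks for a start and end IP for the address pool;
# pick a free range from the minikube network, e.g. 192.168.49.100-192.168.49.110 with the docker driver (adjust to your setup)
After that, newly created LoadBalancer Services should get an address from that pool instead of staying in Pending.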
You can use an Ingress to access your application from the browser:
ingress
settingup_ingress
tutorial
If your host is not found after the setup, you may need to register it in /etc/hosts (as root):
127.0.0.1 localhost
127.0.0.1 https-my-nginx.com
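Purely as an illustration (the Service name my-nginx and its port are assumptions, not something given in this answer), an Ingress rule for that host could look roughly like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-my-nginx
spec:
  rules:
    - host: https-my-nginx.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nginx   # assumed Service name
                port:
                  number: 80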
To expose your app on localhost you can try the NodePort type. If you are using kubeadm, you can use the NodePort or ClusterIP type so that you are able to expose it locally. LoadBalancer is mostly used for deployments on cloud machines.
If you are using minikube, run minikube tunnel so that an external IP is assigned to the LoadBalancer service.
You have to configure an external IP on your machine (platform) first; only then will an external IP be allocated to your service. It can be a VIP that you advertise, or an address pool that you create. Otherwise, setting up an Ingress will not serve the purpose either, as the Ingress itself needs an external IP to be allocated for communication.
It's a networking thing you need to set up on your own machine.
If you don't want to use this option, you can go for NodePort; with that you can access the service externally via nodeip:nodeport.
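For example (a sketch with placeholders, assuming the qa-pypi Service from the question):
# find a node address and the allocated node port
kubectl get nodes -o wide
kubectl get svc qa-pypi
# then reach the app from outside the cluster
curl http://<node-ip>:<node-port>/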

Kubernetes service running fine but unable to access from outside

Hi, I am trying to set up communication between a Mongo database and a Node.js application using Kubernetes. Everything is running fine, but I am unable to access my API from outside the cluster. I am also not able to telnet to the port.
apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    app: node
    tier: backend
spec:
  type: NodePort
  ports:
    - port: 3000
      nodePort: 30005
  externalIPs:
    - 34.73.154.127
  # # Replace with the IP of your minikube node / master node
  # selector:
  #   app: node
  #   tier: backend
This is my Service YAML file.
When I check the status of the port using the command
sudo lsof -i:30005
I can see the results below:
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kube-prox  2925 root    8u  IPv6  32281      0t0  TCP *:30005 (LISTEN)
Now I should be able to telnet to the port with the IP, like
telnet 34.73.154.127 30005, but I get the following result:
Trying 34.73.154.127...
telnet: Unable to connect to remote host: Connection refused
If anyone is going to suggest that the port is not open, please note that I have opened the whole port range from anywhere.
One more thing I want to let you know: I deployed a sample Node application natively using npm on port 30006 and I am able to telnet to that port. So the conclusion is that the whole port range is open and working.
This is the describe result of the service:
kubectl describe service/node
Name:                     node
Namespace:                default
Labels:                   app=node
                          tier=backend
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"node","tier":"backend"},"name":"node","namespace":"defau...
Selector:                 <none>
Type:                     NodePort
IP:                       10.102.42.145
External IPs:             34.73.154.127
Port:                     <unset>  3000/TCP
TargetPort:               3000/TCP
NodePort:                 <unset>  30005/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Please let me know what I am doing wrong.
ENVIRONMENT:
Cloud: Google Cloud Platform
Container: using Docker and Kubernetes
Ubuntu 16.04 LTS
Kubernetes 1.13.0
Hi, I was making a silly mistake.
I just uncommented the selector in my service YAML file and it started working:
# # Replace with the IP of your minikube node / master node
# selector:
#   app: node
#   tier: backend
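A quick way to confirm this kind of problem: with no selector the Service has no endpoints (as in the describe output above, Endpoints: <none>); after uncommenting the selector, the pod IPs should show up.
kubectl get endpoints node
# before the fix the ENDPOINTS column is <none>; after it, it lists the pod IP(s) on port 3000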
In order to access your service from the outside you need to expose this service as a LoadBalancer type such as:
apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    app: node
    tier: backend
spec:
  type: LoadBalancer
  ports:
    - port: 3000
      nodePort: 30005
Google Cloud Platform will provision you an IP Address that is publicly routable and will open the firewall for you.
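You can watch for the address being assigned and then test it, for example:
# wait for EXTERNAL-IP to change from <pending> to a public address
kubectl get svc node --watch
# then, from outside the cluster
curl http://<external-ip>:3000/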

Expose service on local kubernetes

I'm running the local Kubernetes bundled with Docker on macOS.
How can I expose a service, so that I can access the service via a browser on my Mac?
I've created:
a) a Deployment including Apache httpd
b) a Service via YAML:
apiVersion: v1
kind: Service
metadata:
  name: apaches
spec:
  selector:
    app: web
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
  externalIPs:
    - 192.168.1.10 # Network IP of my Mac
My service looks like:
$ kubectl get service apaches
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
apaches   NodePort   10.102.106.158   192.168.1.10   80:31137/TCP   14m
I can locally access the service in my Kubernetes cluster with wget $CLUSTER-IP.
I tried to call http://192.168.1.10/ on my Mac, but it doesn't work.
This question deals with a similar issue, but the solution does not help because I do not know which IP I can use.
Update
Thanks to Michael Hausenblas I worked out a solution using Ingress.
Nevertheless there are still some open questions:
What is the meaning of a service's externalIP? Why do I need an externalIP when I do not access the service directly from outside?
What is the meaning of the service port 31137?
The Kubernetes docs describe a method to publish a service in minikube via NodePort. Is this also possible with the Kubernetes bundled with Docker?
There are several solutions to expose services in kubernetes:
http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/
Here are my solutions, following alesnosek, for a local Kubernetes bundled with Docker:
1. hostNetwork
hostNetwork: true
Dirty (the host network should not be shared for security reasons) => I did not check this solution.
2. hostPort
hostPort: 8086
Does not apply to Services => I did not check this solution (see the sketch below for where hostPort would go).
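For completeness, hostPort is set on the container in the pod spec, not on a Service; a minimal sketch (names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: apache-hostport-example
spec:
  containers:
    - name: httpd
      image: httpd
      ports:
        - containerPort: 80
          hostPort: 8086   # the pod becomes reachable on <node-ip>:8086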
3. NodePort
Expose the service by defining a nodePort:
apiVersion: v1
kind: Service
metadata:
  name: apaches
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: apache
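With the Kubernetes bundled in Docker for Mac, the node is your local machine, so (assuming the fixed nodePort 30000 from the example above) the service should be reachable with:
kubectl apply -f apache-svc.yaml   # file name is illustrative
curl http://localhost:30000/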
4. LoadBalancer
EDIT
@MathObsessed posted the solution in his answer.
5. Ingress
a. Install Ingress Controller
git clone https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes.git
kubectl apply -f nginx-ingress/namespaces/nginx-ingress.yaml -Rf nginx-ingress
b. Configure Ingress
kubectl apply -f apache-ing.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apache-ingress
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            backend:
              serviceName: apaches
              servicePort: 80
Now I can access my Apache deployed with Kubernetes by calling http://localhost/
Remarks on using local-dev-with-docker-for-mac-kubernetes
The repo simplifies the deployment of the official ingress-nginx controller.
For production use I would follow the official guide.
The repo ships with a tiny, full-featured ingress example. Very useful for quickly getting a working example application.
Further documentation
https://kubernetes.io/docs/concepts/services-networking/ingress
For those still looking for an answer: I've managed to achieve this by adding another Kubernetes Service just to expose my app to localhost calls (via browser or Postman):
kind: Service
apiVersion: v1
metadata:
  name: apaches-published
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 80
      protocol: TCP
  selector:
    app: web
  type: LoadBalancer
Try it now on: http://localhost:8080
Really simple example
METHOD1
$ kubectl create deployment nginx-dep --image=nginx --replicas=2
Get the pods
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dep-5c5477cb4-76t9q   1/1     Running   0          7h5m
nginx-dep-5c5477cb4-9g84j   1/1     Running   0          7h5m
Access the pod using kubectl port-forward
$ kubectl port-forward nginx-dep-5c5477cb4-9g84j 8888:80
Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80
Now do a curl to the localhost:8888
$ curl -v http://localhost:8888
METHOD2
You can expose port 80 of the deployment (where the application is running, i.e. the nginx port)
via a NodePort
$ kubectl expose deployment nginx-dep --name=nginx-dep-svc --type=NodePort --port=80
Get the service
$ kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP        31d
nginx-dep-svc   NodePort    10.110.80.21   <none>        80:31239/TCP   21m
Access the deployment using the NodePort
$ curl http://localhost:31239
As already mentioned in Matthias M's answer, there are several ways.
As the official Kubernetes documentation specifically describes using a Service of type NodePort, I wanted to describe that workflow.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.
Setup a Service with a type of NodePort
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
clusterIP: 10.0.171.239
type: NodePort
Then you can check which port the Service is exposed on via
kubectl get svc
NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
my-service   NodePort   10.103.218.215   <none>        9376:31040/TCP   52s
and access it via localhost using the exposed port. E.g.
curl http://localhost:31040
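If you would rather script the lookup than read it off the table, something like this should print just the allocated port:
kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'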