Kubernetes service running fine but unable to access from outside - kubernetes

Hi, I am trying to get a MongoDB database and a Node.js application to communicate using Kubernetes. Everything is running fine, but I am unable to access my API from outside the cluster. I am also not able to telnet to the port.
apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    app: node
    tier: backend
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30005
  externalIPs:
  - 34.73.154.127
  # Replace with the IP of your minikube node / master node
  # selector:
  #   app: node
  #   tier: backend
This is my service YAML file.
When I check the status of the port using the command
sudo lsof -i:30005
I see the following output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-prox 2925 root 8u IPv6 32281 0t0 TCP *:30005 (LISTEN)
Now I should be able to telnet to the port with the IP:
telnet 34.73.154.127 30005
but I get this result:
Trying 34.73.154.127...
telnet: Unable to connect to remote host: Connection refused
Before anyone suggests that the port is not open: please note that I have opened the entire port range from anywhere.
One more thing I want to mention: I deployed a sample Node application natively using npm on port 30006, and I am able to telnet to that port. So the conclusion is that the whole port range is open and working.
This is the result of the describe command for the service:
kubectl describe service/node
Name: node
Namespace: default
Labels: app=node
tier=backend
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"node","tier":"backend"},"name":"node","namespace":"defau...
Selector: <none>
Type: NodePort
IP: 10.102.42.145
External IPs: 34.73.154.127
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
NodePort: <unset> 30005/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Please let me know what I am doing wrong.
ENVIRONMENT:
cloud: Google Cloud Platform
containers: Docker and Kubernetes
Ubuntu 16.04 LTS
Kubernetes 1.13.0

Hi, I was making a silly mistake.
I just uncommented the selector in my service YAML file and it started working:
selector:
  app: node
  tier: backend
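For reference, this is what the full corrected Service looks like once the selector is uncommented (a sketch assembled from the manifest above):
apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    app: node
    tier: backend
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30005
  externalIPs:
  - 34.73.154.127
  # Without a selector the Service gets no endpoints (Endpoints: <none>
  # in the describe output above), so connections are refused.
  selector:
    app: node
    tier: backend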

In order to access your service from the outside you need to expose it as a LoadBalancer type, for example:
apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    app: node
    tier: backend
spec:
  type: LoadBalancer
  ports:
  - port: 3000
    nodePort: 30005
Google Cloud Platform will provision a publicly routable IP address for you and will open the firewall for you.
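You can watch for the assigned address after applying the change (a minimal sketch, assuming the manifest is saved as service.yaml):
kubectl apply -f service.yaml
# EXTERNAL-IP shows <pending> until GCP finishes provisioning the load balancer
kubectl get service node --watch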

Related

minikube - EXTERNAL-IP remains <pending>

My Service definition is as follows:
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  type: LoadBalancer
  #type: NodePort
  ports:
  # the port that this service should serve on
  - targetPort: 80
    port: 80
  selector:
    app: guestbook
    tier: frontend
After applying it, I was expecting to get an External IP as explained here, but instead it remains pending and doesn't change, as shown below.
Can you please help me find out why I'm not getting an EXTERNAL-IP?
Where are you running this minikube? If you are running it locally, the external IP will not appear, as an external IP is specific to external cloud providers.
I think you should try minikube tunnel. After starting minikube, you have to execute this command:
$ minikube tunnel
minikube tunnel runs as a process, creating a network route on the host to the service CIDR of the cluster, using the cluster's IP address as a gateway.
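The workflow looks roughly like this (a sketch; the service name matches the definition above):
# in one terminal - keeps running and may prompt for sudo
minikube tunnel
# in another terminal the EXTERNAL-IP column should now be populated
kubectl get service frontend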

NodePort is not working on minikube on VirtualBox

I am learning Kubernetes and have a simple deployment and a NodePort service. I am not able to access my deployment using the NodePort. I tried the hyperkit, docker and virtualbox drivers.
Context
My Java application is running on port 8080 (Tomcat server).
My service port is 8080.
My nodePort is 32000.
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080        # Service port
    targetPort: 8080  # Tomcat port
    nodePort: 32000   # NodePort
    protocol: TCP
  selector:
    app: file-process-app
The minikube URL is:
minikube service file-process-service --url
>> http://192.168.59.100:32000
Now, when I try to access it via Postman, I get connection refused. Can anyone tell me where I am going wrong, or how I can debug it further?
Thanks DavidMaze - I am attaching the endpoints. It is None for my service.
As hinted by DavidMaze, I found that the service was created but no endpoints were created. I debugged further and found that I had specified the wrong pod selector in the service, hence no endpoints were created.
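This class of problem is quick to catch (a sketch; the names follow the manifest above):
# an empty ENDPOINTS column means the selector matches no pods
kubectl get endpoints file-process-service
# compare the service selector against the labels actually on the pods
kubectl get pods --show-labels
kubectl describe service file-process-service | grep -i selector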

How do I access Kubernetes pods through a single IP?

I have a set of pods running based on the following fleet:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: bungee
spec:
replicas: 2
template:
metadata:
labels:
run: bungee
spec:
ports:
- name: default
containerPort: 25565
protocol: TCP
template:
spec:
containers:
- name: bungee
image: a/b:test
I can access these pods from outside the cluster with <node-IP>:<port>, where the port is random per pod, assigned by Agones.
My goal is to be able to connect to these pods through a single IP, which means I have to add some sort of load balancer. I tried using this service of type LoadBalancer, but I can't connect to any of the pods with it.
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XX.XX.XXX
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
  externalTrafficPolicy: Local
Is a service like this the wrong approach here, and if so what should I use instead? If it is correct, why is it not working?
Edit: the External IP field says pending when I check the service status. I am running Kubernetes on bare metal.
Edit 2: Attempting to use NodePort as suggested, I see the service has not been given an external IP address. Trying to connect to <node-IP>:<nodePort> does not work. Could it be a problem related to the selector?
LoadBalancer Services would have worked in clusters that integrate with the API of the cloud provider hosting your Kubernetes nodes (the cloud-controller-manager component). Since this is not your case, you're looking for a NodePort Service.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: bungee-svc
spec:
  type: NodePort
  ports:
  - port: 25565
    protocol: TCP
  selector:
    run: bungee
Having created that service, you can check its description - or yaml/json representation:
# kubectl describe svc xxx
Type: NodePort
IP: 10.233.24.89 <- ip within SDN
Port: tcp-8080 8080/TCP <- ports within SDN
TargetPort: 8080/TCP <- port on your container
NodePort: tcp-8080 31655/TCP <- port exposed on your nodes
Endpoints: 10.233.108.232:8080 <- pod:port ...
Session Affinity: None
Now, I know that port 31655 was allocated to my NodePort Service. NodePorts are unique across your cluster; they are picked from a range (30000-32767 by default) that depends on your cluster configuration.
I can connect to my service, accessing any Kubernetes node IP, on the port that was allocated to my NodePort service.
curl http://k8s-worker1.example.com:31655/
As a side note: a LoadBalancer Service extends a NodePort Service. While the externalIP won't ever show up here, note that your Service was already allocated its own node port, like any NodePort Service - that port is meant to receive traffic from whichever load balancer would have been configured on behalf of your cluster, on the cloud infrastructure it integrates with.
And ... I have to say I'm not familiar with Agones. You say "I can access these pods outside the cluster with <node-IP>:<port> where the port is random per pod given by Agones". Are you sure ports are allocated on a per-pod basis and bound to a given node? Or could it be that they're already using a NodePort Service? Give it another look: have you tried connecting to that port on other nodes of your cluster, as in the sketch below?
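A quick way to check (a hedged sketch; assumes nc is installed and reuses port 31655 from the example above):
# try the NodePort against every node's internal address
for ip in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  nc -zv -w 2 "$ip" 31655
done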

Containers on StatefulSet not being registered on minikube DNS

I'm trying to set up a ZooKeeper cluster (3 replicas), but the hosts can't connect to one another and I really don't know where the problem is.
It creates 3 pods successfully, with names like
zookeeper-0.zookeeper-internal.default.svc.cluster.local
zookeeper-1.zookeeper-internal.default.svc.cluster.local
zookeeper-2.zookeeper-internal.default.svc.cluster.local
but when I connect to one of them and try to reach another on the open port, it returns an Unknown host message:
zookeeper#zookeeper-0:/opt$ nc -z zookeeper-1.zookeeper-internal.default.svc.cluster.local 2181
zookeeper-1.zookeeper-internal.default.svc.cluster.local: forward host lookup failed: Unknown host
My YAML file is here
I really appreciate any help.
Did you create the headless service you mentioned in your yaml - serviceName: zookeeper-internal?
You need to create this service (and update the port) to be able to reach zookeeper-0.zookeeper-internal.default.svc.cluster.local:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-dev
    name: zookeeper
  name: zookeeper-internal
spec:
  ports:
  - name: zookeeper-port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: zookeeper
  clusterIP: None
  type: ClusterIP
The Service is required, but it does not expose anything outside the cluster; it is only reachable within the cluster, where any pod can access it. So you cannot access it from your browser unless you expose it via NodePort / LoadBalancer / Ingress.
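Once the headless service exists, you can verify the per-pod DNS records from inside any pod (a sketch; assumes nslookup and nc are available in the image):
# each StatefulSet pod gets a stable record through the headless service
kubectl exec zookeeper-0 -- nslookup zookeeper-1.zookeeper-internal.default.svc.cluster.local
# the port check from the question should now succeed
kubectl exec zookeeper-0 -- nc -z zookeeper-1.zookeeper-internal.default.svc.cluster.local 2181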

Unable to connect to external load balancer even after exposing service in kubernetes

I have the following deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: family-tree-deployment
  labels:
    app: familytree
spec:
  replicas: 1
  selector:
    matchLabels:
      app: familytree
  template:
    metadata:
      labels:
        app: familytree
    spec:
      containers:
      - name: familytree
        image: index.docker.io/koustubh/familytree:v1.0
        ports:
        - containerPort: 8080
I could successfully create the deployment using kubectl create -f deploy.yml
Now, I simply exposed this deployment with the following command
kubectl expose deployment family-tree-deployment --type=LoadBalancer --name=familytree-service
The service was successfully created.
The output is
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
familytree-service LoadBalancer 10.51.244.161 35.221.113.235 8080:30505/TCP 1h
$ kubectl describe svc familytree-service
Name: familytree-service
Namespace: default
Labels: app=familytree
Annotations: <none>
Selector: app=familytree
Type: LoadBalancer
IP: 10.51.244.161
LoadBalancer Ingress: 35.221.113.235
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30505/TCP
Endpoints: 10.48.4.7:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I logged in to the pod and made sure the service is working.
However, when I use the external IP of the load balancer and query my API, the connection times out.
I have made sure the firewall allows port 8080.
My application is running on port 8080.
The generated Service object looks perfectly valid, so we can rule out a label issue or a missing public IP address. Besides, you can access your Service internally, which means the firewall rule was most likely applied incorrectly.
Please ensure you allow incoming traffic as follows (see the sketch after this list):
from the internet to the load balancer on TCP port 8080
from the load balancer to all Kubernetes nodes on TCP port 30505
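If the cluster runs on GCP, a firewall rule along these lines would open the NodePort on the nodes (a hedged sketch; the rule name and network are assumptions about your project):
# hypothetical rule allowing the load balancer to reach the NodePort on all nodes
gcloud compute firewall-rules create allow-familytree-nodeport \
  --network default \
  --allow tcp:30505 \
  --source-ranges 0.0.0.0/0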