NodePort doesn't work in OpenShift CodeReady Container - kubernetes

I installed the latest OpenShift CodeReady Containers on a CentOS VM, and then ran a TCP server app written in Java on OpenShift. The TCP server listens on port 7777.
I ran the app and exposed it as a NodePort service; everything seems to run well. The pod port is 7777, and the NodePort is 31777.
$ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tcpserver-57c9b44748-k9dxg 1/1 Running 0 113m 10.128.0.229 crc-2n9vw-master-0 <none> <none>
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tcpserver-ingress NodePort 172.30.149.98 <none> 7777:31777/TCP 18m
Then I got the node IP; the command shows it as 192.168.130.11, and I can ping this IP from my VM successfully.
$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
crc-2n9vw-master-0 Ready master,worker 26d v1.14.6+6ac6aa4b0 192.168.130.11 <none> Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa) 4.18.0-147.0.3.el8_1.x86_64 cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
Now I run a client app located on my VM. Because I can ping the OpenShift node IP, I expected the client to run successfully, but the connection times out: my client fails to connect to the server running on OpenShift.
Please advise on how to troubleshoot this issue, or share any ideas about it.

I understand your problem. From what you describe, I can see your NodePort is 31777.
The best way to debug this problem is to go step by step.
Step 1:
Check if you are able to access your app server using your pod IP and port, i.e. curl 10.128.0.229:7777/endpoint, from one of the nodes within your cluster. This checks whether the pod itself is working, even though kubectl describe pod already gives you most of that information.
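If you don't have a shell on a node, one way to run that curl from inside the cluster is a throwaway pod, e.g. something like the following (a sketch; the curlimages/curl image is an assumption, any image with curl works):
$ oc run curl-test --image=curlimages/curl --rm -it --restart=Never -- sh
# then, from the pod's shell, hit the pod IP directly:
curl 10.128.0.229:7777/endpoint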
Step 2:
After that, on the node where the pod is deployed, i.e. 192.168.130.11, try to access your app server using curl localhost:31777/endpoint. If this works, the NodePort is accessible, i.e. your service is working fine without any issues.
Step 3:
After that, try to connect to your node using curl 192.168.130.11:31777/endpoint from the VM running your client. Just to let you know, 192.168.0.0/16 is a private address range, so I am assuming your client is within the same network and able to talk to 192.168.130.11:31777. Otherwise, make sure port 31777 of 192.168.130.11 is open to the IP of the VM that runs the client; a firewall sketch follows.
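On a CentOS/RHEL node the host firewall is typically firewalld, so opening the NodePort might look like this (a hedged sketch, assuming firewalld is in use):
$ sudo firewall-cmd --permanent --add-port=31777/tcp
$ sudo firewall-cmd --reload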
This is a small process for debugging issues with a service and pod. The better option, though, is to use an Ingress and an ingress controller, which let you talk to your app server with a URL instead of an IP address and port number; a minimal sketch follows. However, even with an Ingress and ingress controller, the best way to verify that all the parts are working as expected is to follow these steps.
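For illustration, a minimal HTTP Ingress for this service might look like the sketch below (the hostname tcpserver.example.com is a placeholder, the API version matches the Kubernetes 1.14 era shown above, and an installed ingress controller is assumed; note that a raw TCP server would actually need a TCP-capable route or load balancer rather than a plain HTTP Ingress):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tcpserver
spec:
  rules:
    - host: tcpserver.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: tcpserver-ingress
              servicePort: 7777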
Please feel free to let me know if you run into any issues.

Thanks for the prompt answer.
Regarding Step 1,
I don't know where I could run curl 10.128.0.229:7777/endpoint inside the cluster, but I checked the status of the pod by going inside it; port 7777 is listening as expected.
$ oc rsh tcpserver-57c9b44748-k9dxg
sh-4.2$ netstat -nap | grep 7777
tcp6 0 0 127.0.0.1:7777 :::* LISTEN 1/java
Regarding Step 2,
run command "curl localhost:31777/endpoint" on Node where pod is deployed, it failed.
$ curl localhost:31777/endpoint
curl: (7) Failed to connect to localhost port 31777: Connection refused
That means it seems that port 31777 is not opened by OpenShift.
Do you have any ideas on how to check why port 31777 is not opened by OpenShift?
More information about the service definition:
apiVersion: v1
kind: Service
metadata:
  name: tcpserver-ingress
  labels:
    app: tcpserver
spec:
  selector:
    app: tcpserver
  type: NodePort
  ports:
    - protocol: TCP
      port: 7777
      targetPort: 7777
      nodePort: 31777
Service status:
$ oc describe svc tcpserver-ingress
Name: tcpserver-ingress
Namespace: myproject
Labels: app=tcpserver
Annotations: <none>
Selector: app=tcpserver
Type: NodePort
IP: 172.30.149.98
Port: <unset> 7777/TCP
TargetPort: 7777/TCP
NodePort: <unset> 31777/TCP
Endpoints: 10.128.0.229:7777
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Related

Unable to interact with the application via minikube

I'm learning kubernetes via a LinkedIn learning course. A tutorial I'm doing runs this hello world application via kubectl and minikube. Everything appears in working order, but I cannot interact with the application using minikube service helloworld. The request keeps timing out.
The tutorial first asks to create a deployment using the command kubectl create -f helloworld.yaml, then to expose the service via kubectl expose deployment helloworld --type=NodePort, and then to interact with the app via minikube service helloworld. The diagnostics after create and expose show that everything on my end matches the tutorial's setup, but the last step fails for me, whereas in the tutorial demo it launches the browser and shows the hello world app.
How would I go about debugging this error as an absolute beginner?
EDIT:
When I run kubectl describe services, I get the following output
$ kubectl describe services
Name: helloworld
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=helloworld
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.6.203
IPs: 10.96.6.203
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30433/TCP
Endpoints: 172.17.0.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
When I check port 30433 by doing nc -zv <hostname> 30433, I get an error:
nc: connectx to <hostname> port 30433 (tcp) failed: Connection refused
nc: connectx to <hostname> port 30433 (tcp) failed: Network is unreachable
You can try to access your application with the help of this shortcut - it is used to fetch minikube IP and a service’s NodePort:
minikube service --url helloworld
This command prints the Kubernetes service URL in the CLI, instead of trying to launch it in your default browser the way the minikube service helloworld command does. Using this URL, you will be able to access the exposed service in the browser.
In general, you can check the list of all available services in your minikube cluster and their URLs with the minikube service list command.
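For illustration, the list output looks roughly like this (the IP and ports are placeholders):
$ minikube service list
|-----------|------------|-------------|---------------------------|
| NAMESPACE |    NAME    | TARGET PORT |            URL            |
|-----------|------------|-------------|---------------------------|
| default   | helloworld |          80 | http://192.168.49.2:30433 |
|-----------|------------|-------------|---------------------------|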

Where do I find the host IP address for an app deployed in minikube

I'm deploying a spring boot app in minikube that connects to a database running on the host. Where do I find the IP address that the app can use to get back to the host? For docker I can use ifconfig and get the IP address from the docker0 entry. ifconfig shows another device with IP address 172.18.0.1. Would that be how my app would get back to the host?
I think I understood you correctly and this is what you are asking for.
Minikube is started as a VM on your machine, so you need to know the IP that Minikube starts with. This can be done with minikube status or minikube ip; the output might look like:
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.1
This will only provide you the IP address of Minikube, not of your application.
In order to connect to your app from outside Minikube, you need to expose it as a Service.
An example Service might look like this:
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort
  ports:
    - nodePort: 31317
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: webapp
You can see the results:
$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
postgres ClusterIP 10.0.0.140 <none> 5432/TCP 32m app=postgres
webapp NodePort 10.0.0.235 <none> 8080:31317/TCP 2s app=webapp
You will be able to connect to the webapp from inside the cluster using 10.0.0.235:8080, or from outside the cluster using the Minikube IP and port 31317.
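For example, from the host machine that could look like this (a sketch; minikube ip supplies the node IP):
$ curl http://$(minikube ip):31317/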
I also recommend going through the Hello Minikube tutorial.
It was the 172.18.0.1 IP address. I passed it to the Spring app running in minikube with a configmap like this:
kubectl create configmap springdatasourceurl --from-literal=SPRING_DATASOURCE_URL=jdbc:postgresql://172.18.0.1:5432/bookservice
The app also needed SPRING_DATASOURCE_DRIVER_CLASS_NAME to be set in a configmap, and the credentials SPRING_DATASOURCE_PASSWORD and SPRING_DATASOURCE_USERNAME to be set as secrets.
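Creating those objects might look something like this (a hedged sketch; the configmap/secret names and credential values are illustrative):
kubectl create configmap springdatasourcedriver --from-literal=SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.postgresql.Driver
kubectl create secret generic springdatasourcecreds --from-literal=SPRING_DATASOURCE_USERNAME=postgres --from-literal=SPRING_DATASOURCE_PASSWORD=changeme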
More information on ConfigMaps and Secrets can be found in the Kubernetes documentation.

Kubernetes Service not being assigned an (external) IP address

There are various answers to very similar questions around SO that all show what I expect my deployment to look like; however, mine does not.
I am running Minikube 0.25, with Kubernetes 1.9, on Windows 10.
I have successfully created a node and a replication controller, and a single pod template has been replicated 10 times.
The node is Minikube, and is assigned the IP address 10.49.106.251
The dashboard is available at 10.49.106.251:30000
I am deploying a service with a YAML file, but the service is never assigned an external IP - the result is the same if I happen to use kubectl expose.
The YAML file that I am using:
kind: Service
apiVersion: v1
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 8080
I can also use the YAML file to assign an external IP; I assign it the same value as the node's IP address. Either way, no connection to the service is possible. I should also point out that all 10 replicated pods match the selector.
The results of running kubectl get svc, first with the default and then after updating the external IP, are below:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-service NodePort 10.108.61.233 <none> 8080:32406/TCP 1m
hello-service NodePort 10.108.61.233 10.49.106.251 8080:32406/TCP 1m
The tutorial I have been following, and the other answers on SO show a result similar to:
hello-service NodePort 10.108.61.233 <nodes> 8080:32406/TCP 1m
Where the difference is that the external IP is set to <nodes>
I have encountered a number of issues when running locally - is this just another case of doing so, or has someone else identified a way to get around the external IP assignment issue?
For local development purposes, I have also run into the problem of exposing a 'public IP' for my local development cluster.
Fortunately, I found a kubectl command that can help:
kubectl port-forward service/service-name 9092
Here 9092 is the service port to expose locally, so that I can access applications inside the cluster from my local development environment.
The important note is that this is not a production-grade solution.
It works well as a temporary hack to get at the cluster's insides.
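For example, with the hello-service from this question, it might look like this (a sketch; the local port choice is arbitrary):
$ kubectl port-forward service/hello-service 8080:8080
# in another terminal, the service is now reachable on localhost:
$ curl http://localhost:8080/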
Using NodePort means it will open a port on all nodes of your cluster. In your example above, the port exposed to the outside world is 32406.
In order to access hello-service (if it is HTTP), the URL will be http://[node IP]:32406/. This will hit your minikube node, and the request will be routed to your pods in round-robin fashion.
I had the same problem when trying to deploy a simple helloworld image locally with Kubernetes v1.9.2.
After two weeks of attempts, it seems that nginx web server applications listen internally on port 80, not 8080.
So this should work: kubectl expose deployment hello-service --type=NodePort --port=80

Cannot reach exposed external ip on google cloud [closed]

I followed the kubernetes-engine tutorial using local gcloud in the terminal. Everything looks like it is working, but I can't reach the exposed external IP http://104.197.4.162/ in my browser, as the tutorial says I should. Thank you!
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-web LoadBalancer 10.11.245.151 104.197.4.162 80:30135/TCP 1m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-web-7d4f9779bf-lw9st 1/1 Running 0 1m
$ kubectl describe svc hello-web
Name: hello-web
Namespace: default
Labels: run=hello-web
Annotations: <none>
Selector: run=hello-web
Type: LoadBalancer
IP: 10.11.245.151
LoadBalancer Ingress: 104.197.4.162
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30135/TCP
Endpoints: 10.8.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ curl 104.197.4.162:80
curl: (7) Failed to connect to 104.197.4.162 port 80: Connection refused
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
I think you need to open the firewall and access your deployment on the Compute Engine instance via the instance's external IP address and port; a sketch follows. You can use curl ip:port to check it.
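Opening the NodePort on GCP might look something like this (a hedged sketch; the rule name is a placeholder, and 30135 is the NodePort from the output above):
$ gcloud compute firewall-rules create allow-nodeport-30135 --allow tcp:30135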
As the tutorial says, and I quote:
Note: Kubernetes Engine assigns the external IP address to the Service resource—not the Deployment. If you want to find out the external IP that Kubernetes Engine provisioned for your application, you can inspect the Service with the kubectl get service command
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-web 10.3.251.122 203.0.113.0 80:30877/TCP 3d
Once you've determined the external IP address for your application, copy the IP address. Point your browser to this URL (such as http://203.0.113.0) to check if your application is accessible.
So, you'll need to run $ kubectl get service hello-web to know the IP address.

Accessing Kubernetes pods/services through one IP (Preferably Master Node)

I have a local Kubernetes installation with a master node and two worker nodes. Is there a way to access all services/pods installed on Kubernetes through the master node's IP?
What I mean is: say I have a test service running on port 30001 on each worker, and I want to access this service as http://master-node:30001. Any help is appreciated.
You can use "the proxy verb" to acces nodes, pods, or services through the master. Only HTTP and HTTPS can be proxied. See these docs and these docs.
There are some ways to do it:
Define a NodePort Kubernetes service
Use kubefwd or the kubectl port-forward command
Use the proxy command (only supports HTTP & HTTPS)
In this answer, I explain how to define a NodePort Service.
The NodePort service is explained as follows (Service - Kubernetes):
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Here is an example of the NodePort service for PostgreSQL:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
  labels:
    app: postgres
spec:
  ports:
    - port: 5432
  type: NodePort
  selector:
    app: postgres
The port field stands for both the service port and the default target port. There is also a nodePort field that allows you to choose the port used to access the service from outside the cluster (via the node's IP and the nodePort), as sketched below.
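A sketch of pinning it explicitly (30864 matches the output below; nodePort values must fall in the cluster's NodePort range, 30000-32767 by default):
spec:
  ports:
    - port: 5432        # service port, also the default targetPort
      nodePort: 30864   # fixed NodePort instead of a randomly assigned one
  type: NodePort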
To view the assigned nodePort (if you don't specify it in the manifest), you can run the command:
kubectl get services -n postgres
The output should look similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
postgres NodePort 10.96.156.75 <none> 5432:30864/TCP 6d9h app=postgres
In this case, the nodePort is 30864, this is the port to access to the service from outside the cluster.
To find out the node's IP, the command to use is:
kubectl get nodes -o wide
The output should look similar to:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
homedev-control-plane Ready master 30d v1.19.1 172.18.0.2 <none> Ubuntu Groovy Gorilla (development branch) 5.9.1-arch1-1 containerd://1.4.0
If what you need is the IP only:
kubectl get nodes -o wide --no-headers | awk '{print $6}'
In this case, the node's IP is 172.18.0.2. Hence, to connect to Postgres in the local Kubernetes cluster from your host machine, the command would look like this:
psql -U postgres -h 172.18.0.2 -p 30864 -d postgres