Cannot reach exposed external ip on google cloud [closed] - kubernetes

I followed the Kubernetes Engine tutorial, using local gcloud in the terminal. Everything looks like it's working, but I can't reach the exposed external IP http://104.197.4.162/ in my browser, as the tutorial says I should. Thank you!
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-web LoadBalancer 10.11.245.151 104.197.4.162 80:30135/TCP 1m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-web-7d4f9779bf-lw9st 1/1 Running 0 1m
$ kubectl describe svc hello-web
Name: hello-web
Namespace: default
Labels: run=hello-web
Annotations: <none>
Selector: run=hello-web
Type: LoadBalancer
IP: 10.11.245.151
LoadBalancer Ingress: 104.197.4.162
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30135/TCP
Endpoints: 10.8.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ curl 104.197.4.162:80
curl: (7) Failed to connect to 104.197.4.162 port 80: Connection refused
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app

I think you need to open the firewall and access your deployment on the Compute Engine instances via each instance's external IP address and the node port. You can use curl ip:port to check it.
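A minimal sketch of that check, assuming the NodePort 30135 from the output above (the firewall rule name is arbitrary):
$ gcloud compute firewall-rules create allow-hello-web --allow tcp:30135   # 30135 is the NodePort from kubectl get service
$ kubectl get nodes -o wide                                                # note a node's EXTERNAL-IP
$ curl http://<node-external-ip>:30135                                     # substitute the node's EXTERNAL-IP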

As the tutorial says, and I quote:
Note: Kubernetes Engine assigns the external IP address to the Service
resource—not the Deployment. If you want to find out the external IP
that Kubernetes Engine provisioned for your application, you can
inspect the Service with the kubectl get service command
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-web 10.3.251.122 203.0.113.0 80:30877/TCP 3d
Once you've determined the external IP address for your application,
copy the IP address. Point your browser to this URL (such as
http://203.0.113.0) to check if your application is accessible.
So, you'll need to run $ kubectl get service hello-web to know the IP address.
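A minimal sketch of that check (substitute whatever EXTERNAL-IP your own cluster reports):
$ kubectl get service hello-web --watch   # --watch keeps printing updates until you interrupt it
$ curl http://<EXTERNAL-IP>/              # use the EXTERNAL-IP from the output above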

Related

Can I access my Kubernetes Dashboard via DomainName pointing to specific server instead of localhost

Document followed
https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html
I am able to set up the dashboard and access it using the link http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login
The issue with this is that EVERY USER HAS TO FOLLOW THE SAME STEPS TO ACCESS THE DASHBOARD.
I was wondering whether there is some way we can access the dashboard via a domain name, so that everyone can access it without much setup required.
There are two approaches to exposing the Dashboard: NodePort and LoadBalancer.
I'll demonstrate both cases and some of their pros and cons.
type: NodePort
This way your dashboard will be available at https://<MasterIP>:<Port>.
I'll start from a Dashboard that is already deployed and running as ClusterIP (just like yours).
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.0.11.223 <none> 443/TCP 11m
We patch the service to change the ServiceType:
$ kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
service/kubernetes-dashboard patched
Note: You can also apply the change in YAML format, editing the field type: ClusterIP to type: NodePort; instead I wanted to show a direct approach with kubectl patch using JSON format to patch the same field.
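If you prefer the YAML route mentioned in the note, a minimal sketch would look like this:
$ kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o yaml > dashboard-svc.yaml
# change "type: ClusterIP" to "type: NodePort" in dashboard-svc.yaml (the file name is arbitrary), then:
$ kubectl apply -f dashboard-svc.yaml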
Now let's list the service again to see the new port:
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.0.11.223 <none> 443:31681/TCP 13m
Note: Before accessing it from outside the cluster, you must configure the nodes' security group (on EKS) or firewall rules (on GKE) to allow incoming traffic through the exposed port.
Below is my example creating the rule on Google Cloud, but the same concept applies to EKS.
$ gcloud compute firewall-rules create test-node-port --allow tcp:31681
Creating firewall...⠹Created [https://www.googleapis.com/compute/v1/projects/owilliam/global/firewalls/test-node-port].
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
test-node-port default INGRESS 1000 tcp:31681 False
$ kubectl get nodes --output wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
gke-cluster-1-pool-1-4776b3eb-16t7 Ready <none> 18d v1.15.8-gke.3 10.128.0.13 35.238.162.157
And I'll access it using https://35.238.162.157:31681.
type: LoadBalancer
This way your dashboard will be available at https://<IP>.
With LoadBalancer, your cloud provider automates the firewall rule and port forwarding, assigning an IP for it (you may be charged extra depending on your plan).
Same as before, I deleted the service and created it again as ClusterIP:
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.0.2.196 <none> 443/TCP 15s
$ kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec": {"type": "LoadBalancer"}}'
service/kubernetes-dashboard patched
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard LoadBalancer 10.0.2.196 <pending> 443:30870/TCP 58s
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard LoadBalancer 10.0.2.196 35.232.133.138 443:30870/TCP 11m
Note: When you apply it, the EXTERNAL-IP will be in the <pending> state; after a few minutes a public IP should be assigned, as you can see above.
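Rather than polling by hand, you can let kubectl watch the Service until the address is assigned:
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard --watch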
You can access it using https://35.232.133.138.
Security Considerations:
Your connection to the Dashboard, when exposed, is always over HTTPS; you may get a warning about the auto-generated certificate every time you connect, unless you replace it with a trusted one.
Since the Dashboard is not meant to be widely exposed, I'd suggest keeping access via the public IP (or the generated DNS name in the case of AWS, e.g. *****.us-west-2.elb.amazonaws.com).
If you really want to integrate it with your main domain name, I'd suggest putting it behind another layer of authentication on your website.
New users will still need the access token, but no one will have to go through that whole process to expose the Dashboard; you only have to pass the IP/DNS address and the token to access it.
This token has cluster-admin access, so keep it as safe as you'd keep a root password.
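For reference, a common way to retrieve that token, assuming the eks-admin service account from the AWS tutorial linked in the question (adjust the account name to whatever you actually created):
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')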
If you have any doubts, let me know!
The deep problem is authentication. If you want the dashboard to respect Kubernetes RBAC rules for each user, it needs their Kubernetes credentials, and those are usually complicated; for EKS they are based on your AWS credentials. Some people just give the dashboard a static set of permissions and then put some other, ordinary web authentication in front of it.

NodePort doesn't work in OpenShift CodeReady Container

I installed the latest OpenShift CodeReady Containers on a CentOS VM, and then ran a TCP server app written in Java on OpenShift. The TCP server is listening on port 7777.
I ran the app and exposed it as a service with NodePort; everything seems to run well. The pod port is 7777, and the node port is 31777.
$ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tcpserver-57c9b44748-k9dxg 1/1 Running 0 113m 10.128.0.229 crc-2n9vw-master-0 <none> <none>
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tcpserver-ingress NodePort 172.30.149.98 <none> 7777:31777/TCP 18m
Then I got the node IP; the command shows it as 192.168.130.11, and I can ping this IP from my VM successfully.
$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
crc-2n9vw-master-0 Ready master,worker 26d v1.14.6+6ac6aa4b0 192.168.130.11 <none> Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa) 4.18.0-147.0.3.el8_1.x86_64 cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
Now I run a client app located on my VM. Because I can ping the OpenShift node IP, I thought the client would connect successfully, but the connection times out: my client fails to connect to the server running on OpenShift.
Please advise on how to troubleshoot the issue, or share any ideas.
I understand your problem. From what you described, I can see your node port is 31777.
The best way to debug this problem is to go step by step.
Step 1:
Check if you are able to access your app server using the pod IP and port, i.e. curl 10.128.0.229:7777/endpoint, from one of the nodes within your cluster. This helps you check whether the pod is working or not; kubectl describe pod also gives you useful detail here.
Step 2:
After that, on the node where the pod is deployed, i.e. 192.168.130.11, try to access your app server using curl localhost:31777/endpoint. If this works, the NodePort is accessible, i.e. your service is working without any issues.
Step 3:
After that, try to connect to your node using curl 192.168.130.11:31777/endpoint from the VM running your client. Note that 192.168.x.x is a private (RFC 1918) range, so I am assuming your client is within the same network and able to talk to 192.168.130.11:31777; otherwise, make sure you open port 31777 of 192.168.130.11 to the VM IP that runs the client.
This is a short process for debugging issues with a service and pod. The best setup, though, is to use an Ingress and an ingress controller, which let you reach your app server with a URL instead of an IP address and port number. Even with an Ingress and ingress controller, however, the best way to confirm that all the parts work as expected is to follow these steps.
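Condensed into commands, the three checks above look like this (the /endpoint path is illustrative; use whatever path your app actually serves):
$ curl 10.128.0.229:7777/endpoint      # Step 1: pod IP, from a node inside the cluster
$ curl localhost:31777/endpoint        # Step 2: NodePort, on the node running the pod
$ curl 192.168.130.11:31777/endpoint   # Step 3: node IP and NodePort, from the client VM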
Please feel free to let me know if you run into any issues.
Thanks for the prompt answer.
Regarding Step 1:
I don't know where I could run "curl 10.128.0.229:7777/endpoint" inside the cluster, but I checked the status of the pod by going inside it; port 7777 is listening as expected.
$ oc rsh tcpserver-57c9b44748-k9dxg
sh-4.2$ netstat -nap | grep 7777
tcp6 0 0 127.0.0.1:7777 :::* LISTEN 1/java
Regarding Step 2:
I ran "curl localhost:31777/endpoint" on the node where the pod is deployed, and it failed.
$ curl localhost:31777/endpoint
curl: (7) Failed to connect to localhost port 31777: Connection refused
That means it seems port 31777 is not opened by OpenShift.
Do you have any ideas on how to check why 31777 is not opened by OpenShift?
More information about service definition:
apiVersion: v1
kind: Service
metadata:
  name: tcpserver-ingress
  labels:
    app: tcpserver
spec:
  selector:
    app: tcpserver
  type: NodePort
  ports:
  - protocol: TCP
    port: 7777
    targetPort: 7777
    nodePort: 31777
Service status:
$ oc describe svc tcpserver-ingress
Name: tcpserver-ingress
Namespace: myproject
Labels: app=tcpserver
Annotations: <none>
Selector: app=tcpserver
Type: NodePort
IP: 172.30.149.98
Port: <unset> 7777/TCP
TargetPort: 7777/TCP
NodePort: <unset> 31777/TCP
Endpoints: 10.128.0.229:7777
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Kubernetes service showing External IP '<pending>'. How can I enable it?

Having trouble getting a WordPress Kubernetes service to listen on my machine so that I can access it with my web browser. It just says "External IP" is pending. I'm using the Kubernetes configuration from Docker Edge v18.06 on Mac, with advanced Kube config enabled (not Swarm).
Following this tutorial: https://www.youtube.com/watch?time_continue=65&v=jWupQjdjLN0
And using .yaml config files from https://github.com/kubernetes/examples/tree/master/mysql-wordpress-pd
MACPRO:mysql-wordpress-pd me$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 48m
wordpress LoadBalancer 10.99.205.222 <pending> 80:30875/TCP 19m
wordpress-mysql ClusterIP None <none> 3306/TCP 19m
The commands to get things running, to see for yourself:
kubectl create -f local-volumes.yaml
kubectl create secret generic mysql-pass --from-literal=password=DockerCon
kubectl create -f mysql-deployment.yaml
kubectl create -f wordpress-deployment.yaml
kubectl get pods
kubectl get services
Start the admin console to see more detailed config in your web browser:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
I'm hoping someone can clarify things for me here. Thank you.
For Docker for Mac, you should use your host's DNS name or IP address to access exposed services. The "external IP" field will never fill in here. (If you were in an environment like AWS or GCP where a LoadBalancer Kubernetes Service creates a cloud-hosted load balancer, the cloud provider integration would provide the load balancer's IP address here, but that doesn't make sense for single-host solutions.)
Note that I've had some trouble figuring out which port is involved; answers to that issue suggest you need to use the service port (80) but you might need to try other things.
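A quick way to test from the host, assuming the ports from the kubectl get services output in the question (80 is the service port, 30875 the NodePort):
$ curl http://localhost:80/      # the wordpress Service port
$ curl http://localhost:30875/   # the NodePort, if the service port doesn't answer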

Accessing Kubernetes pods/services through one IP (Preferably Master Node)

I have a local Kubernetes installation with a master node and two worker nodes. Is there a way to access all services/pods that will be installed on Kubernetes through the master node's IP?
What I mean is: say I have a test service running on port 30001 on each worker, and I want to access this service like http://master-node:30001. Any help is appreciated.
You can use "the proxy verb" to access nodes, pods, or services through the master. Only HTTP and HTTPS can be proxied; see the Kubernetes documentation on the apiserver proxy.
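In practice the proxy verb looks something like this (the namespace and service name below are placeholders; the URL scheme comes from the API server's proxy endpoint):
$ kubectl proxy
$ curl http://localhost:8001/api/v1/namespaces/default/services/test-service:80/proxy/   # "test-service" is a placeholder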
There are some ways to do it:
Define a NodePort Kubernetes service
Use kubefwd or port forwarding command
Use proxy command (Only support HTTP & HTTPS)
In this answer, I explain how to define a NodePort Service.
The NodePort service type is explained as follows (from Service - Kubernetes):
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Here is an example of the NodePort service for PostgreSQL:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
  labels:
    app: postgres
spec:
  ports:
  - port: 5432
  type: NodePort
  selector:
    app: postgres
The port field stands for both the service port and the default target port. There is also a nodePort field that allows you to choose the port used to access the service from outside the cluster (via the node's IP and the nodePort).
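If you want to pin the node port yourself rather than letting Kubernetes pick one, add the nodePort field to the ports entry, for example:
  ports:
  - port: 5432
    nodePort: 30864   # example value; must fall within the cluster's NodePort range (30000-32767 by default)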
To view the assigned nodePort (if you don't specify it in the manifest), you can run the command:
kubectl get services -n postgres
The output should look similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
postgres NodePort 10.96.156.75 <none> 5432:30864/TCP 6d9h app=postgres
In this case, the nodePort is 30864; this is the port used to access the service from outside the cluster.
To find out the node's IP, the command to use is:
kubectl get nodes -o wide
The output should look similar to:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
homedev-control-plane Ready master 30d v1.19.1 172.18.0.2 <none> Ubuntu Groovy Gorilla (development branch) 5.9.1-arch1-1 containerd://1.4.0
If what you need is the IP only:
kubectl get nodes -o wide --no-headers | awk '{print $6}'
In this case, the node's IP is 172.18.0.2. Hence, to connect to Postgres in the local Kubernetes cluster from your host machine, the command would look like this:
psql -U postgres -h 172.18.0.2 -p 30864 -d postgres

Do Kubernetes service IPs change?

I'm very new to kubernetes/docker, so apologies if this is a silly question.
I have a pod that is accessing a few services. In my container I'm running a python script and need to access the service. Currently I'm doing this using the services' IP addresses.
Is the service IP address stable or is it better to use environment variables? If so, some tips on doing that would be great.
The opening paragraph of the Services Documentation gives a motivation for services which implies stable IP addresses, but I never see it explicitly stated:
While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?
Enter Services.
My pod spec for reference:
kind: Pod
apiVersion: v1
metadata:
  name: fetchdataiso
  labels:
    name: fetchdataiso
spec:
  containers:
  - name: fetchdataiso
    image: 192.111.1.11:5000/ncllc/fetch_data
    command: ["python"]
    args: ["feed/fetch_data.py", "-hf", "10.222.222.51", "-pf", "8880", "-hi", "10.223.222.173", "-pi", "9101"]
The short answer is "Yes, the service IP can change"
$ kubectl apply -f test.svc.yml
service "test" created
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.12.0.1 <none> 443/TCP 10d
test 10.12.172.156 <none> 80/TCP 6s
$ kubectl delete svc test
service "test" deleted
$ kubectl apply -f test.svc.yml
service "test" created
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.12.0.1 <none> 443/TCP 10d
test 10.12.254.241 <none> 80/TCP 3s
The long answer is that if you use it right, you will have no problem with it. What is even more important in the scope of your question is that ENV variables are far worse than DNS for this kind of coupling.
You should refer to your service by its service name, or service.namespace, or even the full path, something along the lines of test.default.svc.cluster.local. This gets resolved to the service's ClusterIP, and unlike your ENVs it can be re-resolved to a new IP (which will probably never happen unless you explicitly delete and recreate the service), while the ENV of a running process will never change.
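Applied to the pod spec from the question, that would mean replacing the hard-coded addresses with service names; the names below are hypothetical, so substitute whatever your services are actually called:
args: ["feed/fetch_data.py", "-hf", "feed-svc.default.svc.cluster.local", "-pf", "8880", "-hi", "ingest-svc.default.svc.cluster.local", "-pi", "9101"]   # hypothetical service names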
The service IP address is stable. You should only need to use environment variables if you don't have a better way of discovering the IP address (e.g. DNS).
If you use the DNS cluster add-on within your cluster to access your services, and your service is called foo in namespace bar, you can also access it as foo.bar, which is likely more meaningful than a plain IP address.
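A quick way to verify that resolution from inside the cluster (busybox here is an assumption; any image that ships nslookup works):
$ kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup foo.bar   # busybox is an assumption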
See http://kubernetes.io/docs/user-guide/services/#dns