Kubernetes DNS - Let a service contact itself via DNS

Pods in a Kubernetes cluster can be reached by sending network requests to the DNS name of a service that they are a member of. Requests have to be sent to [service].[namespace].svc.cluster.local and get load balanced across all members of that service.
This works fine to let one pod reach another pod, but it fails if a pod tries to reach itself via a service that it is a member of. It always results in a timeout.
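For example, with the namespace and service used below, the name follows that pattern and can be checked from inside any pod (a small sketch; the address comes from the debug output further down):
nslookup message-service.message.svc.cluster.local
# should return the Service's ClusterIP, 10.107.209.9 in the debug output below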
Is this a bug in Kubernetes (in my case minikube v0.35.0) or is some additional configuration required?
Here's some debug info:
Let's contact the service from some other pod. This works fine:
daemon@auth-796d88df99-twj2t:/opt/docker$ curl -v -X POST -H "Accept: application/json" --data '{}' http://message-service.message.svc.cluster.local:9000/message/get-messages
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 10.107.209.9...
* TCP_NODELAY set
* Connected to message-service.message.svc.cluster.local (10.107.209.9) port 9000 (#0)
> POST /message/get-messages HTTP/1.1
> Host: message-service.message.svc.cluster.local:9000
> User-Agent: curl/7.52.1
> Accept: application/json
> Content-Length: 2
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 2 out of 2 bytes
< HTTP/1.1 401 Unauthorized
< Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Security-Policy: default-src 'self'
< X-Permitted-Cross-Domain-Policies: master-only
< Date: Wed, 20 Mar 2019 04:36:51 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 12
<
* Curl_http_done: called premature == 0
* Connection #0 to host message-service.message.svc.cluster.local left intact
Unauthorized
Now we try to let the pod contact the service that it is a member of (same curl command, run from the message pod):
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 10.107.209.9...
* TCP_NODELAY set
* connect to 10.107.209.9 port 9000 failed: Connection timed out
* Failed to connect to message-service.message.svc.cluster.local port 9000: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to message-service.message.svc.cluster.local port 9000: Connection timed out
If I've read the curl debug log correctly, the DNS name resolves to the IP address 10.107.209.9. The pod can be reached from any other pod via that IP, but the pod cannot use it to reach itself.
Here are the network interfaces of the pod that tries to reach itself:
daemon@message-58466bbc45-lch9j:/opt/docker$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
296: eth0@if297: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.9/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Here is the kubernetes file deployed to minikube:
apiVersion: v1
kind: Namespace
metadata:
  name: message
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: message
  name: message
  namespace: message
spec:
  replicas: 1
  selector:
    matchLabels:
      app: message
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: message
    spec:
      containers:
      - name: message
        image: message-impl:0.1.0-SNAPSHOT
        imagePullPolicy: Never
        ports:
        - name: http
          containerPort: 9000
          protocol: TCP
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KAFKA_KUBERNETES_NAMESPACE
          value: kafka
        - name: KAFKA_KUBERNETES_SERVICE
          value: kafka-svc
        - name: CASSANDRA_KUBERNETES_NAMESPACE
          value: cassandra
        - name: CASSANDRA_KUBERNETES_SERVICE
          value: cassandra
        - name: CASSANDRA_KEYSPACE
          value: service_message
---
# Service for discovery
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  ports:
  - port: 9000
    protocol: TCP
  selector:
    app: message
---
# Expose this service to the api gateway
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: message
  namespace: message
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: api.fload.cf
    http:
      paths:
      - path: /message
        backend:
          serviceName: message-service
          servicePort: 9000

This is a known minikube issue: the pod's request to the Service IP is load-balanced back to the pod itself, and by default the bridge does not forward that hairpin traffic. The discussion contains the following workarounds:
1) Enable promiscuous mode on the bridge:
minikube ssh
sudo ip link set docker0 promisc on
2) Use a headless service (clusterIP: None), as sketched below.
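A minimal sketch of workaround 2) for the message-service above, assuming the same namespace and labels as in the posted manifest; with clusterIP: None the DNS name resolves to the pod IPs directly instead of a load-balanced virtual IP, so the hairpin through the ClusterIP is avoided:
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  clusterIP: None   # headless: DNS returns the pod IPs, no virtual IP
  ports:
  - port: 9000
    protocol: TCP
  selector:
    app: message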

Related

haproxy cannot detect services after reboot

I have Redis nodes:
NAME READY STATUS RESTARTS AGE
pod/redis-haproxy-deployment-65497cd78d-659tq 1/1 Running 0 31m
pod/redis-sentinel-node-0 3/3 Running 0 81m
pod/redis-sentinel-node-1 3/3 Running 0 80m
pod/redis-sentinel-node-2 3/3 Running 0 80m
pod/ubuntu 1/1 Running 0 85m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/redis-haproxy-balancer ClusterIP 10.43.92.106 <none> 6379/TCP 31m
service/redis-sentinel-headless ClusterIP None <none> 6379/TCP,26379/TCP 99m
service/redis-sentinel-metrics ClusterIP 10.43.72.97 <none> 9121/TCP 99m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/redis-haproxy-deployment 1/1 1 1 31m
NAME DESIRED CURRENT READY AGE
replicaset.apps/redis-haproxy-deployment-65497cd78d 1 1 1 31m
NAME READY AGE
statefulset.apps/redis-sentinel-node 3/3 99m
I connect to the master redis using the following command:
redis-cli -h redis-haproxy-balancer
redis-haproxy-balancer:6379> keys *
1) "sdf"
2) "sdf12"
3) "s4df12"
4) "s4df1"
5) "fsafsdf"
6) "!s4d!1"
7) "s4d!1"
Here is my configuration file haproxy.cfg:
global
    daemon
    maxconn 256

defaults REDIS
    mode tcp
    timeout connect 3s
    timeout server 3s
    timeout client 3s

frontend front_redis
    bind 0.0.0.0:6379
    use_backend redis_cluster

backend redis_cluster
    mode tcp
    option tcp-check
    tcp-check comment PING\ phase
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check comment role\ check
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check comment QUIT\ phase
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis-0 redis-sentinel-node-0.redis-sentinel-headless:6379 maxconn 1024 check inter 1s
    server redis-1 redis-sentinel-node-1.redis-sentinel-headless:6379 maxconn 1024 check inter 1s
    server redis-2 redis-sentinel-node-2.redis-sentinel-headless:6379 maxconn 1024 check inter 1s
Here is the service I connect to in order to reach the Redis master - haproxy-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis-haproxy-balancer
spec:
  type: ClusterIP
  selector:
    app: redis-haproxy
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
Here is the deployment that refers to the configuration file - redis-haproxy-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-haproxy-deployment
  labels:
    app: redis-haproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-haproxy
  template:
    metadata:
      labels:
        app: redis-haproxy
    spec:
      containers:
      - name: redis-haproxy
        image: haproxy:lts-alpine
        volumeMounts:
        - name: redis-haproxy-config-volume
          mountPath: /usr/local/etc/haproxy/haproxy.cfg
          subPath: haproxy.cfg
        ports:
        - containerPort: 6379
      volumes:
      - name: redis-haproxy-config-volume
        configMap:
          name: redis-haproxy-config
          items:
          - key: haproxy.cfg
            path: haproxy.cfg
After restarting redis I cannot connect to it with redis-haproxy-balancer...
[NOTICE] (1) : New worker (8) forked
[NOTICE] (1) : Loading success.
[WARNING] (8) : Server redis_cluster/redis-0 is DOWN, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1000ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] (8) : Server redis_cluster/redis-1 is DOWN, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1005ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] (8) : Server redis_cluster/redis-2 is DOWN, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] (8) : backend 'redis_cluster' has no server available!
It only works by connecting directly: redis-sentinel-node-0.redis-sentinel-headless
What is wrong with my haproxy?
You will need to add a resolvers section and point it at the Kubernetes DNS, so that HAProxy re-resolves the backend hostnames at runtime; by default it only resolves them at startup, and the pod IPs behind the headless service change when Redis restarts.
Kubernetes: DNS for Services and Pods
HAProxy: Server IP address resolution using DNS
resolvers mydns
    nameserver dns1 Kubernetes-DNS-Service-ip:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry 1s
    hold other 30s
    hold refused 30s
    hold nx 30s
    hold timeout 30s
    hold valid 10s
    hold obsolete 30s

backend redis_cluster
    mode tcp
    option tcp-check
    ... # your other settings
    server redis-0 redis-sentinel-node-0.redis-sentinel-headless:6379 resolvers mydns maxconn 1024 check inter 1s
    server redis-1 redis-sentinel-node-1.redis-sentinel-headless:6379 resolvers mydns maxconn 1024 check inter 1s
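The Kubernetes-DNS-Service-ip placeholder is the ClusterIP of the cluster's DNS Service. Assuming it is named kube-dns in kube-system (the usual name even when CoreDNS is deployed, though it can differ per distribution), it can be read like this:
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'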

service via ingress not reachable, only via nodeport:ip

Good afternoon,
I'd like to ask.
I'm a "little" bit confused about Ingress and its traffic flow.
I created a test nginx deployment with a service and an ingress (in titaniun cloud).
I have no direct connection via browser, so I'm using tunneling to get access via a browser with a SOCKS5 proxy in Firefox.
deployment:
k describe deployments.apps dpl-nginx
Name: dpl-nginx
Namespace: xxx
CreationTimestamp: Thu, 09 Jun 2022 07:20:48 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
field.cattle.io/publicEndpoints:
[{"port":32506,"protocol":"TCP","serviceName":"xxx:xxx-svc","allNodes":true},{"addresses":["172.xx.xx.117","172.xx.xx.131","172.xx.x...
Selector: app=xxx-nginx
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=xxx-nginx
Containers:
nginx:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html/ from nginx-index-file (rw)
Volumes:
nginx-index-file:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: index-html-configmap
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: xxx-dpl-nginx-6ff8bcd665 (2/2 replicas created)
Events: <none>
service:
Name: xxx-svc
Namespace: xxx
Labels: <none>
Annotations: field.cattle.io/publicEndpoints: [{"port":32506,"protocol":"TCP","serviceName":"xxx:xxx-svc","allNodes":true}]
Selector: app=xxx-nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.95.33
IPs: 10.43.95.33
Port: http-internal 888/TCP
TargetPort: 80/TCP
NodePort: http-internal 32506/TCP
Endpoints: 10.42.0.178:80,10.42.0.179:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
ingress:
Name: test-ingress
Namespace: xxx
Address: 172.xx.xx.117,172.xx.xx.131,172.xx.xx.132
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
test.xxx.io
/ xxx-svc:888 (10.42.0.178:80,10.42.0.179:80)
Annotations: field.cattle.io/publicEndpoints:
[{"addresses":["172.xx.xx.117","172.xx.xx.131","172.xx.xx.132"],"port":80,"protocol":"HTTP","serviceName":"xxx:xxx-svc","ingressName...
nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m34s (x37 over 3d21h) nginx-ingress-controller Scheduled for sync
When I try curl/wget to the host or node IP directly from the cluster, both options work and I can get my custom index:
wget test.xxx.io --no-proxy --no-check-certificate
--2022-06-13 10:35:12-- http://test.xxx.io/
Resolving test.xxx.io (test.xxx.io)... 172.xx.xx.132, 172.xx.xx.131, 172.xx.xx.117
Connecting to test.xxx.io (test.xxx.io)|172.xx.xx.132|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 197 [text/html]
Saving to: ‘index.html.1’
index.html.1 100%[===========================================================================================>] 197 --.-KB/s in 0s
curl:
curl test.xxx.io --noproxy '*' -I
HTTP/1.1 200 OK
Date: Mon, 13 Jun 2022 10:36:31 GMT
Content-Type: text/html
Content-Length: 197
Connection: keep-alive
Last-Modified: Thu, 09 Jun 2022 07:20:49 GMT
ETag: "62a19f51-c5"
Accept-Ranges: bytes
nslookup
nslookup, dig, and ping from the cluster work as well:
nslookup test.xxx.io
Server: 127.0.0.53
Address: 127.0.0.53#53
Name: test.xxx.io
Address: 172.xx.xx.131
Name: test.xxx.io
Address: 172.xx.xx.132
Name: test.xxx.io
Address: 172.xx.xx.117
dig
dig test.xxx.io +noall +answer
test.xxx.io. 22 IN A 172.xx.xx.117
test.xxx.io. 22 IN A 172.xx.xx.132
test.xxx.io. 22 IN A 172.xx.xx.131
ping
ping test.xxx.io
PING test.xxx.io (172.xx.xx.132) 56(84) bytes of data.
64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=2 ttl=64 time=0.042 ms
Also from the ingress-nginx pod, curl works fine...
In Firefox via nodeIP:port I can get the index, but via the host it's not possible.
It seems that the ingress is forwarding traffic to the pod, so does this issue only have something to do with the browser?
Thanks for any advice.
So for clarification:
As I'm using tunneling to reach the ingress from my local PC via a browser with a SOCKS5 proxy,
ssh xxxx@100.xx.xx.xx -D 1090
the solution is trivial: add
172.xx.xx.117 test.xxx.io
to /etc/hosts on the jump server.
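The same path can also be verified without a browser by sending curl through the tunnel (a sketch, assuming the -D 1090 tunnel above is running; --socks5-hostname makes curl resolve the name on the proxy side, which is what lets the /etc/hosts entry on the jump server take effect):
curl -I --socks5-hostname localhost:1090 http://test.xxx.io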

Nginx Controller in Kubernetes: Handshaking to upstream - peer closed connection in SSL handshake

This is on a local environment built with kubeadm. The cluster is made of a master and 2 worker nodes.
What is failing:
Exposing the nginx-ingress controller externally via a Service of type LoadBalancer, trying TLS termination at the Kubernetes cluster.
Here is the exposed Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nginx-ingress LoadBalancer 10.101.5.75 192.168.1.82 80:30745/TCP,443:30092/TCP
web-service ClusterIP 10.101.26.176 <none> 8080/TCP
What is working:
I am able to reach the web application over HTTP on port 80 from outside.
Here are the pods in the cluster:
NAME READY STATUS RESTARTS AGE IP NODE
nginx-ingress-7c5588544d-4mw9d 1/1 Running 0 38m 10.44.0.2 node1
web-deployment-7d84778bc6-52pq7 1/1 Running 1 19h 10.44.0.1 node1
web-deployment-7d84778bc6-9wwmn 1/1 Running 1 19h 10.36.0.2 node2
Test result for TLS termination
Client side:
curl -k https://example.com -v
* Rebuilt URL to https://example.com/
* Trying 192.168.1.82...
* Connected to example.com (192.168.1.82) port 443 (#0)
* found 127 certificates in /etc/ssl/certs/ca-certificates.crt
* found 513 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* common name: example.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #1
* subject: CN=example.com
* start date: Thu, 22 Oct 2020 16:33:49 GMT
* expire date: Fri, 22 Oct 2021 16:33:49 GMT
* issuer: CN=My Cert Authority
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.19.3
Server side (logs from nginx-ingress pod):
GET / HTTP/1.1
Connection: close
Host: example.com
X-Real-IP: 10.32.0.1
X-Forwarded-For: 10.32.0.1
X-Forwarded-Host: example.com
X-Forwarded-Port: 443
X-Forwarded-Proto: https
User-Agent: curl/7.47.0
Accept: */*
2020/10/22 18:38:59 [error] 23#23: *12 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 10.32.0.1, server: example.com, request: "GET / HTTP/1.1", upstream: "https://10.44.0.1:8081/", host: "example.com"
2020/10/22 18:38:59 [warn] 23#23: *12 upstream server temporarily disabled while SSL handshaking to upstream, client: 10.32.0.1, server: example.com, request: "GET / HTTP/1.1", upstream: "https://10.44.0.1:8081/", host: "example.com"
HTTP/1.1 502 Bad Gateway
Server: nginx/1.19.3
What I checked
Generated a CA and Server certificate following the link:
https://kubernetes.github.io/ingress-nginx/examples/PREREQUISITES/#tls-certificates
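For reference, a self-signed certificate and the example-tls Secret can be created roughly like this (a sketch based on that page; file names and the subject are placeholders):
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -keyout example.key -out example.crt -subj "/CN=example.com/O=example.com"
kubectl create secret tls example-tls --key example.key --cert example.crt -n nginx-ingress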
I checked the server certificate and the server key under /etc/nginx/secrets/default in the nginx-ingress pod, and they look correct. Here is the output for the VirtualServer resource:
NAME STATE HOST IP PORTS AGE
vs-example Valid example.com 192.168.1.82 [80,443] 121m
VirtualServer:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: vs-example
  namespace: nginx-ingress
spec:
  host: example.com
  tls:
    secret: example-tls
  upstreams:
  - name: example
    service: web-service
    port: 8080
    tls:
      enable: true
  routes:
  - path: /v2
    action:
      pass: example
  routes:
  - path: /
    action:
      pass: example
Secret
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: nginx-ingress
data:
  tls.crt: (..omitted..)
  tls.key: (..omitted..)
type: kubernetes.io/tls
Nginx.conf
Here is an extract of the nginx.conf taken from the running pod:
server {
    # required to support the Websocket protocol in VirtualServer/VirtualServerRoutes
    set $default_connection_header "";

    listen 80 default_server;
    listen 443 ssl default_server;

    ssl_certificate /etc/nginx/secrets/default;
    ssl_certificate_key /etc/nginx/secrets/default;

    server_name _;
    server_tokens "on";
I cannot find out what is going on when trying to reach the web application over HTTPS.
There were several errors in the VirtualServer; editing it with these changes solved the problem:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: vs-example
  namespace: nginx-ingress
spec:
  host: example.com
  tls:
    secret: example-tls
  upstreams:
  - name: example
    service: web-service
    port: 8080
    # tls:           --> this caused the SSL issue in the upstream
    #   enable: true
  routes:
  - path: /
    action:
      pass: example
  - path: /v1
    action:
      pass: example
Accessing the web application over HTTPS now works.

Internal mesh communication is ignoring settings from the virtual service

I'm trying to inject an HTTP status 500 fault in the bookinfo example.
I managed to inject a 500 error status when the traffic is coming from the Gateway with:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: default
spec:
  gateways:
  - bookinfo-gateway
  hosts:
  - '*'
  http:
  - fault:
      abort:
        httpStatus: 500
        percent: 100
    match:
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
Example:
$ curl $(minikube ip):30890/api/v1/products
fault filter abort
But I fail to achieve this for traffic that is coming from other pods:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: default
spec:
  gateways:
  - mesh
  hosts:
  - productpage
  http:
  - fault:
      abort:
        httpStatus: 500
        percent: 100
    match:
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
Example:
# jump into a random pod
$ kubectl exec -ti details-v1-dasa231 -- bash
root@details $ curl productpage:9080/api/v1/products
[{"descriptionHtml": ... <- actual product list, I expect an HTTP 500
I tried using the FQDN for the host productpage.svc.default.cluster.local but I get the same behavior.
I checked the proxy status with istioctl proxy-status everything is synced.
I tested if the istio-proxy is injected into the pods, it is:
Pods:
NAME READY STATUS RESTARTS AGE
details-v1-6764bbc7f7-bm9zq 2/2 Running 0 4h
productpage-v1-54b8b9f55-72hfb 2/2 Running 0 4h
ratings-v1-7bc85949-cfpj2 2/2 Running 0 4h
reviews-v1-fdbf674bb-5sk5x 2/2 Running 0 4h
reviews-v2-5bdc5877d6-cb86k 2/2 Running 0 4h
reviews-v3-dd846cc78-lzb5t 2/2 Running 0 4h
I'm completely stuck and not sure what to check next. I feel like I am missing something very obvious.
I would really appreciate any help on this topic.
This should work, and it did when I tried it. My guess is that you have other, conflicting route rules defined for the productpage service.
The root cause of my issue was an improperly set up includeIPRanges in my minicloud cluster. I had set up the 10.0.0.1/24 CIDR, but some services were listening on 10.35.x.x.
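For reference, the ranges intercepted by the sidecar can also be adjusted per workload via a pod-template annotation (a sketch; the deployment name and CIDR are placeholders, and the CIDR must cover your actual service network; pods are re-created so the injector picks up the change):
kubectl patch deployment details-v1 --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"traffic.sidecar.istio.io/includeOutboundIPRanges":"10.0.0.0/8"}}}}}'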

Trouble accessing nginx deployment externally

I can curl an exposed nginx deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      tr: frnt
  template:
    metadata:
      labels:
        app: nginx
        tr: frnt
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: web-dep-nodeport-service
spec:
  selector:
    tr: frnt
  ports:
  - nodePort: 30000
    port: 80
  type: NodePort
on a node, with success:
user#gke-cluster-1-default-pool-xxxx ~ $ curl -Lvso /dev/null http://localhost:30000
* Rebuilt URL to: http://localhost:30000/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 30000 (#0)
> GET / HTTP/1.1
> Host: localhost:30000
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.9.15
< Date: Sun, 22 Apr 2018 04:40:24 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 19 Apr 2016 17:27:46 GMT
< Connection: keep-alive
< ETag: "xxxxx"
< Accept-Ranges: bytes
<
{ [612 bytes data]
* Connection #0 to host localhost left intact
But when trying the same command on an external machine, using the node EXTERNAL_IP (from gcloud compute instances list), I get:
$ curl -Lvso /dev/null http://x.x.x.x:30000 &> result.txt &
$ cat result.txt
* Rebuilt URL to: http://x.x.x.x:30000/
* Trying x.x.x.x...
* connect to x.x.x.x port 30000 failed: Connection timed out
* Failed to connect to x.x.x.x port 30000: Connection timed out
* Closing connection 0
I can ping the EXTERNAL_IP with success:
ping -c 2 x.x.x.x
PING x.x.x.x (x.x.x.x) 56(84) bytes of data.
64 bytes from x.x.x.x: icmp_seq=1 ttl=56 time=32.4 ms
64 bytes from x.x.x.x: icmp_seq=2 ttl=56 time=33.7 ms
--- x.x.x.x ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 32.456/33.099/33.742/0.643 ms
What can I do here to expose the nodePort externally?
This was solved by creating a firewall rule:
gcloud compute firewall-rules create nginx-rule --allow tcp:30000
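As a follow-up, the rule can be verified and narrowed so the NodePort is not open to the whole internet (a sketch; the source range is a placeholder for the clients that actually need access):
gcloud compute firewall-rules list --filter="name=nginx-rule"
gcloud compute firewall-rules update nginx-rule --source-ranges=203.0.113.0/24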