Nginx ingress: upstream connection timeout (Operation timed out) - sockets

I have configured nginx-ingress for UDP load balancing. Below is my configuration in the Helm chart:
udp: {
  "514": "default/syslog-service:514",
  "162": "default/trapreceiver-service:162",
  "123": "default/ntp-service:123"
}
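For reference, these Helm values end up in the controller's UDP services ConfigMap, using the namespace/service:port format; a rough sketch of the rendered ConfigMap (the name and namespace depend on the chart release and are assumptions here):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-udp    # assumption: actual name depends on the chart/release
  namespace: ingress-nginx   # assumption
data:
  "514": "default/syslog-service:514"
  "162": "default/trapreceiver-service:162"
  "123": "default/ntp-service:123"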
And my ConfigMap:
data:
  max-worker-connections: "65535"
  proxy-body-size: 500m
  proxy-connect-timeout: "50"
  proxy-next-upstream-tries: "2"
  proxy-read-timeout: "3600"
  proxy-send-timeout: "120"
The ingress is not able to send UDP packets to the backends, and I see the errors below:
[1000::601] [09/Nov/2020:17:19:57 +0000] UDP 502 0 48 600.000
[1000::601] [09/Nov/2020:17:19:57 +0000] UDP 502 0 48 600.001
2020/11/09 17:19:57 [error] 1166#1166: *868219 upstream timed out (110: Operation timed out) while proxying connection, udp client: 1000::601, server: [::]:123, upstream: "[fc00::fcc1]:123", bytes from/to client:48/0, bytes from/to upstream:0/48
[1000::601] [09/Nov/2020:17:19:57 +0000] UDP 502 0 387 600.001
2020/11/09 17:19:57 [error] 1166#1166: *868221 upstream timed out (110: Operation timed out) while proxying connection, udp client: 1000::601, server: [::]:162, upstream: "[fc00::696e]:162", bytes from/to client:387/0, bytes from/to upstream:0/387
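When debugging this, it can help to confirm what the controller actually rendered into its stream {} configuration; a hedged sketch (namespace and pod name are placeholders for your deployment):

kubectl -n ingress-nginx get pods
# dump the generated stream {} block that carries the UDP proxying for 514/162/123
kubectl -n ingress-nginx exec <ingress-nginx-controller-pod> -- grep -A 40 'stream {' /etc/nginx/nginx.conf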

Related

Issue with HAProxy not retrying on retry-on

We were having issues with Apache mod_proxy getting random 502/503 errors from our backend server (which we don't control), so we decided to give HAProxy a shot in testing. We set up HAProxy and got the same errors, so we decided to try retry-on all-retryable-errors, but we keep getting the same errors. We would have thought that HAProxy would attempt retries on these, but it doesn't seem to be happening.
For testing, we run a wget every half a second for 10000 tries. Out of the 10000 tries, we get about 10 errors.
Could someone look at our setup and logs to help us determine why the retry isn't occurring?
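For reference, a minimal sketch of that test loop (the URL is the one from the logs below; the exact wget flags are assumptions):

i=0
while [ $i -lt 10000 ]; do
  # fire a request every half second and note any failures
  wget -q -O /dev/null http://localhost:5000/PBI_PBI1151/Login/RemoteInitialize/053103585 || echo "request $i failed"
  i=$((i+1))
  sleep 0.5
done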
haproxy.cfg
global
    log 127.0.0.1 local2 debug
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

frontend main
    bind 127.0.0.1:5000
    default_backend app
    mode http

backend app
    balance roundrobin
    http-send-name-header Host
    retry-on all-retryable-errors
    retries 10
    http-request disable-l7-retry if METH_POST
    server srv1 backend-server:443 ssl verify none
    server srv2 backend-server:443 ssl verify none
    server srv3 backend-server:443 ssl verify none
haproxy.log (you can see the 502 error in the middle of the log):
Apr 27 19:01:29 localhost haproxy[26058]: 127.0.0.1:58028 [27/Apr/2022:19:01:29.769] main app/srv1 0/0/5/111/116 200 1932 - - ---- 1/1/0/0/0 0/0 "GET /PBI_PBI1151/Login/RemoteInitialize/053103585 HTTP/1.1"
Apr 27 19:01:30 localhost haproxy[12306]: 127.0.0.1:58032 [27/Apr/2022:19:01:30.430] main app/srv2 0/0/2/119/121 200 1932 - - ---- 1/1/0/0/0 0/0 "GET /PBI_PBI1151/Login/RemoteInitialize/053103585 HTTP/1.1"
Apr 27 19:01:31 localhost haproxy[8726]: 127.0.0.1:58036 [27/Apr/2022:19:01:31.099] main app/srv2 0/0/6/114/120 200 1932 - - ---- 1/1/0/0/0 0/0 "GET /PBI_PBI1151/Login/RemoteInitialize/053103585 HTTP/1.1"
Apr 27 19:01:33 localhost haproxy[26058]: 127.0.0.1:58040 [27/Apr/2022:19:01:31.764] main app/srv2 0/0/6/-1/1385 502 209 - - SH-- 1/1/0/0/0 0/0 "GET /PBI_PBI1151/Login/RemoteInitialize/053103585 HTTP/1.1"
Apr 27 19:01:33 localhost haproxy[8726]: 127.0.0.1:58044 [27/Apr/2022:19:01:33.695] main app/srv3 0/0/10/112/122 200 1932 - - ---- 1/1/0/0/0 0/0 "GET /PBI_PBI1151/Login/RemoteInitialize/053103585 HTTP/1.1"
Apr 27 19:01:34 localhost haproxy[26058]: 127.0.0.1:58048 [27/Apr/2022:19:01:34.362] main app/srv3 0/0/3/113/116 200 1932 - - ---- 1/1/0/0/0 0/0 "GET /PBI_PBI1151/Login/RemoteInitialize/053103585 HTTP/1.1"
Apr 27 19:01:45 localhost haproxy[8726]: 127.0.0.1:58052 [27/Apr/2022:19:01:35.023] main app/srv1 0/0/16/10552/10568 200 1932 - - ---- 1/1/0/0/0 0/0 "GET /PBI_PBI1151/Login/RemoteInitialize/053103585 HTTP/1.1"
output from the wget:
--2022-04-27 19:01:31-- http://localhost:5000/PBI_PBI1151/Login/RemoteInitialize/053103585
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
HTTP request sent, awaiting response... 502 Bad Gateway
2022-04-27 19:01:33 ERROR 502: Bad Gateway.

HAProxy SSL (HTTPS) health checks without terminating SSL

So I can't figure out a proper way to do the SSL check. I am not using certificates; I just need to check against an HTTPS website's URL (google.com/ for example).
I am trying multiple combinations, without success. Maybe someone has a similar configuration.
Backends using:
> check-sni google.com sni ssl_fc_sni
return: reason: Layer7 wrong status, code: 301, info: "Moved Permanently"
Backends using check port 80 check-ssl return:
reason: Layer6 invalid response, info: "SSL handshake failure"
All the others just time out. Here's the complete configuration file:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ssl-server-verify none

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3
defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
frontend myfront
    bind *:8000
    mode tcp
    tcp-request inspect-delay 5s
    default_backend backend1

listen stats
    bind :444
    stats enable
    stats uri /
    stats hide-version
    stats auth test:test
backend Backends
    balance roundrobin
    option forwardfor
    option httpchk
    http-check send hdr host google.com meth GET uri /
    http-check expect status 200
    #http-check connect
    #http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
    #http-check expect status 200-399
    #http-check connect port 443 ssl sni haproxy.1wt.eu
    #http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
    #http-check expect status 200-399
    #http-check connect port 443 ssl sni google.com
    #http-check send meth GET uri / ver HTTP/1.1 hdr host google.com
    default-server fall 10 rise 1
    server Node1011 192.168.0.2:1011 check inter 15s check-ssl check port 443
    server Node1012 192.168.0.2:1012 check inter 15s check-ssl check port 443
    server Node1015 192.168.0.2:1015 check inter 15s check port 443
    server Node1017 192.168.0.2:1017 check inter 15s check-ssl check-sni google.com sni ssl_fc_sni
    server Node1018 192.168.0.2:1018 check inter 15s check-ssl check-sni google.com sni ssl_fc_sni
    server Node1019 192.168.0.2:1019 check inter 15s check-sni google.com sni ssl_fc_sni
    server Node1020 192.168.0.2:1020 check inter 15s check port 443 check-ssl
    server Node1021 192.168.0.2:1021 check inter 15s check port 443 check-ssl
    server Node1027 192.168.0.2:1027 check inter 15s check port 80
    server Node1028 192.168.0.2:1028 check inter 15s check port 80
    server Node1029 192.168.0.2:1029 check inter 15s check port 80
    server Node1030 192.168.0.2:1030 check inter 15s check port 80 check-ssl
    server Node1031 192.168.0.2:1031 check inter 15s check port 80 check-ssl
    server Node1033 192.168.0.2:1033 check inter 15s check port 80 check-ssl verify none
    server Node1034 192.168.0.2:1034 check inter 15s check port 80 check-ssl verify none
    server Node1035 192.168.0.2:1035 check inter 15s check-ssl
    server Node1036 192.168.0.2:1036 check inter 15s check-ssl
    server Node1048 192.168.0.2:1048 check inter 15s check-ssl verify none
    server Node1049 192.168.0.2:1049 check inter 15s check-ssl verify none
P.S. I found a website which explains just what I'm trying to do (https://hodari.be/posts/2020_09_04_configure_sni_for_haproxy_backends/), but that doesn't work either; my HAProxy version is 2.2.3.
P.P.S. I am literally trying to check against www.google.com, just to be clear.
Thank you!
That's really not an error. If you do a curl to https://google.com, it does a 301 redirect to https://www.google.com/. I snipped out some protocol details below for brevity, but you get the idea.
Either change your expect to 301, or use www.google.com.
paul:~ $ curl -vv https://google.com
* Rebuilt URL to: https://google.com/
* Trying 172.217.1.206...
-[snip]-
> GET / HTTP/2
> Host: google.com
> User-Agent: curl/7.58.0
> Accept: */*
>
-[snip]-
< HTTP/2 301
< location: https://www.google.com/
< content-type: text/html; charset=UTF-8
< date: Mon, 18 Jan 2021 03:42:04 GMT
< expires: Wed, 17 Feb 2021 03:42:04 GMT
< cache-control: public, max-age=2592000
< server: gws
< content-length: 220
< x-xss-protection: 0
< x-frame-options: SAMEORIGIN
< alt-svc: h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
<
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
here.
</BODY></HTML>
So, if you want to avoid the 301, use the www.google.com value in your config like this:
http-check send hdr host www.google.com meth GET uri /
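Building on that, one way to keep the SSL handling on the health check itself (rather than on the traffic path) is to combine the commented-out http-check connect lines from the question with the www.google.com host header. A hedged, untested sketch using HAProxy 2.2 syntax:

backend Backends
    option httpchk
    http-check connect port 443 ssl sni www.google.com
    http-check send meth GET uri / ver HTTP/1.1 hdr host www.google.com
    http-check expect status 200-399
    server Node1011 192.168.0.2:1011 check inter 15s

With the SSL options on http-check connect, the server lines no longer need check-ssl or check-sni for the health check.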

Resolving external domains from within pods does not work

What happened
Resolving an external domain from within a pod fails with a SERVFAIL message. In the logs, an i/o timeout error is mentioned.
What I expected to happen
External domains should be successfully resolved from the pods.
How to reproduce it
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
Create the pod above (from the Debugging DNS Resolution help page).
Run kubectl exec dnsutils -it -- nslookup google.com
pig#pig202:~$ kubectl exec dnsutils -it -- nslookup google.com
Server: 10.152.183.10
Address: 10.152.183.10#53
** server can't find google.com.mshome.net: SERVFAIL
command terminated with exit code 1
Also run kubectl exec dnsutils -it -- nslookup google.com.
pig#pig202:~$ kubectl exec dnsutils -it -- nslookup google.com.
Server: 10.152.183.10
Address: 10.152.183.10#53
** server can't find google.com: SERVFAIL
command terminated with exit code 1
Additional information
I am using a microk8s environment in a Hyper-V virtual machine.
Resolving DNS from the virtual machine works, and Kubernetes is able to pull container images. It is only from within the pods that resolution fails, meaning I cannot communicate with the Internet from within the pods.
This is OK:
pig#pig202:~$ kubectl exec dnsutils -it -- nslookup kubernetes.default
Server: 10.152.183.10
Address: 10.152.183.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.152.183.1
Environment
The version of CoreDNS
image: 'coredns/coredns:1.6.6'
Corefile (taken from ConfigMap)
Corefile: |
  .:53 {
      errors
      health {
          lameduck 5s
      }
      ready
      log . {
          class error
      }
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      forward . 8.8.8.8 8.8.4.4
      cache 30
      loop
      reload
      loadbalance
  }
Logs
pig#pig202:~$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns -f
[INFO] 10.1.99.26:47204 - 29832 "AAAA IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002558s
[ERROR] plugin/errors: 2 grafana.com. AAAA: read udp 10.1.99.19:52008->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:59350 - 50446 "A IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002028s
[ERROR] plugin/errors: 2 grafana.com. A: read udp 10.1.99.19:60405->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:43050 - 13676 "AAAA IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002151s
[ERROR] plugin/errors: 2 grafana.com. AAAA: read udp 10.1.99.19:45624->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:36997 - 30359 "A IN grafana.com. udp 29 false 512" NOERROR - 0 2.0002791s
[ERROR] plugin/errors: 2 grafana.com. A: read udp 10.1.99.19:37554->8.8.4.4:53: i/o timeout
[INFO] 10.1.99.32:57927 - 53858 "A IN google.com.mshome.net. udp 39 false 512" NOERROR - 0 2.0001987s
[ERROR] plugin/errors: 2 google.com.mshome.net. A: read udp 10.1.99.19:34079->8.8.4.4:53: i/o timeout
[INFO] 10.1.99.32:38403 - 36398 "A IN google.com.mshome.net. udp 39 false 512" NOERROR - 0 2.000224s
[ERROR] plugin/errors: 2 google.com.mshome.net. A: read udp 10.1.99.19:59835->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:57447 - 20295 "AAAA IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001892s
[ERROR] plugin/errors: 2 grafana.com.mshome.net. AAAA: read udp 10.1.99.19:51534->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:41052 - 56059 "A IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001879s
[ERROR] plugin/errors: 2 grafana.com.mshome.net. A: read udp 10.1.99.19:47378->8.8.8.8:53: i/o timeout
[INFO] 10.1.99.26:56748 - 51804 "AAAA IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0003226s
[INFO] 10.1.99.26:45442 - 61916 "A IN grafana.com.mshome.net. udp 40 false 512" NOERROR - 0 2.0001922s
[ERROR] plugin/errors: 2 grafana.com.mshome.net. AAAA: read udp 10.1.99.19:35528->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 grafana.com.mshome.net. A: read udp 10.1.99.19:53568->8.8.8.8:53: i/o timeout
OS
pig#pig202:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04 LTS"
VERSION_ID="20.04"
Tried on Ubuntu 18.04.3 LTS, same issue.
Other
The mshome.net search domain comes from the Hyper-V network, I assume. Perhaps this will be of help:
pig#pig202:~$ nmcli device show eth0
GENERAL.DEVICE: eth0
GENERAL.TYPE: ethernet
GENERAL.HWADDR: 00:15:5D:88:26:02
GENERAL.MTU: 1500
GENERAL.STATE: 100 (connected)
GENERAL.CONNECTION: Wired connection 1
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/ActiveConnection/1
WIRED-PROPERTIES.CARRIER: on
IP4.ADDRESS[1]: 172.19.120.188/28
IP4.GATEWAY: 172.19.120.177
IP4.ROUTE[1]: dst = 0.0.0.0/0, nh = 172.19.120.177, mt = 100
IP4.ROUTE[2]: dst = 172.19.120.176/28, nh = 0.0.0.0, mt = 100
IP4.ROUTE[3]: dst = 169.254.0.0/16, nh = 0.0.0.0, mt = 1000
IP4.DNS[1]: 172.19.120.177
IP4.DOMAIN[1]: mshome.net
IP6.ADDRESS[1]: fe80::6b4a:57e2:5f1b:f739/64
IP6.GATEWAY: --
IP6.ROUTE[1]: dst = fe80::/64, nh = ::, mt = 100
IP6.ROUTE[2]: dst = ff00::/8, nh = ::, mt = 256, table=255
I finally found the solution, which was a combination of two changes. After applying both changes, my pods could finally resolve addresses properly.
Kubelet configuration
Based on the known issues, change the resolv-conf path for the kubelet to use:
# Add resolv-conf flag to Kubelet configuration
echo "--resolv-conf=/run/systemd/resolve/resolv.conf" >> /var/snap/microk8s/current/args/kubelet
# Restart Kubelet
sudo service snap.microk8s.daemon-kubelet restart
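A quick sanity check before restarting (values here are from the environment above): that path should already contain the real upstream resolver rather than the local stub.

# expected to list the Hyper-V DNS server (172.19.120.177 in this case), not the 127.0.0.53 stub
cat /run/systemd/resolve/resolv.conf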
CoreDNS forward
Change the forward address in the CoreDNS ConfigMap from the default (8.8.8.8 8.8.4.4) to the DNS server on the eth0 device.
# Dump definition of CoreDNS
microk8s.kubectl get configmap -n kube-system coredns -o yaml > coredns.yaml
Partial content of coredns.yaml:
Corefile: |
  .:53 {
      errors
      health {
          lameduck 5s
      }
      ready
      log . {
          class error
      }
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      forward . 8.8.8.8 8.8.4.4
      cache 30
      loop
      reload
      loadbalance
  }
Fetch DNS:
# Fetch eth0 DNS address (this will print 172.19.120.177 in my case)
nmcli dev show 2>/dev/null | grep DNS | sed 's/^.*:\s*//'
Change the following line and save:
forward . 8.8.8.8 8.8.4.4 # From this
forward . 172.19.120.177 # To this (your DNS will probably be different)
Finally, apply the change to update CoreDNS forwarding:
microk8s.kubectl apply -f coredns.yaml
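With the reload plugin enabled in the Corefile above, CoreDNS picks up the ConfigMap change on its own after a short delay; the original test from the dnsutils pod can then be repeated to verify:

microk8s.kubectl exec dnsutils -it -- nslookup google.com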

How can I access the OpenShift service through ClusterIP from nodes

I am trying to access a Flask server running in one OpenShift pod from another.
For that, I created a service as below:
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-web-app ClusterIP 172.30.216.112 <none> 8080/TCP 8m
$ oc describe svc my-web-app
Name: my-web-app
Namespace: neo4j-sys
Labels: app=my-web-app
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=my-web-app,deploymentconfig=my-web-app
Type: ClusterIP
IP: 172.30.216.112
Port: 8080-tcp 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.20.203.104:5000,172.20.49.150:5000
Session Affinity: None
Events: <none>
1)
First, I pinged from one pod to the other pod and got a response.
(app-root) sh-4.2$ ping 172.20.203.104
PING 172.20.203.104 (172.20.203.104) 56(84) bytes of data.
64 bytes from 172.20.203.104: icmp_seq=1 ttl=64 time=5.53 ms
64 bytes from 172.20.203.104: icmp_seq=2 ttl=64 time=0.527 ms
64 bytes from 172.20.203.104: icmp_seq=3 ttl=64 time=3.10 ms
64 bytes from 172.20.203.104: icmp_seq=4 ttl=64 time=2.12 ms
64 bytes from 172.20.203.104: icmp_seq=5 ttl=64 time=0.784 ms
64 bytes from 172.20.203.104: icmp_seq=6 ttl=64 time=6.81 ms
64 bytes from 172.20.203.104: icmp_seq=7 ttl=64 time=18.2 ms
^C
--- 172.20.203.104 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6012ms
rtt min/avg/max/mdev = 0.527/5.303/18.235/5.704 ms
But when I tried curl, it did not respond.
(app-root) sh-4.2$ curl 172.20.203.104
curl: (7) Failed connect to 172.20.203.104:80; Connection refused
(app-root) sh-4.2$ curl 172.20.203.104:8080
curl: (7) Failed connect to 172.20.203.104:8080; Connection refused
2)
After that, I tried to reach the cluster IP from one pod. In this case, neither ping nor curl could reach it.
(app-root) sh-4.2$ ping 172.30.216.112
PING 172.30.216.112 (172.30.216.112) 56(84) bytes of data.
From 172.20.49.1 icmp_seq=1 Destination Host Unreachable
From 172.20.49.1 icmp_seq=4 Destination Host Unreachable
From 172.20.49.1 icmp_seq=2 Destination Host Unreachable
From 172.20.49.1 icmp_seq=3 Destination Host Unreachable
^C
--- 172.30.216.112 ping statistics ---
7 packets transmitted, 0 received, +4 errors, 100% packet loss, time 6002ms
pipe 4
(app-root) sh-4.2$ curl 172.30.216.112
curl: (7) Failed connect to 172.30.216.112:80; No route to host
Please let me know where I am going wrong here. Why are the above cases #1 and #2 failing? How do I access the ClusterIP service?
I am completely new to services and to accessing them, so I might be missing some basics.
I have gone through other answers such as How can I access the Kubernetes service through ClusterIP, but that one is about NodePort, which does not help me.
Updates based on the comment below from Graham Dumpleton; here are my observations.
For information, this is the log of the Flask server I am running in the pods:
* Serving Flask app "wsgi" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [14/Nov/2019 04:54:53] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Nov/2019 04:55:05] "GET / HTTP/1.1" 200 -
Is your pod listening on external interfaces on pod 8080?
If I understand the question correctly, my intention here is just to communicate between pods via the ClusterIP service. I am not looking to access the pods from external interfaces (other projects, or through web URLs as a load balancer service).
If you get into the pod, can you do curl $HOSTNAME:8080?
Yes, if I run it against localhost or 127.0.0.1, I get a response from the same pod where I run it, as expected.
(app-root) sh-4.2$ curl http://127.0.0.1:5000/
Hello World!
(app-root) sh-4.2$ curl http://localhost:5000/
Hello World!
But if I try with my-web-app or the service IP (ClusterIP), I am not getting a response.
(app-root) sh-4.2$ curl http://172.30.216.112:5000/
curl: (7) Failed connect to 172.30.216.112:5000; No route to host
(app-root) sh-4.2$ curl my-web-app:8080
curl: (7) Failed connect to my-web-app:8080; Connection refused
(app-root) sh-4.2$ curl http://my-web-app:8080/
curl: (7) Failed connect to my-web-app:8080; Connection refused
With the pod IP, I am also not getting a response.
(app-root) sh-4.2$ curl http://172.20.49.150:5000/
curl: (7) Failed connect to 172.20.49.150:5000; Connection refused
(app-root) sh-4.2$ curl 172.20.49.150
curl: (7) Failed connect to 172.20.49.150:80; Connection refused
I am answering my own question. Here is how my issue got resolved, based on inputs from Graham Dumpleton.
Initially, I started the Flask server as below:
from flask import Flask

application = Flask(__name__)

if __name__ == "__main__":
    application.run()
This binds the server to http://127.0.0.1:5000/ by default.
As part of the resolution, I changed the bind to 0.0.0.0 as below:
from flask import Flask

application = Flask(__name__)

if __name__ == "__main__":
    application.run(host='0.0.0.0')
And the log after that is as below:
* Serving Flask app "wsgi" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
After that, the pods successfully communicated via the ClusterIP. Below are the service details (after scaling up by one more pod):
$ oc describe svc my-web-app
Name: my-web-app
Namespace: neo4j-sys
Labels: app=my-web-app
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=my-web-app,deploymentconfig=my-web-app
Type: ClusterIP
IP: 172.30.4.250
Port: 8080-tcp 8080/TCP
TargetPort: 5000/TCP
Endpoints: 172.20.106.184:5000,172.20.182.118:5000,172.20.83.40:5000
Session Affinity: None
Events: <none>
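For completeness, here is a sketch of the Service manifest implied by the describe output above (the ports and selector come from that output; the rest of the metadata is reconstructed and partly assumed):

apiVersion: v1
kind: Service
metadata:
  name: my-web-app
  namespace: neo4j-sys
  labels:
    app: my-web-app
spec:
  type: ClusterIP
  selector:
    app: my-web-app
    deploymentconfig: my-web-app
  ports:
  - name: 8080-tcp
    protocol: TCP
    port: 8080         # the port the ClusterIP listens on
    targetPort: 5000   # the port Flask listens on inside the pods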
Below is the successful response.
(app-root) sh-4.2$ curl http://172.30.4.250:8080 //with clusterIP which is my expectation
Hello World!
(app-root) sh-4.2$ curl http://172.20.106.184:5000 // with pod IP
Hello World!
(app-root) sh-4.2$ curl $HOSTNAME:5000 // with $HOSTNAME
Hello World!

How to solve: RPC: Port mapper failure - RPC: Unable to receive errno = Connection refused

I'm trying to set up an NFS server.
I have two programs, a server and a client. I start the server, which starts without errors; then I create a file with the client, and the file is created correctly. But when I try to write something to that file, I get the error:
call failed: RPC: Unable to receive; errno = Connection refused
And here is my rpcinfo -p output
# rpcinfo -p
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 662 status
100024 1 tcp 662 status
100005 1 udp 892 mountd
100005 1 tcp 892 mountd
100005 2 udp 892 mountd
100005 2 tcp 892 mountd
100005 3 udp 892 mountd
100005 3 tcp 892 mountd
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
100003 3 udp 2049 nfs
100227 3 udp 2049 nfs_acl
100021 1 udp 58383 nlockmgr
100021 3 udp 58383 nlockmgr
100021 4 udp 58383 nlockmgr
100021 1 tcp 39957 nlockmgr
100021 3 tcp 39957 nlockmgr
100021 4 tcp 39957 nlockmgr
536870913 1 udp 997
536870913 1 tcp 999
Does anyone know how I can solve this problem?
NOTE: I am using my laptop as server and client at the same time.
Make sure rpcbind is running. Also, it is a good idea to check whether you can see the exports with "showmount -e <server>".
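A sketch of those checks (commands assume a systemd-based distribution; since the laptop is both server and client, localhost is used):

systemctl status rpcbind        # confirm rpcbind is running
sudo systemctl start rpcbind    # start it if it is not
showmount -e localhost          # list the exports visible from the local NFS server
rpcinfo -p localhost            # re-check the registered RPC services and ports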