Trouble accessing nginx deployment externally

I can curl an exposed nginx deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      tr: frnt
  template:
    metadata:
      labels:
        app: nginx
        tr: frnt
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: web-dep-nodeport-service
spec:
  selector:
    tr: frnt
  ports:
  - nodePort: 30000
    port: 80
  type: NodePort
on a node, with success:
user@gke-cluster-1-default-pool-xxxx ~ $ curl -Lvso /dev/null http://localhost:30000
* Rebuilt URL to: http://localhost:30000/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 30000 (#0)
> GET / HTTP/1.1
> Host: localhost:30000
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.9.15
< Date: Sun, 22 Apr 2018 04:40:24 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Tue, 19 Apr 2016 17:27:46 GMT
< Connection: keep-alive
< ETag: "xxxxx"
< Accept-Ranges: bytes
<
{ [612 bytes data]
* Connection #0 to host localhost left intact
But when trying the same command on an external machine, using the node EXTERNAL_IP (from gcloud compute instances list), I get:
$ curl -Lvso /dev/null http://x.x.x.x:30000 &> result.txt &
$ cat result.txt
* Rebuilt URL to: http://x.x.x.x:30000/
* Trying x.x.x.x...
* connect to x.x.x.x port 30000 failed: Connection timed out
* Failed to connect to x.x.x.x port 30000: Connection timed out
* Closing connection 0
I can ping the EXTERNAL_IP with success:
ping -c 2 x.x.x.x
PING x.x.x.x (x.x.x.x) 56(84) bytes of data.
64 bytes from x.x.x.x: icmp_seq=1 ttl=56 time=32.4 ms
64 bytes from x.x.x.x: icmp_seq=2 ttl=56 time=33.7 ms
--- x.x.x.x ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 32.456/33.099/33.742/0.643 ms
What can I do here to expose the nodePort externally?

This was solved by creating a firewall rule:
gcloud compute firewall-rules create nginx-rule --allow tcp:30000
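
If you want the rule to be narrower than "any source, all instances", the same command accepts scoping flags; a minimal sketch (the tag web-node and the address range are placeholders you would substitute):
gcloud compute firewall-rules create nginx-rule \
    --allow tcp:30000 \
    --source-ranges 203.0.113.0/24 \
    --target-tags web-node
Nodes only receive the rule if they carry the matching network tag.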

Related

service via ingress not reachable, only via nodeport:ip

Good afternoon,
I'd like to ask a question: I'm a little bit confused about ingress and its traffic flow.
I created a test nginx deployment with a service and an ingress (in Titanium cloud).
I have no direct browser access, so I'm using tunneling and a SOCKS5 proxy in Firefox to reach it.
deployment:
k describe deployments.apps dpl-nginx
Name:                   dpl-nginx
Namespace:              xxx
CreationTimestamp:      Thu, 09 Jun 2022 07:20:48 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
                        field.cattle.io/publicEndpoints:
                          [{"port":32506,"protocol":"TCP","serviceName":"xxx:xxx-svc","allNodes":true},{"addresses":["172.xx.xx.117","172.xx.xx.131","172.xx.x...
Selector:               app=xxx-nginx
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=xxx-nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html/ from nginx-index-file (rw)
  Volumes:
   nginx-index-file:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      index-html-configmap
    Optional:  false
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   xxx-dpl-nginx-6ff8bcd665 (2/2 replicas created)
Events:          <none>
service:
Name:                     xxx-svc
Namespace:                xxx
Labels:                   <none>
Annotations:              field.cattle.io/publicEndpoints: [{"port":32506,"protocol":"TCP","serviceName":"xxx:xxx-svc","allNodes":true}]
Selector:                 app=xxx-nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.95.33
IPs:                      10.43.95.33
Port:                     http-internal  888/TCP
TargetPort:               80/TCP
NodePort:                 http-internal  32506/TCP
Endpoints:                10.42.0.178:80,10.42.0.179:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
ingress:
Name:             test-ingress
Namespace:        xxx
Address:          172.xx.xx.117,172.xx.xx.131,172.xx.xx.132
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host         Path  Backends
  ----         ----  --------
  test.xxx.io
               /     xxx-svc:888 (10.42.0.178:80,10.42.0.179:80)
Annotations:      field.cattle.io/publicEndpoints:
                    [{"addresses":["172.xx.xx.117","172.xx.xx.131","172.xx.xx.132"],"port":80,"protocol":"HTTP","serviceName":"xxx:xxx-svc","ingressName...
                  nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
                  nginx.ingress.kubernetes.io/rewrite-target: /
                  nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
  Type    Reason  Age                     From                      Message
  ----    ------  ----                    ----                      -------
  Normal  Sync    9m34s (x37 over 3d21h)  nginx-ingress-controller  Scheduled for sync
When I try curl/wget against the host or the node IP directly from the cluster, both options work; I can get my custom index:
wget test.xxx.io --no-proxy --no-check-certificate
--2022-06-13 10:35:12-- http://test.xxx.io/
Resolving test.xxx.io (test.xxx.io)... 172.xx.xx.132, 172.xx.xx.131, 172.xx.xx.117
Connecting to test.xxx.io (test.xxx.io)|172.xx.xx.132|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 197 [text/html]
Saving to: ‘index.html.1’
index.html.1 100%[===========================================================================================>] 197 --.-KB/s in 0s
curl:
curl test.xxx.io --noproxy '*' -I
HTTP/1.1 200 OK
Date: Mon, 13 Jun 2022 10:36:31 GMT
Content-Type: text/html
Content-Length: 197
Connection: keep-alive
Last-Modified: Thu, 09 Jun 2022 07:20:49 GMT
ETag: "62a19f51-c5"
Accept-Ranges: bytes
nslookup, dig, and ping from the cluster are working as well:
nslookup
nslookup test.xxx.io
Server: 127.0.0.53
Address: 127.0.0.53#53
Name: test.xxx.io
Address: 172.xx.xx.131
Name: test.xxx.io
Address: 172.xx.xx.132
Name: test.xxx.io
Address: 172.xx.xx.117
dig
dig test.xxx.io +noall +answer
test.xxx.io. 22 IN A 172.xx.xx.117
test.xxx.io. 22 IN A 172.xx.xx.132
test.xxx.io. 22 IN A 172.xx.xx.131
ping
ping test.xxx.io
PING test.xxx.io (172.xx.xx.132) 56(84) bytes of data.
64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from xx-k3s-1 (172.xx.xx.132): icmp_seq=2 ttl=64 time=0.042 ms
curl also works fine from the ingress-nginx pod...
In Firefox via nodeIP:port I can get the index, but via the host it's not possible.
It seems that the ingress is forwarding traffic to the pod, so does this issue only have something to do with the browser?
Thanks for any advice.
So for clarification: I'm using tunneling to reach the ingress from my local PC via a browser with a SOCKS5 proxy:
ssh xxxx@100.xx.xx.xx -D 1090
The solution is trivial: add
172.xx.xx.117 test.xxx.io
to /etc/hosts on the jump server.
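
As a side note (not from the original answer), host-based ingress routing can also be tested without touching /etc/hosts by pinning the name to a node address on the curl command line:
curl --resolve test.xxx.io:80:172.xx.xx.117 http://test.xxx.io/
This sends the request to 172.xx.xx.117 while still presenting the Host header that the ingress rule matches on.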

Container from POD can't connect to other ports when Istio enabled

I have the following ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper
    com.tibco.pulsar.service: zookeepers
  name: zookeepers
  namespace: pulsar-poc-v2
spec:
  clusterIP: None
  clusterIPs:
  - None
  ports:
  - name: client
    port: 2181
    protocol: TCP
    targetPort: 2181
  - name: peer
    port: 2888
    protocol: TCP
    targetPort: 2888
  - name: leader-election
    port: 3888
    protocol: TCP
    targetPort: 3888
  - name: rest
    port: 8001
    protocol: TCP
    targetPort: 8001
  selector:
    com.tibco.pulsar.service: zookeepers
  sessionAffinity: None
  type: ClusterIP
And I have broker pods with an initContainer that validates the zookeeper pods are up before broker startup:
echo "Verify zookeeper pulsar cluster config"; until /opt/tibco/apd/core/bin/bookkeeper org.apache.zookeeper.ZooKeeperMain -server zookeepers:2181 get /admin/clusters/pocclusterv2; do
echo "pulsar cluster isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
When Istio is not enabled, the initContainer is able to connect to zookeeper successfully, but if we enable Istio on the namespace it is not able to connect.
I tried connecting from the container while the validation is failing (Istio enabled):
nc -vz zookeepers 2181
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connection to 10.42.30.34 failed: Connection refused.
[root@broker-2 volume]# curl -v zookeepers:2181
* Rebuilt URL to: zookeepers:2181/
* Trying 10.42.52.177...
* TCP_NODELAY set
* connect to 10.42.52.177 port 2181 failed: Connection refused
* Trying 10.42.166.24...
* TCP_NODELAY set
* connect to 10.42.166.24 port 2181 failed: Connection refused
* Trying 10.42.30.34...
* TCP_NODELAY set
* connect to 10.42.30.34 port 2181 failed: Connection refused
* Failed to connect to zookeepers port 2181: Connection refused
* Closing connection 0
curl: (7) Failed to connect to zookeepers port 2181: Connection refused
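
For reference, the registration side of the headless service can be checked with standard kubectl commands (a minimal sketch; the namespace and label come from the manifest above):
kubectl -n pulsar-poc-v2 get endpoints zookeepers
kubectl -n pulsar-poc-v2 get pods -l com.tibco.pulsar.service=zookeepers -o wide
Since the nc and curl attempts above already resolve the pod IPs, name resolution is working and the refusals happen at the TCP level.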

Nginx Controller in Kubernetes: Handshaking to upstream - peer closed connection in SSL handshake

This is on a local environment built with kubeadm. The cluster is made of a master and 2 worker nodes.
What is failing:
Exposing the Nginx ingress controller externally via a Service of type LoadBalancer, trying TLS termination at the Kubernetes cluster.
Here is the exposed Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nginx-ingress LoadBalancer 10.101.5.75 192.168.1.82 80:30745/TCP,443:30092/TCP
web-service ClusterIP 10.101.26.176 <none> 8080/TCP
What is working:
I am able to reach the web application on HTTP port 80 from outside.
Here are the pods in the cluster:
NAME READY STATUS RESTARTS AGE IP NODE
nginx-ingress-7c5588544d-4mw9d 1/1 Running 0 38m 10.44.0.2 node1
web-deployment-7d84778bc6-52pq7 1/1 Running 1 19h 10.44.0.1 node1
web-deployment-7d84778bc6-9wwmn 1/1 Running 1 19h 10.36.0.2 node2
Test result for TLS termination
Client side:
curl -k https://example.com -v
* Rebuilt URL to https://example.com/
* Trying 192.168.1.82...
* Connected to example.com (192.168.1.82) port 443 (#0)
* found 127 certificates in /etc/ssl/certs/ca-certificates.crt
* found 513 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* common name: example.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #1
* subject: CN=example.com
* start date: Thu, 22 Oct 2020 16:33:49 GMT
* expire date: Fri, 22 Oct 2021 16:33:49 GMT
* issuer: CN=My Cert Authority
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Server: nginx/1.19.3
Server side (logs from nginx-ingress pod):
GET / HTTP/1.1
Connection: close
Host: example.com
X-Real-IP: 10.32.0.1
X-Forwarded-For: 10.32.0.1
X-Forwarded-Host: example.com
X-Forwarded-Port: 443
X-Forwarded-Proto: https
User-Agent: curl/7.47.0
Accept: */*
2020/10/22 18:38:59 [error] 23#23: *12 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 10.32.0.1, server: example.com, request: "GET / HTTP/1.1", upstream: "https://10.44.0.1:8081/", host: "example.com"
2020/10/22 18:38:59 [warn] 23#23: *12 upstream server temporarily disabled while SSL handshaking to upstream, client: 10.32.0.1, server: example.com, request: "GET / HTTP/1.1", upstream: "https://10.44.0.1:8081/", host: "example.com"
HTTP/1.1 502 Bad Gateway
Server: nginx/1.19.3
What I checked
Generated a CA and Server certificate following the link:
https://kubernetes.github.io/ingress-nginx/examples/PREREQUISITES/#tls-certificates
I checked the server certificate and the server key under /etc/nginx/secrets/default in the nginx-ingress Pod and they look correct. Here is the output for the VirtualServer resource:
NAME STATE HOST IP PORTS AGE
vs-example Valid example.com 192.168.1.82 [80,443] 121m
VirtualServer:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: vs-example
  namespace: nginx-ingress
spec:
  host: example.com
  tls:
    secret: example-tls
  upstreams:
  - name: example
    service: web-service
    port: 8080
    tls:
      enable: true
  routes:
  - path: /v2
    action:
      pass: example
  routes:
  - path: /
    action:
      pass: example
Secret
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: nginx-ingress
data:
  tls.crt: (..omitted..)
  tls.key: (..omitted..)
type: kubernetes.io/tls
Nginx.conf
Here is an extract of the nginx.conf taken from the running Pod:
server {
    # required to support the Websocket protocol in VirtualServer/VirtualServerRoutes
    set $default_connection_header "";

    listen 80 default_server;
    listen 443 ssl default_server;

    ssl_certificate /etc/nginx/secrets/default;
    ssl_certificate_key /etc/nginx/secrets/default;

    server_name _;
    server_tokens "on";
I cannot figure out what is going on when trying to reach the web application over HTTPS.
There were several errors in the VirtualServer; editing it with these changes solved the problem:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: vs-example
  namespace: nginx-ingress
spec:
  host: example.com
  tls:
    secret: example-tls
  upstreams:
  - name: example
    service: web-service
    port: 8080
    # tls: --> this caused the SSL issue in the upstream
    #   enable: true
  routes:
  - path: /
    action:
      pass: example
  - path: /v1
    action:
      pass: example
Accessing the web application with HTTPS now works. With tls.enable: true set on the upstream, the controller attempted an SSL handshake against a backend that only serves plain HTTP, which matches the "peer closed connection in SSL handshake while SSL handshaking to upstream" error in the logs.
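
To check which protocol an upstream pod actually speaks, you can exec into the ingress controller pod and hit the pod IP from the outputs above both ways; a minimal sketch, assuming curl is available in the controller image:
kubectl exec -it nginx-ingress-7c5588544d-4mw9d -- curl -sv http://10.44.0.1:8080/
kubectl exec -it nginx-ingress-7c5588544d-4mw9d -- curl -skv https://10.44.0.1:8080/
If the first call returns a response and the second fails during the handshake, the backend is plain HTTP and the upstream tls block must stay off.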

How can I access the Openshift service through ClusterIP from nodes

I am trying to access a Flask server running in one Openshift pod from another.
For that I created a service as below.
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-web-app ClusterIP 172.30.216.112 <none> 8080/TCP 8m
$ oc describe svc my-web-app
Name: my-web-app
Namespace: neo4j-sys
Labels: app=my-web-app
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=my-web-app,deploymentconfig=my-web-app
Type: ClusterIP
IP: 172.30.216.112
Port: 8080-tcp 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.20.203.104:5000,172.20.49.150:5000
Session Affinity: None
Events: <none>
1)
First, I pinged from one pod to the other pod and got a response:
(app-root) sh-4.2$ ping 172.20.203.104
PING 172.20.203.104 (172.20.203.104) 56(84) bytes of data.
64 bytes from 172.20.203.104: icmp_seq=1 ttl=64 time=5.53 ms
64 bytes from 172.20.203.104: icmp_seq=2 ttl=64 time=0.527 ms
64 bytes from 172.20.203.104: icmp_seq=3 ttl=64 time=3.10 ms
64 bytes from 172.20.203.104: icmp_seq=4 ttl=64 time=2.12 ms
64 bytes from 172.20.203.104: icmp_seq=5 ttl=64 time=0.784 ms
64 bytes from 172.20.203.104: icmp_seq=6 ttl=64 time=6.81 ms
64 bytes from 172.20.203.104: icmp_seq=7 ttl=64 time=18.2 ms
^C
--- 172.20.203.104 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6012ms
rtt min/avg/max/mdev = 0.527/5.303/18.235/5.704 ms
But when I tried curl, it did not respond:
(app-root) sh-4.2$ curl 172.20.203.104
curl: (7) Failed connect to 172.20.203.104:80; Connection refused
(app-root) sh-4.2$ curl 172.20.203.104:8080
curl: (7) Failed connect to 172.20.203.104:8080; Connection refused
2)
After that I tried to reach the cluster IP from one pod. In this case neither ping nor curl could reach it:
(app-root) sh-4.2$ ping 172.30.216.112
PING 172.30.216.112 (172.30.216.112) 56(84) bytes of data.
From 172.20.49.1 icmp_seq=1 Destination Host Unreachable
From 172.20.49.1 icmp_seq=4 Destination Host Unreachable
From 172.20.49.1 icmp_seq=2 Destination Host Unreachable
From 172.20.49.1 icmp_seq=3 Destination Host Unreachable
^C
--- 172.30.216.112 ping statistics ---
7 packets transmitted, 0 received, +4 errors, 100% packet loss, time 6002ms
pipe 4
(app-root) sh-4.2$ curl 172.30.216.112
curl: (7) Failed connect to 172.30.216.112:80; No route to host
Please let me know where I am going wrong here. Why are cases #1 and #2 above failing, and how do I access ClusterIP services?
I am completely new to services and accessing them, so I might be missing some basics.
I have gone through the other answer How can I access the Kubernetes service through ClusterIP, but it covers NodePort, which does not help me.
Updates based on the comments below from Graham Dumpleton; here are my observations.
For information, this is the log of the Flask server I am running in the pods:
* Serving Flask app "wsgi" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [14/Nov/2019 04:54:53] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Nov/2019 04:55:05] "GET / HTTP/1.1" 200 -
Is your pod listening on external interfaces on port 8080?
If I understand the question correctly, my intention here is just to communicate between pods via the ClusterIP service. I am not looking to access the pods from external interfaces (other projects, or through web URLs as a load balancer service).
If you get into the pod, can you do curl $HOSTNAME:8080?
Yes, if I use localhost or 127.0.0.1, I get a response from the same pod where I run it, as expected:
(app-root) sh-4.2$ curl http://127.0.0.1:5000/
Hello World!
(app-root) sh-4.2$ curl http://localhost:5000/
Hello World!
But if I try with my-web-app or the service IP (ClusterIP), I get no response:
(app-root) sh-4.2$ curl http://172.30.216.112:5000/
curl: (7) Failed connect to 172.30.216.112:5000; No route to host
(app-root) sh-4.2$ curl my-web-app:8080
curl: (7) Failed connect to my-web-app:8080; Connection refused
(app-root) sh-4.2$ curl http://my-web-app:8080/
curl: (7) Failed connect to my-web-app:8080; Connection refused
With the pod IP I also get no response:
(app-root) sh-4.2$ curl http://172.20.49.150:5000/
curl: (7) Failed connect to 172.20.49.150:5000; Connection refused
(app-root) sh-4.2$ curl 172.20.49.150
curl: (7) Failed connect to 172.20.49.150:80; Connection refused
I am answering my own question. Here is how my issue got resolved, based on inputs from Graham Dumpleton.
Initially, I started the Flask server as below:
from flask import Flask

application = Flask(__name__)

if __name__ == "__main__":
    application.run()
This binds the server to http://127.0.0.1:5000/ by default.
As part of the resolution I changed the bind address to 0.0.0.0 as below:
from flask import Flask

application = Flask(__name__)

if __name__ == "__main__":
    application.run(host='0.0.0.0')
And the log after that:
* Serving Flask app "wsgi" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
After that, the pods communicated successfully via the ClusterIP. Below are the service details (after scaling up by one more pod):
$ oc describe svc my-web-app
Name: my-web-app
Namespace: neo4j-sys
Labels: app=my-web-app
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=my-web-app,deploymentconfig=my-web-app
Type: ClusterIP
IP: 172.30.4.250
Port: 8080-tcp 8080/TCP
TargetPort: 5000/TCP
Endpoints: 172.20.106.184:5000,172.20.182.118:5000,172.20.83.40:5000
Session Affinity: None
Events: <none>
Below is the successful response.
(app-root) sh-4.2$ curl http://172.30.4.250:8080 //with clusterIP which is my expectation
Hello World!
(app-root) sh-4.2$ curl http://172.20.106.184:5000 // with pod IP
Hello World!
(app-root) sh-4.2$ curl $HOSTNAME:5000 // with $HOSTNAME
Hello World!
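
Since the log still warns against using the development server, a common follow-up (an assumption on my part, not part of the original resolution) is to serve the same module with a production WSGI server that also binds to all interfaces, for example:
gunicorn --bind 0.0.0.0:5000 wsgi:application
This keeps the service/targetPort wiring above unchanged while replacing Flask's built-in server.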

kubernetes DNS - Let service contact itself via DNS

Pods in a Kubernetes cluster can be reached by sending network requests to the DNS name of a service that they are a member of. Network requests have to be sent to [service].[namespace].svc.cluster.local and get load balanced between all members of that service.
This works fine to let one pod reach another pod, but it fails if a pod tries to reach itself via a service that it's a member of. It always results in a timeout.
Is this a bug in Kubernetes (in my case minikube v0.35.0), or is some additional configuration required?
Here's some debug info:
Let's contact the service from some other pod. This works fine:
daemon@auth-796d88df99-twj2t:/opt/docker$ curl -v -X POST -H "Accept: application/json" --data '{}' http://message-service.message.svc.cluster.local:9000/message/get-messages
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 10.107.209.9...
* TCP_NODELAY set
* Connected to message-service.message.svc.cluster.local (10.107.209.9) port 9000 (#0)
> POST /message/get-messages HTTP/1.1
> Host: message-service.message.svc.cluster.local:9000
> User-Agent: curl/7.52.1
> Accept: application/json
> Content-Length: 2
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 2 out of 2 bytes
< HTTP/1.1 401 Unauthorized
< Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Security-Policy: default-src 'self'
< X-Permitted-Cross-Domain-Policies: master-only
< Date: Wed, 20 Mar 2019 04:36:51 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 12
<
* Curl_http_done: called premature == 0
* Connection #0 to host message-service.message.svc.cluster.local left intact
Unauthorized
Now we try to let the pod contact the service that it's a member of:
daemon@message-58466bbc45-lch9j:/opt/docker$ curl -v -X POST -H "Accept: application/json" --data '{}' http://message-service.message.svc.cluster.local:9000/message/get-messages
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 10.107.209.9...
* TCP_NODELAY set
* connect to 10.107.209.9 port 9000 failed: Connection timed out
* Failed to connect to message-service.message.svc.cluster.local port 9000: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to message-service.message.svc.cluster.local port 9000: Connection timed out
If I've read the curl debug log correctly, the DNS name resolves to the IP address 10.107.209.9. The pod can be reached from any other pod via that IP, but the pod cannot use it to reach itself.
Here are the network interfaces of the pod that tries to reach itself:
daemon@message-58466bbc45-lch9j:/opt/docker$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
296: eth0@if297: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.9/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Here is the kubernetes file deployed to minikube:
apiVersion: v1
kind: Namespace
metadata:
  name: message
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: message
  name: message
  namespace: message
spec:
  replicas: 1
  selector:
    matchLabels:
      app: message
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: message
    spec:
      containers:
      - name: message
        image: message-impl:0.1.0-SNAPSHOT
        imagePullPolicy: Never
        ports:
        - name: http
          containerPort: 9000
          protocol: TCP
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KAFKA_KUBERNETES_NAMESPACE
          value: kafka
        - name: KAFKA_KUBERNETES_SERVICE
          value: kafka-svc
        - name: CASSANDRA_KUBERNETES_NAMESPACE
          value: cassandra
        - name: CASSANDRA_KUBERNETES_SERVICE
          value: cassandra
        - name: CASSANDRA_KEYSPACE
          value: service_message
---
# Service for discovery
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  ports:
  - port: 9000
    protocol: TCP
  selector:
    app: message
---
# Expose this service to the api gateway
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: message
  namespace: message
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: api.fload.cf
    http:
      paths:
      - path: /message
        backend:
          serviceName: message-service
          servicePort: 9000
This is a known minikube issue. The discussion contains the following workarounds:
1) Try:
minikube ssh
sudo ip link set docker0 promisc on
2) Use a headless service: clusterIP: None
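
For workaround 2, a minimal sketch of the message-service from the manifest above made headless (the only change is the added clusterIP: None; with no virtual service IP, DNS returns the pod IPs directly, so there is nothing for the pod to hairpin through):
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  clusterIP: None
  ports:
  - port: 9000
    protocol: TCP
  selector:
    app: message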