I'm currently deploying a new Kubernetes cluster, and I want to expose a MongoDB service to clients outside the cluster using nginx-ingress.
I know that nginx-ingress is usually used for layer 7 applications, but according to the official documentation it is also capable of working at layer 4 (TCP/UDP):
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
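As I read that page, you map an external port to namespace/service:port in a ConfigMap, point the controller at it with --tcp-services-configmap, and expose the same port on the controller's Service. A rough sketch of that mechanism (the names and ports here are only illustrative, not from my cluster):

# passed to the controller as: --tcp-services-configmap=ingress-nginx/tcp-services
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": "default/example-service:8080"

That is what I have tried to replicate below for my MongoDB service.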
My MongoDB service is a ClusterIP service, accessible on port 11717 (in the internal namespace):
kubectl get svc -n internal
mongodb ClusterIP 10.97.63.154 <none> 11717/TCP 3d20h
telnet 10.97.63.154 11717
Trying 10.97.63.154...
Connected to 10.97.63.154.
I have tried every combination I could think of to make this work, with no success.
I'm using the nginx-ingress Helm chart (DaemonSet mode).
My nginx-ingress/templates/controller-daemonset.yaml file:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-nginx-ingress
    helm.sh/chart: nginx-ingress-0.13.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress-nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress-nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
        prometheus.io/scheme: "http"
    spec:
      serviceAccountName: nginx-ingress-nginx-ingress
      terminationGracePeriodSeconds: 30
      hostNetwork: false
      containers:
      - name: nginx-ingress-nginx-ingress
        image: "nginx/nginx-ingress:2.2.0"
        imagePullPolicy: "IfNotPresent"
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: mongodb
          containerPort: 11717
          hostPort: 11717
        - name: prometheus
          containerPort: 9113
        - name: readiness-port
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 # nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        resources:
          {}
        args:
        - /nginx-ingress-controller
        - -nginx-plus=false
        - -nginx-reload-timeout=60000
        - -enable-app-protect=false
        - -tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - -publish-service=$(POD_NAMESPACE)/ingress-nginx
        - -annotations-prefix=nginx.ingress.kubernetes.io
        - -enable-app-protect-dos=false
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-ingress-nginx-ingress
        - -default-server-tls-secret=$(POD_NAMESPACE)/nginx-ingress-nginx-ingress-default-server-tls
        - -ingress-class=nginx
        - -health-status=false
        - -health-status-uri=/nginx-health
        - -nginx-debug=false
        - -v=1
        - -nginx-status=true
        - -nginx-status-port=8080
        - -nginx-status-allow-cidrs=127.0.0.1
        - -report-ingress-status
        - -external-service=nginx-ingress-nginx-ingress
        - -enable-leader-election=true
        - -leader-election-lock-name=nginx-ingress-nginx-ingress-leader-election
        - -enable-prometheus-metrics=true
        - -prometheus-metrics-listen-port=9113
        - -prometheus-tls-secret=
        - -enable-custom-resources=true
        - -enable-snippets=false
        - -enable-tls-passthrough=false
        - -enable-preview-policies=false
        - -enable-cert-manager=false
        - -enable-oidc=false
        - -ready-status=true
        - -ready-status-port=8081
        - -enable-latency-metrics=false
My nginx-ingress/templates/controller-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-nginx-ingress
    helm.sh/chart: nginx-ingress-0.13.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  - name: mongodb
    port: 11717
    targetPort: 11717
    protocol: TCP
  selector:
    app: nginx-ingress-nginx-ingress
My nginx-ingress/templates/tcp-services.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-nginx-ingress-d5vms 1/1 Running 0 61m
nginx-ingress-nginx-ingress-kcs4p 1/1 Running 0 61m
nginx-ingress-nginx-ingress-mnnn2 1/1 Running 0 61m
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d1h <none>
nginx-ingress-nginx-ingress LoadBalancer 10.99.176.220 <pending> 80:31700/TCP,443:31339/TCP,11717:31048/TCP 61m app=nginx-ingress-nginx-ingress
telnet 10.99.176.220 80
Trying 10.99.176.220...
Connected to 10.99.176.220.
Escape character is '^]'.
telnet 10.99.176.220 11717
Trying 10.99.176.220...
telnet: Unable to connect to remote host: Connection refused
I can't understand why the connection is refused on port 11717.
How can I achieve this scenario:
mongo.myExternalDomain:11717 --> nginx-ingress service --> nginx-ingress pod --> mongodb service --> mongodb pod
Thanks in advance!
I would appreciate any kind of help!
I had a similar issue; maybe this will help you. In my case the problem was in the tcp-services ConfigMap.
In short: instead of this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717
change it to:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717:PROXY
Details:
1. Edit the tcp-services ConfigMap to add a TCP service entry, e.g. 8000: namespace/service:8000.
2. Edit the nginx-controller Service to add a port (port: 8000 --> targetPort: 8000) for the TCP service from step 1.
3. Check /etc/nginx/nginx.conf in the nginx controller pod and confirm it contains a server block with the correct listen 8000; directive for the tcp/8000 service (a quick way to do this is sketched after these steps).
4. Edit the tcp-services ConfigMap again to add the proxy-protocol decode directive, so the key/value for the tcp/8000 service becomes 8000: namespace/service:8000:PROXY.
5. Check /etc/nginx/nginx.conf in the controller pod: there is no change compared to step 3, it still says listen 8000;.
6. Edit some Ingress rule (make any change, such as updating the host) to force the configuration to be regenerated.
7. Check /etc/nginx/nginx.conf in the controller pod again: the listen directive for the tcp/8000 service now becomes listen 8000 proxy_protocol;, which is correct.
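For the nginx.conf checks in steps 3, 5 and 7 you don't need an interactive shell in the pod; a sketch using the port from the question above (11717) and the labels from the manifests (adjust names to your own release):

POD=$(kubectl -n default get pods -l app=nginx-ingress-nginx-ingress -o jsonpath='{.items[0].metadata.name}')
# print the generated server block for the TCP service
kubectl -n default exec "$POD" -- grep -A3 'listen 11717' /etc/nginx/nginx.conf
# after adding the :PROXY suffix and touching an Ingress, this should show: listen 11717 proxy_protocol;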
I set up my Raspberry Pi cluster and installed MetalLB. I have the following WordPress services running, and I'm confused about why I can't get this working via a browser or wget.
pi#master:~ $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d3h
mysql ClusterIP None <none> 3306/TCP 41m
wordpress LoadBalancer 10.101.63.209 192.168.1.50 80:32499/TCP 4m35s
When I try to wget my website, the request keeps getting redirected out through port 30820.
What am I doing wrong here?
pi#master:~ $ wget 192.168.1.50
--2020-04-05 03:29:56-- http://192.168.1.50/
Connecting to 192.168.1.50:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://192.168.1.50:30820/ [following]
--2020-04-05 03:29:57-- http://192.168.1.50:30820/
Connecting to 192.168.1.50:30820... failed: No route to host.
Here is my deployment. Does this look OK?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  #namespace: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  replicas: 3
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        imagePullPolicy: IfNotPresent
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass # generated before in secret.yml
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: "/var/www/html" # which data will be stored
        resources:
          limits:
            cpu: '1'
            memory: '512Mi'
          requests:
            cpu: '500m'
            memory: '256Mi'
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-persistent-storage
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 300
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 300
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  #namespace: wordpress
  labels:
    app: wordpress
    tier: frontend
spec:
  selector:
    app: wordpress
  ports:
  - protocol: 'TCP'
    port: 80
    targetPort: 80
  #externalTrafficPolicy: Local
  type: LoadBalancer
It may be your WordPress settings that cause the redirect. Neither MetalLB nor a Kubernetes Service has redirect functionality, since both work at the network level.
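If that is the case here, the redirect target most likely comes from the siteurl and home options WordPress stores in its database (they get set to whatever URL the install was first completed on). A sketch of resetting them to the LoadBalancer IP; the deployment name mysql, database name wordpress and table prefix wp_ are assumptions, adjust them to your setup:

kubectl exec -it deploy/mysql -- mysql -u root -p wordpress \
  -e "UPDATE wp_options SET option_value='http://192.168.1.50/' WHERE option_name IN ('siteurl','home');"

After that, the 301 should point back at port 80 instead of the old NodePort.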
I'm deploying the ELK stack (OSS) to a Kubernetes cluster. The Elasticsearch deployment and service start correctly and the API is reachable. The Kibana deployment starts, but it can't access Elasticsearch:
From Kibana container logs:
{"type":"log","#timestamp":"2019-05-08T22:49:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
Both deployments are in the same namespace, "observability". I also tried referencing the Elasticsearch container as elasticsearch.observability.svc.cluster.local, but that doesn't work either.
What am I doing wrong? How do I reference the Elasticsearch container from the Kibana container?
More info:
kubectl --context=19team-observability-admin-context -n observability get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-9d495b84f-j2297 1/1 Running 0 15s
kibana-65bc7f9c4-s9cv4 1/1 Running 0 15s
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.104.250.175 <none> 9200:30083/TCP,9300:30059/TCP 1m
kibana NodePort 10.102.124.171 <none> 5601:30124/TCP 1m
I start my containers with the command:
kubectl --context=19team-observability-admin-context -n observability apply -f .\elasticsearch.yaml -f .\kibana.yaml
elasticsearch.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  type: NodePort
  ports:
  - name: "9200"
    port: 9200
    targetPort: 9200
  - name: "9300"
    port: 9300
    targetPort: 9300
  selector:
    app: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: set-vm-max-map-count
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "724Mi"
            cpu: "1"
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
        resources:
          requests:
            memory: "3Gi"
            cpu: "1"
          limits:
            memory: "3Gi"
            cpu: "1"
kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: observability
spec:
  type: NodePort
  ports:
  - name: "5601"
    port: 5601
    targetPort: 5601
  selector:
    app: observability_platform_kibana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: observability_platform_kibana
  name: kibana
  namespace: observability
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: observability_platform_kibana
    spec:
      containers:
      - env:
        # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: SERVER_NAME
          value: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.7.1
        name: kibana
        ports:
        - containerPort: 5601
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "724Mi"
            cpu: "1"
      restartPolicy: Always
UPDATE 1
As gonzalesraul proposed, I've created a second service for Elasticsearch with type ClusterIP:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch
The service is created:
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.106.5.94 <none> 9200:31598/TCP,9300:32018/TCP 26s
elasticsearch-local ClusterIP 10.101.178.13 <none> 9200/TCP 26s
kibana NodePort 10.99.73.118 <none> 5601:30004/TCP 26s
And I reference Elasticsearch as "http://elasticsearch-local:9200".
Unfortunately it does not work; in the Kibana container:
{"type":"log","#timestamp":"2019-05-09T10:13:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-local:9200/"}
Do not use a NodePort service; use a ClusterIP instead. If you also need to expose your service as a NodePort, create a second service alongside it, for instance:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch
Then update the kibana manifest to point to the ClusterIP service:
# ...
        # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch-local:9200
# ...
NodePort services do not create a 'DNS entry' (e.g. elasticsearch.observability.svc.cluster.local) on Kubernetes.
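Either way, you can check from inside the cluster whether a given service name actually resolves, e.g. with a throwaway pod (busybox image assumed, names as above):

kubectl -n observability run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup elasticsearch-local.observability.svc.cluster.local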
Edit the SERVER_NAME value in kibana.yaml and set it to kibana:5601.
I think if you don't do this, it tries to use port 80 by default.
This is what kibana.yaml looks like now:
...
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: SERVER_NAME
          value: kibana:5601
        image: docker.elastic.co/kibana/kibana-oss:6.7.1
        imagePullPolicy: IfNotPresent
        name: kibana
...
And this is the output now:
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:console#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:interpreter#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:metrics#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:tile_map#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:timelion#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-05-09T10:37:17Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
UPDATE
I just tested it on a bare-metal cluster (bootstrapped with kubeadm), and it worked again.
This is the output:
{"type":"log","#timestamp":"2019-05-09T11:09:59Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Finished in 2417ms."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
Note that it went from "No living connections" to a running server. I am running the nodes on GCP and had to open the firewalls for it to work. What's your environment?
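For reference, on GCE-style setups that usually means a firewall rule allowing traffic between the nodes and the pod network, roughly along these lines (the rule name, network and source range are placeholders for your own values):

gcloud compute firewall-rules create allow-cluster-internal \
  --network=default \
  --allow=tcp,udp,icmp \
  --source-ranges=10.0.0.0/8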
I'm migrating an application to Docker/Kubernetes. This application has 20+ well-known ports it needs to be accessible on, and it needs to be reachable from outside the Kubernetes cluster. For this, the application writes its publicly accessible IP to a database so that the outside service knows how to reach it. The IP is taken from the downward API (status.hostIP).
One solution is defining the well-known ports as (static) nodePorts in the Service, but I don't want this, because it limits the usability of the nodes: if another service has already happened to take one of the well-known ports, the application will not be able to start. Also, because Kubernetes opens these ports on all nodes in the cluster, I can only run one instance of the application per cluster.
Now I want to make the application aware of the port mappings done by the NodePort Service. How can this be done, given that I don't see a hard link between the Service and the StatefulSet object in Kubernetes?
Here is my (simplified) Kubernetes config:
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  ports:
  - port: 6000
    targetPort: 6000
    protocol: TCP
    name: debug-port
  - port: 6789
    targetPort: 6789
    protocol: TCP
    name: traffic-port-1
  selector:
    app: my-app
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app-sf
spec:
  serviceName: my-app-svc
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-repo/myapp/my-app:latest
        imagePullPolicy: Always
        env:
        - name: K8S_ServiceAccountName
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: K8S_ServerIP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: serverName
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: debug
          containerPort: 6000
        - name: traffic1
          containerPort: 6789
This can be done with an initContainer.
You can define an initContainer that looks up the nodePort and saves it into a directory shared with the main container; the container can then read the nodePort from that directory later. A simple demo:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: busybox
    command: ["sh", "-c", "cat /data/port; while true; do sleep 3600; done"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: tutum/curl
    command: ["sh", "-c", "TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; curl -kD - -H \"Authorization: Bearer $TOKEN\" https://kubernetes.default:443/api/v1/namespaces/test/services/app 2>/dev/null | grep nodePort | awk '{print $2}' > /data/port"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
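Note that the pod's service account must be allowed to read the Service object for that curl to return anything; a minimal RBAC sketch using the same assumptions as the demo (namespace test, default service account):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
  namespace: test
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-services
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: test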
I have a Kubernetes cluster v1.10 on CentOS 7, installed the hard way.
I have installed the Kong ingress controller using Helm:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/kong
and got this output:
NOTES:
1. Kong Admin can be accessed inside the cluster using:
DNS=guiding-wombat-kong-admin.default.svc.cluster.local
PORT=8444
To connect from outside the K8s cluster:
HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}')
PORT=$(kubectl get svc --namespace default guiding-wombat-kong-admin -o jsonpath='{.spec.ports[0].nodePort}')
2. Kong Proxy can be accessed inside the cluster using:
DNS=guiding-wombat-kong-proxy.default.svc.cluster.local
PORT=8443
To connect from outside the K8s cluster:
HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}')
PORT=$(kubectl get svc --namespace default guiding-wombat-kong-proxy -o jsonpath='{.spec.ports[0].nodePort}')
and I deployed a dummy app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: http-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: gcr.io/google_containers/echoserver:1.8
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc
---
and I deployed ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
and when I run:
kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
foo-bar foo.bar 80 1m
and when I browse
https://node-IP:controller-admin
{"next":null,"data":[]}
How can I troubleshoot this issue and find the solution?
Thank you :D
I recommend installing it by following this guide, just without the minikube-specific parts.
It works for me on AWS:
$ curl -H 'Host: foo.bar' http://35.162.32.30
Hostname: http-svc-66ffffc458-jkxsl
Pod Information:
node name: ip-x-x-x-x.us-west-2.compute.internal
pod name: http-svc-66ffffc458-jkxsl
pod namespace: default
pod IP: 192.168.x.x
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=192.168.x.x
method=GET
real path=/
query=
request_version=1.1
request_uri=http://192.168.x.x:8080/
Request Headers:
accept=*/*
connection=keep-alive
host=192.168.x.x:8080
user-agent=curl/7.58.0
x-forwarded-for=172.x.x.x
x-forwarded-host=foo.bar
x-forwarded-port=8000
x-forwarded-proto=http
x-real-ip=172.x.x.x
Request Body:
-no body in request-
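If there is no cloud load balancer in front of the cluster (as in the bare-metal setup from the question), send the same request to a node IP on the proxy service's NodePort instead; the admin port you browsed only serves the admin API, while Ingress traffic goes through the proxy. A sketch using the release name and the same jsonpath as the helm output above (adjust to your release, and note this assumes the first port is the plain-HTTP proxy port):

PROXY_PORT=$(kubectl get svc --namespace default guiding-wombat-kong-proxy -o jsonpath='{.spec.ports[0].nodePort}')
curl -i -H 'Host: foo.bar' http://<node-IP>:$PROXY_PORT/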