nginx-ingress tcp services - connection refused - kubernetes

I'm currently deploying a new Kubernetes cluster, and I want to expose a MongoDB service outside the cluster using nginx-ingress.
I know that nginx-ingress is usually used for layer-7 applications, but according to the official documentation it is also capable of working on layer 4 (TCP/UDP):
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
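Per that page, the controller reads TCP mappings from a ConfigMap passed via the --tcp-services-configmap flag, where each key is an external port and each value is <namespace>/<service name>:<service port>. The example there looks roughly like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"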
My MongoDB service is a ClusterIP service, accessible on port 11717 (in the internal namespace):
kubectl get svc -n internal
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
mongodb   ClusterIP   10.97.63.154   <none>        11717/TCP   3d20h
telnet 10.97.63.154 11717
Trying 10.97.63.154...
Connected to 10.97.63.154.
I have tried every combination I could think of to achieve this, with no success.
I'm using the nginx-ingress helm chart (daemonset type).
My nginx-ingress/templates/controller-daemonset.yaml file:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-nginx-ingress
    helm.sh/chart: nginx-ingress-0.13.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress-nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress-nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
        prometheus.io/scheme: "http"
    spec:
      serviceAccountName: nginx-ingress-nginx-ingress
      terminationGracePeriodSeconds: 30
      hostNetwork: false
      containers:
      - name: nginx-ingress-nginx-ingress
        image: "nginx/nginx-ingress:2.2.0"
        imagePullPolicy: "IfNotPresent"
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: mongodb
          containerPort: 11717
          hostPort: 11717
        - name: prometheus
          containerPort: 9113
        - name: readiness-port
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /nginx-ready
            port: readiness-port
          periodSeconds: 1
        securityContext:
          allowPrivilegeEscalation: true
          runAsUser: 101 # nginx
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        resources: {}
        args:
        - /nginx-ingress-controller
        - -nginx-plus=false
        - -nginx-reload-timeout=60000
        - -enable-app-protect=false
        - -tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - -publish-service=$(POD_NAMESPACE)/ingress-nginx
        - -annotations-prefix=nginx.ingress.kubernetes.io
        - -enable-app-protect-dos=false
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-ingress-nginx-ingress
        - -default-server-tls-secret=$(POD_NAMESPACE)/nginx-ingress-nginx-ingress-default-server-tls
        - -ingress-class=nginx
        - -health-status=false
        - -health-status-uri=/nginx-health
        - -nginx-debug=false
        - -v=1
        - -nginx-status=true
        - -nginx-status-port=8080
        - -nginx-status-allow-cidrs=127.0.0.1
        - -report-ingress-status
        - -external-service=nginx-ingress-nginx-ingress
        - -enable-leader-election=true
        - -leader-election-lock-name=nginx-ingress-nginx-ingress-leader-election
        - -enable-prometheus-metrics=true
        - -prometheus-metrics-listen-port=9113
        - -prometheus-tls-secret=
        - -enable-custom-resources=true
        - -enable-snippets=false
        - -enable-tls-passthrough=false
        - -enable-preview-policies=false
        - -enable-cert-manager=false
        - -enable-oidc=false
        - -ready-status=true
        - -ready-status-port=8081
        - -enable-latency-metrics=false
My nginx-ingress/templates/controller-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-nginx-ingress
    helm.sh/chart: nginx-ingress-0.13.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  - name: mongodb
    port: 11717
    targetPort: 11717
    protocol: TCP
  selector:
    app: nginx-ingress-nginx-ingress
My nginx-ingress/templates/tcp-services.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-nginx-ingress-d5vms 1/1 Running 0 61m
nginx-ingress-nginx-ingress-kcs4p 1/1 Running 0 61m
nginx-ingress-nginx-ingress-mnnn2 1/1 Running 0 61m
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d1h <none>
nginx-ingress-nginx-ingress LoadBalancer 10.99.176.220 <pending> 80:31700/TCP,443:31339/TCP,11717:31048/TCP 61m app=nginx-ingress-nginx-ingress
telnet 10.99.176.220 80
Trying 10.99.176.220...
Connected to 10.99.176.220.
Escape character is '^]'.
telnet 10.99.176.220 11717
Trying 10.99.176.220...
telnet: Unable to connect to remote host: Connection refused
I can't understand why the connection is getting refused on port 11717.
How can I achieve this scenario:
mongo.myExternalDomain:11717 --> nginx-ingress service --> nginx-ingress pod --> mongodb service --> mongodb pod
Thanks in advance!
I would appreciate any kind of help!

I had a similar issue; maybe this will help you. In my case the problem was in the tcp-services ConfigMap.
In short, instead of this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717
please change to:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717:PROXY
Details:
1. Edit the tcp-services ConfigMap to add a TCP service entry: 8000: namespace/service:8000.
2. Edit the nginx-controller Service to add a port (port: 8000 --> targetPort: 8000) for the TCP service from step 1.
3. Check /etc/nginx/nginx.conf in the nginx controller pod and confirm it contains a server block with the correct listen 8000; directive for the tcp/8000 service.
4. Edit the tcp-services ConfigMap again to add the proxy-protocol decode directive; the key/value for the tcp/8000 service becomes 8000: namespace/service:8000:PROXY.
5. Check /etc/nginx/nginx.conf in the nginx controller pod: there is no change compared to step 3, it is still listen 8000;.
6. Edit some ingress rule (make some change, like updating the host).
7. Check /etc/nginx/nginx.conf in the nginx controller pod again: the listen directive for the tcp/8000 service now becomes listen 8000 proxy_protocol;, which is correct.
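Putting steps 1 and 2 together, a minimal sketch of the two objects involved (the namespace, service name, and port 8000 are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  # external port 8000 -> port 8000 of "my-svc" in namespace "my-ns",
  # with PROXY-protocol decoding enabled on the nginx listener
  "8000": my-ns/my-svc:8000:PROXY
---
# The controller Service must expose the same port, otherwise the
# load balancer never forwards tcp/8000 to nginx in the first place.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - name: tcp-8000
    port: 8000
    targetPort: 8000
    protocol: TCP
Keep in mind that with the :PROXY suffix nginx expects whatever sits in front of it to actually send PROXY-protocol headers; if nothing does, connections will be rejected.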

Related

Service Endpoint not created although container port is online

I have a simple Service that connects to a port from a container inside a pod.
All pretty straight forward.
This was working too, but out of nowhere the endpoint for port 18080 stopped being created.
So I began to investigate and looked at this question, but nothing there helped.
The container is up, no errors/events, all green.
I can also make the request against the pod's IP:18080 from another container inside the cluster, so the endpoint should be reachable for the service.
I can't see errors in:
journalctl -u snap.microk8s.daemon-*
I am using microk8s v1.20.
Where else can I debug this situation?
I am out of tools.
Service:
kind: Service
apiVersion: v1
metadata:
  name: aedi-service
spec:
  selector:
    app: server
  ports:
  - name: aedi-host-ws #-port
    port: 51056
    protocol: TCP
    targetPort: host-ws-port
  - name: aedi-http
    port: 18080
    protocol: TCP
    targetPort: fcs-http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  labels:
    app: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
        srv: os-port-mapping
        name: dns-service
    spec:
      hostname: fcs
      containers:
      - name: fcs
        image: '{{$fcsImage}}'
        imagePullPolicy: {{$pullPolicy}}
        ports:
        - containerPort: 18080
Service Description:
Name: aedi-service
Namespace: fcs-only
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: fcs-only
meta.helm.sh/release-namespace: fcs-only
Selector: app=server
Type: ClusterIP
IP Families: <none>
IP: 10.152.183.247
IPs: 10.152.183.247
Port: aedi-host-ws 51056/TCP
TargetPort: host-ws-port/TCP
Endpoints: 10.1.116.70:51056
Port: aedi-http 18080/TCP
TargetPort: fcs-http/TCP
Endpoints:
Session Affinity: None
Events: <none>
Pod Info:
NAME READY STATUS RESTARTS AGE LABELS
server-deployment-76b5789754-q48xl 6/6 Running 0 23m app=server,name=dns-service,pod-template-hash=76b5789754,srv=os-port-mapping
kubectl get svc aedi-service -o wide:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
aedi-service ClusterIP 10.152.183.247 <none> 443/TCP,1884/TCP,51052/TCP,51051/TCP,51053/TCP,51056/TCP,18080/TCP,51055/TCP 34m app=server
Your Service spec refers to a port named "fcs-http", but that name is not declared in the Deployment. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
...
      ports:
      - containerPort: 18080
        name: fcs-http # <-- add the name here
...
Wrong service configuration:
- name: aedi-http
  port: 18080          # this exposes the Service; it is not related to the container port
  protocol: TCP
  targetPort: fcs-http # this should be 18080, corresponding to the container port
If you still want to use a name instead of a port number, you have to define that name in the Deployment YAML as well, like below:
containers:
- name: fcs
  image: '{{$fcsImage}}'
  imagePullPolicy: {{$pullPolicy}}
  ports:
  - containerPort: 18080
    name: fcs-http
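With either fix, you can confirm it took effect by checking that the endpoint for port 18080 is populated again; roughly (output shape is indicative, reusing the pod IP and fcs-only namespace from the describe output above):
kubectl describe svc aedi-service -n fcs-only
...
Port:        aedi-http  18080/TCP
TargetPort:  fcs-http/TCP
Endpoints:   10.1.116.70:18080
...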

Ingress fanout gets Error: Server Error The server encountered a temporary error and could not complete your request. Please try again in 30 seconds

I have two services and want to create an ingress fanout for them. One service runs properly; the other one says:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
I opened all firewall settings. Here are some details:
bahaddin@b k get ingress -n ingress-nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
fanout-ingress <none> * 35.201.67.49 80 2m57s
bahaddin@bahaddin-ThinkPad-E15-Gen-2:~/projects/personal/exposer/k8s-ingress$ k get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.12.1.185 34.134.200.199 80:30803/TCP,443:32306/TCP 10m
ingress-nginx-controller-admission ClusterIP 10.12.0.83 <none> 443/TCP 10m
web NodePort 10.12.12.129 <none> 8080:30702/TCP 7m43s
web2 NodePort 10.12.5.55 <none> 8080:30160/TCP 7m42s
Here you can see that the external IP of ingress-nginx-controller (34.134.200.199) is different from the fanout-ingress address. When fanout-ingress was first created, it had the same IP as ingress-nginx-controller (34.134.200.199); after some seconds it acquired the new IP (35.201.67.49). I cannot understand why, and I would be happy to get an answer for this as well.
The main question: when I curl http://35.201.67.49/v2/ I get the result properly. But when I curl http://35.201.67.49/web1/hello - another service, defined in the service, ingress and deployment files below - it is not reachable. When I curl 35.201.67.49/web1/hello I get an error:
Error: Server Error The server encountered a temporary error and could
not complete your request.
Please try again in 30 seconds.
fanout-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
  namespace: ingress-nginx
spec:
  rules:
  - http:
      paths:
      - path: /web1/*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/*
        backend:
          serviceName: web2
          servicePort: 8080
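(For reference: networking.k8s.io/v1beta1 Ingress is deprecated and was removed in Kubernetes 1.22; the same fanout in networking.k8s.io/v1 would look roughly like this, with pathType: Prefix replacing the trailing /*.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
  namespace: ingress-nginx
spec:
  rules:
  - http:
      paths:
      - path: /web1
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080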
deployment and services
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: bago1/web1:latest
        imagePullPolicy: IfNotPresent
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web2
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      run: web2
  template:
    metadata:
      labels:
        run: web2
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:2.0
        imagePullPolicy: IfNotPresent
        name: web2
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: ingress-nginx
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: web2
  namespace: ingress-nginx
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web2
  type: NodePort

Accessing an external InfluxDb Database from a microk8s pod using selectorless service and manual endpoint?

Gist: I am struggling to get a pod to connect to a service outside the cluster.
Basically, the pod manages to resolve the ClusterIP of the selectorless service, but traffic does not go through. Traffic does go through if I hit the ClusterIP of the selectorless service from the cluster host.
I'm fairly new to microk8s and k8s in general, but I hope I am making some sense...
Background:
I am attempting to move parts of my infrastructure from a docker-compose setup on one virtual machine to a microk8s cluster (with 2 nodes).
In the docker-compose setup, I have a Grafana container connecting to an InfluxDB container.
kubectl version:
Client Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.2-3+9ad9ee77396805", GitCommit:"9ad9ee77396805781cd0ae076d638b9da93477fd", GitTreeState:"clean", BuildDate:"2021-09-30T09:52:57Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
I now want to set up a Grafana container on the microk8s cluster and have it connect to the InfluxDB that is still running on the docker-compose VM.
All of these VM's are running on an ESXi host.
InfluxDb is exposed at 10.1.2.220:8086
microk8s-master has ip 10.1.2.50
microk8s-slave-1 has ip 10.1.2.51
I have enabled ingress and dns. I have also enabled metallb, though I don't intend to use it here.
I have configured a selectorless service, a remote endpoint, and an egress NetworkPolicy (currently allowing all).
From microk8s-master and slave-1, I can:
- telnet directly to 10.1.2.220:8086 successfully
- telnet to the ClusterIP (10.152.183.26):8086 of the service, successfully reaching InfluxDB
- wget ClusterIP:8086
Inside the pod, a wget to influxdb-service:8086 resolves to the ClusterIP, but the connection then times out.
I can, however, reach (wget) services pointing to other pods in the same namespace.
Update:
I have been able to get it to work through a workaround, but I don't think this is the correct way.
My temporary solution is to expose the selectorless service on metallb, then use that exposed IP inside the pod.
Service and Endpoints for InfluxDb
---
apiVersion: v1
kind: Service
metadata:
  name: influxdb-service
  labels:
    app: grafana
spec:
  ports:
  - protocol: TCP
    port: 8086
    targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
  name: influxdb-service
subsets:
- addresses:
  - ip: 10.1.2.220
  ports:
  - port: 8086
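A quick way to reproduce the failure from inside the cluster is a throwaway busybox pod (this assumes InfluxDB's /ping health endpoint; a hang here shows the same timeout the Grafana pod sees, while a prompt empty response means the path works):
microk8s.kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- -T 5 http://influxdb-service:8086/ping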
The service and endpoints show up fine:
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get endpoints
NAME ENDPOINTS AGE
neo4j-service-lb 10.1.166.176:7687,10.1.166.176:7474 25h
influxdb-service 10.1.2.220:8086 127m
questrest-service 10.1.166.178:80 5d
kubernetes 10.1.2.50:16443,10.1.2.51:16443 26d
grafana-service 10.1.237.120:3000 3h11m
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 26d
questrest-service ClusterIP 10.152.183.56 <none> 80/TCP 5d
neo4j-service-lb LoadBalancer 10.152.183.166 10.1.2.60 7474:31974/TCP,7687:32688/TCP 25h
grafana-service ClusterIP 10.152.183.75 <none> 3000/TCP 3h13m
influxdb-service ClusterIP 10.152.183.26 <none> 8086/TCP 129m
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get networkpolicy
NAME POD-SELECTOR AGE
grafana-allow-egress-influxdb app=grafana 129m
test-egress-influxdb app=questrest 128m
Describe:
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe svc influxdb-service
Name: influxdb-service
Namespace: default
Labels: app=grafana
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.26
IPs: 10.152.183.26
Port: <unset> 8086/TCP
TargetPort: 8086/TCP
Endpoints: 10.1.2.220:8086
Session Affinity: None
Events: <none>
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe endpoints influxdb-service
Name: influxdb-service
Namespace: default
Labels: <none>
Annotations: <none>
Subsets:
Addresses: 10.1.2.220
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 8086 TCP
Events: <none>
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe networkpolicy grafana-allow-egress-influxdb
Name: grafana-allow-egress-influxdb
Namespace: default
Created on: 2021-11-03 20:53:00 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=grafana
Not affecting ingress traffic
Allowing egress traffic:
To Port: <any> (traffic allowed to all ports)
To: <any> (traffic not restricted by destination)
Policy Types: Egress
Grafana.yml:
eso@microk8s-master:~/k8s-grafana$ cat grafana.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  claimRef:
    name: grafana-pvc
    namespace: default
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /mnt/MainVol/grafana
    server: 10.2.0.1
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: grafana-pv
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
        - 0
      containers:
      - name: grafana
        image: grafana/grafana:7.5.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          name: http-grafana
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /robots.txt
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 3000
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 250m
            memory: 750Mi
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-pv
      volumes:
      - name: grafana-pv
        persistentVolumeClaim:
          claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: http-grafana
  selector:
    app: grafana
  #sessionAffinity: None
  #type: LoadBalancer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "g2.some.domain.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana-service
            port:
              number: 3000
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: grafana-allow-egress-influxdb
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: grafana
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Egress
As I haven't gotten much response, I'll answer the question with my workaround; I am still not sure this is the best way to do it, though.
I got it to work by exposing the selectorless service on metallb, then using that exposed IP inside Grafana:
kind: Service
apiVersion: v1
metadata:
  name: influxdb-service-lb
  #namespace: ingress
spec:
  type: LoadBalancer
  loadBalancerIP: 10.1.2.61
  # selector:
  #   app: grafana
  ports:
  - name: http
    protocol: TCP
    port: 8086
    targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
  name: influxdb-service-lb
subsets:
- addresses:
  - ip: 10.1.2.220
  ports:
  - name: influx
    protocol: TCP
    port: 8086
I then use the load-balancer IP (10.1.2.61) in Grafana.
Update October 2022
In response to a comment above, I have added a diagram of how I believe this works.

Two kubernetes deployments in the same namespace are not able to communicate

I'm deploying the ELK stack (OSS) to a Kubernetes cluster. The Elasticsearch deployment and service start correctly and the API is reachable. The Kibana deployment starts but can't access Elasticsearch:
From Kibana container logs:
{"type":"log","#timestamp":"2019-05-08T22:49:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
Both deployments are in the same namespace, "observability". I also tried to reference the Elasticsearch container as elasticsearch.observability.svc.cluster.local, but that does not work either.
What am I doing wrong? How do I reference the Elasticsearch container from the Kibana container?
More info:
kubectl --context=19team-observability-admin-context -n observability get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-9d495b84f-j2297 1/1 Running 0 15s
kibana-65bc7f9c4-s9cv4 1/1 Running 0 15s
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.104.250.175 <none> 9200:30083/TCP,9300:30059/TCP 1m
kibana NodePort 10.102.124.171 <none> 5601:30124/TCP 1m
I start my containers with command
kubectl --context=19team-observability-admin-context -n observability apply -f .\elasticsearch.yaml -f .\kibana.yaml
elasticsearch.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  type: NodePort
  ports:
  - name: "9200"
    port: 9200
    targetPort: 9200
  - name: "9300"
    port: 9300
    targetPort: 9300
  selector:
    app: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: set-vm-max-map-count
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sysctl', '-w', 'vm.max_map_count=262144']
        securityContext:
          privileged: true
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "724Mi"
            cpu: "1"
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
        resources:
          requests:
            memory: "3Gi"
            cpu: "1"
          limits:
            memory: "3Gi"
            cpu: "1"
kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: observability
spec:
  type: NodePort
  ports:
  - name: "5601"
    port: 5601
    targetPort: 5601
  selector:
    app: observability_platform_kibana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: observability_platform_kibana
  name: kibana
  namespace: observability
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: observability_platform_kibana
    spec:
      containers:
      - env:
        # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: SERVER_NAME
          value: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.7.1
        name: kibana
        ports:
        - containerPort: 5601
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "724Mi"
            cpu: "1"
      restartPolicy: Always
UPDATE 1
As gonzalesraul proposed, I've created a second service for Elasticsearch with type ClusterIP:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch
The service is created:
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.106.5.94 <none> 9200:31598/TCP,9300:32018/TCP 26s
elasticsearch-local ClusterIP 10.101.178.13 <none> 9200/TCP 26s
kibana NodePort 10.99.73.118 <none> 5601:30004/TCP 26s
And I reference Elasticsearch as "http://elasticsearch-local:9200".
Unfortunately it does not work; in the Kibana container:
{"type":"log","@timestamp":"2019-05-09T10:13:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-local:9200/"}
Do not use a NodePort service; instead, use a ClusterIP. If you still need to expose your service as a NodePort, create a second service alongside it, for instance:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    app: elasticsearch
Then update the kibana manifest to point to the ClusterIP service:
# ...
# THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
- name: ELASTICSEARCH_HOSTS
  value: http://elasticsearch-local:9200
# ...
NodePort services do not create a 'DNS entry' (e.g. elasticsearch.observability.svc.cluster.local) on Kubernetes.
Edit the SERVER_NAME value in kibana.yaml and set it to kibana:5601.
I think that if you don't do this, it tries to go to port 80 by default.
This is what kibana.yaml looks like now:
...
spec:
  containers:
  - env:
    - name: ELASTICSEARCH_HOSTS
      value: http://elasticsearch:9200
    - name: SERVER_NAME
      value: kibana:5601
    image: docker.elastic.co/kibana/kibana-oss:6.7.1
    imagePullPolicy: IfNotPresent
    name: kibana
...
And this is the output now:
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:console#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:interpreter#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:metrics#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:tile_map#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:timelion#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-05-09T10:37:17Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
UPDATE
I just tested it on a bare-metal cluster (bootstrapped through kubeadm), and it worked again.
This is the output:
{"type":"log","#timestamp":"2019-05-09T11:09:59Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Finished in 2417ms."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
Note that it went from "No living connections" to running. I am running the nodes on GCP, and I had to open the firewall for it to work. What's your environment?

Azure Kubernetes nginx-Ingress: preserve client IP

I am trying to preserve the client IP using the PROXY protocol. Unfortunately it does not work:
Azure LB => nginx Ingress => Service
I end up seeing the ingress controller pod's IP instead of the client's.
Ingress Controller Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.5
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=default/nginx-ingress-controller
Ingress Controller Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    k8s-app: nginx-ingress-lb
nginx config map:
apiVersion: v1
metadata:
name: nginx-ingress-controller
data:
use-proxy-protocol: "true"
kind: ConfigMap
Got it to work.
In the Ingress Controller Deployment I changed the image to
gcr.io/google_containers/nginx-ingress-controller:0.8.3
and removed the ConfigMap. (Presumably the problem was that use-proxy-protocol: "true" makes nginx expect PROXY-protocol headers on every connection, and the Azure load balancer was not sending them.)
I am using ingress to forward to a pod running a .NET Core API. Adding
var options = new ForwardedHeadersOptions()
{
    ForwardedHeaders = Microsoft.AspNetCore.HttpOverrides.ForwardedHeaders.All,
    RequireHeaderSymmetry = false,
    ForwardLimit = null
};
// add known proxy network(s) here
options.KnownNetworks.Add(network);
app.UseForwardedHeaders(options);
to Startup did the trick.
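One note on ordering: app.UseForwardedHeaders(options) needs to run early in Startup.Configure, before any middleware that reads the client address (authentication, logging, MVC); otherwise downstream components will still see the proxy's IP rather than the original client's.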