Two Kubernetes deployments in the same namespace are not able to communicate

I'm deploying the ELK stack (OSS) to a Kubernetes cluster. The Elasticsearch deployment and service start correctly and the API is reachable. The Kibana deployment starts but can't access Elasticsearch:
From the Kibana container logs:
{"type":"log","#timestamp":"2019-05-08T22:49:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-08T22:50:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
Both deployments are in the same namespace "observability". I also tried referencing the Elasticsearch container as elasticsearch.observability.svc.cluster.local, but that doesn't work either.
What am I doing wrong? How do I reference the Elasticsearch container from the Kibana container?
More info:
kubectl --context=19team-observability-admin-context -n observability get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-9d495b84f-j2297 1/1 Running 0 15s
kibana-65bc7f9c4-s9cv4 1/1 Running 0 15s
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.104.250.175 <none> 9200:30083/TCP,9300:30059/TCP 1m
kibana NodePort 10.102.124.171 <none> 5601:30124/TCP 1m
I start my containers with the command:
kubectl --context=19team-observability-admin-context -n observability apply -f .\elasticsearch.yaml -f .\kibana.yaml
elasticsearch.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: observability
spec:
  type: NodePort
  ports:
    - name: "9200"
      port: 9200
      targetPort: 9200
    - name: "9300"
      port: 9300
      targetPort: 9300
  selector:
    app: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: set-vm-max-map-count
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ['sysctl', '-w', 'vm.max_map_count=262144']
          securityContext:
            privileged: true
          resources:
            requests:
              memory: "512Mi"
              cpu: "1"
            limits:
              memory: "724Mi"
              cpu: "1"
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
          ports:
            - containerPort: 9200
            - containerPort: 9300
          resources:
            requests:
              memory: "3Gi"
              cpu: "1"
            limits:
              memory: "3Gi"
              cpu: "1"
kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: observability
spec:
  type: NodePort
  ports:
    - name: "5601"
      port: 5601
      targetPort: 5601
  selector:
    app: observability_platform_kibana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: observability_platform_kibana
  name: kibana
  namespace: observability
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: observability_platform_kibana
    spec:
      containers:
        - env:
            # THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
            - name: SERVER_NAME
              value: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.7.1
          name: kibana
          ports:
            - containerPort: 5601
          resources:
            requests:
              memory: "512Mi"
              cpu: "1"
            limits:
              memory: "724Mi"
              cpu: "1"
      restartPolicy: Always
UPDATE 1
As gonzalesraul proposed, I've created a second service for Elasticsearch with type ClusterIP:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
    - port: 9200
      protocol: TCP
      targetPort: 9200
  selector:
    app: elasticsearch
The service is created:
kubectl --context=19team-observability-admin-context -n observability get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.106.5.94 <none> 9200:31598/TCP,9300:32018/TCP 26s
elasticsearch-local ClusterIP 10.101.178.13 <none> 9200/TCP 26s
kibana NodePort 10.99.73.118 <none> 5601:30004/TCP 26s
And referenced Elasticsearch as "http://elasticsearch-local:9200".
Unfortunately it does not work; in the Kibana container:
{"type":"log","#timestamp":"2019-05-09T10:13:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch-local:9200/"}

Do not use a NodePort service; use a ClusterIP instead. If you need to expose your service as a NodePort, create a second service alongside it, for instance:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-local
  namespace: observability
spec:
  type: ClusterIP
  ports:
    - port: 9200
      protocol: TCP
      targetPort: 9200
  selector:
    app: elasticsearch
Then update the kibana manifest to point to the ClusterIP service:
# ...
# THIS IS WHERE WE SET CONNECTION BETWEEN KIBANA AND ELASTIC
- name: ELASTICSEARCH_HOSTS
  value: http://elasticsearch-local:9200
# ...
NodePort services do not create a 'DNS entry' (e.g. elasticsearch.observability.svc.cluster.local) on Kubernetes.
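To check what a service name actually resolves to from inside the cluster, a throwaway pod is handy. A minimal sketch, assuming the busybox image can be pulled in your environment:
kubectl --context=19team-observability-admin-context -n observability \
    run dns-test --rm -it --image=busybox --restart=Never -- \
    nslookup elasticsearch-local.observability.svc.cluster.local
If the name resolves here but not from Kibana, the problem is the pod's DNS configuration rather than the service itself.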

Edit the SERVER_NAME value in kibana.yaml and set it to kibana:5601.
I think if you don't do this, by default it tries to go to port 80.
This is what kibana.yaml looks like now:
...
    spec:
      containers:
        - env:
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
            - name: SERVER_NAME
              value: kibana:5601
          image: docker.elastic.co/kibana/kibana-oss:6.7.1
          imagePullPolicy: IfNotPresent
          name: kibana
...
And this is the output now:
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:console#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:interpreter#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:metrics#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:tile_map#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:timelion#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","#timestamp":"2019-05-09T10:37:16Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","#timestamp":"2019-05-09T10:37:17Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
UPDATE
I just tested it on a bare-metal cluster (bootstrapped through kubeadm), and it worked again.
This is the output:
{"type":"log","#timestamp":"2019-05-09T11:09:59Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2019-05-09T11:10:01Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["status","plugin:elasticsearch#6.7.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch."}
{"type":"log","#timestamp":"2019-05-09T11:10:04Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Pointing alias .kibana to .kibana_1."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["info","migrations"],"pid":1,"message":"Finished in 2417ms."}
{"type":"log","#timestamp":"2019-05-09T11:10:06Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}
Note that it passed from "No Living Connections" to "Running". I am running the nodes on GCP, and I had to open the firewalls for it to work. What's your environment?
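For GCP, the rule I opened was roughly of this shape; a sketch, where the rule name, network, and source range are placeholders to adapt to your VPC:
gcloud compute firewall-rules create allow-elasticsearch-internal \
    --network=default \
    --allow=tcp:9200,tcp:9300 \
    --source-ranges=10.0.0.0/8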

Related

nginx-ingress tcp services - connection refused

I'm currently deploying a new Kubernetes cluster and I want to expose a MongoDB service outside the cluster using nginx-ingress.
I know that nginx-ingress is usually for layer 7 applications, but it is also capable of working at layer 4 (TCP/UDP) according to the official documentation:
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
My MongoDB service is a ClusterIP service which is accessible on port 11717 (in the internal namespace):
kubectl get svc -n internal
mongodb ClusterIP 10.97.63.154 <none> 11717/TCP 3d20h
telnet 10.97.63.154 11717
Trying 10.97.63.154...
Connected to 10.97.63.154.
I have literally tried every possible combination to achieve this goal, but with no success.
I'm using the nginx-ingress Helm chart (DaemonSet type).
My nginx-ingress/templates/controller-daemonset.yaml file:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-nginx-ingress
    helm.sh/chart: nginx-ingress-0.13.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress-nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress-nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
        prometheus.io/scheme: "http"
    spec:
      serviceAccountName: nginx-ingress-nginx-ingress
      terminationGracePeriodSeconds: 30
      hostNetwork: false
      containers:
        - name: nginx-ingress-nginx-ingress
          image: "nginx/nginx-ingress:2.2.0"
          imagePullPolicy: "IfNotPresent"
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443
            - name: mongodb
              containerPort: 11717
              hostPort: 11717
            - name: prometheus
              containerPort: 9113
            - name: readiness-port
              containerPort: 8081
          readinessProbe:
            httpGet:
              path: /nginx-ready
              port: readiness-port
            periodSeconds: 1
          securityContext:
            allowPrivilegeEscalation: true
            runAsUser: 101 # nginx
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          resources: {}
          args:
            - /nginx-ingress-controller
            - -nginx-plus=false
            - -nginx-reload-timeout=60000
            - -enable-app-protect=false
            - -tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - -publish-service=$(POD_NAMESPACE)/ingress-nginx
            - -annotations-prefix=nginx.ingress.kubernetes.io
            - -enable-app-protect-dos=false
            - -nginx-configmaps=$(POD_NAMESPACE)/nginx-ingress-nginx-ingress
            - -default-server-tls-secret=$(POD_NAMESPACE)/nginx-ingress-nginx-ingress-default-server-tls
            - -ingress-class=nginx
            - -health-status=false
            - -health-status-uri=/nginx-health
            - -nginx-debug=false
            - -v=1
            - -nginx-status=true
            - -nginx-status-port=8080
            - -nginx-status-allow-cidrs=127.0.0.1
            - -report-ingress-status
            - -external-service=nginx-ingress-nginx-ingress
            - -enable-leader-election=true
            - -leader-election-lock-name=nginx-ingress-nginx-ingress-leader-election
            - -enable-prometheus-metrics=true
            - -prometheus-metrics-listen-port=9113
            - -prometheus-tls-secret=
            - -enable-custom-resources=true
            - -enable-snippets=false
            - -enable-tls-passthrough=false
            - -enable-preview-policies=false
            - -enable-cert-manager=false
            - -enable-oidc=false
            - -ready-status=true
            - -ready-status-port=8081
            - -enable-latency-metrics=false
My nginx-ingress/templates/controller-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-nginx-ingress
    helm.sh/chart: nginx-ingress-0.13.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
    - name: mongodb
      port: 11717
      targetPort: 11717
      protocol: TCP
  selector:
    app: nginx-ingress-nginx-ingress
My nginx-ingress/templates/tcp-services.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-nginx-ingress-d5vms 1/1 Running 0 61m
nginx-ingress-nginx-ingress-kcs4p 1/1 Running 0 61m
nginx-ingress-nginx-ingress-mnnn2 1/1 Running 0 61m
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d1h <none>
nginx-ingress-nginx-ingress LoadBalancer 10.99.176.220 <pending> 80:31700/TCP,443:31339/TCP,11717:31048/TCP 61m app=nginx-ingress-nginx-ingress
telnet 10.99.176.220 80
Trying 10.99.176.220...
Connected to 10.99.176.220.
Escape character is '^]'.
telnet 10.99.176.220 11717
Trying 10.99.176.220...
telnet: Unable to connect to remote host: Connection refused
I can't understand why the connection is getting refused on port 11717.
How can I achieve this scenario:
mongo.myExternalDomain:11717 --> nginx-ingress service --> nginx-ingress pod --> mongodb service --> mongodb pod
Thanks in advance!
I would appreciate any kind of help!
I had a similar issue; maybe this will help you. In my case the problem was in the tcp-services ConfigMap.
In short, instead of this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717
please change to:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  "11717": internal/mongodb:11717:PROXY
Details:
1. Edit the 'tcp-services' ConfigMap to add a TCP service 8000: namespace/service:8000.
2. Edit the nginx-controller service to add a port (port: 8000 --> targetPort: 8000) for the TCP service from step 1.
3. Check /etc/nginx/nginx.conf in the nginx controller pod and confirm it contains a 'server' block with the correct listen 8000; directive for the tcp/8000 service.
4. Edit the 'tcp-services' ConfigMap again to add the proxy-protocol decode directive, so the key/value for the tcp/8000 service becomes 8000: namespace/service:8000:PROXY.
5. Check /etc/nginx/nginx.conf in the nginx controller pod: there is no change compared to step 3, it is still listen 8000;.
6. Edit some Ingress rule (make some change, like updating the host) so the configuration is regenerated.
7. Check /etc/nginx/nginx.conf in the nginx controller pod again: the listen directive for the tcp/8000 service now becomes listen 8000 proxy_protocol;, which is correct (a sketch of the resulting block follows).
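For orientation, the resulting stream block in nginx.conf looks roughly like this; a sketch only, since the upstream name is generated and differs per cluster:
stream {
    server {
        # the :PROXY suffix in the configmap adds proxy_protocol to the listen directive
        listen 8000 proxy_protocol;
        proxy_pass <generated-upstream-for-namespace/service:8000>;
    }
}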

Accessing an external InfluxDb Database from a microk8s pod using selectorless service and manual endpoint?

Gist: I am struggling to get a pod to connect to a service outside the cluster.
Basically, the pod manages to resolve the ClusterIP of the selectorless service, but traffic does not go through. Traffic does go through if I hit the ClusterIP of the selectorless service from the cluster host.
I'm fairly new to microk8s and k8s in general. I hope I am making some sense though...
Background:
I am attempting to move parts of my infrastructure from a docker-compose setup on one virtual machine to a microk8s cluster (with 2 nodes).
In the docker-compose setup, I have a Grafana container connecting to an InfluxDB container.
kubectl version:
Client Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.2-3+9ad9ee77396805", GitCommit:"9ad9ee77396805781cd0ae076d638b9da93477fd", GitTreeState:"clean", BuildDate:"2021-09-30T09:52:57Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
I now want to set up a Grafana container on the microk8s cluster and have it connect to the InfluxDB that is still running on the docker-compose VM.
All of these VMs are running on an ESXi host.
InfluxDB is exposed at 10.1.2.220:8086
microk8s-master has IP 10.1.2.50
microk8s-slave-1 has IP 10.1.2.51
I have enabled ingress and dns. I have also enabled metallb, though I don't intend to use it here.
I have configured a selectorless service, a remote endpoint, and an egress NetworkPolicy (currently allowing all).
From microk8s-master and slave-1, I can:
telnet directly to 10.1.2.220:8086 successfully
telnet to the ClusterIP (10.152.183.26):8086 of the service, successfully reaching InfluxDB
wget ClusterIP:8086
Inside the pod, if I do a wget to influxdb-service:8086, it resolves to the ClusterIP, but after that it times out.
I can, however, reach (wget) services pointing to other pods in the same namespace. (A concrete version of the in-pod check is sketched below.)
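For reference, that in-pod check can be run in one line; a minimal sketch, assuming the pod's wget is the busybox variant (the pod name is a placeholder, and InfluxDB's /ping endpoint answers with an empty 204 when reachable):
# -T 5 is busybox wget's timeout flag; any HTTP response at all proves connectivity
microk8s.kubectl exec -it <grafana-pod-name> -- wget -qO- -T 5 http://influxdb-service:8086/ping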
Update:
I have been able to get it to work through a workaround, but I don't think this is the correct way.
My temporary solution is to expose the selectorless service on MetalLB, then use that exposed IP inside the pod.
Service and Endpoints for InfluxDB:
---
apiVersion: v1
kind: Service
metadata:
  name: influxdb-service
  labels:
    app: grafana
spec:
  ports:
    - protocol: TCP
      port: 8086
      targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
  name: influxdb-service
subsets:
  - addresses:
      - ip: 10.1.2.220
    ports:
      - port: 8086
The service and endpoints show up fine:
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get endpoints
NAME ENDPOINTS AGE
neo4j-service-lb 10.1.166.176:7687,10.1.166.176:7474 25h
influxdb-service 10.1.2.220:8086 127m
questrest-service 10.1.166.178:80 5d
kubernetes 10.1.2.50:16443,10.1.2.51:16443 26d
grafana-service 10.1.237.120:3000 3h11m
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 26d
questrest-service ClusterIP 10.152.183.56 <none> 80/TCP 5d
neo4j-service-lb LoadBalancer 10.152.183.166 10.1.2.60 7474:31974/TCP,7687:32688/TCP 25h
grafana-service ClusterIP 10.152.183.75 <none> 3000/TCP 3h13m
influxdb-service ClusterIP 10.152.183.26 <none> 8086/TCP 129m
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get networkpolicy
NAME POD-SELECTOR AGE
grafana-allow-egress-influxdb app=grafana 129m
test-egress-influxdb app=questrest 128m
Describe:
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe svc influxdb-service
Name: influxdb-service
Namespace: default
Labels: app=grafana
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.26
IPs: 10.152.183.26
Port: <unset> 8086/TCP
TargetPort: 8086/TCP
Endpoints: 10.1.2.220:8086
Session Affinity: None
Events: <none>
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe endpoints influxdb-service
Name: influxdb-service
Namespace: default
Labels: <none>
Annotations: <none>
Subsets:
Addresses: 10.1.2.220
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 8086 TCP
Events: <none>
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe networkpolicy grafana-allow-egress-influxdb
Name: grafana-allow-egress-influxdb
Namespace: default
Created on: 2021-11-03 20:53:00 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=grafana
Not affecting ingress traffic
Allowing egress traffic:
To Port: <any> (traffic allowed to all ports)
To: <any> (traffic not restricted by destination)
Policy Types: Egress
Grafana.yml:
eso@microk8s-master:~/k8s-grafana$ cat grafana.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  claimRef:
    name: grafana-pvc
    namespace: default
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /mnt/MainVol/grafana
    server: 10.2.0.1
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: grafana-pv
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:7.5.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  #sessionAffinity: None
  #type: LoadBalancer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: "g2.some.domain.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana-service
                port:
                  number: 3000
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: grafana-allow-egress-influxdb
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: grafana
  ingress:
    - {}
  egress:
    - {}
  policyTypes:
    - Egress
As I haven't gotten much response, I'll answer the question with my "workaround". I am still not sure this is the best way to do it, though.
I got it to work by exposing the selectorless service on MetalLB, then using that exposed IP inside Grafana:
kind: Service
apiVersion: v1
metadata:
  name: influxdb-service-lb
  #namespace: ingress
spec:
  type: LoadBalancer
  loadBalancerIP: 10.1.2.61
  # selector:
  #   app: grafana
  ports:
    - name: http
      protocol: TCP
      port: 8086
      targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
  name: influxdb-service-lb
subsets:
  - addresses:
      - ip: 10.1.2.220
    ports:
      - name: influx
        protocol: TCP
        port: 8086
I then use the load balancer IP (10.1.2.61) in Grafana.
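If the datasource is provisioned from a file rather than through the UI, it would look something like this sketch (the file path, datasource name, and database are placeholders):
# /etc/grafana/provisioning/datasources/influxdb.yaml
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://10.1.2.61:8086   # the MetalLB-exposed IP from above
    database: <your-database>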
Update October 2022
In response to a comment above, I have added a diagram of how I believe this works.

Unable to connect to Cockroach pod in Kubernetes

I am developing a simple web app with a web service and a persistence layer. The persistence layer is CockroachDB.
kubectl apply -f my-app.yaml
The app is deployed successfully. However, when the backend has to store something in the db, the following error appears:
dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host
When I start my app I provide the following connection string to CockroachDB, and the connection is successful, but when I try to store something in the db the above error appears:
postgresql://root@web-service-db:26257/defaultdb?sslmode=disable
For some reason the web pod cannot talk to the db pod. My whole configuration is:
# Service for web application
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web-service
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: http
      nodePort: 30103
  externalIPs:
    - 192.168.1.9 # <- my local ip
---
# Deployment of web app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  selector:
    matchLabels:
      app: web-service
  replicas: 1
  template:
    metadata:
      labels:
        app: web-service
    spec:
      hostNetwork: true
      containers:
        - name: web-service
          image: my-local-img:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              hostPort: 8080
          env:
            - name: DB_CONNECT_STRING
              value: "postgresql://root@web-service-db:26257/defaultdb?sslmode=disable"
---
### Kubernetes official doc PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cockroach-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/my-local-volueme"
---
### Kubernetes official doc PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cockroach-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
# Cockroach used by web-service
apiVersion: v1
kind: Service
metadata:
  name: web-service-cockroach
  labels:
    app: web-service-cockroach
spec:
  selector:
    app: web-service-cockroach
  type: NodePort
  ports:
    - protocol: TCP
      port: 26257
      targetPort: 26257
      nodePort: 30104
---
# Cockroach stateful set used to deploy locally
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-service-cockroach
spec:
  serviceName: web-service-cockroach
  replicas: 1
  selector:
    matchLabels:
      app: web-service-cockroach
  template:
    metadata:
      labels:
        app: web-service-cockroach
    spec:
      volumes:
        - name: cockroach-pv-storage
          persistentVolumeClaim:
            claimName: cockroach-pv-claim
      containers:
        - name: web-service-cockroach
          image: cockroachdb/cockroach:latest
          command:
            - /cockroach/cockroach.sh
            - start
            - --insecure
          volumeMounts:
            - mountPath: "/tmp/my-local-volume"
              name: cockroach-pv-storage
          ports:
            - containerPort: 26257
After deployment everything looks good.
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 50m
web-service NodePort 10.111.85.64 192.168.1.9 8080:30103/TCP 6m17s
webs-service-cockroach NodePort 10.96.42.121 <none> 26257:30104/TCP 6m8s
kubectl get pods
NAME READY STATUS RESTARTS AGE
web-service-6cc74b5f54-jlvd6 1/1 Running 0 24m
web-service-cockroach-0 1/1 Running 0 24m
Thanks in advance!
Looks like you have a problem with DNS.
dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host
The address 192.168.65.1 does not look like a kube-dns service IP.
This could be explained if you were using the host network, and surprisingly, you are.
When using hostNetwork: true, the default DNS server used is the one the host uses, and that is never kube-dns.
To solve it, set:
spec:
  dnsPolicy: ClusterFirstWithHostNet
This sets the pod's DNS server to the Kubernetes one.
Have a look at the Kubernetes documentation for more information about the pod's DNS policy.
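In the web-service Deployment above, it sits alongside hostNetwork in the pod spec, roughly like this:
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # resolve names via kube-dns despite host networking
  containers:
    - name: web-service
      ...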

Kubernetes: The service manifest doesn't provide an endpoint to access the application

This YAML tries to deploy a simple ArangoDB architecture in k8s. I know there are operators for ArangoDB, but this is a simple PoC to understand the k8s pieces and iterate on this db with other apps.
The problem is that the YAML file applies correctly, but I don't get any IP:PORT to connect to; however, when I run that Docker image locally it works.
# create: kubectl apply -f ./arango.yaml
# delete: kubectl delete -f ./arango.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nms
  name: arangodb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arangodb-pod
  template:
    metadata:
      labels:
        app: arangodb-pod
    spec:
      containers:
        - name: arangodb
          image: arangodb/arangodb:3.5.3
          env:
            - name: ARANGO_ROOT_PASSWORD
              value: "pass"
          ports:
            - name: http
              containerPort: 8529
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  namespace: nms
  name: arangodb-svc
spec:
  type: LoadBalancer
  selector:
    app: arangodb-pod
  ports:
    - targetPort: 8529
      protocol: TCP
      port: 8529
      targetPort: http
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: nms
  name: arango-storage
  labels:
    app: arangodb-pod
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
Description seems clear:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
arangodb-svc LoadBalancer 10.0.150.245 51.130.11.13 8529/TCP 14m
I am executing kubectl apply -f arango.yaml on AKS but I cannot access any IP:8529. Any recommendations?
I would like to simulate these commands:
docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD=pass -d --name arangodb-instance arangodb/arangodb:3.5.3
docker start arangodb-instance
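For a quick check that skips the service type entirely, I believe the closest kubectl equivalent to docker run -p is a port-forward; a sketch:
# forward local port 8529 to the service inside the cluster, then browse http://localhost:8529
kubectl -n nms port-forward svc/arangodb-svc 8529:8529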
You must allow NodePort 31098 at the NSG level in your VNet configuration and attach that NSG rule to the AKS cluster.
Also, please update the service manifest with the changes you went through with the help in the comments:
- targetPort: 8529
  protocol: TCP
  port: 8529
  targetPort: http   # <-- this duplicate field is completely wrong; the manifest won't be parsed
The above manifest is wrong. For NodePort (--service-node-port-range=30000-32767) the manifest should look something like this:
spec:
  type: NodePort
  selector:
    app: arangodb-pod
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - name: http
      port: 8529
      targetPort: 8529
      # Optional field
      nodePort: 31044
You can connect at public-NODE-IP:NodePort from outside AKS.
For service type loadbalancer, your manifest should look like:
spec:
  type: LoadBalancer
  selector:
    app: arangodb-pod
  ports:
    - name: http
      protocol: TCP
      port: 8529
      targetPort: 8529
For LoadBalancer you can connect with LoadBalancer-External-IP:external-port.
However, in both of the above cases an NSG whitelist rule should be in place. You should whitelist your local machine's IP, or the IP of whatever machine you are accessing it from; a CLI sketch follows.
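With the Azure CLI, such a whitelist rule looks roughly like this; a sketch where the resource group, NSG name, rule name, priority, and source IP are placeholders:
az network nsg rule create \
    --resource-group <aks-node-resource-group> \
    --nsg-name <aks-nsg-name> \
    --name allow-arangodb-8529 \
    --priority 200 \
    --source-address-prefixes <your-ip>/32 \
    --destination-port-ranges 8529 \
    --access Allow \
    --protocol Tcp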
You have to use an ingress controller, or you could also go with a LoadBalancer-type service, assigning a static IP if you prefer. Both will work.

Kubernetes nodeport not working

I've created a YAML file with three images in one pod (they need to communicate with each other over 127.0.0.1). It seems that it's all working. I've defined a NodePort in the YAML file.
There is one deployment defined, applications, which contains three images:
contacts-db (A MySQL database)
front-end (An Angular website)
net-core (An API)
I've defined three services, one for every container, each with type NodePort to access it.
So I retrieved the services to get the port numbers:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contacts-db 10.103.67.74 <nodes> 3306:30241/TCP 1d
front-end 10.107.226.176 <nodes> 80:32195/TCP 1d
net-core 10.108.146.87 <nodes> 5000:30245/TCP 1d
And I navigate in my browser to http://<node-ip>:32195 and it just keeps loading; it's not connecting. This is the complete YAML file:
---
apiVersion: v1
kind: Namespace
metadata:
  name: three-tier
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: applications
  labels:
    name: applications
  namespace: three-tier
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: applications
    spec:
      containers:
        - name: contacts-db
          image: mysql/mysql-server #TBD
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: quintor
            - name: MYSQL_DATABASE
              value: quintor #TBD
          ports:
            - name: mysql
              containerPort: 3306
        - name: front-end
          image: xanvier/angularfrontend #TBD
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
        - name: net-core
          image: xanvier/contactsapi #TBD
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contacts-db
  labels:
    name: contacts-db
  namespace: three-tier
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 3306
      targetPort: 3306
  selector:
    name: contacts-db
---
apiVersion: v1
kind: Service
metadata:
  name: front-end
  labels:
    name: front-end
  namespace: three-tier
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80 #nodePort: 30001
  selector:
    name: front-end
---
apiVersion: v1
kind: Service
metadata:
  name: net-core
  labels:
    name: net-core
  namespace: three-tier
spec:
  type: NodePort
  ports:
    - port: 5000
      targetPort: 5000 #nodePort: 30001
  selector:
    name: net-core
---
The selector of a service matches against the labels of your pods. In your case the defined selectors point at the container names, which matches nothing when selecting pods.
You'd have to redefine your services to use one selector, or split your containers up into different Deployments / Pods (a corrected example follows below).
To see whether the selector defined for a service would work, you can check it with:
kubectl get pods -l key=value
If the result is empty, your services will run into the void too.
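For example, the pod template in the question carries the label name: applications, so a front-end service that actually selects the pod would look like this; a sketch based on the manifest above:
apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: three-tier
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: applications   # matches the pod template's label, not the container name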