wget : can't connect to remote host when connecting to kubernetes service in separate namespace - kubernetes

I have set up my API for Prometheus monitoring using client instrumentation, but when I check the endpoint in the Prometheus UI it is in DOWN status.
I checked for connectivity issues by exec'ing into the Prometheus pod and trying a wget. When I do so I get:
/prometheus $ wget metrics-clusterip.labs.svc:9216
Connecting to metrics-clusterip.labs.svc:9216 (10.XXX.XX.XXX:9216)
wget: can't connect to remote host (10.XXX.XX.XX): Connection refused
I have an existing MongoDB instance with a Prometheus exporter sidecar and it's working perfectly (i.e. its endpoint is UP in the Prometheus UI). As a check, I tried connecting to this MongoDB instance's service, and it turns out the Prometheus pod can indeed connect (as expected):
wget mongodb-metrics.labs.svc:9216
Connecting to mongodb-metrics.labs.svc:9216 (10.XXX.XX.X:9216)
wget: can't open 'index.html': File exists
This is what I have also tried:
I tried both nslookup metrics-clusterip.svc.labs and nslookup metrics-clusterip.svc.labs:9216 from the Prometheus pod, but I got the same error:
nslookup metrics-clusterip.svc.labs:9216
Server: 169.XXX.XX.XX
Address: 169.XXX.XX.XX:XX
** server can't find metrics-clusterip.svc.labs:9216: NXDOMAIN
*** Can't find metrics-clusterip.svc.labs:9216: No answer
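For reference, in-cluster service DNS names take the form <service>.<namespace>.svc (service first, then namespace), and nslookup does not understand a :port suffix, so a fairer lookup from the Prometheus pod would be something along these lines (a sketch; cluster.local is the usual default cluster domain):
/prometheus $ nslookup metrics-clusterip.labs.svc.cluster.local
/prometheus $ wget -O - metrics-clusterip.labs.svc.cluster.local:9216/metrics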
However, when I port-forward the service I can successfully query the metrics endpoint, which shows that the metrics endpoint is up in the API container:
kubectl port-forward svc/metrics-clusterip 9216
NB: Both the API routes and the metrics endpoint use the same port (9216).
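Since the port-forward works but the ClusterIP refuses connections, two quick checks worth running are whether the Service actually has endpoints and which address the API process binds to; one common cause of exactly this pattern is an app listening only on 127.0.0.1, which port-forward can reach but Service traffic cannot. A hedged sketch (the pod name is a placeholder, and netstat has to be present in the image):
kubectl get endpoints metrics-clusterip -n labs
kubectl -n labs exec <reg-pod-name> -- netstat -tlnp   # placeholder pod name; look at the Local Address column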
Check if DNS pod is running
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-XXXXXXXXX-XXXXX 1/1 Running 0 294d
coredns-XXXXXXXXX-XXXXX 1/1 Running 0 143d
This is the configuration for my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reg
  labels:
    app: reg
  namespace: labs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reg
      release: loki
  template:
    metadata:
      labels:
        app: reg
        release: loki
    spec:
      containers:
      - name: reg
        image: xxxxxx/sre-ops:dev-latest
        imagePullPolicy: Always
        ports:
        - name: reg
          containerPort: 9216
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 128Mi
      nodeSelector:
        kubernetes.io/hostname: xxxxxxxxxxxx
      imagePullSecrets:
      - name: xxxx
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-clusterip
  namespace: labs
  labels:
    app: reg
    release: loki
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: '9216'
    prometheus.io/scrape: "true"
spec:
  type: ClusterIP
  selector:
    app: reg
    release: loki
  ports:
  - port: 9216
    targetPort: reg
    protocol: TCP
    name: reg
And the ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: reg
  namespace: loki
  labels:
    app: reg
    release: loki
spec:
  selector:
    matchLabels:
      app: reg
      release: loki
  endpoints:
  - port: reg
    path: /metrics
    interval: 15s
  namespaceSelector:
    matchNames:
    - "labs"
I have also tried to pipe the DNS pod logs to a file, but I am not sure what I should be looking for to get more detail:
kubectl logs --namespace=kube-system coredns-XXXXXX-XXXX >> logs.txt
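If the goal is to see the individual DNS queries, one option is to turn on CoreDNS query logging by adding the log plugin to the Corefile; a hedged sketch (in many clusters the Corefile lives in the coredns ConfigMap in kube-system):
kubectl -n kube-system edit configmap coredns
#   .:53 {
#       log        <- add this line to log every query
#       errors
#       health
#       ...
#   }
kubectl -n kube-system rollout restart deployment coredns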
What am I missing?

Related

Why does my NodePort service change its port number?

I am trying to install Velero for Kubernetes. During the installation, when installing MinIO, I changed its service type from ClusterIP to NodePort. My pods run successfully and I can also see the NodePort service is up and running.
master-k8s@masterk8s-virtual-machine:~/velero-v1.9.5-linux-amd64$ kubectl get pods -n velero -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE                       NOMINATED NODE   READINESS GATES
minio-8649b94fb5-vk7gv   1/1     Running   0          16m   10.244.1.102   node1k8s-virtual-machine   <none>           <none>
master-k8s@masterk8s-virtual-machine:~/velero-v1.9.5-linux-amd64$ kubectl get svc -n velero
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
minio   NodePort   10.111.72.207   <none>        9000:31481/TCP   53m
When I try to access my service, the port number changes from 31481 to 45717 by itself. Every time I correct the port number and hit enter, it changes back to a new port and I am not able to access my application.
This is my MinIO service file:
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  type: NodePort
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    component: minio
What have I done so far?
I looked at the logs and everything shows success, no errors. I also tried a LoadBalancer service. With LoadBalancer the port does not change, but I am still not able to access the application.
I found nothing on Google about this issue.
I also checked the pods and services in all namespaces to see whether these port numbers are already in use. No services use these ports.
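For example, a blunt but effective way to check whether a given NodePort is already allocated anywhere in the cluster (a sketch):
kubectl get svc -A | grep 31481
kubectl get svc -A | grep 45717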
What do I want?
Can you please help me find out what causes my application to change its port? Where is the issue and how do I fix it? How can I access the application dashboard?
Update Question
This is the full file. It may help to find my mistake.
apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: minio
  template:
    metadata:
      labels:
        component: minio
    spec:
      volumes:
      - name: storage
        emptyDir: {}
      - name: config
        emptyDir: {}
      containers:
      - name: minio
        image: minio/minio:latest
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /storage
        - --config-dir=/config
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9002
        volumeMounts:
        - name: storage
          mountPath: "/storage"
        - name: config
          mountPath: "/config"
---
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: NodePort
  ports:
  - port: 9002
    nodePort: 31482
    targetPort: 9002
    protocol: TCP
  selector:
    component: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: velero
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: config
        emptyDir: {}
      containers:
      - name: mc
        image: minio/mc:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
        volumeMounts:
        - name: config
          mountPath: "/config"
Edit 2: Logs of the pod
WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.1.108:9000 http://127.0.0.1:9000
Console: http://10.244.1.108:33045 http://127.0.0.1:33045
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
Edit 3: Logs of the pod
master-k8s@masterk8s-virtual-machine:~/velero-1.9.5$ kubectl logs minio-8649b94fb5-qvzfh -n velero
WARNING: MINIO_ACCESS_KEY and MINIO_SECRET_KEY are deprecated.
Please use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-25T00-19-54Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: http://10.244.2.131:9000 http://127.0.0.1:9000
Console: http://10.244.2.131:36649 http://127.0.0.1:36649
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
You can set the nodePort number explicitly in the port config so that it won't be assigned automatically.
Try this Service:
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  type: NodePort
  ports:
  - port: 9000
    nodePort: 31481
    targetPort: 9000
    protocol: TCP
  selector:
    component: minio
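After applying it, the port mapping should stay pinned to 31481 across restarts and re-applies; a quick way to confirm (a sketch, the file name is a placeholder):
kubectl apply -f minio-service.yaml
kubectl get svc minio -n velero
# PORT(S) should now read 9000:31481/TCP and not change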

Pod with Mongo reachable via localhost but not service name

This problem only arises on a cluster created with default Minikube settings, but not on a remote cluster created via Kops.
I run this setup on a Minikube cluster in a virtual machine. I have a Pod running MongoDB in my namespace and a service pointing to it:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mongo
  namespace: platform
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: root
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: password
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: platform
spec:
  selector:
    app: mongo
  ports:
  - protocol: TCP
    port: 27017
When I run a shell inside a mongo container, I can connect to the database with
mongo mongodb://root:password@localhost:27017
but it does not work with
$ mongo mongodb://root:password@mongo:27017
MongoDB shell version v4.4.1
connecting to: mongodb://mongo:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server mongo:27017, connection attempt failed: SocketException: Error connecting to mongo:27017 (10.110.155.65:27017) :: caused by :: Connection timed out :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1
I checked the service and it points to the correct address:
$ kubectl describe service mongo --namespace platform
Name: mongo
Namespace: platform
Labels: <none>
Annotations: <none>
Selector: app=mongo
Type: ClusterIP
IP: 10.110.155.65
Port: <unset> 27017/TCP
TargetPort: 27017/TCP
Endpoints: 172.17.0.9:27017
Session Affinity: None
Events: <none>
$ kubectl describe pod mongo-6c79f887-5khnc | grep IP
IP: 172.17.0.9
And using this endpoint directly works, too:
mongo mongodb://root:password@172.17.0.9:27017
As a sidenote, I have other pods+services running webservers in the same namespace, which work as expected through the service.
Addition
I run the mongo ... connection commands from within the MongoDB container/pod. I also tried all these connection commands from other pods inside the same namespace, always with the same result: using the IP works, using mongo for DNS resolution does not work.
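In other words, the name does resolve (the error already shows the Service IP 10.110.155.65); what fails is traffic through the Service IP itself. A way to confirm that distinction directly (a sketch, run from inside the mongo pod):
mongo mongodb://root:password@10.110.155.65:27017 --eval 'db.adminCommand({ ping: 1 })'   # Service ClusterIP
mongo mongodb://root:password@172.17.0.9:27017 --eval 'db.adminCommand({ ping: 1 })'      # Pod IP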
As a test, I replicated the exact same pod/service configuration but with an nginx server instead of mongodb:
...
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: debug-mongo
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: debug-mongo
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
And curl nginx:80 returns the default nginx page successfully.
Okay, forget what I said before. You need to set your service's clusterIP to None.
spec:
  clusterIP: None
A K8s Service with a cluster IP acts like a load balancer, and I suppose mongo can't work like that.
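Concretely, a headless version of the Service from the question would only add clusterIP: None (a sketch; selector and port unchanged):
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: platform
spec:
  clusterIP: None   # headless: the DNS name resolves straight to the pod IP(s)
  selector:
    app: mongo
  ports:
  - protocol: TCP
    port: 27017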

Kubernetes NetworkPolicy doesn't block traffic

I have a namespace called test, containing 3 pods: frontend, backend and database.
This is the manifest of the pods:
kind: Pod
apiVersion: v1
metadata:
  name: frontend
  namespace: test
  labels:
    app: todo
    tier: frontend
spec:
  containers:
  - name: frontend
    image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: backend
  namespace: test
  labels:
    app: todo
    tier: backend
spec:
  containers:
  - name: backend
    image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: database
  namespace: test
  labels:
    app: todo
    tier: database
spec:
  containers:
  - name: database
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: example
I would like to implement a network policy that allows incoming traffic to the database only from the backend and disallows incoming traffic from the frontend.
This is my network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-allow
  namespace: test
spec:
  podSelector:
    matchLabels:
      app: todo
      tier: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: todo
          tier: backend
    ports:
    - protocol: TCP
      port: 3306
    - protocol: UDP
      port: 3306
This is the output of kubectl get pods -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
backend 1/1 Running 0 28m 172.17.0.5 minikube <none> <none>
database 1/1 Running 0 28m 172.17.0.4 minikube <none> <none>
frontend 1/1 Running 0 28m 172.17.0.3 minikube <none> <none>
This is the output of kubectl get networkpolicy -n test -o wide
NAME POD-SELECTOR AGE
app-allow app=todo,tier=database 21m
When I execute telnet <ip-of-mysql-pod> 3306 from the frontend pod, the connection gets established and the network policy is not working:
kubectl exec -it pod/frontend bash -n test
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@frontend:/# telnet 172.17.0.4 3306
Trying 172.17.0.4...
Connected to 172.17.0.4.
Escape character is '^]'.
J
8.0.25 k{%J\�#(t%~qI%7caching_sha2_password
Is there something I am missing?
Thanks
It seems that you forgot to add a "default deny" policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: test   # same namespace as the pods
spec:
  podSelector: {}
  policyTypes:
  - Ingress
The default behavior of NetworkPolicy is to allow all connections between pods unless explicitly denied.
More details here: https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic
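With both policies applied in the test namespace, re-running the same telnet check should now behave differently for the two pods; a sketch (note also that NetworkPolicy is only enforced if the cluster's CNI plugin supports it, e.g. Calico or Cilium; the default Minikube CNI may simply ignore these objects):
kubectl exec -it pod/frontend -n test -- telnet 172.17.0.4 3306   # should now fail
kubectl exec -it pod/backend -n test -- telnet 172.17.0.4 3306    # should still connect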

What is the user ID and password for the MongoDB?

I am doing this exercise on my Vagrant-built bare-metal cluster on a Windows machine.
I was able to successfully run the app.
But I am not able to connect to the database to see the data, say from MongoDB Compass.
What should the user ID or password be for this?
After a bit of research, I used the following steps to get into the mongo container and verify the data. But I want to connect to the database using a client like Compass.
I used the following command to find where the MongoDB backend database pod is deployed.
vagrant@kmasterNew:~/GuestBookMonog$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
frontend-848d88c7c-95db6 1/1 Running 0 4m51s 192.168.55.11 kworkernew2 <none> <none>
mongo-75f59d57f4-klmm6 1/1 Running 0 4m54s 192.168.55.10 kworkernew2 <none> <none>
Then I SSH'd into that node and ran
docker container ls
to find the mongo db container
It looks something like this. I removed irrelevant data.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ba3d05168ca dc77715107a9 "docker-entrypoint.s…" 53 minutes ago Up 53 minutes k8s_mongo_mongo-75f59d57f4-5tw5b_default_eeddf81b-8dde-4c3e-8505-e08229f97c8b_0
A reference from SO
docker exec -it 1ba3d05168ca bash
Another reference from SO in this context
mongo
show dbs
use guestbook
show collections
db.messages.find()
Finally I was able to verify the data
> db.messages.find()
{ "_id" : ObjectId("6097f6c28088bc17f61bdc32"), "message" : ",message1" }
{ "_id" : ObjectId("6097f6c58088bc17f61bdc33"), "message" : ",message1,message2" }
But the question is: how can I see this data from MongoDB Compass? I am exposing both the frontend and the backend services using the NodePort type. You can see them below.
The following are the k8s manifest files for the deployment that I got from the above example.
Front end deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app.kubernetes.io/name: guestbook
    app.kubernetes.io/component: frontend
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: guestbook
      app.kubernetes.io/component: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: guestbook
        app.kubernetes.io/component: frontend
    spec:
      containers:
      - name: guestbook
        image: paulczar/gb-frontend:v5
        # image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
The front end service.
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app.kubernetes.io/name: guestbook
    app.kubernetes.io/component: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  type: NodePort
  ports:
  - port: 80
    nodePort: 30038
  # - targetPort: 80
  #   port: 80
  #   nodePort: 30008
  selector:
    app.kubernetes.io/name: guestbook
    app.kubernetes.io/component: frontend
Next, the MongoDB deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mongo
      app.kubernetes.io/component: backend
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mongo
        app.kubernetes.io/component: backend
    spec:
      containers:
      - name: mongo
        image: mongo:4.2
        args:
        - --bind_ip
        - 0.0.0.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 27017
Finally the mongo service
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
spec:
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30068
  type: NodePort
  selector:
    app.kubernetes.io/name: mongo
    app.kubernetes.io/component: backend
Short answer: there isn't one.
Long answer: you are using the mongo image, so you can pull up the readme for that on https://hub.docker.com/_/mongo. That shows that authentication is disabled by default and must be manually enabled via --auth as a command-line argument. When doing that, you can specify the initial auth configuration via environment variables and then more complex stuff in the referenced .d/ folder.
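As an illustration only (not part of the original answer), enabling auth on the mongo Deployment from the question could look roughly like this; the username and password values are placeholders:
    spec:
      containers:
      - name: mongo
        image: mongo:4.2
        args:
        - --bind_ip
        - 0.0.0.0
        env:
        - name: MONGO_INITDB_ROOT_USERNAME   # creates the root user and switches on auth
          value: admin                        # placeholder
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: changeme                     # placeholder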
Found it. It's pretty simple.
The connection string would be
mongodb://192.62.62.100:30068
And this is how it looks.
We need to select "Authentication: None". Here 30068 is the node port of the MongoDB service.
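For completeness, the node IP in that connection string can be looked up like this (a sketch; 192.62.62.100 is simply the worker node's address in this cluster):
kubectl get nodes -o wide      # use the INTERNAL-IP of the worker node running the pod
kubectl get svc mongo          # confirms the 27017:30068/TCP NodePort mapping
# Compass connection string: mongodb://<node-internal-ip>:30068 with authentication set to none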

Unable to connect to Cockroach pod in Kubernetes

I am developing a simple web app with a web service and a persistence layer. The persistence layer is CockroachDB. I am trying to deploy my app with a single command:
kubectl apply -f my-app.yaml
The app is deployed successfully. However, when the backend has to store something in the db, the following error appears:
dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host
When I start my app I provide the following connection string to CockroachDB, and the connection is successful, but when I try to store something in the db the above error appears:
postgresql://root@web-service-db:26257/defaultdb?sslmode=disable
For some reason the web pod cannot talk to the db pod. My whole configuration is:
# Service for web application
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web-service
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: http
    nodePort: 30103
  externalIPs:
  - 192.168.1.9 # < - my local ip
---
# Deployment of web app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  selector:
    matchLabels:
      app: web-service
  replicas: 1
  template:
    metadata:
      labels:
        app: web-service
    spec:
      hostNetwork: true
      containers:
      - name: web-service
        image: my-local-img:latest
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
          hostPort: 8080
        env:
        - name: DB_CONNECT_STRING
          value: "postgresql://root@web-service-db:26257/defaultdb?sslmode=disable"
---
### Kubernetes official doc PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cockroach-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp/my-local-volueme"
---
### Kubernetes official doc PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cockroach-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
# Cockroach used by web-service
apiVersion: v1
kind: Service
metadata:
  name: web-service-cockroach
  labels:
    app: web-service-cockroach
spec:
  selector:
    app: web-service-cockroach
  type: NodePort
  ports:
  - protocol: TCP
    port: 26257
    targetPort: 26257
    nodePort: 30104
---
# Cockroach stateful set used to deploy locally
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-service-cockroach
spec:
  serviceName: web-service-cockroach
  replicas: 1
  selector:
    matchLabels:
      app: web-service-cockroach
  template:
    metadata:
      labels:
        app: web-service-cockroach
    spec:
      volumes:
      - name: cockroach-pv-storage
        persistentVolumeClaim:
          claimName: cockroach-pv-claim
      containers:
      - name: web-service-cockroach
        image: cockroachdb/cockroach:latest
        command:
        - /cockroach/cockroach.sh
        - start
        - --insecure
        volumeMounts:
        - mountPath: "/tmp/my-local-volume"
          name: cockroach-pv-storage
        ports:
        - containerPort: 26257
After deployment everything looks good.
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 50m
web-service NodePort 10.111.85.64 192.168.1.9 8080:30103/TCP 6m17s
webs-service-cockroach NodePort 10.96.42.121 <none> 26257:30104/TCP 6m8s
kubectl get pods
NAME READY STATUS RESTARTS AGE
web-service-6cc74b5f54-jlvd6 1/1 Running 0 24m
web-service-cockroach-0 1/1 Running 0 24m
Thanks in advance!
Looks like you have a problem with DNS.
dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host
Address 192.168.65.1 does not look like a kube-dns service IP.
This could be explained if you were using the host network, and indeed you are.
When using hostNetwork: true, the default DNS server used is the one the host uses, and that is never kube-dns.
To solve it set:
spec:
  dnsPolicy: ClusterFirstWithHostNet
This sets the pod's DNS server to the Kubernetes one.
Have a look at the Kubernetes documentation for more information about the Pod's DNS policy.
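Applied to the web-service Deployment from the question, the setting sits next to hostNetwork in the pod spec; a sketch of just the relevant fragment:
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet   # use the cluster DNS even with host networking
      containers:
      - name: web-service
        image: my-local-img:latest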