Kubernetes how to access application in one namespace from another - mongodb

I have the following components up and running in a Kubernetes cluster:
A Golang application in the namespace app1 that writes data to a MongoDB replica set
A MongoDB replica set (1 replica) running as a StatefulSet in the namespace ng-mongo
I need the Golang application to access the MongoDB database for read/write operations, so this is what I did:
Create a headless service for MongoDB in the ng-mongo namespace as below:
# Source: mongo/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: ng-mongo
  labels:
    app: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
    name: mongo
  clusterIP: None
  selector:
    role: mongo
And then I deployed the mongodb statefulset and initialized the replicaset as below:
kubectl exec -it mongo-0 -n ng-mongo -- mongosh
rs.initiate({_id: "rs0",members: [{_id: 0, host: "mongo-0"}]})
// gives output
{ ok: 1 }
Then I created an ExternalName service in the app1 namespace pointing to the mongo service from step 1, as shown below:
# Source: app/templates/svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: app1
  namespace: app1
spec:
  type: ExternalName
  externalName: mongo.ng-mongo.svc.cluster.local
  ports:
  - port: 27017
And at last, I instrumented my golang application as follows:
// Connection URI
const mongo_uri = "mongodb://app1" // I used "app1" here because that is the ExternalName service's name
<RETRACTED-CODE>
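(For reference, a minimal version of such a connection with the official mongo-go-driver might look like the sketch below; it is only an illustration under the same assumptions, no auth and the ExternalName Service named app1, and not the code that was retracted.)

// Illustrative sketch only (the original code was retracted); assumes no
// authentication and the ExternalName Service "app1" described above.
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

const mongo_uri = "mongodb://app1:27017"

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(mongo_uri))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// Ping forces server selection, which is where the timeout in the log below shows up.
	if err := client.Ping(ctx, nil); err != nil {
		log.Fatal(err)
	}
	log.Println("connected")
}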
And then I ran the application, and checked the logs. Here is what I found:
2022/11/22 12:49:47 server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-0:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp: lookup mongo-0 on 10.96.0.10:53: no such host }, ] }
Update: I haven't set any usernames or passwords for the mongodb
Can someone help me why this is happening?

After some digging, I was able to find the issue.
When specifying the host entry for rs.initiate({}), I should have provided the FQDN of the relevant mongodb instance (in my case, the mongo-0 pod), because the host names stored in the replica set configuration are what the driver connects to after discovery, so they must be resolvable from the client's namespace. Therefore, my initialisation command should look like this:
rs.initiate({_id: "rs0",members: [{_id: 0, host: "mongo-0.mongo.ng-mongo.svc.cluster.local:27017"}]})
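A quick way to confirm the fix is to check that this FQDN actually resolves from the app1 namespace. A minimal Go sketch for that check, run from a pod in app1 (the hostname is the one from the rs.initiate above):

// Minimal sketch: resolve the replica set member's FQDN from inside a pod
// in the app1 namespace; if this fails, the driver's "no such host" error
// above is expected.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("mongo-0.mongo.ng-mongo.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolves to:", addrs)
}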

From my understanding of what you are trying to do:
Your Pod (golang application) and the app1 Service are already in the same namespace.
However, looking at the log,
2022/11/22 12:49:47 server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-0:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp: lookup mongo-0 on 10.96.0.10:53: no such host }, ] }
The log means that the domain named 'mongo-0' could not be found in DNS. (Note that the IP 10.96.0.10 is probably kube-dns.)
Your application tries to connect to the host mongo-0, but mongo-0 does not exist in DNS (that is, there is no service named mongo-0 in the app1 namespace).
What is the 'mongo-0' that your application is trying to access?
(The log clearly shows an attempt to reach mongo-0, while your golang application's mongo_uri points at mongodb://app1.)
Finding out why your application is trying to connect to the mongo-0 host will help solve the problem.
Hope this helps you.

Related

How do I connect to MongoDB NodePort service from Compass

I have a Mongo deployment on k8s which is exposed as a NodePort. The database is running on a remote Kubernetes cluster, and I am trying to connect from my PC using Compass (over my private internet connection).
When I attempt to connect from Compass using the following, I get the error connect ETIMEDOUT XXX.XXX.XXX.XXX:27017:
mongodb://root:password@XXX.XXX.XXX.XXX:27017/TestDb?authSource=admin
I have checked the service with
kubectl -n labs get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-service NodePort 10.XXX.XXX.XXX <none> 27017:31577/TCP 172m
and using the url mongodb://root:password@XXX.XXX.XXX:31577/TestDb?authSource=admin yields the same connection timeout error.
The service is defined as :
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: demos
spec:
  type: NodePort
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
I have tried using the URL that displays in shell access as :
mongodb://XXX.XX.XX.XX:31577/?compressors=disabled&gssapiServiceName=mongodb
but I get the error option gssapiServiceName is not supported.
I am able to login and access the database from the command line (kubectl -n demos exec -it podname -- sh) and after login I get :
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
My understanding is that I should be using the Node IP for my k8s cluster as something like
https://XXX.XXX.XXX.XXX:31577/?compressors=disabled&gssapiServiceName=mongodb
but this also complains with error Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
Using either of these URIs :
mongodb://root:password@mongodb-service:27017/
mongodb://root:password@mongodb-service:31562/
also gives an error getaddrinfo ENOTFOUND mongodb-service
What am I missing ?

MongoDB Community Kubernetes Operator Connection

I'm trying to deploy a MongoDB replica set by using the MongoDB Community Kubernetes Operator in Minikube. To view the content of the database I would like to connect to the MongoDB replica set through Mongo Compass.
I followed the instructions on the official GitHub, so:
Install the CRD
Install the necessary roles and role-bindings
Install the Operator
Deploy the Replicaset
The yaml file used for the replica set deployment is the following one:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongo-rs
  namespace: mynamespace
spec:
  members: 3
  type: ReplicaSet
  version: "4.4.0"
  persistent: true
  security:
    authentication:
      modes: ["SCRAM"]
  users:
  - name: user
    db: admin
    passwordSecretRef:
      name: user
    roles:
    - name: userAdminAnyDatabase
      db: admin
    scramCredentialsSecretName: user-scram
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
---
apiVersion: v1
kind: Secret
metadata:
  name: user
type: Opaque
stringData:
  password: password
The MongoDB resource is deployed and the mongo-rs pods are all running. I'm also able to connect to the replica set directly through the mongo shell from within the Kubernetes cluster.
However, I'd also like to connect to the MongoDB replica set from outside the Kubernetes cluster, so I've additionally created a LoadBalancer service like the following one:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: mynamespace
spec:
  type: LoadBalancer
  selector:
    app: mongo-rs-svc
  ports:
  - port: 27017
    nodePort: 30017
The pods (namely mongo-rs-0, mongo-rs-1, mongo-rs-2) are correctly bound to the service. On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On minikube, the LoadBalancer type makes the Service accessible through the minikube service command, which creates a tunnel for the service mongodb-service (for instance: 127.0.0.1:34873). But if I try to connect to the MongoDB replica set through the Mongo Compass client using the connection string:
mongodb://user:password@127.0.0.1:34873/?authSource=admin&replicaSet=mongo-rs&readPreference=primary
the client cannot connect to mongodb, returning the error:
getaddrinfo ENOTFOUND
mongo-rs-0.mongo-rs-svc.mynamespace.svc.cluster.local
Any suggestions on how to access the replica set from outside kubernetes?
Thanks in advance!
Edit: I know it's possible to connect from the outside using port forwarding, but I'd be interested in a more production-oriented approach.
minikube is a development tool, so for you it may be sufficient to connect from your host (desktop) via localhost.
First, you can't use type LoadBalancer here, because it round-robins between the mongodb instances, while only the primary of the replica set can accept writes.
Normally the mongo client, given the right connection string, will select the primary itself.
So use NodePort instead, and you will get a connection to a single mongodb instance.
Then run kubectl port-forward <resource-type>/<resource-name> [local_port]:<pod_port> against the service.
Before that you may want to change mongodb-service: as far as I know the field should not be nodePort, it's targetPort:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: mynamespace
spec:
  type: NodePort
  selector:
    app: mongo-rs-svc
  ports:
  - port: 27017
    targetPort: 27017
So something like this:
kubectl port-forward svc/mongodb-service 27017:27017
After that you can connect from localhost:27017 to the Kubernetes service, which connects you to a member of the mongo replica set:
mongo localhost:27017
should connect. Adjust the ports to your needs.
Hope this idea helps,
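One more note, with the caveat that it depends on your client: the getaddrinfo ENOTFOUND error in the question appears because the client, once it sees replicaSet=mongo-rs, rediscovers the members under their in-cluster FQDNs. Over a tunnel or port-forward you can avoid that by forcing a direct connection (in Compass, roughly directConnection=true in the URI instead of the replicaSet option). A rough Go sketch of the same idea, assuming the port-forward above and the user from the question:

// Sketch only: connect through the local port-forward as a direct
// (single-node) connection so the driver does not try to resolve the
// replica set members' in-cluster FQDNs. Credentials are the ones from
// the question; adjust as needed.
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	uri := "mongodb://user:password@localhost:27017/?authSource=admin"
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri).SetDirect(true))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	if err := client.Ping(ctx, nil); err != nil {
		log.Fatal(err)
	}
	log.Println("connected via port-forward")
}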

How to connect mongo replicaset on kubernetes cluster from another cluster pods

I have a mongo db replica set running on a kubernetes cluster (on AWS EKS), say cluster-1. It is running within VPC-1, which has CIDR 192.174.0.0/16.
I have another cluster in a separate VPC, say VPC-2, where I'll be running some applications on top of the mongo cluster. This VPC's CIDR range is 192.176.0.0/16. All VPC peering and security group ingress/egress rules are working fine, and I am able to ping cluster nodes across the two VPCs.
I am using NodePort type service and StatefulSet for the mongo cluster :
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongodb
spec:
  selector:
    role: mongo
  type: NodePort
  ports:
  - port: 26017
    targetPort: 27017
    nodePort: 30017
Here are the nodes & pods in mongo cluster, cluster-1 :
ubuntu@ip-192-174-5-253:/st_config/kubeobj$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-174-187-133.ap-south-1.compute.internal Ready <none> 19h v1.16.8-eks-e16311 192.174.187.133 13.232.195.39 Amazon Linux 2 4.14.181-140.257.amzn2.x86_64 docker://19.3.6
ip-192-174-23-229.ap-south-1.compute.internal Ready <none> 19h v1.16.8-eks-e16311 192.174.23.229 13.234.111.139 Amazon Linux 2 4.14.181-140.257.amzn2.x86_64 docker://19.3.6
ubuntu@ip-192-174-5-253:/st_config/kubeobj$
ubuntu@ip-192-174-5-253:/st_config/kubeobj$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mongod-0 1/1 Running 0 45m 192.174.8.10 ip-192-174-23-229.ap-south-1.compute.internal <none> <none>
mongod-1 1/1 Running 0 44m 192.174.133.136 ip-192-174-187-133.ap-south-1.compute.internal <none> <none>
ubuntu@ip-192-174-5-253:/st_config/kubeobj$
If I try to connect using a specific node address, OR both node addresses, kubernetes is perhaps load-balancing or rotating the connection in a round robin fashion:
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:PRIMARY>
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:SECONDARY>
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017,192.174.187.133:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017,192.174.187.133:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:PRIMARY>
I wish to leverage the replica set features, so I used the connection string mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0. The client then actually receives the FQDNs of the pods, which cannot be resolved from the nodes/pods of the cluster in VPC-2:
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0
2020-06-23T15:59:07.407+0000 I NETWORK [thread1] Starting new replica set monitor for test_rs0/192.174.23.229:30017,192.174.187.133:30017
2020-06-23T15:59:07.409+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to 192.174.23.229:30017 (1 connections now open to 192.174.23.229:30017 with a 5 second timeout)
2020-06-23T15:59:07.409+0000 I NETWORK [thread1] Successfully connected to 192.174.187.133:30017 (1 connections now open to 192.174.187.133:30017 with a 5 second timeout)
2020-06-23T15:59:07.410+0000 I NETWORK [thread1] changing hosts to test_rs0/mongod-0.mongodb-service.default.svc.cluster.local:27017,mongod-1.mongodb-service.default.svc.cluster.local:27017 from test_rs0/192.174.187.133:30017,192.174.23.229:30017
2020-06-23T15:59:07.415+0000 I NETWORK [thread1] getaddrinfo("mongod-1.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.415+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] getaddrinfo("mongod-0.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.917+0000 I NETWORK [thread1] getaddrinfo("mongod-0.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.918+0000 I NETWORK [thread1] getaddrinfo("mongod-1.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.918+0000 W NETWORK [thread1] Unable to reach primary for set test_rs0
2020-06-23T15:59:07.918+0000 I NETWORK [thread1] Cannot reach any nodes for set test_rs0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
Do I need some additional DNS service so that the names get resolved from the VPC-2 nodes? What would be the best approach?
Also, how can I use a connection string based on the service name, e.g. mongodb://mongodb-service.default.svc.cluster.local:/?replicaSet=test_rs0, from any node in VPC-2? It works from any pod in VPC-1, but I need it to work from pods in the cluster in VPC-2, so that I don't have to specify a specific pod/node IP in the connection string. All my kubernetes objects are in the default namespace.
Really appreciate some help here.
**Please note: I am NOT using helm**
Kubernetes runs CoreDNS, which provides DNS records for pods and services.
If I'm not wrong, you deployed mongo as a StatefulSet.
The best approach for the members of your mongo cluster to reach each other is a (headless) ClusterIP service, so they can communicate directly.
If your applications run in the same namespace as mongo, they can connect using:
mongod-0.<service-name>:27017,mongod-1.<service-name>:27017
from each of your applications.
Note: here the pods are named mongod-0 and mongod-1, and <service-name> is the headless service that governs the StatefulSet.
Here is an example:
apiVersion: v1
kind: Service
metadata:
  namespace: mongo-cluster
  name: mongo
  labels:
    app: mongo
    name: mongo
spec:
  type: ClusterIP
  clusterIP: None   # headless, so each pod gets a stable DNS record like mongo-0.mongo.mongo-cluster.svc.cluster.local
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: mongo-cluster
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - "numactl"
        - "--interleave=all"
        - "mongod"
        - "--replSet"
        - "MainSetRep"
        - "--bind_ip"
        - "0.0.0.0"
        - "--auth"
        - "--clusterAuthMode"
        - "keyFile"
        - "--keyFile"
        - "/etc/secrets-volume/mongodb-keyfile"
        - "--setParameter"
        - "authenticationMechanisms=SCRAM-SHA-1"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
        - name: mongo-key
          mountPath: "/etc/secrets-volume"
          readOnly: true
      volumes:
      - name: mongo-key
        secret:
          defaultMode: 0400
          secretName: mongo-key
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: openebs-hostpath
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20G
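With a manifest like the one above, clients inside the cluster reach the members through their stable DNS names. A rough sketch of how those names, and the resulting seed URI, are put together (names taken from the example manifest; credentials are omitted even though the example enables --auth):

// Sketch: how the stable DNS names of the members in the example above are
// formed ("<statefulset>-<ordinal>" . governing service . namespace .
// svc.cluster.local) and the in-cluster seed URI they produce. All names
// are taken from the manifest above; credentials are left out.
package main

import (
	"fmt"
	"strings"
)

func main() {
	const (
		statefulSet = "mongo"
		service     = "mongo" // the governing (headless) Service
		namespace   = "mongo-cluster"
		replicas    = 3
	)

	var seeds []string
	for i := 0; i < replicas; i++ {
		seeds = append(seeds, fmt.Sprintf("%s-%d.%s.%s.svc.cluster.local:27017",
			statefulSet, i, service, namespace))
	}
	fmt.Println("mongodb://" + strings.Join(seeds, ",") + "/?replicaSet=MainSetRep")
}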

How to Configure Kubernetes in Hairpin Mode

I'm trying to enable hairpin connections on my Kubernetes service, on GKE.
I've tried to follow the instructions here: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/ to configure my kubelet config to enable hairpin mode, but it looks like my configs are never saved, even though the edit command returns without error.
Here is what I try to set when I edit node:
spec:
  podCIDR: 10.4.1.0/24
  providerID: gce://staging/us-east4-b/gke-cluster-staging-highmem-f36fb529-cfnv
  configSource:
    configMap:
      name: my-node-config-4kbd7d944d
      namespace: kube-system
      kubeletConfigKey: kubelet
Here is my node config when I describe it
Name: my-node-config-4kbd7d944d
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
kubelet_config:
----
{
"kind": "KubeletConfiguration",
"apiVersion": "kubelet.config.k8s.io/v1beta1",
"hairpinMode": "hairpin-veth"
}
I've tried both using "edit node" and "patch". Same result in that nothing is saved. Patch returns "no changes made."
Here is the patch command from the tutorial:
kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}"
I also can't find any resource on where the "hairpinMode" attribute is supposed to be set.
Any help is appreciated!
------------------- edit ----------------
Here is why I think hairpinning isn't working.
root@668cb9686f-dzcx8:/app# nslookup tasks-staging.[my-domain].com
Server: 10.0.32.10
Address: 10.0.32.10#53
Non-authoritative answer:
Name: tasks-staging.[my-domain].com
Address: 34.102.170.43
root@668cb9686f-dzcx8:/app# curl https://[my-domain].com/python/healthz
hello
root@668cb9686f-dzcx8:/app# nslookup my-service.default
Server: 10.0.32.10
Address: 10.0.32.10#53
Name: my-service.default.svc.cluster.local
Address: 10.0.38.76
root@668cb9686f-dzcx8:/app# curl https://my-service.default.svc.cluster.local/python/healthz
curl: (7) Failed to connect to my-service.default.svc.cluster.local port 443: Connection timed out
also if I issue a request to localhost from my service (not curl), it gets a "connection refused." Issuing requests to the external domain, which should get routed to the same pod, is fine though.
I only have one service, one node, one pod, and two listening ports at the moment.
--------------------- including deployment yaml -----------------
Deployment
spec:
  replicas: 1
  spec:
    containers:
    - name: my-app
      ports:
      - containerPort: 8080
      - containerPort: 50001
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
          scheme: HTTPS
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  backend:
    serviceName: my-service
    servicePort: 60000
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 60000
      - path: /python/*
        backend:
          serviceName: my-service
          servicePort: 60001
service
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - name: port
    port: 60000
    targetPort: 8080
  - name: python-port
    port: 60001
    targetPort: 50001
  type: NodePort
I'm trying to set up a multi-port application where the main program triggers a script by issuing a request to the local machine on a different port. (I need to run something in Python, but the main app is in Golang.)
It's a simple script and I'd like to avoid exposing the python endpoints with the external domain, so I don't have to worry about authentication, etc.
-------------- requests sent from my-service in golang -------------
https://[my-domain]/health: success
https://[my-domain]/python/healthz: success
http://my-service.default:60000/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://my-service.default/python/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://my-service.default:60001/python/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://localhost:50001/healthz: dial tcp 127.0.0.1:50001: connect: connection refused
http://localhost:50001/python/healthz: dial tcp 127.0.0.1:50001: connect: connection refused
Kubelet reconfiguration in GKE
You should not reconfigure kubelet in cloud managed Kubernetes clusters like GKE. It's not supported and it can lead to errors and failures.
Hairpinning in GKE
Hairpinning is enabled by default in GKE clusters. You can check whether it's enabled by running the command below on one of the GKE nodes:
ifconfig cbr0 | grep PROMISC
The output should look like that:
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1
Where the PROMISC will indicate that the hairpinning is enabled.
Please refer to official documentation about debugging services: Kubernetes.io: Debug service: a pod fails to reach itself via the service ip
Workload
Based only on the service definition you provided, you should be able to access your python application (listening on port 50001 inside the pod) via the following, illustrated in the sketch after this list:
localhost:50001
ClusterIP:60001
my-service:60001
NodeIP:nodeport-port (check $ kubectl get svc my-service for this port)
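For instance, from the Go container, a local call to the Python port could look like this sketch (it assumes the Python process serves plain HTTP on 50001 and exposes /healthz, which the question does not confirm):

// Sketch of the in-pod, cross-container call the question describes: the Go
// app triggers the Python endpoint over localhost. Plain HTTP on port 50001
// and the /healthz path are assumptions, not confirmed above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:50001/healthz")
	if err != nil {
		log.Fatal(err) // "connection refused" here means nothing is listening on 50001
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}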
I tried to run your Ingress resource and it failed to create. Please check what an Ingress definition should look like.
Please take a look on official documentation where whole deployment process is explained with examples:
Kubernetes.io: Connect applications service
Cloud.google.com: Kubernetes engine: Ingress
Cloud.google.com: Kubernetes engine: Load balance ingress
Additionally please check other StackOverflow answers like:
Stackoverflow.com: Kubernetes how to access service if nodeport is random - it describes how you can access application in your pod
Stackoverflow.com: What is the purpose of kubectl proxy - it describes what happen when you create your service object.
Please let me know if you have any questions to that.

Access SQL Server database from Kubernetes Pod

My deployed Spring Boot application is trying to connect to an external SQL Server database from a Kubernetes Pod, but every time it fails with the error
Failed to initialize pool: The TCP/IP connection to the host <>, port 1443 has failed.
Error: "Connection timed out: no further information.
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
I have tried to exec into the Pod and successfully ping the DB server without any issues
Below are the solutions I have tried:
Created a Service and Endpoints object, provided the DB IP in the configuration file, and tried to bring up the application in the Pod
Tried using the internal IP from the Endpoints object instead of the DB IP in the configuration, to check whether the internal IP resolves to the DB IP
Both cases gave the same result. Below is the yaml I am using to create the Service and Endpoints:
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
  namespace: cattle
spec:
  type: ClusterIP
  ports:
  - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
  namespace: cattle
subsets:
- addresses:
  - ip: <<DB IP>>
  ports:
  - port: 1433
Please let me know if I am wrong or missing something in this setup.
Additional information on the K8s setup:
It is a clustered master with an external etcd cluster topology
OS on the nodes is CentOS
Able to ping the server from all nodes and the pods that are created
For this scenario an ExternalName service is very useful. It redirects traffic to the external address without you having to define an Endpoints object.
kind: "Service"
apiVersion: "v1"
metadata:
  namespace: "your-namespace"
  name: "ftp"
spec:
  type: ExternalName
  externalName: your-ip
The issue was resolved by updating the deployment yaml with the IP address. Since all the servers were in the same subnet, I did not need to create a Service or Endpoints object to access the DB. Thank you for all the inputs on the post.
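One closing note for anyone hitting the same timeout: ping only proves ICMP reachability, so a quick TCP probe of port 1433 from inside the pod (sketched in Go below, though nc or any other tool works just as well) is a fast way to tell DNS or Service problems apart from a firewall blocking the port.

// Minimal sketch: check whether the SQL Server port is actually reachable
// over TCP from inside the pod. The host shown is the in-cluster Service
// name from the yaml above; swap in the raw DB IP to test the direct path.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "mssql.cattle.svc.cluster.local:1433", 5*time.Second)
	if err != nil {
		fmt.Println("tcp connect failed:", err)
		return
	}
	conn.Close()
	fmt.Println("tcp connect ok")
}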