I have a MongoDB replica set running on a Kubernetes cluster (on AWS EKS), say cluster-1. It runs inside VPC-1, which has the CIDR 192.174.0.0/16.
I have another cluster in a separate VPC, say VPC-2, where I'll be running some applications on top of the mongo cluster. This VPC's CIDR range is 192.176.0.0/16. All VPC peering and security group ingress/egress rules are working fine, and I am able to ping cluster nodes across the two VPCs.
I am using a NodePort-type Service and a StatefulSet for the mongo cluster:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongodb
spec:
  selector:
    role: mongo
  type: NodePort
  ports:
    - port: 26017
      targetPort: 27017
      nodePort: 30017
Here are the nodes and pods in the mongo cluster, cluster-1:
ubuntu@ip-192-174-5-253:/st_config/kubeobj$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-174-187-133.ap-south-1.compute.internal Ready <none> 19h v1.16.8-eks-e16311 192.174.187.133 13.232.195.39 Amazon Linux 2 4.14.181-140.257.amzn2.x86_64 docker://19.3.6
ip-192-174-23-229.ap-south-1.compute.internal Ready <none> 19h v1.16.8-eks-e16311 192.174.23.229 13.234.111.139 Amazon Linux 2 4.14.181-140.257.amzn2.x86_64 docker://19.3.6
ubuntu@ip-192-174-5-253:/st_config/kubeobj$
ubuntu@ip-192-174-5-253:/st_config/kubeobj$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mongod-0 1/1 Running 0 45m 192.174.8.10 ip-192-174-23-229.ap-south-1.compute.internal <none> <none>
mongod-1 1/1 Running 0 44m 192.174.133.136 ip-192-174-187-133.ap-south-1.compute.internal <none> <none>
ubuntu@ip-192-174-5-253:/st_config/kubeobj$
If I try to connect using a specific node address, or both node addresses, Kubernetes appears to load-balance or rotate the connection in a round-robin fashion:
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:PRIMARY>
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:SECONDARY>
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017,192.174.187.133:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017,192.174.187.133:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:PRIMARY>
I wish to leverage the replica set features, so I used the connection string mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0. The client then discovers the FQDNs of the pods, which cannot be resolved from the nodes/pods of cluster-2 in VPC-2.
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0
2020-06-23T15:59:07.407+0000 I NETWORK [thread1] Starting new replica set monitor for test_rs0/192.174.23.229:30017,192.174.187.133:30017
2020-06-23T15:59:07.409+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to 192.174.23.229:30017 (1 connections now open to 192.174.23.229:30017 with a 5 second timeout)
2020-06-23T15:59:07.409+0000 I NETWORK [thread1] Successfully connected to 192.174.187.133:30017 (1 connections now open to 192.174.187.133:30017 with a 5 second timeout)
2020-06-23T15:59:07.410+0000 I NETWORK [thread1] changing hosts to test_rs0/mongod-0.mongodb-service.default.svc.cluster.local:27017,mongod-1.mongodb-service.default.svc.cluster.local:27017 from test_rs0/192.174.187.133:30017,192.174.23.229:30017
2020-06-23T15:59:07.415+0000 I NETWORK [thread1] getaddrinfo("mongod-1.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.415+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] getaddrinfo("mongod-0.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.917+0000 I NETWORK [thread1] getaddrinfo("mongod-0.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.918+0000 I NETWORK [thread1] getaddrinfo("mongod-1.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.918+0000 W NETWORK [thread1] Unable to reach primary for set test_rs0
2020-06-23T15:59:07.918+0000 I NETWORK [thread1] Cannot reach any nodes for set test_rs0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
Do I need some additional DNS service so that these names get resolved on the VPC-2 nodes? What would be the best approach?
Also, how can I use a connection string based on the service name, e.g. mongodb://mongodb-service.default.svc.cluster.local:/?replicaSet=test_rs0, from any node in VPC-2? It works from any pod in VPC-1, but I need it to work from pods in the cluster in VPC-2 so that I don't have to put a specific pod/node IP in the connection string. All my Kubernetes objects are in the default namespace.
I'd really appreciate some help here.
**Please note: I am NOT using Helm.**
Kubernetes ships with CoreDNS, which gives each pod a resolvable name.
If I'm not mistaken, you are using a StatefulSet.
The best approach for the members of your mongo cluster to reach each other is to use a ClusterIP Service.
If your applications run in the same namespace as mongo, they can connect using:
mongod-0.app_name:27017,mongod-1.app_name:27017
Note: app_name = mongod
Here is an example:
apiVersion: v1
kind: Service
metadata:
  namespace: mongo-cluster
  name: mongo
  labels:
    app: mongo
    name: mongo
spec:
  type: ClusterIP
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: mongo-cluster
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--replSet"
            - "MainSetRep"
            - "--bind_ip"
            - "0.0.0.0"
            - "--auth"
            - "--clusterAuthMode"
            - "keyFile"
            - "--keyFile"
            - "/etc/secrets-volume/mongodb-keyfile"
            - "--setParameter"
            - "authenticationMechanisms=SCRAM-SHA-1"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
            - name: mongo-key
              mountPath: "/etc/secrets-volume"
              readOnly: true
      volumes:
        - name: mongo-key
          secret:
            defaultMode: 0400
            secretName: mongo-key
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: openebs-hostpath
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20G
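With manifests like these, the replica set members advertise host names of the form <pod>.<service>.<namespace>.svc.cluster.local. As a sketch of a connection string using the names from this example (StatefulSet mongo, Service mongo, namespace mongo-cluster, replica set MainSetRep; authentication options are omitted, and per-pod DNS records normally require the StatefulSet's governing Service to be headless, i.e. clusterIP: None):
mongo "mongodb://mongo-0.mongo.mongo-cluster.svc.cluster.local:27017,mongo-1.mongo.mongo-cluster.svc.cluster.local:27017,mongo-2.mongo.mongo-cluster.svc.cluster.local:27017/?replicaSet=MainSetRep"
These names only resolve where the cluster's DNS is reachable, which is exactly why the client in VPC-2 in the original question fails to resolve them.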
I'm trying to deploy a MongoDB replica set by using the MongoDB Community Kubernetes Operator in Minikube. To view the content of the database I would like to connect to the MongoDB replica set through Mongo Compass.
I followed the instructions on the official GitHub, so:
Install the CRD
Install the necessary roles and role-bindings
Install the Operator
Deploy the Replicaset
The YAML file used for the replica set deployment is the following:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongo-rs
  namespace: mynamespace
spec:
  members: 3
  type: ReplicaSet
  version: "4.4.0"
  persistent: true
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: user
      db: admin
      passwordSecretRef:
        name: user
      roles:
        - name: userAdminAnyDatabase
          db: admin
      scramCredentialsSecretName: user-scram
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
---
apiVersion: v1
kind: Secret
metadata:
  name: user
type: Opaque
stringData:
  password: password
The MongoDB resource is deployed and the mongo-rs pods are all running. I'm also able to connect to the replica set directly through the mongo shell from within the Kubernetes cluster.
However, I'd also like to connect to the MongoDB replica set from outside the Kubernetes cluster, so in addition I've created a LoadBalancer Service like the following:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: mynamespace
spec:
  type: LoadBalancer
  selector:
    app: mongo-rs-svc
  ports:
    - port: 27017
      nodePort: 30017
The pods (namely mongo-rs-0, mongo-rs-1, mongo-rs-2) are correctly bound to the service. On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On minikube, the LoadBalancer type makes the Service accessible through the minikube service command, which generates a tunnel for the service mongodb-service (for instance: 127.0.0.1:34873). But if I try to connect to the MongoDB replica set through the MongoDB Compass client using the connection string:
mongodb://user:password@127.0.0.1:34873/?authSource=admin&replicaSet=mongo-rs&readPreference=primary
the client can't connect to MongoDB, returning the error:
getaddrinfo ENOTFOUND
mongo-rs-0.mongo-rs-svc.mynamespace.svc.cluster.local
Any suggestions on how to access the replica set from outside kubernetes?
Thanks in advance!
Edit: I know it's possible to connect from the outside using port forwarding, but I'd be interested in a more production-oriented approach.
Minikube is a development tool, so for you it may be sufficient to connect from your host (desktop) via localhost.
First, you can't use the LoadBalancer type, because it round-robins between the mongodb instances, while only the primary of the replica set can accept writes.
Normally the mongo client with the right connection string will select the primary.
So use a NodePort and you will get a connection to one mongodb instance.
Run kubectl port-forward <resource-type/resource-name> [local_port]:<pod_port> against the service.
Before that you may want to change the mongodb-service. As far as I know, the field should be targetPort, not nodePort:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: mynamespace
spec:
  type: NodePort
  selector:
    app: mongo-rs-svc
  ports:
    - port: 27017
      targetPort: 27017
Then run something like this:
kubectl port-forward svc/mongodb-service 27017:27017 -n mynamespace
After that you can connect to mongodb at localhost:27017, and the Kubernetes service connects you to a mongo replica.
mongo localhost:27017
should connect. Adjust the ports to your needs.
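For example, with the forward in place, a connection string like the one below should work in the mongo shell or Compass (a sketch reusing the user, password, and authSource=admin from the question; the replicaSet option is left out on purpose, because only the single forwarded member is reachable):
mongo "mongodb://user:password@localhost:27017/?authSource=admin"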
Hope this idea helps.
I have a few remote virtual machines, on which I want to deploy some Mongodb instances and then make them accessible remotely, but for some reason I can't seem to make this work.
These are the steps I took:
I started a Kubernetes pod running Mongodb on a remote virtual machine.
Then I exposed it through a Kubernetes NodePort service.
Then I tried to connect to the Mongodb instance from my laptop, but it
didn't work.
Here is the command I used to try to connect:
$ mongo host:NodePort
(by "host" I mean the Kubernetes master).
And here is its output:
MongoDB shell version v4.0.3
connecting to: mongodb://host:NodePort/test
2018-10-24T21:43:41.462+0200 E QUERY [js] Error: couldn't connect to server host:NodePort, connection attempt failed: SocketException:
Error connecting to host:NodePort :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:257:13
#(connect):1:6
exception: connect failed
From the Kubernetes master, I made sure that the Mongodb pod was running. Then I ran a shell in the container and checked that the Mongodb server was working properly. Moreover, I had previously granted remote access to the Mongodb server, by specifying the "--bind-ip=0.0.0.0" option in its yaml description. To make sure that this option had been applied, I ran this command inside the Mongodb instance, from the same shell:
db._adminCommand( {getCmdLineOpts: 1} )
And here is the output:
{
    "argv" : [
        "mongod",
        "--bind_ip",
        "0.0.0.0"
    ],
    "parsed" : {
        "net" : {
            "bindIp" : "0.0.0.0"
        }
    },
    "ok" : 1
}
So the Mongodb server should actually be accessible remotely.
I can't figure out whether the problem is caused by Kubernetes or by Mongodb.
As a test, I followed exactly the same steps by using MySQL instead, and that worked (that is, I ran a MySQL pod and exposed it with a Kubernetes service, to make it accessible remotely, and then I successfully connected to it from my laptop). This would lead me to think that the culprit is Mongodb here, but I'm not sure. Maybe I'm just making a silly mistake somewhere.
Could someone help me shed some light on this? Or tell me how to debug this problem?
EDIT:
Here is the output of the kubectl describe deployment <mongo-deployment> command, as per your request:
Name: mongo-remote
Namespace: default
CreationTimestamp: Thu, 25 Oct 2018 06:31:24 +0000
Labels: name=mongo-remote
Annotations: deployment.kubernetes.io/revision=1
Selector: name=mongo-remote
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: name=mongo-remote
Containers:
mongocontainer:
Image: mongo:4.0.3
Port: 5000/TCP
Host Port: 0/TCP
Command:
mongod
--bind_ip
0.0.0.0
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: mongo-remote-655478448b (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 15m deployment-controller Scaled up replica set mongo-remote-655478448b to 1
For the sake of completeness, here is the yaml description of the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-remote
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo-remote
    spec:
      containers:
        - name: mongocontainer
          image: mongo:4.0.3
          imagePullPolicy: Always
          command:
            - "mongod"
            - "--bind_ip"
            - "0.0.0.0"
          ports:
            - containerPort: 5000
              name: mongocontainer
      nodeSelector:
        kubernetes.io/hostname: xxx
I found the mistake (and as I suspected, it was a silly one).
The problem was in the YAML description of the deployment: since no port was specified in the mongod command, mongodb was listening on its default port (27017), while the container declared a different port (5000).
So the solution is either to set the containerPort to mongodb's default port, like so:
command:
  - "mongod"
  - "--bind_ip"
  - "0.0.0.0"
ports:
  - containerPort: 27017
    name: mongocontainer
Or to make mongod listen on the port declared as containerPort, like so:
command:
  - "mongod"
  - "--bind_ip"
  - "0.0.0.0"
  - "--port"
  - "5000"
ports:
  - containerPort: 5000
    name: mongocontainer
I have multiple instances of MongoDB deployed inside my Kubernetes cluster through Helm packages.
They are exposed as Services of type NodePort.
How do I connect to those MongoDB instances through UI tools like MongoDB Compass and RoboMongo from outside the cluster?
Any help is appreciated.
You can use kubectl port-forward to connect to MongoDB from outside the cluster.
Run kubectl port-forward << name of a mongodb pod >> --namespace << mongodb namespace >> 27018:27018.
Now point your UI tool to localhost:27018 and kubectl will forward all connections to the pod inside the cluster.
Starting with Kubernetes 1.10+ you can also use this syntax to connect to a service (you don't have to find a pod name first):
kubectl port-forward svc/<< mongodb service name >> 27018:27018 --namespace << mongodb namespace>>
If it is not your production database you can expose it through a NodePort service:
# find mongo pod name
kubectl get pods
kubectl expose pod <<pod name>> --type=NodePort
# find new mongo service
kubectl get services
The last command will output something like:
mongodb-0 10.0.0.45 <nodes> 27017:32151/TCP 30s
Now you can access your mongo instance with mongo <<node-ip>>:32151
Fetch the service associated with the mongo db:
kubectl get services -n <namespace>
Port forward using:
kubectl port-forward service/<service_name> -n <namespace> 27018:27017
Open Robomongo on localhost:27018
If that doesn't solve it, expose your mongo workload as a LoadBalancer and use the IP provided by the service. Copy the LB IP and use it in Robo 3T. If it requires authentication, check my YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          volumeMounts:
            - name: data
              mountPath: "/data/db"
              subPath: "mongodb_data"
          ports:
            - containerPort: 27017
              protocol: TCP
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: xxxx
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: xxxx
      imagePullSecrets:
        - name: xxxx
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: xxx
Set the same values in the authentication tab in Robo 3T.
NOTE: I haven't included the Service section in the YAML since I exposed it as an LB directly in the GCP UI.
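If you would rather create the load balancer from a manifest instead of the GCP UI, a minimal sketch could look like this (the Service name mongodb-lb is made up for this example; the selector matches the app: mongodb label of the Deployment above):
apiVersion: v1
kind: Service
metadata:
  name: mongodb-lb
spec:
  type: LoadBalancer
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP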
I have deployed my application on Google Container Engine. My application requires MySQL. The application is running fine and connecting to MySQL correctly.
But I want to connect to the MySQL database from my local machine using a MySQL client (Workbench, or the command line). Can someone help me expose this to my local machine? And how can I open a MySQL command line from Cloud Shell?
I have run the commands below, but there is no external IP:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
app-mysql 1 1 1 1 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-mysql-3323704556-nce3w 1/1 Running 0 2m
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-mysql 11.2.145.79 <none> 3306/TCP 23h
EDIT
I am using the YAML file below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-mysql
    spec:
      volumes:
        - name: data
          emptyDir: {}
      containers:
        - name: mysql
          image: mysql:5.6.22
          env:
            - name: MYSQL_USER
              value: root
            - name: MYSQL_DATABASE
              value: appdb
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql/
---
apiVersion: v1
kind: Service
metadata:
  name: app-mysql
spec:
  selector:
    app: app-mysql
  ports:
    - port: 3306
Try the kubectl port-forward command.
In your case; kubectl port-forward app-mysql-3323704556-nce3w 3306:3306
See the documentation for all available options.
There are 2 steps involved:
1) First, perform port forwarding from localhost to your pod:
kubectl port-forward <your-mysql-pod-name> 3306:3306 -n <your-namespace>
2) Connect to the database:
mysql -u root -h 127.0.0.1 -p<your-password>
Note that you might need to change 127.0.0.1 to localhost, depending on your setup.
If host is set to:
localhost - then a socket or pipe is used.
127.0.0.1 - then the client is forced to use TCP/IP.
You can check if your database is listening for TCP connections with netstat -nlp.
Read more in:
Cant connect to local mysql server through socket tmp mysql sock
Can not connect to server
To add to the above answer: when you add --address 0.0.0.0, kubectl opens port 3306 to the internet too (not only localhost)!
kubectl port-forward POD_NAME 3306:3306 --address 0.0.0.0
Use it with caution for short debugging sessions only, on development systems at most. I used it in the following situation:
colleague who uses Windows
didn't have ssh key ready
environment was a playground I was not afraid to expose to the world
You need to add a service to your deployment. The service will add a load balancer with a public IP in front of your pod, so it can be accessed over the public internet.
See the documentation on how to add a service to a Kubernetes deployment. Use the following command to expose your app-mysql deployment:
kubectl expose deployment/app-mysql --type=LoadBalancer --port=3306
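Alternatively, since the question already defines an app-mysql Service, a declarative sketch of the same idea is to set that Service's type to LoadBalancer (labels taken from the question's manifest):
apiVersion: v1
kind: Service
metadata:
  name: app-mysql
spec:
  type: LoadBalancer
  selector:
    app: app-mysql
  ports:
    - port: 3306
      targetPort: 3306
Once an external IP appears in kubectl get service, point Workbench or the mysql CLI at <external-ip>:3306.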
You may also need to configure your MySQL service so it allows remote connections. See this link on how to enable remote access on MySQL server:
I'm also trying to expose a MySQL server instance on a local Kubernetes installation (one master and one node, both on Oracle Linux), but I am not able to access the pod.
The pod configuration is this:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits:
          cpu: 1
      image: docker.io/mariadb
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
      ports:
        - containerPort: 3306
          name: mysql
And the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306
  selector:
    name: mysql
I can see that the pod is running:
# kubectl get pod mysql
NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 3d
And the service is connected to an endpoint:
# kubectl describe service mysql
Name: mysql
Namespace: default
Labels: name=mysql
Selector: name=mysql
Type: NodePort
IP: 10.254.200.20
Port: <unset> 3306/TCP
NodePort: <unset> 30306/TCP
Endpoints: 11.0.14.2:3306
Session Affinity: None
No events.
I can see on netstat that kube-proxy is listening on port 30306 for all incoming connections.
tcp6 6 0 :::30306 :::* LISTEN 53039/kube-proxy
But somehow I don't get a response from mysql even on the localhost.
# telnet localhost 30306
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Whereas a normal MySQL installation responds with something like the following:
$ telnet [REDACTED] 3306
Trying [REDACTED]...
Connected to [REDACTED].
Escape character is '^]'.
N
[REDACTED]-log�gw&TS(gS�X]G/Q,(#uIJwmysql_native_password^]
Notice the mysql part in the last line.
On a final note there is this kubectl output:
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 9d
mysql 10.254.200.20 nodes 3306/TCP 1h
But I don't understand what "nodes" means in the EXTERNAL-IP column.
So what I want is to open access to the mysql service through the master IP (preferably). How do I do that, and what am I doing wrong?
I'm still not sure how to make clients connect to a single server that transparently routes all connections to the minions.
-> To do this you need a load balancer, which unfortunately is not a default Kubernetes building block.
You need to set up a reverse proxy that will send the traffic to the minion, for example an nginx pod plus hostPort: <port>, which binds the port on the host. That means the pod needs to stay on that node; to achieve that you would, for example, use a DaemonSet, or use the node name as a selector.
Obviously this is not very fault tolerant, so you can set up multiple reverse proxies and use DNS round-robin resolution to forward traffic to one of the proxy pods.
Somewhere, at some point, you need a fixed IP to talk to your service over the internet, so you need to ensure there is a static pod somewhere to handle that. A rough sketch of the hostPort idea follows below.
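This is only a sketch of that approach, with made-up names (mysql-proxy); the nginx stream configuration that would actually forward TCP 3306 to the mysql Service is omitted and would normally be mounted from a ConfigMap:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mysql-proxy
spec:
  selector:
    matchLabels:
      app: mysql-proxy
  template:
    metadata:
      labels:
        app: mysql-proxy
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 3306
              hostPort: 3306   # binds port 3306 on each node the DaemonSet runs on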
The NodePort is exposed on each node in your cluster via the kube-proxy service. To connect, use the IP of one of those hosts (e.g. Node01):
telnet [IpOfNode] 30306
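Once the port answers, a MySQL client can connect the same way; a sketch, where the root password is the MYSQL_ROOT_PASSWORD value from the pod spec in the question:
mysql -h [IpOfNode] -P 30306 -u root -p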