Cannot connect to a Mongodb pod in Kubernetes (Connection refused)

I have a few remote virtual machines, on which I want to deploy some Mongodb instances and then make them accessible remotely, but for some reason I can't seem to make this work.
These are the steps I took:
I started a Kubernetes pod running Mongodb on a remote virtual machine.
Then I exposed it through a Kubernetes NodePort service.
Then I tried to connect to the Mongodb instance from my laptop, but it didn't work.
Here is the command I used to try to connect:
$ mongo host:NodePort
(by "host" I mean the Kubernetes master).
And here is its output:
MongoDB shell version v4.0.3
connecting to: mongodb://host:NodePort/test
2018-10-24T21:43:41.462+0200 E QUERY [js] Error: couldn't connect to server host:NodePort, connection attempt failed: SocketException:
Error connecting to host:NodePort :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:257:13
@(connect):1:6
exception: connect failed
From the Kubernetes master, I made sure that the Mongodb pod was running. Then I ran a shell in the container and checked that the Mongodb server was working properly. Moreover, I had previously granted remote access to the Mongodb server, by specifying the "--bind-ip=0.0.0.0" option in its yaml description. To make sure that this option had been applied, I ran this command inside the Mongodb instance, from the same shell:
db._adminCommand( {getCmdLineOpts: 1} )
And here is the output:
{
    "argv" : [
        "mongod",
        "--bind_ip",
        "0.0.0.0"
    ],
    "parsed" : {
        "net" : {
            "bindIp" : "0.0.0.0"
        }
    },
    "ok" : 1
}
So the Mongodb server should actually be accessible remotely.
I can't figure out whether the problem is caused by Kubernetes or by Mongodb.
As a test, I followed exactly the same steps by using MySQL instead, and that worked (that is, I ran a MySQL pod and exposed it with a Kubernetes service, to make it accessible remotely, and then I successfully connected to it from my laptop). This would lead me to think that the culprit is Mongodb here, but I'm not sure. Maybe I'm just making a silly mistake somewhere.
Could someone help me shed some light on this? Or tell me how to debug this problem?
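One way to narrow down where the "Connection refused" comes from is to compare the port the service targets with the port mongod actually listens on. A rough sketch of that check (the service and pod names below are placeholders, and netstat/ss may or may not be present in the image):
$ kubectl get svc                                  # find the NodePort service in front of mongo-remote
$ kubectl describe svc <mongo-service>             # compare TargetPort and Endpoints with the pod's real listening port
$ kubectl get endpoints <mongo-service>            # an empty list means the selector matches no pod
$ kubectl exec -it <mongo-pod> -- netstat -tlnp    # or: ss -tlnp, to see which port mongod is bound to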
EDIT:
Here is the output of the kubectl describe deployment <mongo-deployment> command, as per your request:
Name: mongo-remote
Namespace: default
CreationTimestamp: Thu, 25 Oct 2018 06:31:24 +0000
Labels: name=mongo-remote
Annotations: deployment.kubernetes.io/revision=1
Selector: name=mongo-remote
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: name=mongo-remote
Containers:
mongocontainer:
Image: mongo:4.0.3
Port: 5000/TCP
Host Port: 0/TCP
Command:
mongod
--bind_ip
0.0.0.0
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: mongo-remote-655478448b (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 15m deployment-controller Scaled up replica set mongo-remote-655478448b to 1
For the sake of completeness, here is the yaml description of the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-remote
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo-remote
    spec:
      containers:
      - name: mongocontainer
        image: mongo:4.0.3
        imagePullPolicy: Always
        command:
        - "mongod"
        - "--bind_ip"
        - "0.0.0.0"
        ports:
        - containerPort: 5000
          name: mongocontainer
      nodeSelector:
        kubernetes.io/hostname: xxx

I found the mistake (and as I suspected, it was a silly one).
The problem was in the yaml description of the deployment. Since no port was specified in the mongod command, mongodb was listening on its default port (27017), while the containerPort exposed by the deployment (and targeted by the service) was a different one (5000).
So the solution is to either set the containerPort as the default port of mongodb, like so:
command:
- "mongod"
- "--bind_ip"
- "0.0.0.0"
ports:
- containerPort: 27017
  name: mongocontainer
Or to set the port of mongodb as the one of the containerPort, like so:
command:
- "mongod"
- "--bind_ip"
- "0.0.0.0"
- "--port"
- "5000"
ports:
- containerPort: 5000
  name: mongocontainer
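Either way, the NodePort service in front of the deployment has to target the port mongod really listens on. A minimal sketch matching the first variant (the service name and nodePort value are illustrative; the selector assumes the deployment's name=mongo-remote label):
apiVersion: v1
kind: Service
metadata:
  name: mongo-remote
spec:
  type: NodePort
  selector:
    name: mongo-remote
  ports:
  - port: 27017
    targetPort: 27017   # must match the port mongod actually listens on
    nodePort: 30017     # illustrative; any free port in the NodePort range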

Related

k0s kubectl exec and kubectl port-forwarding are broken

I have a simple nginx pod and a k0s cluster setup with the k0s binary. Now I want to connect to that pod, but I get this error:
$ kubectl port-forward frontend-deployment-786ddcb47-p5kkv 7000:80
error: error upgrading connection: error dialing backend: rpc error: code = Unavailable
desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock: connect: connection refused"
I don't understand why this happens and why it tries to access /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock, which does not exist on my machine.
Do I have to add my local dev machine with k0s to the cluster?
Extract from pod describe:
Containers:
frontend:
Container ID: containerd://897a8911cd31c6d58aef4b22da19dc8166cb7de713a7838bc1e486e497e9f1b2
Image: nginx:1.16
Image ID: docker.io/library/nginx@sha256:d20aa6d1cae56fd17cd458f4807e0de462caf2336f0b70b5eeb69fcaaf30dd9c
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 28 Jan 2021 14:20:58 +0100
Ready: True
Restart Count: 0
Environment: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m43s default-scheduler Successfully assigned remove-me/frontend-deployment-786ddcb47-p5kkv to k0s-worker-2
Normal Pulling 3m42s kubelet Pulling image "nginx:1.16"
Normal Pulled 3m33s kubelet Successfully pulled image "nginx:1.16" in 9.702313183s
Normal Created 3m32s kubelet Created container frontend
Normal Started 3m32s kubelet Started container frontend
deployment.yml and service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.16
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The workaround is to just remove the file /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock and restart the server.
Currently my GitHub issue is still open:
https://github.com/k0sproject/k0s/issues/665
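In shell terms, that workaround might look roughly like this (the systemd unit name depends on how k0s was installed and on the node's role, so treat it as an assumption):
# On the affected node: remove the stale socket, then restart k0s
sudo rm /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock
sudo systemctl restart k0scontroller   # or k0sworker, depending on the node's role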
I had a similar issue where /var/lib/k0s/run/konnectivity-server/konnectivity-server.sock was not getting created (in my case the path was /run/k0s/konnectivity-server/konnectivity-server.sock).
I changed my host configuration which finally fixed this issue.
Hostname: The hostnames of my nodes were in uppercase, but k0s somehow expected them to be in lowercase. We can override the hostname in the configuration file, but that still did not fix the konnectivity sock issue, so I had to reset the hostnames of all nodes to lowercase.
This change finally fixed my issue: on the first attempt I saw the .sock file getting created, but it still didn't have the right permissions. Then I followed the suggestion given above by TecBeast, and that fixed the issue permanently.
This issue was then raised as an Issue in the K0s GitHub repository, and fixed in a Pull Request.
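For reference, resetting a node's hostname to its lowercase form (the change described above) might be done along these lines (a sketch; it assumes a systemd-based node with hostnamectl available):
# Run on each node whose hostname contains uppercase letters, then restart the k0s service on it
sudo hostnamectl set-hostname "$(hostname | tr '[:upper:]' '[:lower:]')"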

EKS, Windows node. networkPlugin cni failed

Per https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html, I ran the command eksctl utils install-vpc-controllers --cluster <cluster_name> --approve
My EKS version is v1.16.3. I tried to deploy Windows docker images to a Windows node and got the error below.
Warning FailedCreatePodSandBox 31s kubelet, ip-west-2.compute.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ab8001f7b01f5c154867b7e" network for pod "mrestapi-67fb477548-v4njs": networkPlugin cni failed to set up pod "mrestapi-67fb477548-v4njs_ui" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4Address
$ kubectl logs vpc-resource-controller-645d6696bc-s5rhk -n kube-system
I1010 03:40:29.041761 1 leaderelection.go:185] attempting to acquire leader lease kube-system/vpc-resource-controller...
I1010 03:40:46.453557 1 leaderelection.go:194] successfully acquired lease kube-system/vpc-resource-controller
W1010 23:57:53.972158 1 reflector.go:341] pkg/mod/k8s.io/client-go@v0.0.0-20180910083459-2cefa64ff137/tools/cache/reflector.go:99: watch of *v1.Pod ended with: too old resource version: 1480444 (1515040)
It complains about a too old resource version. How do I upgrade the version?
I removed the Windows nodes and re-created them with a different instance type, but it did not work.
I removed the Windows node group and re-created it. It did not work.
Finally, I removed the entire EKS cluster and re-created it. The command kubectl describe node <windows_node> now gives me the output below.
vpc.amazonaws.com/CIDRBlock 0 0
vpc.amazonaws.com/ENI 0 0
vpc.amazonaws.com/PrivateIPv4Address 1 1
I deployed windows-server-iis.yaml and it works as expected. The root cause of the problem remains a mystery.
To troubleshoot this I would...
First list the components to make sure they're running:
$ kubectl get pod -n kube-system | grep vpc
vpc-admission-webhook-deployment-7f67d7b49-wgzbg 1/1 Running 0 38h
vpc-resource-controller-595bfc9d98-4mb2g 1/1 Running 0 29
If they are running, check their logs:
kubectl logs <vpc-yadayada> -n kube-system
Make sure the instance type you are using has enough available IPs per ENI, because in the Windows world only one ENI is used, limited to the maximum available IPs per ENI minus one for the primary IP address. I have run into this error before when I exceeded the number of IPs available to my ENI (a quick check is sketched after the selector snippet below).
Confirm that the selector of your pod is right
nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64
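As mentioned above, a quick way to see whether the node actually advertises any Windows VPC resources (the grep pattern is just illustrative):
$ kubectl describe node <windows_node> | grep -i PrivateIPv4Address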
As an anecdote, I have done the steps mentioned under the To enable Windows support for your cluster with a macOS or Linux client section of the doc you linked on a few clusters to date, and they have worked well.
What is your output for
kubectl describe node <windows_node>
?
If it's like:
vpc.amazonaws.com/CIDRBlock: 0
vpc.amazonaws.com/ENI: 0
vpc.amazonaws.com/PrivateIPv4Address: 0
then you need to re-create the node group with a different instance type...
Then try to deploy this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-server-iis-test
  namespace: default
spec:
  selector:
    matchLabels:
      app: windows-server-iis-test
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: windows-server-iis-test
        tier: backend
        track: stable
    spec:
      containers:
      - name: windows-server-iis-test
        image: mcr.microsoft.com/windows/servercore:1809
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
        command:
        - powershell.exe
        - -command
        - "Add-WindowsFeature Web-Server; Invoke-WebRequest -UseBasicParsing -Uri 'https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe' -OutFile 'C:\\ServiceMonitor.exe'; echo '<html><body><br/><br/><marquee><H1>Hello EKS!!!<H1><marquee></body><html>' > C:\\inetpub\\wwwroot\\default.html; C:\\ServiceMonitor.exe 'w3svc'; "
        resources:
          limits:
            cpu: 256m
            memory: 256Mi
          requests:
            cpu: 128m
            memory: 100Mi
      nodeSelector:
        kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: windows-server-iis-test
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: windows-server-iis-test
    tier: backend
    track: stable
  sessionAffinity: None
  type: ClusterIP
kubectl proxy
Then open http://localhost:8001/api/v1/namespaces/default/services/http:windows-server-iis-test:80/proxy/default.html in a browser; it will show a webpage with the Hello EKS text.

How to connect mongo replicaset on kubernetes cluster from another cluster pods

I have a mongo db replica set running on a kubernetes cluster (on AWS EKS), say cluster-1. This is running within VPC-1, which has CIDR 192.174.0.0/16.
I have another cluster in a separate VPC, say VPC-2, where I'll be running some applications on top of the mongo cluster. This VPC's CIDR range is 192.176.0.0/16. All VPC peering and security group ingress/egress rules are working fine and I am able to ping cluster nodes across the two VPCs.
I am using a NodePort type service and a StatefulSet for the mongo cluster:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongodb
spec:
  selector:
    role: mongo
  type: NodePort
  ports:
  - port: 26017
    targetPort: 27017
    nodePort: 30017
Here are the nodes & pods in mongo cluster, cluster-1 :
ubuntu@ip-192-174-5-253:/st_config/kubeobj$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-174-187-133.ap-south-1.compute.internal Ready <none> 19h v1.16.8-eks-e16311 192.174.187.133 13.232.195.39 Amazon Linux 2 4.14.181-140.257.amzn2.x86_64 docker://19.3.6
ip-192-174-23-229.ap-south-1.compute.internal Ready <none> 19h v1.16.8-eks-e16311 192.174.23.229 13.234.111.139 Amazon Linux 2 4.14.181-140.257.amzn2.x86_64 docker://19.3.6
ubuntu@ip-192-174-5-253:/st_config/kubeobj$
ubuntu@ip-192-174-5-253:/st_config/kubeobj$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mongod-0 1/1 Running 0 45m 192.174.8.10 ip-192-174-23-229.ap-south-1.compute.internal <none> <none>
mongod-1 1/1 Running 0 44m 192.174.133.136 ip-192-174-187-133.ap-south-1.compute.internal <none> <none>
ubuntu@ip-192-174-5-253:/st_config/kubeobj$
If I try to connect using a specific node address, OR both node addresses, kubernetes is perhaps load-balancing or rotating the connection in a round robin fashion:
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:PRIMARY>
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:SECONDARY>
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017,192.174.187.133:30017
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017,192.174.187.133:30017
MongoDB server version: 3.4.24
WARNING: shell and server versions do not match
test_rs0:PRIMARY>
I wish to leverage the replica set features. So when I used the connect string mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0, it actually got back the FQDNs of the pods, which cannot be resolved from the nodes/pods of cluster-2 in VPC-2.
ubuntu@ip-192-176-42-206:~$ mongo mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0
MongoDB shell version v3.6.3
connecting to: mongodb://192.174.23.229:30017,192.174.187.133:30017/?replicaSet=test_rs0
2020-06-23T15:59:07.407+0000 I NETWORK [thread1] Starting new replica set monitor for test_rs0/192.174.23.229:30017,192.174.187.133:30017
2020-06-23T15:59:07.409+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to 192.174.23.229:30017 (1 connections now open to 192.174.23.229:30017 with a 5 second timeout)
2020-06-23T15:59:07.409+0000 I NETWORK [thread1] Successfully connected to 192.174.187.133:30017 (1 connections now open to 192.174.187.133:30017 with a 5 second timeout)
2020-06-23T15:59:07.410+0000 I NETWORK [thread1] changing hosts to test_rs0/mongod-0.mongodb-service.default.svc.cluster.local:27017,mongod-1.mongodb-service.default.svc.cluster.local:27017 from test_rs0/192.174.187.133:30017,192.174.23.229:30017
2020-06-23T15:59:07.415+0000 I NETWORK [thread1] getaddrinfo("mongod-1.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.415+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor-0] getaddrinfo("mongod-0.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.917+0000 I NETWORK [thread1] getaddrinfo("mongod-0.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.918+0000 I NETWORK [thread1] getaddrinfo("mongod-1.mongodb-service.default.svc.cluster.local") failed: Name or service not known
2020-06-23T15:59:07.918+0000 W NETWORK [thread1] Unable to reach primary for set test_rs0
2020-06-23T15:59:07.918+0000 I NETWORK [thread1] Cannot reach any nodes for set test_rs0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
Do I need some additional DNS service so that the names get resolved on the VPC-2 nodes? What would be the best approach?
Also, how can I use a connect string based on the service name, e.g. mongodb://mongodb-service.default.svc.cluster.local:/?replicaSet=test_rs0, from any node in VPC-2? It works from any pod in VPC-1, but I need to get this working from pods in the cluster in VPC-2 so that I don't have to specify a specific pod/node IP in the connect string. All my kubernetes objects are in the default namespace.
Really appreciate some help here.
**Please note: I am NOT using Helm.**
Kubernetes has CoreDNS to resolve the DNS name of each pod.
If I'm not wrong, you are using a StatefulSet deployment.
The best approach for connecting the members of your mongo cluster is to use a ClusterIP service so they can communicate with each other.
If your mongo pods are in the same namespace, you can connect using:
mongod-0.app_name:27017,mongod-1.app_name:27017
for each of your applications.
Note: app_name=mongod
Here some example :
apiVersion: v1
kind: Service
metadata:
  namespace: mongo-cluster
  name: mongo
  labels:
    app: mongo
    name: mongo
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: mongo-cluster
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - "numactl"
        - "--interleave=all"
        - "mongod"
        - "--replSet"
        - "MainSetRep"
        - "--bind_ip"
        - "0.0.0.0"
        - "--auth"
        - "--clusterAuthMode"
        - "keyFile"
        - "--keyFile"
        - "/etc/secrets-volume/mongodb-keyfile"
        - "--setParameter"
        - "authenticationMechanisms=SCRAM-SHA-1"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
        - name: mongo-key
          mountPath: "/etc/secrets-volume"
          readOnly: true
      volumes:
      - name: mongo-key
        secret:
          defaultMode: 0400
          secretName: mongo-key
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: openebs-hostpath
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20G
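With a StatefulSet like this, clients could then address the members by their stable per-pod DNS names, roughly like the sketch below (note that those per-pod names only resolve if the governing service is headless, i.e. clusterIP: None; the replica set name comes from the --replSet flag above, and credentials are omitted even though --auth is enabled):
$ mongo "mongodb://mongo-0.mongo.mongo-cluster.svc.cluster.local:27017,mongo-1.mongo.mongo-cluster.svc.cluster.local:27017,mongo-2.mongo.mongo-cluster.svc.cluster.local:27017/?replicaSet=MainSetRep"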

How to connect MySQL running on Kubernetes

I have deployed my application on Google gcloud container engine. My application requires MySQL. The application is running fine and connecting to MySQL correctly.
But I want to connect to the MySQL database from my local machine using a MySQL client (Workbench, or the command line). Can someone help me expose this to my local machine? And how can I open a MySQL command line in gcloud shell?
I have run the commands below, but there is no external IP:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
app-mysql 1 1 1 1 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-mysql-3323704556-nce3w 1/1 Running 0 2m
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-mysql 11.2.145.79 <none> 3306/TCP 23h
EDIT
I am using below yml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-mysql
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: mysql
        image: mysql:5.6.22
        env:
        - name: MYSQL_USER
          value: root
        - name: MYSQL_DATABASE
          value: appdb
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql/
---
apiVersion: v1
kind: Service
metadata:
  name: app-mysql
spec:
  selector:
    app: app-mysql
  ports:
  - port: 3306
Try the kubectl port-forward command.
In your case: kubectl port-forward app-mysql-3323704556-nce3w 3306:3306
See the documentation for all available options.
There are 2 steps involved:
1 ) You first perform port forwarding from localhost to your pod:
kubectl port-forward <your-mysql-pod-name> 3306:3306 -n <your-namespace>
2 ) Connect to database:
mysql -u root -h 127.0.0.1 -p <your-password>
Notice that you might need to change 127.0.0.1 to localhost, depending on your setup.
If host is set to:
localhost - then a socket or pipe is used.
127.0.0.1 - then the client is forced to use TCP/IP.
You can check if your database is listening for TCP connections with netstat -nlp.
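If you want to keep the hostname localhost but still go over TCP (so the port-forward is actually used), the client can be forced explicitly; a small illustration:
mysql --protocol=TCP -h localhost -P 3306 -u root -p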
Read more in:
Cant connect to local mysql server through socket tmp mysql sock
Can not connect to server
To add to the above answer: when you add --address 0.0.0.0, kubectl opens port 3306 to the INTERNET too (not only localhost)!
kubectl port-forward POD_NAME 3306:3306 --address 0.0.0.0
Use it with caution for short debugging sessions only, on development systems at most. I used it in the following situation:
colleague who uses Windows
didn't have ssh key ready
environment was a playground I was not afraid to expose to the world
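A slightly narrower variant is to bind the forward to one specific interface instead of all of them (the address here is only an example):
kubectl port-forward POD_NAME 3306:3306 --address 192.168.1.10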
You need to add a service of type LoadBalancer to your deployment. The service will put a load balancer with a public IP in front of your pod, so it can be accessed over the public internet.
See the documentation on how to add a service to a Kubernetes deployment. Use the following command to expose your app-mysql deployment:
kubectl expose deployment/app-mysql --type=LoadBalancer --port=3306
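Written out as a manifest, that exposure could look roughly like this (a sketch; the service name is made up, and type LoadBalancer assumes a provider, such as GKE in this question, that can provision one):
apiVersion: v1
kind: Service
metadata:
  name: app-mysql-external
spec:
  type: LoadBalancer
  selector:
    app: app-mysql
  ports:
  - port: 3306
    targetPort: 3306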
You may also need to configure your MySQL service so it allows remote connections. See this link on how to enable remote access on MySQL server:

Enable remote acess to kubernetes pod on local installation

I'm also trying to expose a mysql server instance on a local kubernetes installation (1 master and one node, both on Oracle Linux), but I am not able to access the pod.
The pod configuration is this:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 1
    image: docker.io/mariadb
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "123456"
    ports:
    - containerPort: 3306
      name: mysql
And the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306
  selector:
    name: mysql
I can see that the pod is running:
# kubectl get pod mysql
NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 3d
And the service is connected to an endpoint:
# kubectl describe service mysql
Name: mysql
Namespace: default
Labels: name=mysql
Selector: name=mysql
Type: NodePort
IP: 10.254.200.20
Port: <unset> 3306/TCP
NodePort: <unset> 30306/TCP
Endpoints: 11.0.14.2:3306
Session Affinity: None
No events.
I can see on netstat that kube-proxy is listening on port 30306 for all incoming connections.
tcp6 6 0 :::30306 :::* LISTEN 53039/kube-proxy
But somehow I don't get a response from mysql even on the localhost.
# telnet localhost 30306
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Whereas a normal mysql installation responds with something like the following:
$ telnet [REDACTED] 3306
Trying [REDACTED]...
Connected to [REDACTED].
Escape character is '^]'.
N
[REDACTED]-log�gw&TS(gS�X]G/Q,(#uIJwmysql_native_password^]
Notice the mysql part in the last line.
On a final note there is this kubectl output:
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 9d
mysql 10.254.200.20 nodes 3306/TCP 1h
But I don't understand what "nodes" means in the EXTERNAL-IP column.
So what I want is to open access to the mysql service through the master IP (preferably). How do I do that, and what am I doing wrong?
I'm still not sure how to make clients connect to a single server that transparently routes all connections to the minions.
-> To do this you need a load balancer, which unfortunately is not a default Kubernetes building block.
You need to set up a reverse proxy that will send the traffic to the minion, like an nginx pod and a service using hostPort: <port> that will bind the port to the host. That means the pod needs to stay on that node, and to do that you would want to use a DaemonSet that uses the node name as selector, for example.
Obviously, this is not very fault tolerant, so you can set up multiple reverse proxies and use DNS round robin resolution to forward traffic to one of the proxy pods.
Somewhere, at some point, you need a fixed IP to talk to your service over the internet, so you need to ensure there is a static pod somewhere to handle that.
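As a very rough sketch of that reverse-proxy idea (everything here is an assumption for illustration: the names, the hostPort, and the fact that nginx would still need a stream proxy configuration pointing at the mysql service, mounted e.g. from a ConfigMap):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mysql-edge-proxy
spec:
  selector:
    matchLabels:
      app: mysql-edge-proxy
  template:
    metadata:
      labels:
        app: mysql-edge-proxy
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 3306
          hostPort: 3306   # exposes the proxy on every node's IP
        # an nginx.conf with a stream { server { listen 3306; proxy_pass mysql:3306; } }
        # block would have to be mounted in for this to actually proxy MySQL traffic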
The NodePort is exposed on each node in your cluster via the kube-proxy service. To connect, use the IP of that host (e.g. Node01):
telnet [IpOfNode] 30306