How do I connect to a MongoDB NodePort service from Compass

I have a Mongo deployment on k8s which is exposed as a NodePort. The database is running on a remote Kubernetes cluster and I am trying to connect from my PC using Compass (over my private internet connection).
When I attempt to connect from Compass using the following URI, I get the error connect ETIMEDOUT XXX.XXX.XXX.XXX:27017:
mongodb://root:password#XXX.XXX.XXX.XXX:27017/TestDb?authSource=admin
I have checked the service with
kubectl -n labs get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mongodb-service NodePort 10.XXX.XXX.XXX <none> 27017:31577/TCP 172m
and using the URL mongodb://root:password#XXX.XXX.XXX:31577/TestDb?authSource=admin also yields the same connect timeout error.
The service is defined as:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: demos
spec:
  type: NodePort
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
I have tried using the URL that is displayed on shell access:
mongodb://XXX.XX.XX.XX:31577/?compressors=disabled&gssapiServiceName=mongodb
but I get the error option gssapiServiceName is not supported.
I am able to log in and access the database from the command line (kubectl -n demos exec -it podname -- sh), and after logging in I see:
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
My understanding is that I should be using the node IP of my k8s cluster, with something like
https://XXX.XXX.XXX.XXX:31577/?compressors=disabled&gssapiServiceName=mongodb
but this also fails with the error Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://".
Using either of these URIs:
mongodb://root:password#mongodb-service:27017/
mongodb://root:password#mongodb-service:31562/
also gives the error getaddrinfo ENOTFOUND mongodb-service.
What am I missing?
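For reference, and assuming the node's address is actually reachable from outside the cluster (with no firewall dropping the node port), the URI shape Compass expects for a NodePort-exposed instance uses the mongodb:// scheme, the node's external IP, the allocated node port, and an @ (not #) between the credentials and the host, roughly:
mongodb://root:password@<node-external-ip>:31577/TestDb?authSource=admin
The allocated port can also be pinned by adding a nodePort: field (within the cluster's default 30000-32767 range) under the Service's ports entry, so it does not change if the Service is re-created.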

Related

Kubernetes how to access application in one namespace from another

I have the following components up and running in a Kubernetes cluster:
A GoLang application writing data to a mongodb statefulset replicaset, in namespace app1
A mongodb replicaset (1 replica) running as a statefulset, in the namespace ng-mongo
What I need to do is access the mongodb database from the golang application for write/read operations, so what I did was:
Create a headless service for the mongodb in the ng-mongo namespace as below:
# Source: mongo/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: ng-mongo
  labels:
    app: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
      name: mongo
  clusterIP: None
  selector:
    role: mongo
And then I deployed the mongodb statefulset and initialized the replicaset as below:
kubectl exec -it mongo-0 -n ng-mongo mongosh
rs.initiate({_id: "rs0",members: [{_id: 0, host: "mongo-0"}]})
// gives output
{ ok: 1 }
Then I created an ExternalName service in the app1 namespace linking to the above mongo service from step 1, as below:
# Source: app/templates/svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: app1
  namespace: app1
spec:
  type: ExternalName
  externalName: mongo.ng-mongo.svc.cluster.local
  ports:
    - port: 27017
And at last, I configured my golang application as follows:
// Connection URI
const mongo_uri = "mongodb://app1" // <-- Here I used app1, as the ExternalName service's name is `app1`
<RETRACTED-CODE>
And then I ran the application, and checked the logs. Here is what I found:
2022/11/22 12:49:47 server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-0:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp: lookup mongo-0 on 10.96.0.10:53: no such host }, ] }
Update: I haven't set any usernames or passwords for the mongodb.
Can someone help me understand why this is happening?
After some digging, I was able to find the issue.
When specifying the host entry for rs.initiate({}), I should provide the FQDN of the relevant mongodb instance (in my case, the mongo-0 pod). Therefore, my initialisation command should look like this:
rs.initiate({_id: "rs0",members: [{_id: 0, host: "mongo-0.mongo.ng-mongo.svc.cluster.local:27017"}]})
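As a quick sanity check (just a sketch, reusing the names from the question), that FQDN should now be resolvable from the application's namespace:
$ kubectl run dns-test --rm -it --image=busybox -n app1 -- nslookup mongo-0.mongo.ng-mongo.svc.cluster.local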
From my understanding of what you are trying to do,
Your Pod (golang application) and the app1 Service are already in the same namespace.
However, looking at the log,
2022/11/22 12:49:47 server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-0:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp: lookup mongo-0 on 10.96.0.10:53: no such host }, ] }
The log means that the domain named 'mongo-0' could not be found in DNS. (Note that the 10.96.0.10 IP is probably kube-dns.)
Your application tries to connect to the domain mongo-0, but that domain does not exist in DNS (meaning there is no service named mongo-0 in the app1 namespace).
What is the 'mongo-0' that your application is trying to access?
(Obviously the log shows an attempt to access the domain mongo-0, while your golang application's mongo_uri indicates mongodb://app1.)
Finding out why your application is trying to connect to the mongo-0 domain will help solve the problem.
Hope this helps you.

Why Can't I Access My Kubernetes Cluster Using the minikube IP?

I have some questions regarding my minikube cluster, specifically why there needs to be a tunnel, what the tunnel actually is, and where the port numbers come from.
Background
I'm obviously a total kubernetes beginner...and don't have a ton of networking experience.
Ok. I have the following docker image which I pushed to docker hub. It's a hello express app that just prints out "Hello world" at the / route.
Dockerfile:
FROM node:lts-slim
RUN mkdir /code
COPY package*.json server.js /code/
WORKDIR /code
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
I have the following pod spec:
web-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: kahunacohen/hello-kube:latest
      ports:
        - containerPort: 3000
The following service:
web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
    - port: 8080
      targetPort: 3000
      protocol: TCP
      name: http
And the following deployment:
web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
      service: web-service
  template:
    metadata:
      labels:
        app: web-pod
        service: web-service
    spec:
      containers:
        - name: web
          image: kahunacohen/hello-kube:latest
          ports:
            - containerPort: 3000
              protocol: TCP
All the objects are up and running and look good after I create them with kubectl.
I do this:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h5m
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
Then, as per a book I'm reading, if I do:
$ curl $(minikube ip):8080 # or :32177, # or :3000
I get no response.
However, I found that when I do the following, I can access the app by going to http://127.0.0.1:52650/:
$ minikube service web-service
|-----------|-------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|---------------------------|
| default | web-service | http/8080 | http://192.168.49.2:32177 |
|-----------|-------------|-------------|---------------------------|
🏃 Starting tunnel for service web-service.
|-----------|-------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|------------------------|
| default | web-service | | http://127.0.0.1:52472 |
|-----------|-------------|-------------|------------------------|
Questions
what this "tunnel" is and why we need it?
what the targetPort is for (8080)?
What this line means when I do kubectl get services:
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
Specifically, what does that port mapping mean, and where does 32177 come from?
Is there some kind of problem with simply mapping the internal port to the same port number externally, e.g. 3000:3000? If so, do we specifically have to provide this mapping?
Let me answer all your questions.
0 - There's no need to create pods separately (unless it's just for a test); this should be done by creating deployments (or statefulsets, depending on the app and its needs), which create a replicaset responsible for keeping the right number of pods in operational condition. (You can get familiar with deployments in kubernetes.)
1 - The tunnel is used to expose the service from inside the VM where minikube is running to the host machine's network. It works with the LoadBalancer service type. Please refer to access applications in minikube.
1.1 - The reason the application is not accessible on localhost:NodePort is that the NodePort is exposed within the VM where minikube is running, not on your local machine.
You can find the minikube VM's IP by running minikube ip and then curl <minikube-ip>:<NodePort>. You should get a response from your app.
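Roughly (a sketch reusing the IP and NodePort from the question's own output; this works with drivers where the minikube IP is routable from the host, e.g. VirtualBox, while for the docker driver see the Edit below):
$ minikube ip
192.168.49.2
$ curl http://192.168.49.2:32177
Hello world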
2 - targetPort indicates the port on the pod to which the service forwards the connection. Please refer to define the service.
In minikube's output it may be confusing, since the TARGET PORT column points to the service port, not to the targetPort defined within the service. I think the idea was to indicate on which port the service is accessible within the cluster.
3 - As for this question, the headers in the output can be read literally. For instance:
$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-service NodePort 10.106.206.158 <none> 80:30001/TCP 21m app=web-pod
NodePort comes from your web-service.yaml for the service object. The type is explicitly specified, and therefore a NodePort is allocated. If you don't specify the type of the service, it will be created as the ClusterIP type and will be accessible only within the kubernetes cluster. Please refer to Publishing Services (ServiceTypes).
When a service is created with the ClusterIP type, there won't be a NodePort in the output. E.g.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-service ClusterIP 10.106.206.158 <none> 80/TCP 23m
EXTERNAL-IP will pop up when the LoadBalancer service type is used. Additionally, for minikube the address will appear once you run minikube tunnel in a different shell. Afterwards your service will be accessible on your host machine via EXTERNAL-IP + service port.
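A short sketch, assuming the Service type were switched to LoadBalancer:
$ minikube tunnel                  # leave this running in a separate shell
$ kubectl get svc web-service      # EXTERNAL-IP is now populated; connect to EXTERNAL-IP:8080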
4 - There are no issues with such a mapping. Moreover, this is the default behaviour in kubernetes:
Note: A Service can map any incoming port to a targetPort. By default
and for convenience, the targetPort is set to the same value as the
port field.
Please refer to define a service
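For instance, a sketch of the question's Service mapping the container port straight through (names and selector taken from the question's manifests):
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
    - port: 3000        # the Service port
      targetPort: 3000  # the container port; equal to port, so it could be omitted
      protocol: TCP
      name: http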
Edit:
Depending on the minikube driver (usually VirtualBox or docker; it can be checked on the linux VM in .minikube/profiles/minikube/config.json), minikube can have different port forwardings. E.g. I have a minikube based on the docker driver and I can see some mappings:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebcbc898b557 gcr.io/k8s-minikube/kicbase:v0.0.23 "/usr/local/bin/entr…" 5 days ago Up 5 days 127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp minikube
For instance, 22 is forwarded for ssh-ing into the minikube VM. This may explain why you got a response from http://127.0.0.1:52650/.

Traefik Let's Encrypt simplest example on GKE

I am trying to work on the simplest possible example of implementing Let's Encrypt with Traefik on GKE, using this article. I have made some changes to suit my requirements, but I am unable to get the ACME certificate.
What I have done so far
Run the following command to create all the resource objects except the ingress-route:
$ kubectl apply -f 00-resource-crd-definition.yml,05-traefik-rbac.yml,10-service-account.yaml,15-traefik-deployment.yaml,20-traefik-service.yaml,25-whoami-deployment.yaml,30-whoami-service.yaml
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/middlewares.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/ingressroutetcps.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/tlsoptions.traefik.containo.us created
customresourcedefinition.apiextensions.k8s.io/traefikservices.traefik.containo.us created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
serviceaccount/traefik-ingress-controller created
deployment.apps/traefik created
service/traefik created
deployment.apps/whoami created
service/whoami created
Get the IP of the Traefik Service exposed as Load Balancer
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.109.0.1 <none> 443/TCP 6h16m
traefik LoadBalancer 10.109.15.230 34.69.16.102 80:32318/TCP,443:32634/TCP,8080:32741/TCP 70s
whoami ClusterIP 10.109.14.91 <none> 80/TCP 70s
Create a DNS record for this IP
$ nslookup k8sacmetest.gotdns.ch
Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
Name: k8sacmetest.gotdns.ch
Address: 34.69.16.102
Create the resource ingress-route
$ kubectl apply -f 35-ingress-route.yaml
ingressroute.traefik.containo.us/simpleingressroute created
ingressroute.traefik.containo.us/ingressroutetls created
Logs of traefik
time="2020-04-25T20:10:31Z" level=info msg="Configuration loaded from flags."
time="2020-04-25T20:10:32Z" level=error msg="subset not found for default/whoami" providerName=kubernetescrd ingress=simpleingressroute namespace=default
time="2020-04-25T20:10:32Z" level=error msg="subset not found for default/whoami" providerName=kubernetescrd ingress=ingressroutetls namespace=default
time="2020-04-25T20:10:52Z" level=error msg="Unable to obtain ACME certificate for domains \"k8sacmetest.gotdns.ch\": unable to generate a certificate for the domains [k8sacmetest.gotdns.ch]: acme: Error -> One or more domains had a problem:\n[k8sacmetest.gotdns.ch] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Timeout during connect (likely firewall problem), url: \n" routerName=default-ingressroutetls-08dd2bb9eecaa72a6606#kubernetescrd rule="Host(`k8sacmetest.gotdns.ch`) && PathPrefix(`/tls`)" providerName=default.acme
What I have achieved
Traefik Dashboard
link
Whoami with notls
link
NOT ABLE TO GET THE ACME CERTIFICATE FOR TLS WHOAMI
my-pain
INFRA Details
I am using Google Kubernetes Engine (the one described at cloud.google.com/kubernetes-engine; click on Go to Console).
Traefik version is 2.2.
And I am using Cloud Shell to access the cluster.
ASK:
1) Where am I going wrong in getting the TLS certificate?
2) If it's a firewall issue, how do I resolve it?
3) If you have any other, better example of the simplest Traefik Let's Encrypt setup on GKE, please let me know.
Just run sudo before the kubectl port-forward command. You are trying to bind to privileged ports, so you need more permissions.
It is not the simplest example for GKE, though, because you could use a GKE LoadBalancer instead of kubectl port-forward.
Try with this:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
      targetPort: web
    - protocol: TCP
      name: websecure
      port: 443
      targetPort: websecure
  selector:
    app: traefik
  type: LoadBalancer
Then you can find your new IP with kubectl get svc in the EXTERNAL-IP column, add a proper DNS record for your domain, and you should be fine.
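A quick sketch of the checks, using the service and domain names from the question (Let's Encrypt's HTTP-01/TLS-ALPN-01 challenges must be able to reach ports 80/443 on that IP from the internet):
$ kubectl get svc traefik                  # note the EXTERNAL-IP
$ nslookup k8sacmetest.gotdns.ch           # should resolve to the same IP
$ curl -I http://k8sacmetest.gotdns.ch/    # port 80 has to be reachable from outside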

How to connect MySQL running on Kubernetes

I have deployed my application on Google Cloud Container Engine. My application requires MySQL. The application is running fine and connecting to MySQL correctly.
But I want to connect to the MySQL database from my local machine using a MySQL client (Workbench, or the command line). Can someone help me expose this to my local machine? And how can I open the MySQL command line in gcloud shell?
I have run the commands below, but there is no external IP:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
app-mysql 1 1 1 1 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-mysql-3323704556-nce3w 1/1 Running 0 2m
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-mysql 11.2.145.79 <none> 3306/TCP 23h
EDIT
I am using the YAML file below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-mysql
    spec:
      volumes:
        - name: data
          emptyDir: {}
      containers:
        - name: mysql
          image: mysql:5.6.22
          env:
            - name: MYSQL_USER
              value: root
            - name: MYSQL_DATABASE
              value: appdb
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql/
---
apiVersion: v1
kind: Service
metadata:
  name: app-mysql
spec:
  selector:
    app: app-mysql
  ports:
    - port: 3306
Try the kubectl port-forward command.
In your case: kubectl port-forward app-mysql-3323704556-nce3w 3306:3306
See the documentation for all available options.
There are 2 steps involved:
1) First, perform port forwarding from localhost to your pod:
kubectl port-forward <your-mysql-pod-name> 3306:3306 -n <your-namespace>
2) Connect to the database:
mysql -u root -h 127.0.0.1 -p<your-password>
Notice that you might need to change 127.0.0.1 to localhost, depending on your setup.
If host is set to:
localhost - then a socket or pipe is used.
127.0.0.1 - then the client is forced to use TCP/IP.
You can check if your database is listening for TCP connections with netstat -nlp.
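For example (just a sketch; run it on the host where mysqld is listening, or inside the MySQL pod):
$ netstat -nlp | grep 3306     # expect a LISTEN entry for mysqld on 0.0.0.0:3306 or :::3306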
Read more in:
Cant connect to local mysql server through socket tmp mysql sock
Can not connect to server
To add to the above answer: when you add --address 0.0.0.0, kubectl opens port 3306 to the internet too (not only localhost)!
kubectl port-forward POD_NAME 3306:3306 --address 0.0.0.0
Use it with caution for short debugging sessions only, on development systems at most. I used it in the following situation:
colleague who uses Windows
didn't have ssh key ready
environment was a playground I was not afraid to expose to the world
You need to add a service to your deployment. The service will add a load balancer with a public IP in front of your pod, so it can be accessed over the public internet.
See the documentation on how to add a service to a Kubernetes deployment. Use the following command to expose your app-mysql deployment:
kubectl expose deployment app-mysql --type=LoadBalancer --port=3306 --target-port=3306
You may also need to configure your MySQL server so it allows remote connections. See this link on how to enable remote access on the MySQL server.
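For reference, a minimal sketch of an equivalent Service manifest, assuming the pods carry the app: app-mysql label as in the question's deployment (the name app-mysql-external is just illustrative):
apiVersion: v1
kind: Service
metadata:
  name: app-mysql-external   # illustrative name
spec:
  type: LoadBalancer         # GKE provisions a public IP for this
  selector:
    app: app-mysql
  ports:
    - port: 3306
      targetPort: 3306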

Enable remote access to kubernetes pod on local installation

I'm also trying to expose a mysql server instance on a local kubernetes installation (1 master and one node, both on Oracle Linux), but I am not able to access the pod.
The pod configuration is this:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits:
          cpu: 1
      image: docker.io/mariadb
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
      ports:
        - containerPort: 3306
          name: mysql
And the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306
  selector:
    name: mysql
I can see that the pod is running:
# kubectl get pod mysql
NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 3d
And the service is connected to an endpoint:
# kubectl describe service mysql
Name: mysql
Namespace: default
Labels: name=mysql
Selector: name=mysql
Type: NodePort
IP: 10.254.200.20
Port: <unset> 3306/TCP
NodePort: <unset> 30306/TCP
Endpoints: 11.0.14.2:3306
Session Affinity: None
No events.
I can see on netstat that kube-proxy is listening on port 30306 for all incoming connections.
tcp6 6 0 :::30306 :::* LISTEN 53039/kube-proxy
But somehow I don't get a response from mysql even on the localhost.
# telnet localhost 30306
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Whereas a normal mysql installation responds with something like the following:
$ telnet [REDACTED] 3306
Trying [REDACTED]...
Connected to [REDACTED].
Escape character is '^]'.
N
[REDACTED]-log�gw&TS(gS�X]G/Q,(#uIJwmysql_native_password^]
Notice the mysql part in the last line.
On a final note there is this kubectl output:
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 9d
mysql 10.254.200.20 nodes 3306/TCP 1h
But I don't understand what "nodes" means in the EXTERNAL-IP column.
So what I want is to open access to the mysql service through the master IP (preferably). How do I do that, and what am I doing wrong?
I'm still not sure how to make clients connect to a single server that transparently routes all connections to the minions.
-> To do this you need a load balancer, which unfortunately is not a default Kubernetes building block.
You need to set up a reverse proxy that will send the traffic to the minion, like an nginx pod plus a service using hostPort: <port> to bind the port on the host (a rough sketch of this follows below). That means the pod needs to stay on that node; to do that you would want to use a DaemonSet, or the node name as a selector, for example.
Obviously, this is not very fault tolerant, so you can set up multiple reverse proxies and use DNS round-robin resolution to forward traffic to one of the proxy pods.
Somewhere, at some point, you need a fixed IP to talk to your service over the internet, so you need to ensure there is a static pod somewhere to handle that.
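A rough sketch of that hostPort idea, using today's apps/v1 API (the names and the nginx image are illustrative; the stream-proxy configuration that actually forwards to the mysql Service is omitted):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mysql-edge-proxy       # illustrative name
spec:
  selector:
    matchLabels:
      app: mysql-edge-proxy
  template:
    metadata:
      labels:
        app: mysql-edge-proxy
    spec:
      containers:
        - name: nginx
          image: nginx         # would need an nginx.conf with a stream {} block proxying to mysql:3306
          ports:
            - containerPort: 3306
              hostPort: 3306   # binds port 3306 on every node the DaemonSet runs on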
The NodePort is exposed on each Node in your cluster via the kube-proxy service. To connect, use the IP of that host (Node01):
telnet [IpOfNode] 30306
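In other words (a sketch; the node IP and credentials depend on your setup):
$ kubectl get nodes -o wide                # take a node's INTERNAL-IP or EXTERNAL-IP
$ mysql -h <node-ip> -P 30306 -u root -p   # 30306 is the nodePort from the Service above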