How to configure DNS resolution for MongoDB to connect to Atlas from inside a k8s cluster

I have found several pages with similar questions, and most answers say to whitelist your IP. However, I have already allowed access from anywhere (0.0.0.0/0) in Atlas, and I have installed the latest version of mongoose (6.2.6), which is supposed to support the mongodb+srv protocol.
The connection works perfectly when I run locally using npm start, or even from a dockerized container. But when I deploy to a k8s cluster, I get an error saying:
querySrv ENOTFOUND _mongodb._tcp.mongodb-cluster0.zvnxj.mongodb.net
The deployment and service files are as follows:
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-workflow-api
  template:
    metadata:
      labels:
        app: my-workflow-api
    spec:
      containers:
        - name: my-workflow-api
          image: "myname/my-workflow-api:1.0.0"
          ports:
            - containerPort: 3000
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: "256m"
The service.yaml has the contents:
apiVersion: v1
kind: Service
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  selector:
    app: my-workflow-api
  type: LoadBalancer
  ports:
    - name: http
      port: 8000
      targetPort: 3000
      protocol: TCP
The namespace.yaml has the contents:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-my-workflow-api
I also tried the deployment.yaml with the dns rule:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-workflow-api
  template:
    metadata:
      labels:
        app: my-workflow-api
    spec:
      dnsPolicy: Default # <------ this rule
      containers:
        - name: my-workflow-api
          image: "myname/my-workflow-api:1.0.0"
          ports:
            - containerPort: 3000
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: "256m"
Once I changed the connection URL to the older format (the one used by 2.0.14 and earlier), I was able to connect; that connection string starts with mongodb://....
While I have managed to make the connection work with this workaround using the old-style connection string, and it seems to be some sort of DNS resolution issue, how do I make the newer mongodb+srv protocol work to connect to Atlas from inside the cluster? Thanks in advance.

I was able to solve it using this to start minikube:
minikube start --driver=docker
It seems there is some DNS resolution issue with the underlying Oracle VirtualBox driver (possibly a configuration or setup issue as well).
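If you want to confirm where SRV resolution breaks before (or instead of) switching drivers, one option is a throwaway DNS debug pod. This is only a sketch, assuming the jessie-dnsutils test image used in the Kubernetes DNS-debugging documentation; the pod name dnsutils is arbitrary:
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils                  # hypothetical throwaway pod; delete it when done
  namespace: ns-my-workflow-api
spec:
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      command: ["sleep", "infinity"]
  restartPolicy: Never
Once it is running, kubectl exec -n ns-my-workflow-api dnsutils -- nslookup -type=SRV _mongodb._tcp.mongodb-cluster0.zvnxj.mongodb.net should return the SRV records; if it does not, the problem is the cluster's DNS setup rather than mongoose.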

Related

Kubernetes: Getting name resolution error

I am deploying php and redis to a local minikube cluster, but I am getting the below error related to name resolution.
Warning: Redis::connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /app/redis.php on line 4
Warning: Redis::connect(): connect() failed: php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /app/redis.php on line 4
Fatal error: Uncaught RedisException: Redis server went away in /app/redis.php:5 Stack trace: #0 /app/redis.php(5): Redis->ping() #1 {main} thrown in /app/redis.php on line 5
I am using the below configuration files:
apache-php.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: php-apache
          image: webdevops/php-apache
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: app-code
              mountPath: /app
      volumes:
        - name: app-code
          hostPath:
            path: /minikubeMnt/src
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: apache
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: apache
redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:5.0.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
And I am using the below PHP code to access Redis; I have mounted this code into the apache-php deployment.
<?php
ini_set('display_errors', 1);
$redis = new Redis();
$redis->connect("redis-service", 6379);
echo "Server is running: ".$redis->ping();
Cluster dashboard view for the services is given below:
Thanks in advance.
When I run the env command I get the below values related to Redis, and when I use the IP 10.104.115.148 to access Redis, it works fine.
REDIS_SERVICE_PORT=tcp://10.104.115.148:6379
REDIS_SERVICE_PORT_6379_TCP=tcp://10.104.115.148:6379
REDIS_SERVICE_SERVICE_PORT=6379
REDIS_SERVICE_PORT_6379_TCP_ADDR=10.104.115.148
REDIS_SERVICE_PORT_6379_TCP_PROTO=tcp
Consider using K8s liveness and readiness probes here, to automatically recover from errors. You can find more related information here.
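As a rough illustration only (the probe commands, delays and thresholds below are assumptions, not taken from the question), probes on the redis container could look like this:
# hypothetical probes added to the redis container in redis.yaml
containers:
  - name: redis
    image: redis:5.0.4
    ports:
      - containerPort: 6379
    readinessProbe:              # the pod only receives Service traffic once this passes
      exec:
        command: ["redis-cli", "ping"]
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:               # the container is restarted if this keeps failing
      tcpSocket:
        port: 6379
      initialDelaySeconds: 15
      periodSeconds: 10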
And you can use an initContainer that checks for the availability of redis-server using a bash while loop, and then lets php-apache start. For more information, check Scenario 2 here.
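A minimal sketch of such an initContainer on the webserver Deployment, assuming the busybox image and the redis-service name from the manifests above:
# added under the webserver pod spec, alongside "containers:"
initContainers:
  - name: wait-for-redis
    image: busybox:1.34            # assumed image; any image with nc works
    command:
      - sh
      - -c
      - |
        # loop until the Service name resolves and the port accepts connections
        until nc -z redis-service 6379; do
          echo "waiting for redis-service..."
          sleep 2
        done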
Redis Service as ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis

Kubernetes: access from outside

I have a flask app running on a remote Kubernetes cluster, and when I access it from inside the cluster it works. However, when I try to access it from the outside, nothing happens.
I'm using kind to create the cluster. Locally I can access the flask app via the node's IP address.
I don't know how to access the service from the outside; do I need to do something else to be able to access the app?
apiVersion: v1
kind: Service
metadata:
  name: iweblens-svc
  labels:
    app: flaskapp
spec:
  type: NodePort
  ports:
    - port: 5000
      targetPort: 5000
      protocol: TCP
  selector:
    app: flaskapp
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
  - |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    evictionHard:
      nodefs.available: "0%"
kubeadmConfigPatchesJSON6902:
  - group: kubeadm.k8s.io
    version: v1beta2
    kind: ClusterConfiguration
    patch: |
      - op: add
        path: /apiServer/certSANs/-
        value: my-hostname
nodes:
  - role: control-plane
  - role: worker
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp
  labels:
    app: flaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskapp
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
        - name: flaskapp
          image: myimage
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
          resources:
            limits:
              cpu: "0.5"
            requests:
              cpu: "0.5"
Create a NodePort or LoadBalancer (works only on supported cloud providers) Service to expose the deployment outside the cluster.
Here is a guide on how to use a NodePort Service.
To be able to access an app via a NodePort Service, the node IP needs to be reachable (i.e. it should be on the same network) from the system where you are accessing it.
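Since the cluster here is created with kind, the nodes are Docker containers, so a NodePort is not automatically reachable from outside the machine running kind. One common approach (a sketch, not taken from the question; the port number 30500 is an assumption) is to pin the Service's nodePort and map it to a host port when the cluster is created:
# kind cluster config (extraPortMappings only takes effect at cluster creation time)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30500     # must match the nodePort set on the Service
        hostPort: 30500
        protocol: TCP
  - role: worker
With that in place, add nodePort: 30500 to the iweblens-svc port definition, and the app should be reachable on port 30500 of the machine that runs kind.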

Kubernetes connect service and deployment

I am wondering what to specify in a separate deployment in order to have it access a DB deployment/service. Here is the DB deployment/service:
apiVersion: v1
kind: Service
metadata:
  name: oracle-db
  labels:
    app: oracle-db
spec:
  ports:
    - name: oracle-db
      port: 1521
      protocol: TCP
      targetPort: 1521
  selector:
    app: oracle-db
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oracle-db-depl
  labels:
    app: oracle-db
spec:
  selector:
    matchLabels:
      app: oracle-db
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracle-db
    spec:
      containers:
        - name: oracle-db
          image: oracledb:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 1521
          env:
            ...
How exactly do I specify the connection in the separate deployment? Do I specify the oracle-db service name somewhere? So far I specify a containerPort in the container.
If the other app deployment is in the same namespace, you can refer to the Oracle service by the name oracle-db. Here is an example of a WordPress application using it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: oracle-db
          ports:
            - containerPort: 80
              name: wordpress
As you can see, the Oracle service is referred to by its name, oracle-db, through an environment variable.
If the Service is in a different namespace than the app deployment, then you can refer to it as oracle-db.namespacename.svc.cluster.local
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Services in Kubernetes are an "abstract way to expose an application running on a set of Pods as a network service." (k8s documentation)
You can access your Pod by the IP and port that Kubernetes has given it, but that's not good practice, as Pods can die and another one will be created (if controlled by a Deployment/ReplicaSet). When the new one is created it gets a new IP, and everything in your app will start to fail.
To solve this you can expose your Pod using a Service (as you have already done), and use the service-name:service-port assigned to the Service to access your Pod. In this case, even if the Pod dies and a new one is created, Kubernetes will keep forwarding the traffic to the right Pod.
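As a concrete sketch of that pattern for this question (the container name, image and variable names are placeholders, not taken from the original manifests), the client Deployment only needs the Service's DNS name and port in its configuration:
# hypothetical fragment of the client app's pod spec: connect via the Service name, not a Pod IP
containers:
  - name: my-app
    image: my-app:latest                # placeholder image
    env:
      - name: DB_HOST
        value: oracle-db                # same namespace: the plain Service name resolves
        # value: oracle-db.other-ns.svc.cluster.local   # different namespace: use the FQDN
      - name: DB_PORT
        value: "1521"                   # the port exposed by the oracle-db Service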

How to set dynamic IP to property file?

I have deployed 2 pods which need to talk to another pod (let's say Pod A).
Pod A requires the IP addresses of the services of the deployed pods, so I need to set those IP addresses in the config property file needed for Pod A.
As the IP addresses are dynamic (i.e. they change if a pod crashes), I need to set them dynamically.
Currently I deploy the 2 pods, run
kubectl get ep
set those IP addresses in the config property file, then build the Dockerfile, push it, and use that image for deployment.
This is my deployment and svc file, in which the image djtijare/a2ipricing refers to the config file:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
spec:
  ports:
    - name: spring-boot-pricing
      port: 8084
      targetPort: 8084
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
        - name: spring-boot-demo-pricing
          image: djtijare/a2ipricing:v1
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - configMapRef:
          #       name: spring-boot-demo-config-map
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
          ports:
            - containerPort: 8084
      nodeSelector:
        disktype: ssd
So how do I set the IPs of those 2 pods dynamically in the config file, then build and push the Docker image?
I think you should think about using Headless services.
Sometimes you don’t need or want load-balancing and a single service IP. In this case, you can create what are termed “headless” Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation. For example, you could implement a custom Operator built upon this API.
For such Services, a cluster IP is not allocated, kube-proxy does not handle these services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the service has selectors defined.
For your example, if you set the Service's spec.clusterIP to None, you can run nslookup -type=A spring-boot-demo-pricing, which will show you the IPs of the pods attached to this service.
/ # nslookup -type=A spring-boot-demo-pricing
Server: 10.11.240.10
Address: 10.11.240.10:53
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.2.20
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.12
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.13
And here are the yaml I've used:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  ports:
    - name: spring-boot-pricing
      port: 8084
      targetPort: 8084
  clusterIP: None
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spring-boot-demo-pricing
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
        - name: spring-boot-demo-pricing
          image: djtijare/a2ipricing:v1
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - configMapRef:
          #       name: spring-boot-demo-config-map
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
          ports:
            - containerPort: 8084

Kubernetes on Spinnaker - Interservice communication

I have a sample application running on a Kubernetes cluster. Two microservices: one is a MongoDB container and the other is a Java Spring Boot container.
The Spring Boot container interacts with the MongoDB container through a Service and stores data in it.
The specs are provided below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
        - resources:
            limits:
              cpu: 0.5
          image: 11.168.xx.xx:5000/employee:latest
          imagePullPolicy: IfNotPresent
          name: wsemp
          ports:
            - containerPort: 8080
              name: wsemp
          command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
    - port: 8080
      nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
        - resources:
            limits:
              cpu: 1
          image: mongo
          imagePullPolicy: IfNotPresent
          name: mongodb
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  type: NodePort
  selector:
    name: mongodb
I would like to know how this communication can be accomplished in Spinnaker, since it creates its own labels and selectors.
Thanks,
This is how it needs to be done.
Each load balancer created for the application is the Service. So for the MongoDB application, after a load balancer is created with the NodePort settings, get the name of the Service, e.g. mongodb-dev. The server group for MongoDB also needs to be created.
Then, when creating the employee server group, you need to specify the commands one by one on separate lines for that container, as mentioned here:
https://github.com/spinnaker/spinnaker/issues/2021#issuecomment-334885467
"java","-Dspring.data.mongodb.uri=mongodb://name-of-mongodb-service/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"
Now when the employee and mongodb pods start, the employee pod is able to resolve the Service name and communicate properly.
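As a sketch of what the employee container spec ends up looking like once Spinnaker has created the load balancer (the Service name mongodb-dev is taken from the example above; everything else mirrors the original employee spec):
# employee container pointing at the Service name Spinnaker created for MongoDB
containers:
  - name: wsemp
    image: 11.168.xx.xx:5000/employee:latest
    ports:
      - containerPort: 8080
    command:
      - "java"
      - "-Dspring.data.mongodb.uri=mongodb://mongodb-dev/microservices"   # Service name instead of a Pod IP
      - "-Djava.security.egd=file:/dev/./urandom"
      - "-jar"
      - "/app.jar"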