Kubernetes: Getting name resolution error

I am deploying PHP and Redis to a local Minikube cluster but am getting the error below, related to name resolution.
Warning: Redis::connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /app/redis.php on line 4
Warning: Redis::connect(): connect() failed: php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /app/redis.php on line 4
Fatal error: Uncaught RedisException: Redis server went away in /app/redis.php:5 Stack trace: #0 /app/redis.php(5): Redis->ping() #1 {main} thrown in /app/redis.php on line 5
I am using the configuration files below:
apache-php.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: php-apache
        image: webdevops/php-apache
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: app-code
          mountPath: /app
      volumes:
      - name: app-code
        hostPath:
          path: /minikubeMnt/src
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: apache
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: apache
redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:5.0.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
And I am using the PHP code below to access Redis; I have mounted this code into the apache-php deployment.
<?php
ini_set('display_errors', 1);
$redis = new Redis();
$redis->connect("redis-service", 6379);
echo "Server is running: ".$redis->ping();
Cluster dashboard view for the services is given below:
Thanks in advance.
When I run the env command I get the values below related to Redis, and when I use the IP 10.104.115.148 to access Redis, it works fine.
REDIS_SERVICE_PORT=tcp://10.104.115.148:6379
REDIS_SERVICE_PORT_6379_TCP=tcp://10.104.115.148:6379
REDIS_SERVICE_SERVICE_PORT=6379
REDIS_SERVICE_PORT_6379_TCP_ADDR=10.104.115.148
REDIS_SERVICE_PORT_6379_TCP_PROTO=tcp

Consider using Kubernetes liveness and readiness probes here, to recover from errors automatically. You can find more related information here.
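For example, TCP probes on the Redis container might look like this (a minimal sketch; the timing values are placeholders you would tune):
containers:
- name: redis
  image: redis:5.0.4
  ports:
  - containerPort: 6379
  livenessProbe:          # restart the container if Redis stops answering
    tcpSocket:
      port: 6379
    initialDelaySeconds: 15
    periodSeconds: 10
  readinessProbe:         # only route Service traffic once Redis accepts connections
    tcpSocket:
      port: 6379
    initialDelaySeconds: 5
    periodSeconds: 10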
And you can use an initContainer that checks for the availability of the Redis server using a shell while loop, and then lets php-apache start. For more information, check Scenario 2 here.
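One way to sketch that initContainer, added under the webserver pod's spec (this assumes the Redis Service is reachable as redis-service on port 6379, as in your manifests):
initContainers:
- name: wait-for-redis
  image: busybox:1.31      # any small image that ships nc works here
  command: ['sh', '-c', 'until nc -z redis-service 6379; do echo "waiting for redis"; sleep 2; done']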
Redis Service as Cluster IP
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis

Related

how to configure dns resolution for mongo to connect to atlas from inside a k8s cluster

I found several pages with similar questions, and most answers tell us to whitelist our IP. However, I have allowed access from anywhere (0.0.0.0/0) in Atlas, and have installed the latest version of Mongoose (6.2.6), which is supposed to support the mongodb+srv protocol.
The connection works perfectly when I run locally using npm start or even from a dockerized container. But, when I deploy to a k8s cluster, I get an error saying:
querySrv ENOTFOUND _mongodb._tcp.mongodb-cluster0.zvnxj.mongodb.net
The deployment and service files are as follows:
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-workflow-api
  template:
    metadata:
      labels:
        app: my-workflow-api
    spec:
      containers:
      - name: my-workflow-api
        image: "myname/my-workflow-api:1.0.0"
        ports:
        - containerPort: 3000
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "256m"
The service.yaml has the contents:
apiVersion: v1
kind: Service
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  selector:
    app: my-workflow-api
  type: LoadBalancer
  ports:
  - name: http
    port: 8000
    targetPort: 3000
    protocol: TCP
The namespace.yaml has the contents:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-my-workflow-api
I also tried the deployment.yaml with the dnsPolicy rule:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-workflow-api
  template:
    metadata:
      labels:
        app: my-workflow-api
    spec:
      dnsPolicy: Default # <------ this rule
      containers:
      - name: my-workflow-api
        image: "myname/my-workflow-api:1.0.0"
        ports:
        - containerPort: 3000
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "256m"
Once I changed the connection URL to the one used by 2.0.14 or earlier, I was able to connect; that connection string started with mongodb://....
While I have managed to make the connection work with this old-style connection string workaround, it seems to be some sort of DNS resolution issue. How do I make the newer mongodb+srv protocol work to connect to Atlas from inside the cluster? Thanks in advance.
I was able to solve it using this to start minikube:
minikube start --driver=docker
It seems there is some DNS resolution issue with the underlying Oracle VirtualBox driver (maybe a configuration and setup issue as well).

connect Postgres database in docker to app in Kubernetes

I'm new to Kubernetes and am trying to understand how to connect a Postgres database that lives outside Kubernetes (in Docker, with IP address 172.17.0.2 and port 5432) to my webapp in Kubernetes.
I try to connect to the database through the env variable PS_DATABASE_URL in the Deployment section.
But it cannot find the mentioned URL for Postgres. How does this need to be done correctly?
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: dmitriy83/flask_kuber
        ports:
        - containerPort: 5000
        env:
        - name: PS_DATABASE_URL
          value: postgresql://postgres:password@172.17.0.2:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
    nodePort: 30100
I figured it out. It depends on the cloud provider. For this example I use the Amazon cloud, and the database on Amazon is an external service, so we must define it in a YAML file as an external service.
postgres_external.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: db.cdmhjidhpqyu.us-east-2.rds.amazonaws.com
To connect to the external service you need to link to it in the deployment.
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: dmitriy83/flask_kuber
        ports:
        - containerPort: 5000
        env:
        - name: PS_DATABASE_URL
          value: postgresql://<username>:<password>@postgres:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
    nodePort: 30100
Please note that in webapp.yaml, the env value postgresql://<username>:<password>@postgres:5432/db contains postgres: this is the name of the external service we defined in postgres_external.yaml.

Kubernetes two-pod communication (one is Beanstalkd and the other is a worker)

I am working on Kubernetes to create two pods using deployments:
The first deployment's pod runs a beanstalkd container.
The second one has a worker running on PHP 7/nginx and holding the application codebase.
I am getting this exception:
"user_name":"anonymous","message":"exception 'Pheanstalk_Exception_ConnectionException' with message 'Socket error 0: php_network_getaddresses: getaddrinfo failed: Try again (connecting to test-beanstalkd:11300)' in /var/www/html/vendor/pda/pheanstalk/classes/Pheanstalk/Socket/NativeSocket.php:35\nStack trace:\n#0 "
How do I get them to communicate with each other?
My beanstalkd.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-beanstalkd
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-beanstalkd
  template:
    metadata:
      labels:
        app: test-beanstalkd
    spec:
      containers:
      # The beanstalkd container
      - image: schickling/beanstalkd
        name: test-beanstalkd
        args:
        - -p
        - "11300"
        - -z
        - "1073741824"
---
apiVersion: v1
kind: Service
metadata:
  name: test-beanstalkd-svc
  namespace: test
  labels:
    run: test-beanstalkd
spec:
  ports:
  - port: 11300
    protocol: TCP
  selector:
    app: test-beanstalkd
  type: NodePort
Below is our worker.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-worker
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-worker
  template:
    metadata:
      labels:
        app: test-worker
    spec:
      volumes:
      # Create the shared files volume to be used in both pods
      - name: shared-files
        emptyDir: {}
      containers:
      # Our PHP-FPM application
      - image: test-worker:master
        name: worker
        env:
        - name: beanstalkd_host
          value: "test-beanstalkd"
        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: test-worker-svc
  namespace: test
  labels:
    run: test-worker
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: worker
  type: NodePort
The mistake is that in the env of test-worker, the beanstalkd_host variable needs to be set to test-beanstalkd-svc, because that is the name of the Service.
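In other words, the worker's env section should read:
env:
- name: beanstalkd_host
  value: "test-beanstalkd-svc"   # the Service name, not the Deployment name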

What host does Kubernetes assign to my deployment?

I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one needs to call the second one sometimes.
However, the first deployment can't find the second one. When it tries to call it using the product-app host it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
  - targetPort: 8080
    port: 8080
    nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
      - name: composite-container
        image: 192.168.49.2:2376/composite-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the external world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
  - targetPort: 8080
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You could connect to the other pod using its IP without a Service, but that is not recommended since a pod's IP can change across pod updates.
You can then connect to the product-app pod from the composite-app using product-service.
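For example, the composite container could pick the target URL up from an env value like this (the PRODUCT_SERVICE_URL name here is just an illustration, not something your app necessarily expects):
env:
- name: PRODUCT_SERVICE_URL
  value: "http://product-service:8080"   # Service DNS name resolves cluster-wide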

Kubernetes MySQL connection timeout

I've set up a Kubernetes deployment and service for MySQL. I cannot access the MySQL service from any pod using its DNS name... It just times out. Any other port refuses the connection immediately, but the port in my service configuration times out after ~10 seconds.
I am able to resolve the MySQL Pod DNS.
I cannot ping the host.
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    run: mysql-service
spec:
  ports:
  - port: 3306
    protocol: TCP
  - port: 3306
    protocol: UDP
  selector:
    run: mysql-service
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-service
  labels:
    app: mysql-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-service
  template:
    metadata:
      labels:
        app: mysql-service
    spec:
      containers:
      - name: 'mysql-service'
        image: mysql:5.5
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: some_password
        - name: MYSQL_DATABASE
          value: some_database
        ports:
        - containerPort: 3306
Your deployment (and more specifically its pod spec) says
labels:
  app: mysql-service
but your service says
selector:
  run: mysql-service
These don't match, so your service isn't attaching to the pod. You should also see this if you kubectl describe service mysql-service: the "Endpoints" list will be empty.
Change the service's selector to match the pod's labels (or vice versa) and this should be better.
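For example, keeping the Service name and matching the pod template's label (a sketch; the UDP port entry is dropped since MySQL speaks TCP, and multiple ports would each need a name anyway):
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: mysql-service    # now matches the pod template's label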