dial tcp IP:80: connect: operation timed out - kubernetes

I've been using Google Cloud Platform to deploy my Magento
application with Google Kubernetes Engine.
It used to work, but now I get this error. Here's the architecture:
and these are the services:
dial tcp IP:80: connect: operation timed out
I'm trying to expose my nginx pod using a load balancer.
My nginx pod configuration file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      tier: backend
  template:
    metadata:
      labels:
        app: nginx
        tier: backend
    spec:
      volumes:
        - name: dir
          persistentVolumeClaim:
            claimName: dir
        - name: config
          configMap:
            name: nginx-config
            items:
              - key: config
                path: site.conf
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: dir
              mountPath: /var/www/
            - name: config
              mountPath: /etc/nginx/conf.d
and to expose it I run this command:
kubectl expose deployment nginx --port=80 --type=LoadBalancer
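For reference, that command creates a Service roughly equivalent to the manifest below (a sketch: kubectl reuses the Deployment's name and selector, and with no --target-port the target port defaults to the exposed port):
# Sketch of the Service generated by `kubectl expose deployment nginx --port=80 --type=LoadBalancer`
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
    tier: backend
  ports:
    - port: 80        # port exposed by the load balancer
      targetPort: 80  # containerPort of the nginx container
      protocol: TCP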
Is anyone familiar with this error?

Related

mongodb microservice k8 persistent volume claim not persisting data

I have several microservices, each one with its own mongodb deployment. I would like to start with getting my auth service working with a persistent volume. I have watched courses where postgresql is used and read a lot in the kubernetes docs but am having trouble getting this to work for mongodb.
When I run skaffold dev the PVC is created with no errors. kubectl shows the PVC is in Bound status, and running describe on the PVC shows my mongo deployment as the user.
However, when I visit my client service in the browser, I can sign up, log out, and sign in again with no problem. But if I restart skaffold, so that it deletes and recreates the containers, my data is gone and I have to sign up again.
Here are my files
auth-mongo-depl.yaml
# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
auth-depl.yaml
# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: isimmons33/ticketing-auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-ip-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-ip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
api/users portion of my ingress-srv.yaml
- path: /api/users/?(.*)
  pathType: Prefix
  backend:
    service:
      name: auth-ip-srv
      port:
        number: 3000
My client fires off a POST request to /api/users/auth with which I can successfully sign up or sign in, as long as I don't restart skaffold.
I even used kubectl to get a shell into my mongo deployment and queried to see the new user account there as it should be. But of course it is gone after restarting skaffold.
I am on Windows 10 but am running everything through WSL2 (Ubuntu)
Thanks for any help
It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, it is not guaranteed that the new pod will be attached to the same PV, hence you lose the data.
Have a look at this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
The solution as pointed out by raghu_manne was to use StatefulSets. But because the link posted is extremely old, here is the full solution that worked for me.
Also, here is a YouTube video I just found that explains StatefulSets and volumeClaimTemplates quite well.
How to run MongoDB with StatefulSets in Kubernetes
auth-mongo-depl.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  serviceName: auth-mongo
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017

visual studio kubernetes project 503 error in azure

I have created a kubernetes project in visual studio 2019, with the default template. This template creates a WeatherForecast controller.
After that, I published it to my ACR.
I used this command to create the AKS:
az aks create -n $MYAKS -g $MYRG --generate-ssh-keys -z 1 -s Standard_B2s --attach-acr /subscriptions/mysubscriptionguid/resourcegroups/$MYRG/providers/Microsoft.ContainerRegistry/registries/$MYACR
And I enabled HTTP application routing via the azure portal.
I have deployed it to azure kubernetes (Standard_B2s), with the following deployment.yaml:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
service.yaml:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes1
spec:
  type: ClusterIP
  selector:
    app: kubernetes1
  ports:
    - port: 80         # SERVICE exposed port
      name: http       # SERVICE port name
      protocol: TCP    # The protocol the SERVICE will listen to
      targetPort: http # Port to forward to in the POD
ingress.yaml:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes1
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: kubernetes1.<uuid (removed for this post)>.westeurope.aksapp.io # Which host is allowed to enter the cluster
      http:
        paths:
          - backend: # How the ingress will handle the requests
              service:
                name: kubernetes1 # Which service the request will be forwarded to
                port:
                  name: http # Which port in that service
            path: / # Which path is this rule referring to
            pathType: Prefix # See more at https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
But when I go to kubernetes1.<uuid>.westeurope.aksapp.io or kubernetes1.<uuid>.westeurope.aksapp.io/WeatherForecast I get the following error:
503 Service Temporarily Unavailable
nginx/1.15.3
It's working now. For other people who have the same problem: I updated my deployment config from:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
to:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: kubernetes1 # Labels follow the `name: value` template
  template: # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: kubernetes1
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - image: mycontainername.azurecr.io/kubernetes1:latest
          name: kubernetes1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
              name: http
I don't know exactly which line solved the problem. Feel free to comment if you know which line caused it.

Kubernetes: Getting name resolution error

I am deploying PHP and Redis to a local minikube cluster but am getting the below error related to name resolution.
Warning: Redis::connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /app/redis.php on line 4
Warning: Redis::connect(): connect() failed: php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution in /app/redis.php on line 4
Fatal error: Uncaught RedisException: Redis server went away in /app/redis.php:5 Stack trace: #0 /app/redis.php(5): Redis->ping() #1 {main} thrown in /app/redis.php on line 5
I am using the below configuration files:
apache-php.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: php-apache
          image: webdevops/php-apache
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: app-code
              mountPath: /app
      volumes:
        - name: app-code
          hostPath:
            path: /minikubeMnt/src
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: apache
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: apache
redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:5.0.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
And I am using the below PHP code to access Redis; I have mounted this code into the apache-php deployment.
<?php
ini_set('display_errors', 1);
$redis = new Redis();
$redis->connect("redis-service", 6379);
echo "Server is running: ".$redis->ping();
Cluster dashboard view for the services is given below:
Thanks in advance.
When I run the env command I get the below values related to Redis, and when I use the IP 10.104.115.148 to access Redis, it works fine.
REDIS_SERVICE_PORT=tcp://10.104.115.148:6379
REDIS_SERVICE_PORT_6379_TCP=tcp://10.104.115.148:6379
REDIS_SERVICE_SERVICE_PORT=6379
REDIS_SERVICE_PORT_6379_TCP_ADDR=10.104.115.148
REDIS_SERVICE_PORT_6379_TCP_PROTO=tcp
Consider using K8s liveness and readiness probes here to automatically recover from errors. You can find more related information here.
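For example, a minimal sketch of what such probes could look like on the redis container from redis.yaml above (the probe commands and timings are illustrative assumptions, not from the answer):
# Illustrative probes added to the redis container in the redis Deployment's pod spec
containers:
  - name: redis
    image: redis:5.0.4
    ports:
      - containerPort: 6379
    readinessProbe:
      tcpSocket:
        port: 6379        # ready once the TCP port accepts connections
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      exec:
        command: ["redis-cli", "ping"]  # restart the container if redis stops answering
      initialDelaySeconds: 15
      periodSeconds: 20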
And you can use an initContainer that checks for the availability of redis-server using a bash while loop with a break, and then lets php-apache start. For more information, check Scenario 2 here.
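A minimal sketch of such an initContainer, added to the webserver pod spec in apache-php.yaml (the busybox image and the nslookup loop are assumptions following the common wait-for-service pattern, not the linked answer verbatim):
# Illustrative initContainer that blocks until the redis-service name resolves
initContainers:
  - name: wait-for-redis
    image: busybox:1.28   # assumed helper image
    command:
      - sh
      - -c
      - 'until nslookup redis-service; do echo "waiting for redis-service"; sleep 2; done'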
Redis Service as ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis

Connecting to redis pod from another pod in the same cluster

In my cluster I have a nodejs application pod and a redis pod, and I am trying to connect to redis from nodejs, but I am getting the following error:
[ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND redis at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:60:26)
It is worth noting that in my nodejs app I am pointing at redis:6379, and if I change the pointer to localhost:6379, I get an ECONNREFUSED error instead.
My redis deployment yaml looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
    - name: redis
      image: redis:5.0.4
      command:
        - redis-server
        - '/redis-master/redis.conf'
      env:
        - name: MASTER
          value: 'true'
      ports:
        - containerPort: 6379
      resources:
        limits:
          cpu: '0.1'
      volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /redis-master
          name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: redis-config
        items:
          - key: redis-config
            path: redis.conf
My redis service yaml is the following:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
And the service of my app looks like this:
apiVersion: v1
kind: Service
metadata:
  name: bff
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'
  labels:
    app: bff
spec:
  ports:
    - name: external
      port: 80
      targetPort: web
      protocol: TCP
    - name: metrics
      port: 8080
      targetPort: metrics
      protocol: TCP
  selector:
    app: bff
I have tried following other answers given to similar questions, but they do not seem to work in my case.
The root cause is that your Service selector definition doesn't match your Pod's labels.
Quote from Kubernetes service document:
The controller for the Service selector continuously scans for Pods that match its selector, and then POSTs any updates to an Endpoint object also named “my-service”.
See this link for full document.
In short, following this official Kubernetes example, you should deploy the Deployment "redis-master-deployment.yaml" instead of a bare Pod.
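A minimal sketch of such a Deployment, with pod template labels that match the redis-master Service selector shown above (the image is taken from the question; the name, replica count, and container name are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:          # these labels are what the redis-master Service selects on
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: redis:5.0.4
          ports:
            - containerPort: 6379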

Kubernetes on Spinnaker - Interservice communication

I have a sample application running on a Kubernetes cluster. Two microservices, one is a mongodb container and the other is a java springboot container.
The Spring Boot container interacts with the mongodb container through a Service and stores data in the mongodb container.
The specs are provided below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
        -
          resources:
            limits:
              cpu: 0.5
          image: 11.168.xx.xx:5000/employee:latest
          imagePullPolicy: IfNotPresent
          name: wsemp
          ports:
            - containerPort: 8080
              name: wsemp
          command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
    - port: 8080
      nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
        - resources:
            limits:
              cpu: 1
          image: mongo
          imagePullPolicy: IfNotPresent
          name: mongodb
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  type: NodePort
  selector:
    name: mongodb
I would like to know how this communication can be accomplished in spinnaker since it creates its own labels and selectors.
Thanks,
This is how it needs to be done.
Each load balancer created for the application is the Service. So for the mongodb application, after a load balancer is created with the NodePort settings, get the name of the service, e.g. mongodb-dev. The server group for mongodb also needs to be created.
Then, when creating the employee server group, you need to specify the commands one by one, each on a separate line, for that container, as mentioned here:
https://github.com/spinnaker/spinnaker/issues/2021#issuecomment-334885467
"java","-Dspring.data.mongodb.uri=mongodb://name-of-mongodb-service/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"
Now when the employee and mongodb pods start, they are able to get their mapping and communicate properly.