Kubernetes MySQL connection timeout

I've set up a Kubernetes deployment and service for MySQL. I cannot access the MySQL service from any pod using its DNS name... It just times out. Any other port refuses the connection immediately, but the port in my service configuration times out after ~10 seconds.
I am able to resolve the MySQL Pod DNS.
I cannot ping the host.
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    run: mysql-service
spec:
  ports:
  - port: 3306
    protocol: TCP
  - port: 3306
    protocol: UDP
  selector:
    run: mysql-service
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-service
  labels:
    app: mysql-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-service
  template:
    metadata:
      labels:
        app: mysql-service
    spec:
      containers:
      - name: mysql-service
        image: mysql:5.5
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: some_password
        - name: MYSQL_DATABASE
          value: some_database
        ports:
        - containerPort: 3306

Your deployment (more specifically, its pod spec) says
labels:
  app: mysql-service
but your service says
selector:
  run: mysql-service
These don't match, so the service isn't attaching to the pod. You can also see this with kubectl describe service mysql-service: the "Endpoints" list will be empty.
Change the service's selector to match the pod's labels (or vice versa) and it should work.
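For instance, a corrected Service that reuses the pod template's app: mysql-service label could look like the sketch below. Note that if a Service keeps more than one port entry, each entry must have a unique name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql-service   # must match the pod template's labels
  ports:
  - name: mysql-tcp      # names are required once multiple ports exist
    port: 3306
    protocol: TCP
```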

Related

connect Postgres database in docker to app in Kubernetes

I'm new to Kubernetes and I'm trying to understand how to connect a Postgres database that sits outside Kubernetes (in Docker, at IP address 172.17.0.2 and port 5432) to my web app in Kubernetes.
I try to connect to the database through the env variable PS_DATABASE_URL in the Deployment section,
but it cannot find the mentioned URL for Postgres. How should this be done correctly?
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: dmitriy83/flask_kuber
        ports:
        - containerPort: 5000
        env:
        - name: PS_DATABASE_URL
          value: postgresql://postgres:password@172.17.0.2:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
    nodePort: 30100
I figured it out. It depends on the cloud provider. For this example I use Amazon cloud, and the database runs on Amazon as an external service, so we must define it in a YAML file as an external service.
postgres_external.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: db.cdmhjidhpqyu.us-east-2.rds.amazonaws.com
To connect to the external service, you reference it by name in the deployment.
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: dmitriy83/flask_kuber
        ports:
        - containerPort: 5000
        env:
        - name: PS_DATABASE_URL
          value: postgresql://<username>:<password>@postgres:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
    nodePort: 30100
Please note that in webapp.yaml, the env value postgresql://<username>:<password>@postgres:5432/db contains postgres: this is the name of the external service we defined in postgres_external.yaml.
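Note that type: ExternalName only maps to a DNS name. If the database is reachable only at a raw IP, as in the original question's 172.17.0.2, the usual alternative is a selector-less Service plus a manually managed Endpoints object (a sketch, assuming that address is routable from the cluster nodes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:             # no selector, so no endpoints are managed automatically
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres     # must match the Service name
subsets:
- addresses:
  - ip: 172.17.0.2   # the external database's address
  ports:
  - port: 5432
```

Pods can then connect to postgres:5432 exactly as they would to a normal in-cluster service.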

What host does Kubernetes assign to my deployment?

I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one sometimes needs to call the second.
However, the first deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
  - targetPort: 8080
    port: 8080
    nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
      - name: composite-container
        image: 192.168.49.2:2376/composite-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so that the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
  - targetPort: 8080
    port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
      - name: product-container
        image: 192.168.49.2:2376/product-ms:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
You could connect to the other pod directly by its IP without a service, but that is not recommended, since pod IPs change across restarts and updates.
You can then connect to the product-app pods from the composite-app using the hostname product-service.
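Since the Service name resolves through cluster DNS, the composite app only needs to be told that hostname, for example via an environment variable in its pod spec (PRODUCT_SERVICE_URL here is a hypothetical setting name; use whatever your application actually reads):

```yaml
# In composite-deploy's container spec:
env:
- name: PRODUCT_SERVICE_URL          # hypothetical setting name
  value: http://product-service:8080 # short name works within the same namespace
  # fully qualified form: http://product-service.default.svc.cluster.local:8080
```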

GKE NodePort service refusing incoming traffic

I have created a NodePort service in Google Cloud with the following specification... I have a firewall rule that allows traffic from 0.0.0.0/0 on port 30100. I have verified in the Stackdriver logs that traffic is allowed, but when I hit http://<node-ip>:30100 with curl or from a browser, I get no response. I'm not sure how to debug this either; can someone please advise?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxv1
  template:
    metadata:
      labels:
        app: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginxv1
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30100
  selector:
    app: nginxv1
  type: NodePort
Thanks.
You need to fix the target port: it must be 80, because that is the port the nginx container listens on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxv1
  template:
    metadata:
      labels:
        app: nginxv1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
---
apiVersion: v1
kind: Service
metadata:
  name: nginxv1
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30100
  selector:
    app: nginxv1
  type: NodePort
Also, you need to create a firewall rule to permit traffic to the node, as mentioned by @danyL in the comments:
gcloud compute firewall-rules create test-node-port --allow tcp:30100
Get the node IP with:
kubectl get nodes -o wide
Then try to access the nginx page with:
curl http://<NODEIP>:30100

Cannot access Kubernetes Service - Connection Refused

I'm trying to get a very basic app running on my Kubernetes cluster.
It consists of an ASP.NET Core WebAPI and a .NET Core console application that accesses the WebAPI.
In the console application, I receive the error: "Connection Refused":
[rro-a@dh-lnx-01 ~]$ sudo kubectl exec -it synchronizer-54b47f496b-lkb67 -n pv2-test -- /bin/bash
root@synchronizer-54b47f496b-lkb67:/app# curl http://svc-coreapi/api/Synchronizations
curl: (7) Failed to connect to svc-coreapi port 80: Connection refused
root@synchronizer-54b47f496b-lkb67:/app#
Below is my YAML for the service:
apiVersion: v1
kind: Service
metadata:
  name: svc-coreapi
  namespace: pv2-test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
---
The YAML for the WebAPI:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pv2
  name: coreapi
  namespace: pv2-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pv2
  template:
    metadata:
      labels:
        app: pv2
    spec:
      containers:
      - env:
        - name: DBNAME
          value: <DBNAME>
        - name: DBPASS
          value: <PASSWORD>
        - name: DBSERVER
          value: <SQLSERVER>
        - name: DBUSER
          value: <DBUSER>
        image: myrepoapi:latest
        name: coreapi
        ports:
        - containerPort: 80
      restartPolicy: Always
      imagePullSecrets:
      - name: pv2-repo-cred
The funny thing is: when I execute kubectl expose deployment coreapi --type=NodePort --name=svc-coreapi, it works, but I do not want the WebAPI exposed to the outside.
Omitting --type=NodePort reverts the type back to ClusterIP, and I get Connection Refused again.
Can anyone tell me what I can do to resolve this issue?
As @David Maze suggested, your ClusterIP Service definition lacks a selector field, which is responsible for selecting the set of Pods labelled with key app and value pv2, as in your example:
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pv2
  template:
    metadata:
      labels:
        app: pv2
...
Your service definition may look like this and it should work just fine:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: pv2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

How do I create a service on minikube with a YAML configuration that is accessible from the host?

How do I write a correct YAML configuration for a Kubernetes pod and service in a minikube cluster (docker driver) with one requirement: port 80 of the container must be accessible from the host machine? The solution with nodePort doesn't work as expected:
type: NodePort
ports:
- port: 80
  targetPort: 8006
selector:
  app: blogapp
The label app: blogapp is set on the container. Can you show a correct configuration, for example for the nginx image, with the port accessible from the host?
You should create a Kubernetes Deployment rather than just a NodePort Service. Once you create the deployment (which automatically creates a ReplicaSet and Pod as well), you can expose it. The blogapp will not be available to the outside world by default, so you must expose it if you want to reach it from outside the cluster.
Exposing the deployment will automatically create a service as well.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blogapp
  labels:
    app: blogapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blogapp
  template:
    metadata:
      labels:
        app: blogapp
    spec:
      containers:
      - image: <YOUR_NGINX_IMAGE>
        name: blogapp
        ports:
        - containerPort: 8006
      restartPolicy: Always
Create the deployment
kubectl create -f deployment.yml
Expose the deployment
kubectl expose deployment blogapp --name=blogapp --type=LoadBalancer --target-port=8006
Get the exposed URL
minikube service blogapp --url
You can use the below configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
      - name: blog-app-server-instance
        image: blog-app-server
        ports:
        - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
  - port: 80
    nodePort: 31364
    targetPort: 8006
    protocol: TCP
    name: http
I guess you were missing spec.ports[0].nodePort.