Access external database from Kubernetes - postgresql

I have a Kubernetes cluster (v1.18.6) with 1 service (LoadBalancer) and 2 pods in a deployment:
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
    - protocol: "TCP"
      port: 6000
      targetPort: 5000
  type: LoadBalancer
A network policy to access the Internet (it is necessary for me):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-access
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-access: "true"
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - {}
Deployment config file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  progressDeadlineSeconds: 120
  selector:
    matchLabels:
      app: app
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      imagePullSecrets:
        - name: myregistrykey
      containers:
        - name: app
          image: app
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
It is working correctly. But now I want to connect this image to an external database (in another network, only accessible via the Internet). For this purpose I use this service:
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  clusterIP: None
  ports:
    - port: 25060
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
  - addresses:
      - ip: 206............
    ports:
      - port: 25060
        name: postgresql
These are all the services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-service LoadBalancer 10.245.134.137 206........... 6000:31726/TCP 2d4h
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 3d7h
postgresql ClusterIP None <none> 25060/TCP 19h
But when I try to connect I receive a timeout error from the database, as if the database can't be reached at all.
I do have an Internet connection from inside the image.
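A quick way to narrow down this kind of timeout is to test connectivity from inside one of the pods; a minimal sketch, assuming the image ships nc and using the headless postgresql service defined above:
# run nc from one of the app pods against the service that points at the external DB
kubectl exec -it deploy/app-deployment -- nc -vz postgresql 25060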
I found the solution: the problem was the inbound rules of the database. I had to add the IP of the Kubernetes cluster.
Thanks.

Here is what worked for me:
Define a service, but set clusterIP: None, so no endpoint is created automatically.
Then create an Endpoints object yourself with the SAME NAME as your service and set the IP and port of your DB.
In your example, you have a typo in your endpoint: the name of your endpoint is postgresql, not postgresSql.
My example:
---
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: backend-mobile-db-service
spec:
  clusterIP: None
  ports:
    - port: 5984
---
kind: Endpoints
apiVersion: v1
metadata:
  name: backend-mobile-db-service
subsets:
  - addresses:
      - ip: 192.168.1.50
    ports:
      - port: 5984
        name: backend-mobile-db-service
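With this in place, pods can reach the external database through the service name via cluster DNS; a quick sketch of checking it (the port matches the example above, everything else is illustrative):
# resolves via cluster DNS to the manually created Endpoints above
curl http://backend-mobile-db-service:5984/
In the PostgreSQL case from the question, the host would simply be postgresql and the port 25060.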

For better visibility I am placing the answer the OP mentioned in the question:
I found the solution: the problem was the inbound rules of the database. I had to add the IP of the Kubernetes cluster.

The service definition should be corrected. The default service type is ClusterIP, which doesn't work for an external database. You need to update the service type as given below:
type: ExternalName
Also ensure that the service name and the Endpoints name match; they are different in your YAML. Please check.
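A minimal sketch of the ExternalName variant, assuming the managed database is reachable at a DNS name such as db.example.com (a placeholder; use your provider's hostname, since ExternalName takes a name, not an IP):
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ExternalName
  externalName: db.example.com   # placeholder for the real database hostname
Note that ExternalName does not remap ports; pods connect to postgresql on the database's real port (25060 in the question).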

If I understand correctly, you have your cluster with the application on the DigitalOcean cloud, and your PostgreSQL is outside this cluster.
In your application Deployment <> application Service pair you used a service with selectors, so you didn't need to create Endpoints manually.
In your external database service you used a service without selectors, so you had to create the Endpoints manually.
As the database is an external service, using clusterIP: None is pointless, as it will try to match pods inside the cluster. I guess you added it because you read it in these docs.
The last thing is that in the Endpoints you set ip: 206..., which is the same as the application service's LoadBalancer IP?
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-service LoadBalancer 10.245.134.137 206........... 6000:31726/TCP 2d4h
subsets:
  - addresses:
      - ip: 206............
This is only partial information, so I am guessing. However, in this part you should provide the IP of the desired database, not your application LoadBalancer IP.
Now, depending on the scenario, you can connect to a:
Database outside cluster with IP address
Remotely hosted database with URI
Remotely hosted database with URI and port remapping
Detailed information about the above scenarios can be found in Kubernetes best practices: mapping external services.
Based on your current config I assume you want to use scenario 1.
If this database and cluster are somewhere in the same cloud you could use the internal database IP. If not, you should provide the IP of the machine where this database is hosted.
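A minimal sketch of scenario 1, assuming the database listens on 203.0.113.10:25060 (a placeholder address; substitute the real database IP, not the LoadBalancer IP of app-service):
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
    - port: 25060
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql           # must match the Service name exactly
subsets:
  - addresses:
      - ip: 203.0.113.10     # the database IP, not the application LoadBalancer IP
    ports:
      - port: 25060
Pods inside the cluster can then use postgresql:25060 as the database address.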
You can also read the Kubernetes Access External Services article.
Please let me know if you still have the issue after the IP change.

Related

Kubernetes Ingress - how to access my service on my computer?

I have the following template with a Deployment, a Service and an Ingress. I had previously run minikube addons enable ingress locally to add an ingress controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
        - name: fastapi
          image: datamastery/fastapi
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 3000
      nodePort: 30002
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: datamastery.com
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: fastapi-service
                port:
                  number: 3000
When I run kubectl get services I get:
fastapi-service LoadBalancer 10.108.5.228 <pending> 5000:30002/TCP 5d22h
In my /etc/hosts file I added the following:
10.108.5.228 datamastery.com
I would now expect to be able to open my service in the browser, but nothing happens. What did I do wrong? Did I miss something in the template? Is the IP wrong? Something in the hosts file?
Thank you!
fastapi-service LoadBalancer 10.108.5.228 5000:30002/TCP 5d22h
10.108.5.228 is an address within your SDN. Only members of your SDN can reach this address; it is unlikely your workstation has a route sending this traffic to one of your Kubernetes nodes.
<pending> means your cluster is not integrated with a cloud provider that has LoadBalancer capabilities. When in doubt, you should use ClusterIP as your service type; LoadBalancer only makes sense in specific cases. Setting a nodePort as you did is also not required (it would make sense with a NodePort service, which is likewise only useful in a few cases and should not be used otherwise).
You did create an Ingress. If you have an ingress controller, you want to connect to its IP/port. The Host header tells your ingress controller where to route this within your SDN.
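As a quick check, a sketch assuming the minikube ingress addon is serving on the node IP on port 80:
# send a request to the node/ingress IP with the Host header from the Ingress rule
curl -H "Host: datamastery.com" http://$(minikube ip)/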
I believe what you are doing here is trying to combine two different things.
NodePort is only sufficient if you have just one node OR you really control where your service pods get deployed. Otherwise it is not suitable to use the node IP to access services.
To overcome this issue we usually use an ingress as a service proxy. Incoming traffic is routed to the correct service pods depending on the URL and port. Ingress also manages SSL termination. So basically this is your first "load balancer", as the ingress assigns traffic to services across nodes and pods.
In a production environment you deploy the ingress controller with type: LoadBalancer in the kube-system namespace; an example for nginx-ingress:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This would spin up a cloud load balancer from your provider and link it to the ingress service in your cluster. So basically you would now have a real load balancer in place balancing traffic between your nodes, while the ingress routes it to your services and the services to your pods.
Back to your question:
In your config files you try to spin up a service with type: LoadBalancer. This would skip the ingress part and spin up a second cloud load balancer from your provider, dedicated to this single service.
You have to remove the type (and nodePort) to use the default ClusterIP for your service.
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
In addition, you have a wrong port. Your Ingress object points to port 3000, your Service object to port 5000, so we change this as well.
With this config, traffic for the FQDN is routed to the ingress, then to the ClusterIP service on port 3000, and finally to your pods.
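A sketch of testing this end to end on minikube (assuming the ingress addon is enabled and datamastery.com is the host in your Ingress rule):
# point the hostname at the minikube node, where the ingress controller listens,
# instead of at the service ClusterIP
echo "$(minikube ip) datamastery.com" | sudo tee -a /etc/hosts
# the request should now reach fastapi through the ingress and the ClusterIP service
curl http://datamastery.com/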

Kubernetes service is getting external ip as pending

I am running a Kubernetes LoadBalancer, but the external IP says "pending". It looks like it is trying to get an IP, but I need it to be "localhost" so that I can access it in my browser:
What am I missing?
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dev-pypi LoadBalancer 10.106.128.15 <pending> 80:30914/TCP 2m7s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 47h
service/qa-pypi LoadBalancer 10.97.62.94 <pending> 8200:30114/TCP 94m
Thanks in advance.
This is my yaml file:
kind: Service
apiVersion: v1
metadata:
  name: qa-pypi
spec:
  type: LoadBalancer
  selector:
    app: pypi-qa
  ports:
    - protocol: TCP
      port: 8200
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qa-pypi
  labels:
    app: pypi-qa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pypi-qa
  template:
    metadata:
      labels:
        app: pypi-qa
    spec:
      containers:
        - name: pypi-qa
          imagePullPolicy: IfNotPresent
          image: myimg2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: storageqa
              mountPath: /app/local
      volumes:
        - name: storageqa
          persistentVolumeClaim:
            claimName: persistvolumeqa
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistvolumeqa
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Thank you!
You already have a lot of advice here on how to use localhost and what you can do with NodePort. Let me fill the gap and explain why you see the Pending state.
Minikube itself doesn't allocate and provide you a LoadBalancer IP address. The LoadBalancer service type is widely used in cloud implementations like EKS, AKS, GKE and others, because cloud service providers create the load balancer in the background as soon as you choose this type.
If you want to use LoadBalancer with minikube you should configure minikube first. What you can do is use the built-in metallb minikube addon.
MetalLB addresses the gap and provides a network LoadBalancer implementation as an addon.
During MetalLB installation and configuration you are able to set the range of local IP addresses that will be assigned to LoadBalancer services instead of the Pending state.
If you want to know more about this and see a real configuration example, check the MetalLB Configuration in Minikube — To enable Kubernetes service of type “LoadBalancer” article.
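A sketch of that setup (the address range is a placeholder and must lie inside your minikube network):
# enable the addon, then configure the address pool it may hand out
minikube addons enable metallb
# the configure step prompts for a start and end IP, e.g. 192.168.49.100 and 192.168.49.110
minikube addons configure metallb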
You can use an ingress to access your application from the browser (see the Kubernetes ingress docs and setup tutorials).
If you can't resolve your host after the setup, you may need to register it in /etc/hosts:
127.0.0.1 localhost
127.0.0.1 https-my-nginx.com
To expose your app on localhost you can try the NodePort type. If you are working with kubeadm you can use the NodePort or ClusterIP type so that you are able to expose it locally. Usually, LoadBalancer is used for deployments on cloud machines.
If you are working with minikube, then run minikube tunnel so that an external IP gets assigned to the LoadBalancer.
You have to configure an external IP on your machine (platform) first; only then will an external IP be allocated to your service. It can be a VIP that you advertise, or an address pool you create; otherwise setting up an ingress will not serve the purpose either, as the ingress itself needs an external IP allocated for communication.
It's a networking thing you need to set up on your private machine.
If you don't want to use this option, you can go for NodePort; with that you can access the service externally via nodeip:nodeport.
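For the NodePort route, a quick sketch (the node IP comes from kubectl get nodes -o wide; the nodePort, e.g. 30114 for qa-pypi, is visible in the kubectl get svc output above):
# list node addresses, then hit the nodePort the service exposed
kubectl get nodes -o wide
curl http://<node-ip>:30114/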

Kubernetes to find Pod IP from another Pod

I have the following pods hello-abc and hello-def.
And I want to send data from hello-abc to hello-def.
How would pod hello-abc know the IP address of hello-def?
And I want to do this programmatically.
What's the easiest way for hello-abc to find where hello-def is?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-abc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-abc
    spec:
      containers:
        - name: hello-abc
          image: hello-abc:v0.0.1
          imagePullPolicy: Always
          args: ["/hello-abc"]
          ports:
            - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-def-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-def
    spec:
      containers:
        - name: hello-def
          image: hello-def:v0.0.1
          imagePullPolicy: Always
          args: ["/hello-def"]
          ports:
            - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: hello-abc-service
spec:
  ports:
    - port: 80
      targetPort: 5000
      protocol: TCP
  selector:
    app: hello-abc
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: hello-def-service
spec:
  ports:
    - port: 80
      targetPort: 5001
      protocol: TCP
  selector:
    app: hello-def
  type: NodePort
Preface
Since you have defined a service that routes to each deployment, if you have deployed both services and deployments into the same namespace, you can in many modern kubernetes clusters take advantage of kube-dns and simply refer to the service by name.
Unfortunately, if kube-dns is not configured in your cluster (which is unlikely), you cannot refer to the service by name.
You can read more about DNS records for services here
In addition, Kubernetes features "Service Discovery", which exposes the ports and IPs of your services as environment variables in any container deployed into the same namespace.
Solution
This means, to reach hello-def you can do so like this
curl http://hello-def-service:${HELLO_DEF_SERVICE_SERVICE_PORT}
based on Service Discovery https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Caveat: it's very possible that if the Service port changes, only pods created in the same namespace after the change will receive the new environment variables.
External Access
In addition, you can also reach your service externally since you are using the NodePort feature, as long as your NodePort range is accessible from outside.
This would require you to access your service by node-ip:nodePort
You can find out the NodePort which was randomly assigned to your service with kubectl describe svc/hello-def-service
Ingress
To reach your service from outside you should implement an ingress service such as nginx-ingress
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
Sidecar
If your 2 services are tightly coupled, you can include both in the same pod using the Kubernetes sidecar feature. In this case, both containers in the pod share the same virtual network adapter and are accessible via localhost:$port.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods
Service Discovery
When a Pod is run on a Node, the kubelet adds a set of environment
variables for each active Service. It supports both Docker links
compatible variables (see makeLinkVariables) and simpler
{SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the
Service name is upper-cased and dashes are converted to underscores.
Read more about service discovery here:
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
You should be able to reach hello-def-service from pods in hello-abc via DNS as specified here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
However, kube-dns or CoreDNS has to be configured/installed in your k8s cluster before DNS records can be utilized in your cluster.
Specifically, you should be able to reach hello-def-service via the DNS record http://hello-def-service for the service running in the same namespace as hello-abc-service.
And you should be able to reach hello-def-service running in another namespace other_namespace via the DNS record hello-def-service.other_namespace.svc.cluster.local.
If, for some reason, you do not have DNS add-ons installed in your cluster, you can still find the virtual IP of hello-def-service via environment variables in the hello-abc pods, as documented here.
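Putting the DNS and environment-variable options together, a sketch of what hello-abc could run (hello-def-service and port 80 come from the manifests above; everything else is illustrative):
# via cluster DNS (same namespace): the service name resolves to the service virtual IP
curl http://hello-def-service:80/
# via the environment variables the kubelet injects for services that existed
# when the pod started (name upper-cased, dashes converted to underscores)
curl http://${HELLO_DEF_SERVICE_SERVICE_HOST}:${HELLO_DEF_SERVICE_SERVICE_PORT}/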

Kubernetes service not exposed when ClusterIp is set

I have the following YAML file -
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
    - port: 3306
  selector:
    name: mariadb
When this service is created, a ClusterIP is automatically set.
My stateful set 'mariadb' is exposed using this service.
But if I log in to another pod on Kubernetes, I cannot ping this pod using
ping mariadb-0.mariadb.[namespace].svc.cluster.local
It also does not work if the ServiceType is set to 'NodePort'.
If I update the service to
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mariadb
  name: mariadb
spec:
  ports:
    - port: 3306
  clusterIP: None
  selector:
    name: mariadb
When I log in to another pod on Kubernetes, I can ping this pod using
ping mariadb-0.mariadb.[namespace].svc.cluster.local
Is there any reason why this internal url is not accessible when the ClusterIP is set?
The key is 'clusterIP: None'.
If clusterIP is not set, k8s will allocate one for the service automatically, and kube-dns will set a domain name for the service, named mariadb.[namespace].svc.cluster.local; that's your first case.
If clusterIP is set to 'None', k8s doesn't allocate an IP for the service; in this case kube-dns sets a domain name for every endpoint that the service points to. In your second case that's mariadb-0.mariadb.[namespace].svc.cluster.local.
You can also set clusterIP to a specific IP address; in that case it's the same as your first case.
That's why you can ping mariadb-0.mariadb.[namespace].svc.cluster.local in your second case, but can't in your first case.
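You can verify the difference from any pod with working DNS; a sketch, substituting your namespace:
# regular ClusterIP service: the service name resolves to the virtual IP
nslookup mariadb.<namespace>.svc.cluster.local
# headless service (clusterIP: None): the per-pod records also resolve
nslookup mariadb-0.mariadb.<namespace>.svc.cluster.local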

Minikube expose MySQL running on localhost as service

I have minikube version v0.17.1 running on my machine. I want to simulate the environment I will have in AWS, where my MySQL instance will be outside of my Kubernetes cluster.
Basically, how can I expose my local MySQL instance running on my machine to the Kubernetes cluster running via minikube?
Kubernetes allows you to create a service without a selector; the cluster will then not create the related Endpoints for this service. This feature is usually used to proxy a legacy component or an outside component.
Create a service without a selector:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 1443
      targetPort: <YOUR_MYSQL_PORT>
Create the corresponding Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: <YOUR_MYSQL_ADDR>
    ports:
      - port: <YOUR_MYSQL_PORT>
Get service IP
$ kubectl get svc my-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service <SERVICE_IP> <none> 1443/TCP 18m
Access your MySQL through the service at <SERVICE_IP>:1443 or my-service:1443.
As of minikube 1.10, there is a special hostname host.minikube.internal that resolves to the host running the minikube VM or container. You can then configure this hostname in your pod's environment variables or the ConfigMap that defines the relevant settings.
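For example, a sketch of passing it to a pod as environment variables (the variable names and image are illustrative):
containers:
  - name: app
    image: myapp:latest                   # placeholder image
    env:
      - name: MYSQL_HOST
        value: host.minikube.internal     # resolves to the machine running minikube
      - name: MYSQL_PORT
        value: "3306"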
Option 1 - use a headless service without selectors
Because this service has no selector, the corresponding Endpoints object will not be created. You can manually map the service to your own specific endpoints (See doc).
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
    - port: 80
      targetPort: 8080
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 10.0.2.2
    ports:
      - port: 8080
Option 2 - use ExternalName service
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: minikube.host
The only caveat is that it needs to be able to resolve minikube.host. Simply adding this line to the /etc/hosts file should do it:
10.0.2.2 minikube.host
ExternalName doesn't support port mapping at the moment.
Another note: the IP 10.0.2.2 is known to work with VirtualBox only (see SO).
For xhyve, try replacing that with 192.168.99.1 (see the GitHub issues). There is a demo on GitHub.
Just a reminder, if on Windows, open your firewall.