Access SQL Server database from Kubernetes Pod

My Spring Booot application deployed in a Kubernetes Pod is trying to connect to an external SQL Server database, but every time it fails with this error:
Failed to initialize pool: The TCP/IP connection to the host <>, port 1443 has failed.
Error: "Connection timed out: no further information.
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
I have exec'd into the Pod and can ping the DB server without any issues.
Below are the solutions I have tried:
Created a Service and Endpoints, provided the DB IP in the configuration file, and tried to bring up the application in the Pod
Tried using the internal IP from the Endpoints instead of the DB IP in the configuration, to check whether the internal IP resolves to the DB IP
Both cases gave the same result. Below is the YAML I am using to create the Service and Endpoints.
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
  namespace: cattle
spec:
  type: ClusterIP
  ports:
  - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
  namespace: cattle
subsets:
- addresses:
  - ip: <<DB IP>>
  ports:
  - port: 1433
Please let me know if I am doing something wrong or missing anything in this setup.
Additional information about the K8s setup:
It is a clustered-master topology with an external etcd cluster
The OS on the nodes is CentOS
The DB server can be pinged from all nodes and from the pods that are created

For this scenario a Service of type ExternalName is very useful. It redirects traffic to that external address without you having to define an Endpoints object.
kind: "Service"
apiVersion: "v1"
metadata:
namespace: "your-namespace"
name: "ftp"
spec:
type: ExternalName
externalName: your-ip
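As a rough illustration (not from the original post), the application can then reference the Service by its cluster DNS name instead of a raw IP. For example, if the Service were named mssql in the cattle namespace as in the question, and assuming the Spring Boot app reads spring.datasource.url, the Deployment env might look like this; the database name mydb is a placeholder:
# Sketch only: SPRING_DATASOURCE_URL maps to spring.datasource.url
# via Spring Boot's relaxed binding; "mydb" is hypothetical.
env:
- name: SPRING_DATASOURCE_URL
  value: "jdbc:sqlserver://mssql.cattle.svc.cluster.local:1433;databaseName=mydb"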

The issue was resolved by updating the deployment YAML with the IP address. Since all the servers were in the same subnet, I did not need to create a Service or Endpoints to access the DB. Thank you for all the inputs on the post.

Related

Monitor TCP traffic (from/to PostgreSQL) in Kubernetes with the Open Service Mesh

I am using Open Service Mesh as a service mesh to monitor network traffic in my Kubernetes cluster. Everything is working fine, but I am not able to monitor traffic to/from PostgreSQL pods, since all the related metrics have the label value "unknown".
In my setup, Postgres is the only pod that uses TCP as its protocol, so this may be the reason. However, this is from the docs: "HTTP requests that invoke a local response from Envoy have "unknown" destination_* labels in the metrics." But what does that mean exactly? Is there any way to configure Postgres/OSM so that I can monitor this traffic?
My Postgres Kubernetes service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: service-keycloak-postgres
spec:
  type: ClusterIP
  ports:
  - targetPort: 5432
    port: 5432
    name: tcp-name-keycloak
  selector:
    app: keycloak-postgres
Any ideas? Thanks a lot!

Kubernetes: is there any way to get headless service endpoint info in container environment vars

I used Cloud Foundry a lot previously. When an app is bound to a service, all of the service connection info is injected into the app's environment variables. In the Kubernetes world, I think this is the same for a normal Service.
In my case, I am trying to use a headless Service to describe an external PostgreSQL instance, using the Service YAML below.
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "postgresql"
spec:
  clusterIP: None
  ports:
  - protocol: "TCP"
    port: 5432
    targetPort: 5432
    nodePort: 0
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "postgresql"
subsets:
- addresses:
  - ip: "10.29.0.123"
  ports:
  - port: 5432
After deploying the headless Service to the cluster, the container does not have any environment variables for it; I guess that is because clusterIP is None.
The apps can reach it via DNS at postgresql:5432, but I wonder why Kubernetes does not inject the headless Service and its endpoints into the app's environment variables, so the app can get both the IP and the port from them.
Is there any way to do so?
Thanks!
Kube-proxy does not manage headless Services; a request made to such a Service is simply forwarded to its endpoints.
Kubernetes does not really acknowledge these endpoints (cf. https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
To pass the IP of your PostgreSQL DB, you will have to add an environment variable to your Deployment, like this:
env:
- name: POSTGRESQL_ADDR
  value: "10.29.0.123:5432"
I found the answer to the question. For a headless Service, the service info will not show up in the pod's environment variables. If you want the service info to be available as environment variables, you need to use a Service without selectors and simply remove the "clusterIP: None" line.
The client pod can then use both DNS and environment variables for external service discovery.
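A minimal sketch of that variant, paired with the same Endpoints object as in the question: without clusterIP: None the Service gets a regular ClusterIP, and the kubelet injects POSTGRESQL_SERVICE_HOST and POSTGRESQL_SERVICE_PORT into pods created after the Service exists.
# Sketch only: selector-less Service, same name as the Endpoints above,
# but with a regular ClusterIP so env vars are injected alongside DNS.
kind: Service
apiVersion: v1
metadata:
  name: postgresql
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432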

Mosquitto Broker - DNS name instead of IP address for MQTT clients to use

I am able to get the Eclipse Mosquitto broker up and running, with the MQTT clients able to talk to the broker using the broker's IP address. However, as I am running these on Kubernetes, the broker IP keeps changing on restart. I would like to enable DNS name resolution for the broker, so the clients can use the broker name instead of the IP. CoreDNS is running by default in Kubernetes.
Any suggestions on what can be done?
$ nslookup kubernetes.default
Server: 10.43.0.10
Address: 10.43.0.10:53
** server can't find kubernetes.default: NXDOMAIN
** server can't find kubernetes.default: NXDOMAIN
You can achieve that using a headless Service. You create it by setting the clusterIP field in the Service spec to None. Once you do that, instead of returning a single DNS A record for the Service IP, the DNS server will return multiple A records for the Service, each pointing to the IP of an individual pod backing the Service at that moment.
With this, your client can perform a single DNS A record lookup to fetch the IPs of all the pods that are part of the Service. A headless Service is also often used as a service discovery mechanism.
apiVersion: v1
kind: Service
metadata:
  name: your-headless-service
spec:
  clusterIP: None # <-- This makes the service headless!
  selector:
    app: your-mosquito-broker-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
You are also able to resolve the DNS name with a regular Service. The difference is that with a headless Service you talk to the pod directly, instead of having the Service act as a load balancer or proxy.
Resolving the Service through DNS is easy; you do that with the following pattern:
backend-broker.default.svc.cluster.local
Here backend-broker corresponds to the Service name, default stands for the namespace the Service is defined in, and svc.cluster.local is a configurable cluster domain suffix used in all cluster-local service names.
Note that if your client and broker are in the same namespace, you can omit the svc.cluster.local suffix and the namespace. You then refer to the Service as:
backend-broker
I highly encourage you to read more about DNS in Kubernetes.
All,
Thanks for answering the query, especially Thomas for the code pointers. With your suggestions, once I created a Service for the pod, I was able to get DNS working since CoreDNS was already running. I was able to use the hostname in the MQTT broker after this as well.
opts.AddBroker(fmt.Sprintf("tcp://mqtt-broker:1883"))
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-02-01T19:08:46Z"
  labels:
    app: ipc
  name: mqtt-broker
  namespace: default
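The snippet above is truncated before the spec. As a hypothetical completion (not the poster's actual manifest), a plain ClusterIP Service exposing MQTT on 1883, assuming the broker pods carry the label app: ipc, might look like:
# Hypothetical sketch, assumptions: ClusterIP type, broker pods labelled app: ipc.
spec:
  type: ClusterIP
  selector:
    app: ipc
  ports:
  - name: mqtt
    protocol: TCP
    port: 1883
    targetPort: 1883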
BTW, I wasn't able to get the headless service working; I was getting the error below, so I continued with ClusterIP itself plus the exposed 1883 port for MQTT. Any suggestions please?
`services "mqtt-broker" was not valid:`
`spec.clusterIPs[0]: Invalid value: []string{"None"}: may not change once set`

Kubernetes / Metallb single entrypoint

I'm building a K8s cluster for a school project.
It's bare metal and uses MetalLB as a load balancer.
Each service runs in a separate pod:
Nginx
Wordpress
Phpmyadmin
Mysql (mariadb)
In the phpMyAdmin config file, I need to link my MySQL server with something like this:
$cfg['Servers'][$i]['host'] = "mysql-server-name";
I've tried to use the node's IP:
kubectl get node -o=custom-columns='DATA:status.addresses[0].address' | sed -n 2p
adding the port :3306, but I realised that none of my services could be reached through the browser with this method.
For instance, the node's IP:5050 should redirect me to my WordPress, but it doesn't.
Is there any way to get a single IP that I can use to make my pods communicate with each other?
I must add that each service works on its own when I use the svc IP instead of the node's.
Here's the ConfigMap I use for MetalLB:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.200
The reason the node IP doesn't expose your application to other apps is that the pods in the Kubernetes cluster don't listen for requests coming to the node by default. In other words, the port on the pod is not connected to the port on the node.
The Service resource is what you need to make that connection.
Services have different types. A Service of type ClusterIP assigns the app an IP that is internal to the cluster. If you don't want to access your MySQL database directly from the internet, this is what you would want.
Here is an example Service of type ClusterIP for your project.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: metallb-system
spec:
  selector:
    app: Mysql
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3306
The selector selects pods that carry the label app: Mysql; it must match the label on your MySQL pods exactly.
port is the port the Service listens on.
targetPort is the port MySQL is listening on.
When you create the Service, you can find its IP by running this command:
kubectl get services -n metallb-system
Under the CLUSTER-IP column, note the IP of the Service you created.
So in this case, if MySQL is listening on 3306, you can reach it through this Service at the Service IP on port 80.
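Building on that, a hedged sketch of how phpMyAdmin could reference the Service by its DNS name rather than a node or service IP. This assumes the official phpmyadmin image, which reads the PMA_HOST and PMA_PORT environment variables; adjust to however your phpMyAdmin pod is actually configured:
# Sketch only: container env for a phpMyAdmin pod, using the example
# Service above (mysql-service in metallb-system, listening on port 80).
env:
- name: PMA_HOST
  value: "mysql-service.metallb-system.svc.cluster.local"
- name: PMA_PORT
  value: "80"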
If you want to expose your WordPress app to the internet, use either the NodePort or LoadBalancer Service types. Here is the reference for Service types.
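As a rough sketch under assumptions (the WordPress pods are labelled app: wordpress and serve HTTP on port 80, neither of which appears in the post), a LoadBalancer Service that MetalLB can assign an address to from the pool in your ConfigMap could look like this:
# Hypothetical example: MetalLB hands this Service an external IP
# from the 192.168.99.100-192.168.99.200 pool defined above.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: LoadBalancer
  selector:
    app: wordpress        # assumed pod label
  ports:
  - protocol: TCP
    port: 80              # port exposed on the MetalLB IP
    targetPort: 80        # port the WordPress container listens on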

Calling an external service from within Minikube

I have a service (/deployment/pod) running in my Minikube (installed on my Mac) that needs to call an external HTTP service that runs directly on my Mac (i.e. outside Minikube). The domain name of that external service is defined in my Mac's /etc/hosts file. Yet, my service within Minikube cannot call that external service. Any idea what I need to configure, and where? Many thanks. C
Create an Endpoints object that will forward traffic to your desired external IP address (your local machine). You can connect directly using the Endpoints, but according to Google Cloud best practice (doc), you should access it through a Service.
Create your Endpoints:
kind: Endpoints
apiVersion: v1
metadata:
  name: local-ip
subsets:
- addresses:
  - ip: 10.240.0.4 # IP of your desired endpoint
  ports:
  - port: 27017 # port that you want to access
Then create your Service:
kind: Service
apiVersion: v1
metadata:
  name: local-ip
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
Now you can call the external HTTP service using the Service name, in this case local-ip, like any other internal service of Minikube.
Because your Minikube is running in a virtual machine on your laptop, you just need to minikube ssh into that machine and add the address of your external service to the /etc/hosts file of that virtual machine.