Accessing database with pgAdmin in Kubernetes cluster - postgresql

In my current non-Kubernetes environment, if I need to access the Postgres database, I just set up an SSH tunnel with:
ssh -L 5432:localhost:5432 user@domain.com
I'm trying to figure out how to do something similar in a test Kubernetes cluster I am setting up in EKS that doesn't involve a great security risk. For example, creating a path in the ingress controller to the database's port is a terrible idea.
The cluster would be set up so that Postgres is in a pod, but all of the data is on a persistent volume claim so that the data persists when the pod is destroyed.
How would one use pgAdmin to access the database in this kind of setup?

The kubectl command can forward TCP ports into a pod via the Kubernetes API:
kubectl port-forward {postgres-pod-ID} 5432:5432
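If you prefer not to look up the pod ID, kubectl can also forward to a Deployment or Service by name; a minimal sketch, assuming the Postgres Service is called postgres in the default namespace (adjust the name and namespace to yours):
# Forward local port 5432 to the Postgres Service inside the cluster
kubectl port-forward svc/postgres 5432:5432
# While this runs, point pgAdmin (or psql) on your machine at localhost:5432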
If you are not using a cluster-admin user, the user will need to be bound to a role that allows it to create pods/portforward in the pod's namespace, for example:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pg-portforward
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]

I have a similar kind of setup. My PostgreSQL database is deployed in a pod in a Kubernetes cluster running in the AWS cloud. The following are the steps I performed to access this remote database from pgAdmin running on my local machine (I followed this video).
1 -> Modify the Service type of db from ClusterIP to NodePort as shown below.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
  - name: dbport
    port: 5432
    nodePort: 31000
  type: NodePort
2 -> Add a new inbound rule to the security group of any node (EC2 instance) of your Kubernetes cluster that allows TCP traffic on the NodePort (31000); a CLI sketch follows these steps.
3 -> Connect to this remote database using the public IPv4 address of the node (EC2 instance).
Host address: the public IPv4 address of the node's EC2 instance.
Port: 31000
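For step 2, a minimal sketch of that security-group rule using the AWS CLI; the group ID and source CIDR are placeholders you would replace with your own values:
# Allow inbound TCP on the NodePort (31000) from your workstation's IP only
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31000 \
  --cidr 203.0.113.10/32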

Related

Kubernetes open port to server on same subnet

I am launching a service in a Kubernetes pod that I would like to be available to servers on the same subnet only.
I have created a service with a LoadBalancer opening the desired ports. I can connect to these ports through other pods on the cluster, but I cannot connect from virtual machines I have running on the same subnet.
So far my best solution has been to assign a loadBalancerIP and restrict it with loadBalancerSourceRanges, however this still feels too public.
The virtual machines I am attempting to connect to my service are ephemeral, and have a wide range of public IPs assigned, so my loadBalancerSourceRanges feels too broad.
My understanding was that I could connect to the internal LoadBalancer cluster-ip from servers that were on that same subnet, however this does not seem to be the case.
Is there another solution to limit this service to connections from internal IPs that I am missing?
This is all running on GKE.
Any help would be really appreciated.
I think you are right here a little bit, but I'm not sure why you mentioned the cluster IP:
My understanding was that I could connect to the internal LoadBalancer
cluster-ip from servers that were on that same subnet, however this
does not seem to be the case.
Now, if you have a deployment running on GKE and you have exposed it with a Service of type LoadBalancer configured as an internal LB, you will be able to access that internal LB from anywhere in the same VPC.
apiVersion: v1
kind: Service
metadata:
  name: internal-svc
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: internal-svc
  ports:
  - name: tcp-port
    protocol: TCP
    port: 8080
    targetPort: 8080
Once your changes are applied, check the status using
kubectl get service internal-svc --output yaml
In the YAML output, check the last section for:
status:
  loadBalancer:
    ingress:
    - ip: 10.127.40.241
That's the actual IP you can use to connect to the service from other VMs in the subnet.
Doc ref
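If you want to double-check on the GCP side that the load balancer really was created as internal, something along these lines should list it (a sketch; the output depends on your project):
gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"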
To restrict the service to only be available to servers on the same subnet, you can use a combination of Network Policies and Service Accounts.
First, you'll need to create a Network Policy that specifies the source IP range that is allowed to access your service. To do this, create a YAML file that contains the following:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-subnet-traffic
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: <subnet-range-cidr>
    ports:
    - protocol: TCP
      port: <port-number>
Replace the <subnet-range-cidr> and <port-number> placeholders with the relevant IP address range and port number. Once this YAML file is created, you can apply it to the cluster with the following command:
kubectl apply -f path-to-yaml-file
Next, you'll need to create a Service Account and assign it to the service; you can then use the Service Account to authenticate incoming requests. To do this, add the Service Account to the service's metadata with the following command:
kubectl edit service <service-name>
You must first create a Role or ClusterRole and grant it access to the network policy before you can assign a Service Account to it. The network policy will then be applied to the Service Account when you bind the Role or ClusterRole to it. This can be accomplished with the kubectl command-line tool as follows:
kubectl create role <role_name> --verb=get --resource=networkpolicies
kubectl create clusterrole <clusterrole_name> --verb=get --resource=networkpolicies
kubectl create rolebinding <rolebinding_name> --role=<role_name> --serviceaccount=<namespace>:<service_account_name>
kubectl create clusterrolebinding <clusterrolebinding_name> --clusterrole=<clusterrole_name> --serviceaccount=<namespace>:<service_account_name>
Once the Role or ClusterRole is bound, the network policy will apply to all pods that use the Service Account. Incoming requests will then need to authenticate with the Service Account to reach the service, so only authorized requests will be able to access it.
For more info follow this documentation.

MongoDB Community Kubernetes Operator Connection

I'm trying to deploy a MongoDB replica set by using the MongoDB Community Kubernetes Operator in Minikube. To view the content of the database I would like to connect to the MongoDB replica set through Mongo Compass.
I followed the instructions on the official GitHub, so:
Install the CRD
Install the necessary roles and role-bindings
Install the Operator
Deploy the Replicaset
The yaml file used for the replica set deployment is the following one:
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongo-rs
  namespace: mynamespace
spec:
  members: 3
  type: ReplicaSet
  version: "4.4.0"
  persistent: true
  security:
    authentication:
      modes: ["SCRAM"]
  users:
  - name: user
    db: admin
    passwordSecretRef:
      name: user
    roles:
    - name: userAdminAnyDatabase
      db: admin
    scramCredentialsSecretName: user-scram
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
---
apiVersion: v1
kind: Secret
metadata:
  name: user
type: Opaque
stringData:
  password: password
The MongoDB resource is deployed and the mongo-rs pods are all running. Also, I'm able to connect to the replica set directly through the mongo shell from within the Kubernetes cluster.
Anyway, I'd like to connect to the MongoDB replica set also from outside the Kubernetes cluster, so in addition I've created a LoadBalancer service like the following one:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: mynamespace
spec:
  type: LoadBalancer
  selector:
    app: mongo-rs-svc
  ports:
  - port: 27017
    nodePort: 30017
The pods (namely mongo-rs-0, mongo-rs-1, mongo-rs-2) are correctly bound to the service. On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On minikube, the LoadBalancer type makes the Service accessible through the minikube service command, which generates a tunnel for the service mongodb-service (for instance: 127.0.0.1:34873), but if I try to connect to the MongoDB replica set through the MongoDB Compass client using the connection string:
mongodb://user:password@127.0.0.1:34873/?authSource=admin&replicaSet=mongo-rs&readPreference=primary
the client cannot connect to MongoDB, returning the error:
getaddrinfo ENOTFOUND
mongo-rs-0.mongo-rs-svc.mynamespace.svc.cluster.local
Any suggestions on how to access the replica set from outside kubernetes?
Thanks in advance!
Edit: I know it's possible to connect from the outside using port forwarding, but I'd be interested in a more production-oriented approach.
minikube is a development tool, so it may be sufficient for you to connect from your host (desktop) via localhost.
First, you can't use type LoadBalancer, because it round-robins between the MongoDB instances, but only the primary in the replica set can write.
Normally the mongo client with the right connection string will select the primary.
So use a NodePort and you will get a connection to one MongoDB instance.
Then run kubectl port-forward <resource-type>/<resource-name> [local_port]:<pod_port> against the service.
Before that, you may need to change the mongodb-service. As far as I know it's not nodePort, it's targetPort:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: mynamespace
spec:
  type: NodePort
  selector:
    app: mongo-rs-svc
  ports:
  - port: 27017
    targetPort: 27017
So something like this:
kubectl port-forward svc/mongodb-service 27017:27017 -n mynamespace
After that you can connect to MongoDB at localhost:27017 through the Kubernetes service, which connects you to the Mongo replica:
mongo localhost:27017
should connect. Adjust the ports to your needs.
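With the forward in place, a Compass/shell connection string along these lines may work; directConnection=true tells the client to talk only to the single forwarded member instead of trying to resolve the replica-set hostnames, which only resolve inside the cluster (a sketch, not a production setup):
mongodb://user:password@localhost:27017/?authSource=admin&directConnection=true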
Hope this idea helps,

How to access a database that is only accessible from the Kubernetes cluster locally?

I have a situation where I have a Kubernetes cluster that has access to a Postgres instance (which is not run in the Kubernetes cluster). The Postgres instance is not accessible from anywhere else.
What I would like to do is connect with my database tools locally. What I have found is kubectl port-forward, but I think this would only be a solution if the Postgres instance were run as a pod. What I basically need is a pod that forwards everything sent on port 8432 to the Postgres instance, and then I could use the port forward.
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
What is the right way to do this?
You can create a Service (with manual Endpoints) for your PostgreSQL instance:
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
- addresses:
  - ip: ipAddressOfYourPGInstance
  ports:
  - port: 5432
And then use:
kubectl port-forward service/postgresql 5432:5432
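With the forward running, any local client can then reach the instance through localhost; for example with psql (assuming a postgres user, adjust to your credentials):
# In another terminal, while the port-forward is running
psql -h localhost -p 5432 -U postgres -d postgres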
Alternatively, you can run a Postgres client inside the cluster to connect to the Postgres instance, expose that pod using an Ingress, and access its UI over the URL.
As the Postgres client, you can use: https://hub.docker.com/r/dpage/pgadmin4/
You can set this up as the pgAdmin client and use it; a minimal sketch follows.
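A sketch of running that pgAdmin image in the cluster; the name, credentials, and namespace are placeholders, and you would still put a Service and Ingress in front of it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
      - name: pgadmin
        image: dpage/pgadmin4
        ports:
        - containerPort: 80               # pgAdmin serves its UI on port 80 by default
        env:
        - name: PGADMIN_DEFAULT_EMAIL
          value: admin@example.com        # placeholder login
        - name: PGADMIN_DEFAULT_PASSWORD
          value: change-me                # placeholder; use a Secret in practice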

Access SQL Server database from Kubernetes Pod

My deployed Spring Boot application is trying to connect to an external SQL Server database from a Kubernetes pod, but every time it fails with the error:
Failed to initialize pool: The TCP/IP connection to the host <>, port 1443 has failed.
Error: "Connection timed out: no further information.
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
I have exec'd into the pod and can successfully ping the DB server without any issues.
Below are the solutions I have tried:
Created a Service and Endpoints, provided the DB IP in the configuration file, and tried to bring up the application in the pod
Tried using the internal IP from the Endpoints instead of the DB IP in the configuration, to see whether the internal IP resolves to the DB IP
Both cases gave the same result. Below is the yaml I am using to create the Service and Endpoints.
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
  namespace: cattle
spec:
  type: ClusterIP
  ports:
  - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
  namespace: cattle
subsets:
- addresses:
  - ip: <<DB IP>>
  ports:
  - port: 1433
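For reference, with a Service/Endpoints pair like this, the application's datasource URL would point at the in-cluster service name rather than the raw DB IP; a sketch (the database name is a placeholder):
# application.properties (sketch)
spring.datasource.url=jdbc:sqlserver://mssql.cattle.svc.cluster.local:1433;databaseName=mydb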
Please let me know if I am wrong or missing something in this setup.
Additional information about the K8s setup:
It is a clustered master with an external etcd cluster topology
The OS on the nodes is CentOS
I am able to ping the server from all nodes and from the pods that are created
For this scenario a headless service is very useful. You will redirect traffic to this IP without defining an endpoint.
kind: "Service"
apiVersion: "v1"
metadata:
namespace: "your-namespace"
name: "ftp"
spec:
type: ExternalName
externalName: your-ip
The issue was resolved by updating the deployment yaml with the IP address. Since all the servers were in the same subnet, I did not need to create a Service or Endpoints to access the DB. Thank you for all the inputs on the post.

Connect to local database from inside minikube cluster

I'm trying to access a MySQL database hosted inside a Docker container on localhost from inside a minikube pod, with little success. I tried the solution described in Minikube expose MySQL running on localhost as service but to no effect. I have modelled my solution on the service we use on AWS, but it does not appear to work with minikube. My service reads as follows:
apiVersion: v1
kind: Service
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  type: ExternalName
  ExternalName: 172.17.0.2
...where I try to connect to my database from inside a pod using "mysql-db-svc" on port 3306, but to no avail. If I try to curl the address "mysql-db-svc" from inside a pod, it cannot resolve the host name.
Can anybody please advise a frustrated novice?
I'm using Ubuntu with minikube, and my database runs outside of minikube inside a Docker container and can be accessed from localhost at 172.17.0.2. My Kubernetes service for my external MySQL container reads as follows:
kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  type: ExternalName
  externalName: 10.0.2.2
Then inside my .env for a project my DB_HOST is defined as
mysql-db-svc.external.svc
... that is, the name of the service "mysql-db-svc", followed by its namespace "external", followed by "svc".
Hope that makes sense.
If I'm not mistaken, you should also create an Endpoints object for this service, as it's external.
In your case, the Endpoints definition should be as follows:
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: mysql-db-svc
namespace: external
subsets:
- addresses:
- ip: "10.10.1.1"
ports:
port: 3306
You can read about the external sources on Kubernetes Defining a service docs.
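To check whether such a selectorless Service/Endpoints pair resolves from inside the cluster, a throwaway client pod can help; a sketch, assuming the MySQL root user:
kubectl run -it --rm mysql-client --image=mysql:8 --restart=Never -- \
  mysql -h mysql-db-svc.external.svc.cluster.local -u root -p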
This is because your service type is ExternalName, which only fits cloud environments such as AWS and GKE. To run your service locally, change the service type to NodePort, which will assign a static NodePort between 30000-32767. If you need to assign a static port yourself so that minikube won't pick a random port for you, define that in your service definition under the ports section like this: nodePort: 32002.
Also, I don't see any selector that points to your MySQL deployment in your service definition. Include the corresponding selector key pair (e.g. app: mysql-server) in your service definition under the spec section. That selector should match the selector you have defined in your MySQL deployment definition.
So your service definition should be like this:
kind: Service
apiVersion: v1
metadata:
  name: mysql-db-svc
  namespace: external
spec:
  selector:
    app: mysql-server
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
    nodePort: 32002
  type: NodePort
After you deploy the service you can reach MySQL at {minikube ip}:32002. Replace {minikube ip} with the actual minikube IP.
Or else you can get the access URL for the service with the following command:
minikube service <SERVICE_NAME> --url
Replace <SERVICE_NAME> with the actual name of the service. In your case it is mysql-db-svc.
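Since the service in the question lives in the external namespace, the command would be along these lines:
minikube service mysql-db-svc --url --namespace external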
I was also facing a similar problem where I needed to connect a pod inside minikube with a SQL Server container on the machine.
I noticed that minikube is itself a container in the local Docker environment, and during its setup it creates a local Docker network called minikube. I connected my local SQL Server container to this minikube Docker network using docker network connect minikube <SQL Server Container Name> --ip=<any valid IP on the minikube network subnet>.
I was then able to access the local SQL Server container using that IP address on the minikube network.
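A sketch of that docker network connect step; the container name and IP are placeholders, and the subnet should be whatever docker network inspect minikube reports on your machine:
# Attach the local SQL Server container to minikube's Docker network
docker network connect --ip 192.168.49.100 minikube sqlserver
# Pods can then reach it at 192.168.49.100:1433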
The above solutions somehow didn't work for me. What finally worked is the Terraform configuration below:
resource "kubernetes_service" "host" {
metadata {
name = "minikube-host"
labels = {
app = "minikube-host"
}
namespace = "default"
}
spec {
port {
name = "app"
port = 8082
}
cluster_ip = "None"
}
}
resource "kubernetes_endpoints" "host" {
metadata {
name = "minikube-host"
namespace = "default"
}
subset {
address {
// This ip comes from command: minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1'
ip = "192.168.65.2"
}
port {
name = "app"
port = 8082
}
}
}
Then I can access the local service (e.g. Postgres or MySQL) on my Mac with the host minikube-host.default.svc.cluster.local from k8s pods.
The plain yaml file version and more details can be found in this issue.
Details on the minikube host access host.minikube.internal can be found here.
Alternatively, the raw IP address from the command minikube ssh 'grep host.minikube.internal /etc/hosts | cut -f1' (e.g. "192.168.65.2") can be used directly as the service host in code, instead of 127.0.0.1/localhost, and then none of the above configuration is required.
As an addendum to @Crou's answer from 2018: in 2022, the Kubernetes docs say ExternalName takes in a string and not an address. So, in case ExternalName doesn't work, you can also use the simpler option of services without selectors.
You can also refer to this Google Cloud Tech video for how the services-without-selectors concept works.