I have a sharded & replicated MongoDB cluster which uses keyFile auth.
I am trying to configure the MongoDB MMS Agent to communicate with all of the cluster members.
I've tried installing the MMS agent on every cluster member and giving mms.10gen.com each member's IP and port. The agent reports that it is unauthorized and I get no data.
It appears that MMS does not support keyFile auth, but is this not the standard production cluster setup?
How can I set up MMS for this kind of cluster?
I posted this on the 10gen-mms mailing list and found the answer.
keyFile authentication is meant only for intra-cluster communication and communication between MongoS instances and the cluster.
Specifying keyFile auth means you can access the cluster through a MongoS (which holds the keyFile) without a username/password, as long as no users exist.
Once a user is created, user authentication is required as well.
You can create a user locally on each of the MongoD instances and use that to connect directly to them without a keyFile.
So the solution was to create users for MMS and my front-end service and to use user authentication.
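For reference, a rough sketch of what that user creation can look like from a mongos, assuming MongoDB 2.6+ syntax (the host, user name, password, and role below are placeholders; the exact roles the MMS agent needs depend on your MMS and MongoDB versions):

# Hypothetical sketch: create a monitoring user for the MMS agent via mongos,
# using the window where keyFile auth alone is enough because no users exist yet.
mongo --host mongos.example.com --port 27017 admin --eval '
  db.createUser({
    user: "mms-agent",            // placeholder name
    pwd: "change-me",             // placeholder password
    roles: [ "clusterMonitor" ]   // built-in role covering monitoring reads
  })
'
# Repeat with an appropriate role for the front-end service user.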
Related
I have a GKE cluster and I would like to connect some, but not all (!), pods and services to a managed Postgresql Cloud DB running in the same VPC.
Of course, I could just go for it (https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine), but I would like to make sure that only the pods and services that should connect to the Postgresql DB are actually able to do so.
I thought of creating a separate node pool in my GKE cluster (https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools) where only the pods and services that should be able to connect to the Postgresql DB run, and then allowing only those to connect by telling the DB which IPs to accept. However, it seems that I cannot assign dedicated IPs at the node pool level, only at the cluster level.
Do you have an idea how I can make such a restriction?
When you create your node pool, create it with a service account that doesn't have permission to access Cloud SQL instances.
Then leverage Workload Identity to load a specific service account into some of your pods, and grant that service account permission to access the Cloud SQL instance.
You asked how to know which IPs to restrict Cloud SQL access to. That's the wrong (or at least a legacy) assumption; Google's guidance is "Don't trust the network (and so, the IPs)". Basing your security on identity (the service account of the node pool, and of the pod through Workload Identity) is a far better option.
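A rough sketch of that setup with gcloud and kubectl, assuming Workload Identity is already enabled on the cluster (every name, project, and zone below is a placeholder):

# Node pool whose node service account has no Cloud SQL permissions.
gcloud container node-pools create restricted-pool \
  --cluster=my-cluster --zone=europe-west1-b \
  --service-account=no-sql-sa@my-project.iam.gserviceaccount.com \
  --workload-metadata=GKE_METADATA

# Google service account that IS allowed to reach Cloud SQL.
gcloud iam service-accounts create sql-client
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:sql-client@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"

# Kubernetes service account for the pods that should connect, bound via Workload Identity.
kubectl create serviceaccount sql-client -n my-namespace
gcloud iam service-accounts add-iam-policy-binding \
  sql-client@my-project.iam.gserviceaccount.com \
  --member="serviceAccount:my-project.svc.id.goog[my-namespace/sql-client]" \
  --role="roles/iam.workloadIdentityUser"
kubectl annotate serviceaccount sql-client -n my-namespace \
  iam.gke.io/gcp-service-account=sql-client@my-project.iam.gserviceaccount.com

Pods that set serviceAccountName: sql-client then reach Cloud SQL as sql-client@my-project, while pods running under the default service account have no Cloud SQL permissions at all.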
We have an airflow task that adds data to the mongodb server.
We can connect to the MongoDB server only via an IP access list or VPC peering.
We are having issues with VPC peering, so we thought we could just enable direct IP access between the Airflow workers and the MongoDB server.
Has anyone done that?
If not, do you have another suggestion?
Let's say I have an EKS cluster with multiple pods hosting different applications. I want to allow connections from a specific application to an RDS instance without allowing all the pods in the EKS cluster to connect to the RDS.
After some research, I found out that there's a networking approach to solve the issue, by creating security groups for pods. But I am trying to look for another approach.
I was expecting to have a simple setup where I can:
create IAM policy with read/write permissions to the DB
create an IAM role and attach that policy
create a service account (IAM and k8s service accounts) with that role
assign the service account to the pods I want to grant RDS access.
But it seems like the only way to get IAM authentication from pods to RDS is to keep generating a new token every 15 minutes. Is this the only way?
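For reference, the kind of setup I had in mind, sketched with eksctl (cluster, namespace, and policy ARN are placeholders, and it assumes an IAM OIDC provider is already associated with the cluster):

# Hypothetical sketch: bind an IAM role (with rds-db:connect on the target DB user)
# to a Kubernetes service account, then set serviceAccountName on the chosen pods.
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace my-app \
  --name rds-access \
  --attach-policy-arn arn:aws:iam::123456789012:policy/rds-connect-policy \
  --approve

Even with this in place, the application still has to request a fresh RDS IAM auth token (valid for about 15 minutes) for each new connection, which is the part I was hoping to avoid.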
I would recommend dividing this task into the following parts:
making sure the security concept is sound
allowing network traffic from EC2 worker nodes to RDS
creating Egress network policy for the cluster to only allow RDS traffic for specific pods
making sure the security concept is sound
The question here is why you would want only specific pods to access the RDS database. Is there a trust issue with the parties deploying pods to the cluster, or is it a matter of compliance (some departments can't access other departments' resources)? If it's trust, maybe the separation here is not enough; maybe you should not allow untrusted pods on your cluster at all (there are vulnerabilities that allow gaining root through Docker).
allowing network traffic from EC2 worker nodes to RDS
For this, the security group of the EC2 worker nodes must be allowed on the RDS side (inbound rules), and, just to be sure, the SG of the EKS cluster nodes should also allow connections to RDS. Without these generic rules the traffic can't flow. You can be specific here, for example by allowing access to only specific RDS instances rather than all of them. You can also have more than one node group in your EKS cluster (and run the pods that require RDS access only on those nodes, using labels).
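A minimal sketch of that inbound rule with the AWS CLI (security group IDs are placeholders; port 5432 is just the Postgres example, use your engine's port):

# Allow the worker-node security group to reach the RDS security group on the DB port.
# --group-id:     the RDS instance's security group (placeholder ID)
# --source-group: the EKS worker nodes' security group (placeholder ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp \
  --port 5432 \
  --source-group sg-0bbbbbbbbbbbbbbbb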
creating Egress network policy for the cluster
If you create a default deny-all Egress policy (which is highly recommended), then no pods can access RDS by default. You can then apply more granular Egress policies that allow access to the RDS database by namespace, label, or pod name. More here:
https://www.cncf.io/blog/2020/02/10/guide-to-kubernetes-egress-network-policies/
Clarification: in this scenario you store the secrets needed to access the database in Kubernetes secrets, mount them into the pod, and log in normally. If you just want a connection without auth, my answer won't help you.
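A sketch of the two policies described above, applied with kubectl (namespace, label, CIDR, and port are placeholders; note that a deny-all egress policy also blocks DNS, so you typically need an extra rule allowing port 53 to kube-dns):

kubectl apply -f - <<'EOF'
# Default: no pod in this namespace may open outbound connections.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: my-app
spec:
  podSelector: {}
  policyTypes: ["Egress"]
---
# Exception: pods labelled rds-access=true may reach the RDS subnet on the DB port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rds-egress
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      rds-access: "true"
  policyTypes: ["Egress"]
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.50.0/24   # placeholder: subnet where the RDS instance lives
      ports:
        - protocol: TCP
          port: 5432             # placeholder: your engine's port
EOF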
I have a MongoDB Atlas database which is set up with VPC peering to a VPC in AWS. This works fine and I'm able to access it from inside the VPC. I was, however, hoping to provide a jumpbox so that developers could use an SSH tunnel to connect to the Atlas database from their workstations outside of the VPC.
Developer workstation --> SSH Tunnel to box in VPC --> Atlas
I'm having trouble with that, however, because I'm not sure what tunnel I need to set up. It looks to me like Mongo connects by looking up replica set information in a DNS seed list (mongodb+srv://), so it isn't as simple as doing
ssh user@jumpbox -L 27017:env.somehost.mongodb.net:27017
Is there a way to enable direct connections on Atlas so that I can enable developers to access this database through an SSH tunnel?
For a replica set connection this isn't going to work with just MongoDB and a driver, but you can try running a proxy like https://github.com/coinbase/mongobetween on the jumpbox.
For standalone deployments you can connect through tunnels, since the driver uses the address you supply and that's the end of it. Use the directConnection URI option to force a standalone connection to a node of any deployment. While this allows you to connect to any node, you have to connect to the right node for replica sets (you can't write to secondaries), so this approach has limited utility for replica set deployments.
For mongos deployments that are not on Atlas the standalone behavior applies. With Atlas there are SRV records published which the driver follows, therefore for the tunneling purposes an Atlas sharded cluster behaves like a replica set and you can't trivially proxy connections to it. mongobetween may also work in this case.
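If a direct connection to a single node is enough (for example for read-only debugging, or when that node is the primary), a sketch of the tunnel plus a forced standalone connection looks like this (the member hostname comes from resolving the SRV record and is a placeholder here):

# Forward a local port to one replica set member through the jumpbox.
ssh user@jumpbox -L 27017:<one member host from the SRV record>:27017 -N &

# Connect to the tunnel as if it were a standalone node. tlsAllowInvalidHostnames
# is needed because the server certificate is for the Atlas hostname, not localhost.
mongosh "mongodb://localhost:27017/?directConnection=true&tls=true&tlsAllowInvalidHostnames=true" \
  -u <atlas db user> -p

Remember that writes only succeed if the member behind the tunnel is the current primary.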
Followed the instructions here to create a local 3-node secure cluster.
Got the Go example app running with the following DB connection string to connect to the secure cluster:
sql.Open("postgres", "postgresql://root#localhost:26257/dbname?sslmode=verify-full&sslrootcert=<location of ca.crt>&sslcert=<location of client.root.crt>&sslkey=<location of client.root.key>")
CockroachDB worked well locally, so I decided to move the DB (as in the DB solution, not the actual data) to GCP Kubernetes Engine using the instructions here.
Everything worked fine - pods created and could use the built in SQL client from the cloud console.
Now I want to use the previous example app to connect to this new cloud DB. I created a load balancer using the kubectl expose command and got a public IP to use in the code.
How do I get the new ca.crt, client.root.crt, client.root.key files to use in my connection string for the DB running on GCP?
We have 5+ developers and the idea is to have them write code on their local machines and connect to the cloud db using the connection strings and the certificates.
Or is there a better way to let 5+ developers use a single DEV DB cluster running on GCP?
The recommended way to run against a Kubernetes CockroachDB cluster is to have your apps run in the same cluster. This makes certificate generation fairly simple. See the built-in SQL client example and its config file.
The config above uses an init container to send a CSR for client certificates and makes them available to the container (in this case just the cockroach sql client, but it could be anything else).
If you wish to run a client outside the Kubernetes cluster, the simplest way is to copy the generated certs directly from the client pod. It's recommended to use a non-root user (see the sketch after this list):
create the user through the SQL command
modify the client-secure.yaml config for your new user and start the new client pod
approve the CSR for the client certificate
wait for the pod to finish initializing
copy the ca.crt, client.<username>.crt and client.<username>.key from the pod onto your local machine
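A rough sketch of those steps with kubectl and the built-in SQL client, assuming the pod and paths from the linked secure-Kubernetes config (pod name, user name, and CSR name are placeholders and depend on your config):

# 1. Create the new SQL user from the existing secure client pod.
kubectl exec cockroachdb-client-secure -- \
  ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public \
  -e "CREATE USER devuser WITH PASSWORD 'change-me';"

# 2. After starting the client pod modified for devuser, approve its certificate request.
kubectl certificate approve default.client.devuser

# 3. Once that pod has finished initializing, copy the certs to the local machine.
mkdir -p certs
kubectl cp cockroachdb-client-secure:/cockroach-certs/ca.crt certs/ca.crt
kubectl cp cockroachdb-client-secure:/cockroach-certs/client.devuser.crt certs/client.devuser.crt
kubectl cp cockroachdb-client-secure:/cockroach-certs/client.devuser.key certs/client.devuser.key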
Note: the public DNS or IP address of your kubernetes cluster is most likely not included in the node certificates. You either need to modify the list of hostnames/addresses before bringing up the nodes, or change your connection URL to sslmode=verify-ca (see client connection parameters for details).
Alternatively, you could use password authentication in which case you would only need the CA certificate.
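For example, a connection from a developer machine through the load balancer might then look like this (IP, database, user, and certificate paths are placeholders):

# Client certificate auth, verifying the server against the copied CA only (verify-ca).
cockroach sql --url "postgresql://devuser@<load balancer IP>:26257/dbname?sslmode=verify-ca&sslrootcert=certs/ca.crt&sslcert=certs/client.devuser.crt&sslkey=certs/client.devuser.key"

# Or, with password authentication, only the CA certificate is needed.
cockroach sql --url "postgresql://devuser@<load balancer IP>:26257/dbname?sslmode=verify-ca&sslrootcert=certs/ca.crt"

The same URL parameters work in the Go app's sql.Open connection string from the first snippet.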