I have MongoDB running on an EC2 instance. After setting mongod.conf to accept traffic from 0.0.0.0, I am able to connect and send queries from my local machine. This instance is set to accept all traffic on port 27017.
I have an Express app using Mongoose, deployed to EC2 on a different instance. However, I cannot connect to the Mongo instance from the Express instance. I checked the outbound traffic rules: port 27017 is enabled explicitly, though all outbound traffic is enabled as well.
I can't figure out why I would be able to connect from my local machine but not from my EC2 instance. The only thing I can think of is perhaps some setting in the VPC these instances are in; both instances share the same VPC, and both are running Ubuntu. The only other difference between my local environment and the deployment environment is that I'm running Node 11 (macOS) locally and Node 8 in deployment. Any ideas?
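For reference, a connectivity check from the Express instance would look something like this (the private IP is a placeholder for the Mongo instance's address; if nc cannot reach the port, the security group or bind address is the likely culprit):

nc -vz 172.31.10.20 27017
mongo --host 172.31.10.20 --port 27017 --eval 'db.runCommand({ ping: 1 })'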
I need to allow inbound connections from a remote platform to do some administrative tasks on one of my databases (in my case, allowing a reverse-ETL service to feed one of my PostgreSQL databases in a pod in my k8s cluster).
The remote platform lets me configure a PostgreSQL destination through SSH tunnels, reverse SSH tunnels, or direct connections. Of course, I would like traffic to be encrypted, so I'm opting for the SSH or reverse SSH tunnel.
Any idea if/how I can set this access up on my k8s cluster?
I would like to give the remote service access ONLY to one of my PG databases (and not the whole cluster/namespace, for security reasons).
The scenario I was thinking about:
1. Traefik listens for SSH on a specific port (like 2222).
2. Route this port to an SSH bastion pod capable of managing incoming SSH connections and logging in as a specific Linux user. Only allow connections from the remote service's IPs via an IP whitelist middleware.
3. Allow connections from this bastion host pod (or ideally, this Linux user) ONLY to my PostgreSQL instance on the default PG port.
If I open a bastion host (2), by default all my users will have access to all services on the cluster... right? How can I isolate my bastion host instance so that it can only connect to PG? I haven't used network policies yet, but I believe they may be the answer... however, would it be possible to activate network policies for a single pod only (my bastion host) and leave the rest as it is?
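For reference, network policies are applied per pod via a podSelector, so scoping one to just the bastion is possible. A minimal sketch, assuming the bastion pod is labelled app=ssh-bastion and the PostgreSQL pods are labelled app=postgresql in the same namespace (all names are placeholders, and the CNI must enforce NetworkPolicy):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: bastion-egress-postgres-only
  namespace: default               # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: ssh-bastion             # only the bastion pod is affected
  policyTypes:
    - Egress
  egress:
    - ports:                       # allow DNS lookups anywhere
        - protocol: UDP
          port: 53
    - to:
        - podSelector:
            matchLabels:
              app: postgresql      # assumed label on the PG pods
      ports:
        - protocol: TCP
          port: 5432
EOF

Pods not selected by any policy keep their default open behaviour, so the rest of the cluster is left as it is.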
I have a REST API running locally on my laptop at https://localhost:5001/something. I want that to be reachable inside the Kubernetes cluster via a K8s DNS name; for example, an application running inside a Pod could use some-service instead of the entire URL.
Also, since localhost is relative to the host machine, how would I get the Service or ExternalName to reach localhost on the host machine, instead of inside the K8s cluster?
I tried docker.host.internal (as suggested here) but that didn't work.
And this from K8s documentation says that it can't be the loopback:
The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
I'm running:
Host Machine: Ubuntu 20.04
K8s: k3d
Web API: .NET Core 3.1 on Linux, created with dotnet new webapi MyAPI
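One sketch that stays within that constraint, assuming a reasonably recent k3d (which injects the host machine into cluster DNS as host.k3d.internal; the Service name is a placeholder):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  type: ExternalName
  externalName: host.k3d.internal   # resolves to the k3d host machine
EOF

Pods can then call https://some-service:5001/something. On other setups, a Service without a selector plus an Endpoints object pointing at the laptop's LAN IP works too, since the quoted restriction only rules out loopback and link-local addresses.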
Telepresence is a tool created for exactly this kind of quick local testing of your application against a k8s cluster. It allows you to run a single service locally while connecting it to a remote Kubernetes cluster.
It substitutes a two-way network proxy for your normal pod running in the Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to the remote Kubernetes cluster.
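As a rough illustration with Telepresence 2.x (the service name is a placeholder and flags vary between versions):

telepresence connect                               # dial the two-way proxy into the cluster
telepresence intercept some-service --port 5001    # traffic for some-service is routed to localhost:5001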
An alternative would be to create a Service backed by an SSH server running in a pod and open a reverse tunnel from your local machine to it.
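A minimal sketch of that idea, with all names as placeholders: an SSH server pod exposed as Service ssh-bastion on ports 22 and 5001, with GatewayPorts enabled in its sshd_config so other pods can reach the forwarded port:

kubectl port-forward svc/ssh-bastion 2222:22 &
ssh -N -p 2222 -R 0.0.0.0:5001:localhost:5001 user@127.0.0.1
# other pods can now reach the laptop's API at http://ssh-bastion:5001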
I have a MongoDB Atlas database which is set up with VPC peering to a VPC in AWS. This works fine and I'm able to access it from inside the VPC. I was, however, hoping to provide a jumpbox so that developers could use an SSH tunnel to connect to the Atlas database from their workstations outside of the VPC.
Developer workstation --> SSH Tunnel to box in VPC --> Atlas
I'm having trouble with that, however, because I'm not sure what tunnel I need to set up. It looks to me like Mongo connects by looking up replica information in a DNS seed list (mongodb+srv://), so it isn't as simple as doing:
ssh user@jumpbox -L 27017:env.somehost.mongodb.net:27017
Is there a way to enable direct connections on Atlas so that developers can access this database through an SSH tunnel?
For a replica set connection this isn't going to work with just MongoDB and a driver, but you can try running a proxy like https://github.com/coinbase/mongobetween on the jumpbox.
For standalone deployments you can connect through tunnels, since the driver uses the address you supply and that's the end of it. Use the directConnection URI option to force a standalone-style connection to a node of any deployment. While this allows you to connect to any node, you have to connect to the right node for replica sets (you can't write to secondaries), so this approach has limited utility for replica set deployments.
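For example, something along these lines (hostnames and credentials are placeholders; Atlas enforces TLS, and the certificate will not match localhost, hence the extra options):

ssh user@jumpbox -L 27017:env-shard-00-00.somehost.mongodb.net:27017
mongosh "mongodb://dbuser:dbpass@localhost:27017/test?directConnection=true&tls=true&tlsAllowInvalidHostnames=true&authSource=admin"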
For mongos deployments that are not on Atlas, the standalone behavior applies. With Atlas there are SRV records published which the driver follows; therefore, for tunneling purposes, an Atlas sharded cluster behaves like a replica set and you can't trivially proxy connections to it. mongobetween may also work in this case.
I need to give access to a MongoDB instance running inside an AWS EC2 instance (private subnet) in the most secure way. So I thought of granting access to this MongoDB instance by putting it behind an ELB.
I have created a target group for port 27017 and added the EC2 instance as a target. The security group of the EC2 instance allows access to 27017 from 0.0.0.0/0.
The ELB is exposed externally via ports 80 and 443 and forwards all requests to the target group on port 27017.
But I'm unable to access this instance via the ELB domain. I tried accessing it via the MongoDB Compass client.
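For reference, a raw TCP check against the listener (the ELB DNS name is a placeholder) would show whether anything is being forwarded at all, since MongoDB speaks its own wire protocol rather than HTTP:

nc -vz my-elb-1234567890.us-east-1.elb.amazonaws.com 443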
I am new to Google Cloud Platform, and this is my context:
I have a Compute Engine VM running as a MongoDB server and a Compute Engine VM running a NodeJS server (already using Docker). The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application's Docker image to the cluster.
All services like GCE and GKE are in the same region (us-east-1).
I did a hard test: accessing a Kubernetes cluster node via SSH, deploying a simple MongoDB Docker image, and trying to connect to the remote MongoDB server from the command line, but the problem is the same: a timeout when trying to connect.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and neither appears to be blocking anything.
Does anyone know what may be happening? Thank you very much.
In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster pod network listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
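Something along these lines should create that rule with gcloud (the rule name and port are placeholders; 10.8.0.0/14 is the pod range shown above):

gcloud compute firewall-rules create allow-gke-pods-to-mongodb \
  --network=default \
  --source-ranges=10.8.0.0/14 \
  --allow=tcp:27017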
By default, containers in a GKE cluster should be able to access GCE VMs in the same VPC through internal IPs. It is just like accessing the internet (e.g., google.com) from GKE containers: GKE and the VPC know how to route the traffic. The problem must be with some other configuration (firewall or your application).
You can do a test: start a simple HTTP server on the GCE VM, say with internal IP 10.138.0.5:
python3 -m http.server 8080    # or: python -m SimpleHTTPServer 8080 on Python 2
then create a GKE container and try to access the service:
kubectl run my-client -it --rm --restart=Never --image=tutum/curl -- curl http://10.138.0.5:8080