Connect to MongoDB server through Elastic IP - mongodb

I have one AWS EC2 instance with MongoDB installed on it.
Private IP: 10.x.x.x
Port: 27017
I can SSH into that system and connect to the MongoDB server using the private IP from within the VPN.
10.x.x.x:27017 - MongoDB is running here.
I have also assigned an Elastic IP to that EC2 instance.
Public IP: 132.x.x.x
When I try to connect to the MongoDB server using the public IP (132.x.x.x:27017), the connection times out.
MongoDB network config, /etc/mongod.conf:
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
I am starting the MongoDB server with:
sudo mongod
Inbound rules:
27017 tcp 0.0.0.0/0
27017 tcp 0.0.0.0/0, ::/0

Check to make sure your setup has the following:
The elastic IP is attached to the instance.
The security group allows incoming traffic from the client on the correct port.
The network ACL of the subnet allows the needed inbound and outbound traffic, or you're using the default network ACL, which allows all inbound/outbound traffic.
An Internet Gateway is in the same VPC as the instance.
There is a rule in the subnet's route table that sends internet-bound traffic to the Internet Gateway.
You may also find this AWS article helpful for using the Internet Gateway in your VPC.
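If you want to check these from the command line, here is a rough sketch with the AWS CLI (all resource IDs below are placeholders for illustration):
# Confirm the Elastic IP is associated with the instance (instance ID is a placeholder)
aws ec2 describe-addresses --filters Name=instance-id,Values=i-0123456789abcdef0
# Check the security group's inbound rules for port 27017 (group ID is a placeholder)
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
# Check the subnet's route table for a 0.0.0.0/0 route to an Internet Gateway (subnet ID is a placeholder)
aws ec2 describe-route-tables --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0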

Related

OpenSearch Dashboards with Podman gets the wrong unexposed IP

I have a machine X with a lot of IPs. podman-compose with OpenSearch and OpenSearch Dashboards binds the containers to the wrong (unexposed) IP. I tried to force the IP, but if I do so, podman-compose breaks. How can I fix this?
I tried to add an IPv4 address in the docker-compose.yml, and I tried to modify the images to force the right IP wherever I found 0.0.0.0, but it keeps breaking.
Docker / Podman container IPs are not accessible from external clients.
You need to expose TCP or UDP ports from your container to the host system, and then clients will connect to <host IP>:<host port>.
The host port and the container port do not need to be the same port.
For example, you can run multiple web server containers all using port 80; however, you will need to pick unique ports on your host OS that are not used by other services to port-map to the containers, e.g. 80->80, 81->80, 8080->80, etc.
Once you create the port definitions in your container configuration, Podman will handle the port forwarding from the host to the container.
You might need to open the ports on the host firewall to allow clients to connect. 0.0.0.0 means the service listens on all interfaces of the local host.
Let's say your host is 10.1.1.20, your OpenSearch Dashboards container is 172.16.8.4, and your dashboard web app is configured to listen on port 5001/TCP.
You will need a ports directive in your docker-compose.yml file to map the host port 5001 to the container port 5001, similar to the one below.
services:
  opensearch-dashboard:
    ports:
      - "5001:5001"
As long as port 5001 is permitted on your host firewall, the client should be able to connect using https://10.1.1.20:5001/
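As a rough sketch (the container name and the use of firewalld are assumptions, adjust for your setup), you can verify the published mapping and open the host firewall like this:
# Show the published port mapping for the container (container name is an assumption)
podman port opensearch-dashboard
# On a firewalld-based host, allow inbound 5001/TCP
sudo firewall-cmd --add-port=5001/tcp --permanent
sudo firewall-cmd --reload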

Unable to connect to postgresql remotely through pgadmin on port 8085 on Google Cloud

I have a postgres server listening on all IP addresses on port 8085. Even after following the Google Cloud instructions here to open port 8085 (instead of the default 5432) through firewall rules, I'm still getting the following error. I've set up both egress and ingress firewall rules with the same IP address as the source (for ingress) and destination (for egress rules).
Error:
could not connect to server: Connection timed out (0x0000274C/10060) Is the server running on host "xx.xx.xxx.xx" and accepting TCP/IP connections on port 8085?
For the ingress rule, set the following values for source and destination:
The source is the client that originates the request, so the source IP is 'any' and the source port is 'any'. The destination is the server that serves the client request, so the destination IP is the public IP of your VM and the destination port is 8085.
For the egress rule, the source and destination values mirror the ingress rule: the source IP is the server's IP address and the source port is 8085; the destination IP is 'any' and the destination port is also 'any'.
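As a sketch only, the ingress rule could be created with the gcloud CLI like this (the rule name and target tag are placeholders; note that VPC networks ship with an implied allow-all egress rule, so the ingress rule is usually the one that matters):
# Allow inbound TCP 8085 from anywhere to VMs tagged "pg-server" (name and tag are placeholders)
gcloud compute firewall-rules create allow-postgres-8085 \
    --direction=INGRESS --action=ALLOW --rules=tcp:8085 \
    --source-ranges=0.0.0.0/0 --target-tags=pg-server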

Connecting to a mongoDB server on a Windows Azure VM using Robomongo

I am attempting to set up a MongoDB server on an Azure VM and cannot seem to connect to it from an external client.
Here is what I have done so far:
I have created a Windows Server 2016 VM
I have installed MongoDB as a service and started it on the new VM
I have added an inbound rule in the firewall for MongoDB on port 27017 with the following configuration:
Name: Allow MongoDB
Profile: All
Enabled: Yes
Action: Allow
Override: No
Program: Any
Local Address: Any
Remote Address: Any
Protocol: TCP
Local Port: 27017
Remote Port: Any (The rest of the settings are also set to Any)
I have created a Network Security Group on Windows Azure
On the network security group I have set the Inbound security rules to the following configuration:
Priority: 100
Name: AllowHttp
Source: Any
Destination: Any
Service: Custom(Any/80)
Action: Allow
In the network security group's Subnets section, I associated it with the subnet of my Azure VM's virtual network.
I am trying to connect from my local PC to the VM's MongoDB installation using Robomongo, with connection type Direct Connection, the address set to the VM's public IP displayed on the VM's summary, and port 27017.
When I attempt this I get the following error:
Does anyone know what I am doing wrong?
You added an NSG rule for port 80, but you are trying to access port 27017, so the NSG will block you. Try adding an Allow rule for 27017 on the NSG.
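A rough sketch of that rule with the Azure CLI (the resource group and NSG names are placeholders):
# Allow inbound TCP 27017 on the NSG (resource group and NSG names are placeholders)
az network nsg rule create \
    --resource-group my-rg --nsg-name my-nsg \
    --name AllowMongoDB --priority 110 \
    --direction Inbound --access Allow --protocol Tcp \
    --destination-port-ranges 27017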

How to expose NodePort to internet on GCE

How can I expose a service of type NodePort to the internet without using type LoadBalancer? Every resource I have found does it with a load balancer. But I don't want load balancing; it's expensive and unnecessary for my use case, because I am running one instance of a postgres image mounted to a persistent disk, and I would like to be able to connect to my database from my PC using pgAdmin. If possible, could you please provide a bit more detailed answer, as I am new to Kubernetes, GCE and networking.
Just for the record and a bit more context: I have a deployment running 3 replicas of my API server, to which I connect through a load balancer with loadBalancerIP set, and another deployment running one instance of postgres with a NodePort service through which my API servers communicate with the db. My problem is that maintaining the db without public access is hard.
Using NodePort as the Service type works straight away, e.g. like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  - port: 443
    nodePort: 30443
    name: https
  selector:
    name: nginx
More details can be found in the documentation.
The drawback of using NodePort is that you have to take care of integrating with your provider's firewall yourself. A starting point for that can also be found in the Configuring Your Cloud Provider's Firewalls section of the official documentation.
For GCE, opening up the above publicly on all nodes could look like:
gcloud compute firewall-rules create myservice --allow tcp:30080,tcp:30443
Once this is in place your services should be accessible through any of the public IPs of your nodes. You'll find them with:
gcloud compute instances list
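After that, a quick check from your PC could look like this (the node IP is a placeholder taken from the instances list):
# Reach the service on any node's external IP via the nodePort (IP is a placeholder)
curl http://203.0.113.10:30080/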
You can run kubectl in a terminal window (Command Prompt or PowerShell on Windows) to port-forward the postgresql deployment to your localhost.
kubectl port-forward deployment/my-pg-deployment 5432:5432
While this command is running (it runs in the foreground) you can point pgAdmin at localhost:5432 to access your pod on GKE. Simply close the terminal once you are done using pgAdmin.
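For example (a sketch only; the user and database names are assumptions), with the port-forward running in one terminal you could connect from another:
# Connect through the forwarded port; user and database names are assumptions
psql -h localhost -p 5432 -U postgres -d postgres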
For the sake of improved security: if in doubt about exposing a service like a database to the public internet, you might like the idea of hiding it behind a simple Linux VM called a jump host (also called a bastion host in the official GCP documentation), which is the recommended approach. This way your database instance stays open only towards the internal network. You can then remove the external IP address so that it stops being exposed to the internet.
The high level concept:
public internet <- SSH:22 -> bastion host <- db:5432 -> database service
After setting up and establishing the SSH connection, you can reach the database by forwarding the database port (see the example below).
The Procedure Overview
Create the GCE VM
Specific requirements:
Pick the image of a Linux distribution you are familiar with
VM Connectivity to internet: Attach a public IP to the VM (you can do this during or after the installation)
Security: go to Firewall rules and add a new rule opening port 22 on the VM. Restrict the incoming connections to your home public IP
Go to your local machine, from which you would connect, and setup the connection like in the following example below.
SSH Connect to the bastion host VM
An example setup for your ssh connection, located at $HOME/.ssh/config (if the config file doesn't exist yet, just create it):
Host bastion-host-vm
    Hostname external-vm-ip
    User root
    IdentityFile ~/.ssh/id_ed25519
    LocalForward 5432 internal-vm-ip:5432
Now you are ready for connecting from your local machine terminal with this command:
ssh bastion-host-vm
Once connected, you can pick your favorite database client and connect to localhost:5432, which is the remote database instance's port forwarded through the SSH connection from behind the bastion host.
CAUTION: The port forwarding only functions as long as the SSH connection is established. If you disconnect or close the terminal window, the SSH connection closes, and so does the database port forwarding. So keep the terminal open and the connection to your bastion host established as long as you are using the database connection.
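If you then want to remove the database VM's external IP as mentioned above, a sketch with the gcloud CLI could look like this (the instance name is a placeholder, and the access config name may differ, e.g. "External NAT", so check the describe output first):
# Inspect the current access configs (instance name is a placeholder)
gcloud compute instances describe my-db-vm --format="get(networkInterfaces[0].accessConfigs)"
# Remove the external IP by deleting the access config (name may be "External NAT" instead)
gcloud compute instances delete-access-config my-db-vm --access-config-name="external-nat"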
Pro tip for cost saving on the GCE VM
You could use the free tier offer for creating the bastion host VM, which means increased protection for free.
Search for "Compute Engine" in the official table.
You could check this thread for more details on GCE free limits.

Mongo DB - net.bindIp for local and network access

I want the mongod instance to be accessible both from the localhost and other servers on the network.
If I set the net.bindIp value to 127.0.0.1 then mongod doesn't listen to external connections and nmap -p 27017 <server> reports that the port is closed. The same occurs if I comment out the net.bindIp line in mongod.conf.
If I set the net.bindIp value to the local IP address - 192.168.0.10 - then mongod listens for network connections on port 27017, but it doesn't allow me to connect to the mongod instance from the local host using the mongo command.
What value do I need to set net.bindIp to, to ensure I can connect both locally and over the network to the mongod instance?
I am running Ubuntu Server 14.04.
Include both the localhost and network IP address as comma separated values.
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.0.10
and restart the service
sudo service mongod restart
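To verify the result (a sketch; adjust the address to match your server), check the listening sockets on the server and re-run the port scan from another machine:
# On the server: mongod should now be listening on 127.0.0.1 and 192.168.0.10
sudo netstat -tlnp | grep 27017
# From another machine on the network
nmap -p 27017 192.168.0.10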