I am attempting to set up a MongoDB server on an Azure VM and cannot seem to connect to it from an external client.
Here is what I have done so far:
I have created a Windows Server 2016 VM
I have installed MongoDB as a service and started it on the new VM
I have added an inbound rule in the firewall for MongoDB on port 27017 with the following configuration:
Name: Allow MongoDB
Profile: All
Enabled: Yes
Action: Allow
Override: No
Program: Any
Local Address: Any
Remote Address: Any
Protocol: TCP
Local Port: 27017
Remote Port: Any (The rest of the settings are also set to Any)
I have created a Network Security Group on Windows Azure
On the network security group I have set the Inbound security rules to the following configuration:
Priority: 100
Name: AllowHttp
Source: Any
Destination: Any
Service: Custom(Any/80)
Action: Allow
I associated the NSG's Subnet section with the virtual network of my Azure VM
I am trying to connect from my local PC to the VM's MongoDB installation using Robomongo, with the connection type set to Direct Connection, the address set to the VM's public IP displayed in the VM's summary, and port 27017.
When I attempt this I get the following error:
Does anyone know what I am doing wrong?
You added an NSG rule for port 80, but you are trying to connect on port 27017, so the NSG will block you. Add an Allow rule for 27017 to the NSG.
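If you manage the NSG with the Azure CLI, the rule could be added like this (a sketch; `myResourceGroup` and `myNsg` are placeholders for your actual resource names):

```shell
# Allow inbound TCP 27017 (MongoDB) through the network security group.
# Replace myResourceGroup and myNsg with your own resource names.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowMongoDB \
  --priority 110 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 27017
```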
I have a machine X with many IPs. With podman-compose running OpenSearch and OpenSearch Dashboards, the containers get bound to the wrong (unexposed) IP. I tried to force the right IP, but when I do, podman-compose breaks. How can I make this work?
I tried adding an IPv4 address in the docker-compose.yml, and I tried modifying the images to force the right IP wherever I found 0.0.0.0, but it keeps breaking.
Docker / Podman container IPs are not accessible from external clients.
You need to expose TCP or UDP ports from your container to the host system, and then clients will connect to <host-ip>:<host-port>.
The host port and the container port do not need to be the same port.
For example, you can run multiple web server containers all using port 80; however, you will need to pick unique ports on your host OS that are not used by other services to map to the containers, e.g. 80->80, 81->80, 8080->80, etc.
Once you create the port definitions in your container configuration Podman will handle the port forwarding from the host to the container.
You might need to open the ports on the host firewall to allow clients to connect. Note that 0.0.0.0 is not a single address: it tells a server to listen on all of the host's local interfaces.
Let's say your host is 10.1.1.20, your OpenSearch Dashboards container is 172.16.8.4, and your dashboard web app is configured to listen on port 5001/TCP.
You will need a ports directive in your docker-compose.yml file to map host port 5001 to container port 5001, similar to the below.
services:
  opensearch-dashboard:
    ports:
      - "5001:5001"
As long as port 5001 is permitted on your host firewall, the client should be able to connect using https://10.1.1.20:5001/
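To verify that the mapping took effect, you could check what Podman actually published (a sketch; `opensearch-dashboard` is the container name assumed from the compose file above):

```shell
# Show which host ports are published for the container
podman port opensearch-dashboard

# Confirm something is listening on the host side
ss -tln | grep 5001
```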
I have one AWS EC2 instance. I have installed MongoDB there.
Private IP :- 10.x.x.x
Port :- 27017
I can SSH into that system and connect to the MongoDB server by using the private IP within the VPN.
10.x.x.x:27017 - MongoDB is running here.
But I have assigned an Elastic IP to that EC2 instance.
Public IP :- 132.x.x.x
When I try to connect to the MongoDB server using the public IP (132.x.x.x:27017), the connection times out.
MongoDB network config, /etc/mongod.conf
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
I am starting the MongoDB server by using,
sudo mongod
inbound rules,
27017 tcp 0.0.0.0/0
27017 tcp 0.0.0.0/0, ::/0
Check to make sure your setup has the following:
The elastic IP is attached to the instance.
The security group allows incoming traffic from the client on the correct port.
The network ACLs of the subnet allow the needed inbound and outbound traffic, or you're using the default ACLs, which allow all inbound/outbound traffic.
An Internet Gateway is in the same VPC as the instance.
There is a rule in the subnet's route table that sends internet-bound traffic to the Internet Gateway.
You may also find this AWS article helpful for using the Internet Gateway in your VPC.
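One more thing worth checking: the mongod.conf shown in the question binds only to 127.0.0.1, which accepts connections from the instance itself and nothing else. To accept remote clients, mongod must also bind an externally reachable interface, for example (a sketch; only do this with authentication enabled and the security group locked down, since 0.0.0.0 exposes mongod to anything the firewall lets through):

```yaml
# /etc/mongod.conf -- accept connections on all interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
```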
How can I expose a service of type NodePort to the internet without using type LoadBalancer? Every resource I have found does it with a load balancer, but I don't want load balancing: it's expensive and unnecessary for my use case, because I am running one instance of a postgres image mounted to a persistent disk, and I would like to be able to connect to my database from my PC using pgAdmin. If possible, could you provide a bit more detail, as I am new to Kubernetes, GCE, and networking.
Just for the record and a bit more context: I have a deployment running 3 replicas of my API server, to which I connect through a load balancer with a set loadBalancerIP, and another deployment running one instance of postgres with a NodePort service through which my API servers communicate with my db. My problem is that maintaining the db without public access is hard.
Using NodePort as the Service type works straight away, e.g. like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
    - port: 443
      nodePort: 30443
      name: https
  selector:
    name: nginx
More details can be found in the documentation.
The drawback of using NodePort is that you have to integrate with your provider's firewall yourself. A starting point for that can be found in the Configuring Your Cloud Provider's Firewalls section of the official documentation.
For GCE, opening up the above publicly on all nodes could look like:
gcloud compute firewall-rules create myservice --allow tcp:30080,tcp:30443
Once this is in place, your services should be accessible through any of the public IPs of your nodes. You'll find them with:
gcloud compute instances list
You can run kubectl in a terminal window (Command Prompt or PowerShell on Windows) to port-forward the postgresql deployment to your localhost.
kubectl port-forward deployment/my-pg-deployment 5432:5432
While this command is running (it runs in the foreground), you can point pgAdmin at localhost:5432 to access your pod on GKE. Simply close the terminal once you are done using pgAdmin.
For the sake of improved security: if in doubt about exposing a service like a database to the public internet, you might like the idea of hiding it behind a simple Linux VM called a jump host (also called a bastion host in the official GCP documentation, where this setup is recommended). This way your database instance stays open only towards the internal network. You can then remove the external IP address so that the database is no longer exposed to the internet.
The high level concept:
public internet <- SSH:22 -> bastion host <- db:5432 -> database service
After setting up your ssh connection and establishing connection, you could reach out to the database by forwarding the database port (see example below).
The Procedure Overview
Create the GCE VM
Specific requirements:
Pick the image of a Linux distribution you are familiar with
VM Connectivity to internet: Attach a public IP to the VM (you can do this during or after the installation)
Security: Go to Firewall rules and add a new rule opening port 22 for the VM's internal IP, and restrict the incoming connections to your home public IP
Go to your local machine, from which you would connect, and setup the connection like in the following example below.
SSH Connect to the bastion host VM
An example setup for your ssh connection, located at $HOME/.ssh/config (if this file doesn't exist yet, just create it):
Host bastion-host-vm
  Hostname external-vm-ip
  User root
  IdentityFile ~/.ssh/id_ed25519
  LocalForward 5432 internal-vm-ip:5432
Now you are ready for connecting from your local machine terminal with this command:
ssh bastion-host-vm
Once connected, you could now pick your favorite database client and connect to localhost:5432 (which is the forwarded port through the ssh connection from the remote database instance, which is behind the ssh host).
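If you prefer not to edit the config file, the same forwarding can be done ad hoc on the command line (same placeholder names as in the config example above):

```shell
# -L local_port:target_host:target_port -- forwards localhost:5432
# through the bastion to the database's internal address
ssh -L 5432:internal-vm-ip:5432 root@external-vm-ip
```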
CAUTION: The port forwarding only works as long as the ssh connection is established. If you disconnect or close the terminal window, the ssh connection closes, and so does the database port forwarding. So keep the terminal open and the connection to your bastion host established for as long as you are using the database connection.
Pro tip for cost saving on the GCE VM
You could use the free tier offer for creating the bastion host VM, which gets you this increased protection for free.
Search for "Compute Engine" in the official table.
You could check this thread for more details on GCE free limits.
My Kubernetes cluster setup has an n-tier web application running in dev and test environments on AWS. For the production environment, Postgres RDS was chosen to ensure periodic backups. While creating the Postgres RDS instance, kubernetes-vpc was selected for the db-subnet to keep the networking simple during the pilot run.
Also, security group selected is the same as kubernetes-minions.
Following is the service and endpoint yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: pgsql-rds
  name: pgsql-rds
spec:
  ports:
    - port: 5432
      protocol: TCP
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: pgsql-rds
subsets:
  - addresses:
      - ip: 52.123.44.55
    ports:
      - port: 5432
        name: pgsql-rds
        protocol: TCP
When the web-app service and deployment are created, they are unable to connect to the RDS instance.
The log is as follows:
java.sql.SQLException: Error in allocating a connection. Cause: Connection could not be allocated because: Connection to pgsql-rds:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
What am I missing? any pointers to resolve the issue appreciated.
This has to do with DNS resolution. When you use the RDS DNS name inside the same VPC, it resolves to a private IP. When you use the same DNS name from the internet or from another VPC, you get the public IP of the RDS instance.
This is a problem because from another VPC you cannot make use of the load-balancing feature unless you expose the RDS instance to the public internet.
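You can observe this split-horizon behaviour directly by resolving the endpoint from both sides (a sketch; the hostname below is a placeholder for your own RDS endpoint):

```shell
# From an EC2 instance inside the VPC: resolves to a private address
nslookup my-rds-instance.xxxxxxxx.us-east-1.rds.amazonaws.com

# From a machine outside the VPC: resolves to the public address
# (only if the instance is marked Publicly Accessible)
nslookup my-rds-instance.xxxxxxxx.us-east-1.rds.amazonaws.com
```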
It's been a while since the issue was resolved.
I don't remember exactly now which step I had missed that caused the connection problem.
But below are the steps that did work for me.
Pre-requisite: kubernetes cluster is set up with vpc ('k8s-vpc')
Create VPC SUBNET
Go to the VPC dashboard and ensure the same AWS region as the k8s minions (you will see the existing 'k8s-vpc').
Create a subnet in each availability zone.
Select 'k8s-vpc' as the VPC from the drop-down.
The CIDR could be 172.20.16.0/24 or 172.20.32.0/24
Create a DB SUBNET and SUBNET GROUP for the VPC of the k8s minions, if not already available.
Go to RDS Dashboard.
Create a subnet group (e.g. my-db-subnet-group) for the DB and add all the subnets from step 1 to it.
From RDS Dashboard create Parameter Group
(e.g. my-db-param-group) for Postgres (version 9.5 in this example)
Copy value for max_connections to the max_prepared_transactions field and save
Create RDS instance for Postgres DB
Launch DB instance -> select Engine Postgres -> choose stage (Production or Dev/Test)
-> give the instance specs (instance type, disk space, etc.) and specify the DB settings (user/password)
-> Configure Advanced settings
vpc selection as 'k8s-vpc'
DB subnet should be one created in previous step (my-db-subnet-group)
The VPC security group should be the one from the Kubernetes minions, so that no additional configuration is required for access from the minions
Select Publicly Accessible - to connect to postgres from internet
Select Parameter Group as 'my-db-param-group'.
Specify Database options, backup and maintenance options and finally launch the instance
Also check the security group of the VPC and add an inbound rule to allow connections on the Postgres port.
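The inbound rule from the last step could also be added with the AWS CLI (a sketch; the security group ID and CIDR below are placeholders for your own values):

```shell
# Allow Postgres (5432/TCP) from the Kubernetes minions' address range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5432 \
  --cidr 172.20.0.0/16
```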
You can test the connection from one of the k8s pods (kubectl exec -it) where a Postgres client is installed.
Make sure to change the user to postgres.
Connect to RDS using psql as shown below:
$ psql --host=my-rds-dev.cyhi3va0y2or.ap-northeast-1.rds.amazonaws.com --port=5432 --username=<masterUserName> --password --dbname=<masterDB>
If everything is set up correctly, it should prompt you for the password of the db user.
Providing the correct password will finally connect you to RDS.
This article was of great help.
Your IP is of the form: 52.123.44.55. This is a public IP. See the official RFC
Since you said both are in the same VPC, you could have used the internal IP address instead.
That said, the error "Connection to pgsql-rds:5432 refused" means that the address was resolved; otherwise you would get "psql: error: could not translate host name "psql-rds" to address: Name or service not known". Therefore, it is not a DNS issue, as suggested in another answer.
The likely cause of the block is that the security group was not configured to accept requests from the EC2 instance's external IP address. This is the official AWS documentation on connecting to RDS scenarios.
You might have already whitelisted all connections from the VPC; however, double-check the security groups. I would not recommend whitelisting the external IP address, although it works if you put the external IP address there. It is a security concern when you don't have an elastic IP address, and there are data transfer costs unless you have a more complex setup.
That said, you could have avoided the Kubernetes resources and used the DNS address of the RDS instance.
If you had to avoid using the DNS address of the RDS instance directly, you could have used the following:
apiVersion: v1
kind: Service
metadata:
  name: psql-rds
spec:
  externalName: my-rds-dev.cyhi3va0y2or.ap-northeast-1.rds.amazonaws.com
  ports:
    - port: 5432
      protocol: TCP
      targetPort: 5432
  sessionAffinity: None
  type: ExternalName
With the setup above, you don't need a Kubernetes Endpoints object. You can just use psql-rds, or any variation using the namespace as the domain, or the fully qualified version such as psql-rds.default. This is the documentation for ExternalName.
I see that the original poster mentioned the problem was solved; however, it is not clear or well documented that the problem was a combination of using the external IP address and the security group rules.
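To confirm that the ExternalName service resolves from inside the cluster, a quick one-off pod can be used (a sketch; it assumes the service lives in the default namespace):

```shell
# Run a throwaway busybox pod and resolve the service name
kubectl run -it --rm dnstest --image=busybox --restart=Never -- \
  nslookup psql-rds.default.svc.cluster.local
```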
I have a problem. I'm using Virtual Box with RHEL (Red Hat Enterprise Linux) and I've installed a MongoDB and an Oracle-XE database.
I'm trying to connect to my DBs from my Windows OS.
I can connect to my Oracle DB using SQL Developer; however, when trying to use Robomongo to connect to my MongoDB, I can't connect. And I have no idea why.
I've specified port forwarding in both cases, why does one work and the other doesn't?
I've tried the following:
address: localhost port: 27017
address: 127.0.0.1 port: 27017
address: mongo.localhost port: 27017
And others... Why can't I connect with Robomongo?
In Ubuntu I opened /etc/mongod.conf
I commented out bind_ip = 0.0.0.0 (changing it to #bind_ip = 0.0.0.0)
And, as you know, you should use address: 192.168.0.105, port: 27017 (your Linux IP; you can get the IP with the command $ hostname -I)
Maybe the same thing works for you in RedHat
After commenting out the bind_ip in /etc/mongod.conf,
you need to set up port forwarding in the VirtualBox settings.
Typically your VirtualBox IP would be something like 10.0.2.15
(confirm that with the command hostname -I in VM)
and suppose your host PC's IP is 192.168.1.234
(confirm that with the command ipconfig in host PC)
now open the settings for your VM
click Network -> Port forwarding
add something like:
Name   Protocol  HostIP         HostPort  GuestIP    GuestPort
Rule1  TCP       192.168.1.234  27017     10.0.2.15  27017
Rule2  TCP       192.168.1.234  80        10.0.2.15  80         (if you're hosting a web server)
Now, instead of setting Robomongo's connection IP to the GuestIP,
you should use address: 192.168.1.234, port: 27017;
VirtualBox will then direct your request to the right place.
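Before blaming Robomongo, you can also check that the forwarded port is reachable at all from the Windows host (a sketch using the mongo shell, assuming it is installed on the host PC):

```shell
# Should connect through the VirtualBox port-forwarding rule;
# a timeout here means the forwarding or bind_ip is still wrong
mongo --host 192.168.1.234 --port 27017
```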