MongoDB - Prevent unauthorized user from opening console

While setting up authorization in my development cluster, I couldn't prevent users from opening a console to my mongod instances.
I have enabled authorization in the config file:
security:
  authorization: enabled
And have created an admin user with the userAdminAnyDatabase role.
Yet, when connecting unauthenticated to this server from another machine, I can still enter the console.
I do get a permissions error when trying to issue commands, but I would like to know if there's any way of preventing the console from opening at all - getting the permission error earlier.

If you only need to access your MongoDB deployment from applications running on the same server you can use the bind_ip configuration option to control the network interface(s) that MongoDB processes listen to. By default this should already be set to '127.0.0.1' (localhost) in packaged versions of MongoDB 2.6+.
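For example, in the YAML config format used by MongoDB 2.6+ this is the net.bindIp setting (a minimal sketch; in the older INI-style config file the equivalent option is bind_ip):
net:
  port: 27017
  bindIp: 127.0.0.1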
If you want to have the server listening to a more public network interface (eg. local LAN) and want to prevent remote connections entirely, you can limit source IP access via your firewall configuration.
The Network Security Tutorials in the MongoDB manual include examples that should be useful as a starting point:
Configure Linux iptables Firewall for MongoDB
Configure Windows netsh Firewall for MongoDB
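As a rough sketch of the iptables approach from the first tutorial (203.0.113.0/24 is a placeholder for your trusted application subnet; substitute your own):
# Allow incoming MongoDB connections only from the trusted subnet
iptables -A INPUT -s 203.0.113.0/24 -p tcp --dport 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
# Allow the corresponding replies back out
iptables -A OUTPUT -d 203.0.113.0/24 -p tcp --sport 27017 -m state --state ESTABLISHED -j ACCEPT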
If users/applications might authenticate from those remote IPs, you can't prevent them from opening a console connection (with no permissions). This is similar to how other services (sshd, Apache, etc.) handle authentication: step 1 establishes a connection and step 2 authenticates.
For more information on MongoDB best practices, please refer to the Security section in the manual.

Related

Enable Mongo free-monitoring on an AWS EC2 instance?

I'm trying to enable the MongoDB Free Monitoring on an EC2 instance, but I got the error:
Unable to get immediate response from the Cloud Monitoring service. We will continue to retry in the background. Please check your firewall settings to ensure that mongod can communicate with "https://cloud.mongodb.com/freemonitoring/mongo"
I suspect mongod cannot communicate with https://cloud.mongodb.com/freemonitoring/mongo because of firewall restrictions, but I can't find what to allow at AWS to make this work while keeping it secure.
For information, the server has no public IP address, but the firewall rules allow anything to go out.

Does the fact that I'm running a VM alter the whitelisting status of my regular IP address?

Our DevOps team has whitelisted my home IP address so that I can connect to our Postgres database on Azure, and I am able to connect successfully.
Today I set up a VM in order to run Docker. I am running a container for RStudio which is an app that, among many other things, allows me to connect to our database using ODBC.
After configuring the odbcinst.ini and odbc.ini files, I believe they are set up correctly, because when I try to connect I get the following error:
Error: nanodbc/nanodbc.cpp:983: 00000: FATAL: SSL connection is required. Please specify SSL options and retry.
Thus I think my ODBC setup is correct: the error suggests my connection settings are fine, it's just that Azure will not allow the connection without SSL.
Searching that error message took me to this SO post with the following accepted answer:
By default, Azure Database for PostgreSQL enforces SSL connections between your server and your client applications to protect against MITM (man in the middle) attacks. This is done to make the connection to your server as secure as possible.
Although not recommended, you have the option to disable requiring SSL for connecting to your server if your client application does not support SSL connectivity. Please check How to Configure SSL Connectivity for your Postgres server in Azure for more details. You can disable requiring SSL connections from either the portal or using CLI. Note that Azure does not recommend disabling requiring SSL connections when connecting to your server.
My question is, if I am already able to connect to our database outside of my VM due to my home IP being whitelisted and just using a Postgres Driver with Dbeaver SQL client, is there anything I can do to connect from within my VM?
I can get my VM's IP address, but I am not sure whether sending that to our developers to whitelist would work.
Is there a prescribed course of action here?
I added this parameter to my .odbc.ini file and was able to connect:
sslmode=require
From the Azure Postgres documentation, this parameter may take different forms depending on the client:
"for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations"

Possible reasons why my shadowsocks is not working on virmach's server?

I'm a newcomer to using overseas servers. Recently I bought a VPS from virmach in order to access foreign websites like Google and Wikipedia.
I've been trying for a long time to configure shadowsocks on my server.
However, when I use shadowsocks-qt5 to connect to my server, the connection times out.
And of course I can't access Google.
What I want to ask is why it failed.
Here are the things that I remember doing:
stopped the firewall on both computers;
wrote the .json config file, following blogs from China.
Here is the outline of my shadowsocks.json on my server:
{
  "server": "0.0.0.0",
  "server_port": 8388,
  "local_address": "127.0.0.1",
  "local_port": 1080,
  "password": "XXXX",
  "timeout": 600,
  "method": "aes-256-cfb"
}
Other useful(maybe) information:
my client OS version: Ubuntu 18.04.3 LTS
my server OS version: Ubuntu 16.04.6 LTS
the client I choose is from: https://github.com/shadowsocks/shadowsocks-qt5
I can't help but wonder: are there any other possible reasons I've forgotten? Can anyone give me some helpful details to solve this puzzling problem? Thanks a lot!
I have not set up my own VPS but I have instead subscribed to the server provided by caonima.io, so I can't speak for any server related issues. Additionally, I have no affiliation with caonima.io. I did however successfully set up my client on Ubuntu 16.04 after having some issues connecting to GFW-blocked (China's Great FireWall) websites.
From what I understand from my solution, the client configuration is NOT the only step of setup. There are two layers of proxy access that need to be completed:
Client Configuration. Configure your client with the server and connection information. A successful connection looked like this for me on the command-line interface:
(screenshot: shadowsocks-libev command-line client showing a successful connection)
System or Browser Proxy Configuration. You will need to configure either your browser or web-access tool to use a proxy, or set system-wide proxy settings. To set system-wide proxy settings, go to System Settings > Network > Network Proxy and enter the proxy information. Setting the Socks host to localhost:1080 resulted in successful access to GFW-blocked websites (as shown below); the same settings can also be applied from a terminal, as in the sketch after the screenshot.
(screenshot: Ubuntu network settings, manual proxy configuration)
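On a GNOME-based Ubuntu desktop, the equivalent system-wide SOCKS proxy can be set from the command line (a sketch; port 1080 assumes the local_port from the config above):
# Point the system-wide proxy at the local shadowsocks client
gsettings set org.gnome.system.proxy mode 'manual'
gsettings set org.gnome.system.proxy.socks host 'localhost'
gsettings set org.gnome.system.proxy.socks port 1080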

Google Dataproc: Unable to access Spark history page

I created a Google Dataproc cluster. After logging into the master node, I started spark-shell, then tried to access the Spark history page using
http://<external_ip_masternode>:4040
It gets redirected to
http://<hostname_mastername>:8088/proxy/application_1487485713573_0002/
The browser rejects this with the error "DNS address could not be found", which is understandable.
The VM instance settings are as follows:
Public IP type: Ephemeral
tcp:4040 opened in firewall
IP forwarding: Off (unable to edit this configuration)
The following troubleshooting was done but did not help:
Telnet to <external_ip_masternode>:4040 -> working
Access from Ubuntu host / Chrome browser: gets redirected, then name lookup failure
Access from Ubuntu host / Firefox browser: gets redirected, then name lookup failure
Access from Mac OS X host / Safari browser: gets redirected, then name lookup failure
Access from Mac OS X host / Chrome browser: gets redirected, then name lookup failure
To view Hadoop web interfaces in Dataproc, it is recommended to follow the instructions for running an SSH-based SOCKS proxy: https://cloud.google.com/dataproc/docs/concepts/cluster-web-interfaces
If you follow the instructions there, you'll also run a separate browser session that uses your SSH tunnel, with hostname resolution occurring on the VM side of the tunnel. That way, all the links in the Hadoop pages work automatically, since they reference each other using internal hostnames and intentionally avoid any dependency on "external IP addresses".
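In outline, the tunnel-plus-browser setup from those instructions looks something like this (a sketch; the cluster name, zone, and port are placeholders for your own values):
# Open a SOCKS tunnel to the cluster's master node (the local port 1080 is arbitrary)
gcloud compute ssh my-cluster-m --zone=us-central1-a -- -D 1080 -N
# Run a separate browser session that sends all traffic, including DNS lookups, through the tunnel
google-chrome --proxy-server="socks5://localhost:1080" --user-data-dir=/tmp/my-cluster-m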
Using the SSH tunnel is also much more secure than opening up firewall rules to visit the unencrypted HTTP traffic directly coming from the Hadoop HTTP servers (if you accidentally open up your firewall rules too broadly, then other people on the internet will be able to access your external IP addresses, and even if you don't, attackers could see your unencrypted web traffic served up by the ApplicationMaster, HistoryServer, etc.).

AWS RDS PostgreSQL pgAdmin - Server doesn't listen

I followed the AWS tutorial found here.
Everything went smoothly up until connecting to the postgresql instance via pgadmin.
I entered the appropriate user/pw info and copy/pasted the address of the db appropriately.
The port is indeed 5432 on my aws dashboard.
I am receiving the following error message:
Server doesn't listen
The server doesn't accept connections: the connection library reports
could not connect to server: Operation timed out Is the server running on host "my_database_name.some_stuff.us-west-2.rds.amazonaws.com" (52.10.228.18) and accepting TCP/IP connections on port 5432?
If you encounter this message, please check if the server you're trying to contact is actually running PostgreSQL on the given port. Test if you have network connectivity from your client to the server host using ping or equivalent tools. Is your network / VPN / SSH tunnel / firewall configured correctly?
For security reasons, PostgreSQL does not listen on all available IP addresses on the server machine initially. In order to access the server over the network, you need to enable listening on the address first.
For PostgreSQL servers starting with version 8.0, this is controlled using the "listen_addresses" parameter in the postgresql.conf file. Here, you can enter a list of IP addresses the server should listen on, or simply use '*' to listen on all available IP addresses. For earlier servers (Version 7.3 or 7.4), you'll need to set the "tcpip_socket" parameter to 'true'.
You can use the postgresql.conf editor that is built into pgAdmin III to edit the postgresql.conf configuration file. After changing this file, you need to restart the server process to make the setting effective.
If you double-checked your configuration but still get this error message, it's still unlikely that you encounter a fatal PostgreSQL misbehaviour. You probably have some low level network connectivity problems (e.g. firewall configuration). Please check this thoroughly before reporting a bug to the PostgreSQL community.
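For reference, the "listen_addresses" setting the message refers to is a single line in postgresql.conf (shown below as a sketch). Note that on RDS you cannot edit this file directly; access is controlled through parameter groups and, as the steps below show, security groups:
listen_addresses = '*'    # or a comma-separated list of addresses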
Step 1
You are getting the same dialog I was seeing above. Crap!
Step 2
Go to your RDS instances
Step 3
Go to your security groups
Step 4
If your account is like mine, you will see this text:
Your account does not support the EC2-Classic Platform in this region.
DB Security Groups are only needed when the EC2-Classic Platform is supported.
Instead, use VPC Security Groups to control access to your DB Instances.
Go to the EC2 Console to view and manage your VPC Security Groups.
For more information, see AWS Documentation on Supported Platforms and Using RDS in VPC.
Step 5 Go back and check your RDS security group name (RDS -> Instances, right-click your instance). You will see a section like "Security Groups: List of VPC Security Groups associated with this DB Instance".
You will see something like:
default (sg-********) ( active )
Step 6 In your VPC security groups, find the sg-******** that matches your database. Right-click it and edit the inbound/outbound rules to add PostgreSQL (port 5432).
Try to connect again.
This solved my problem.
If this does not solve your problem I am very sorry, but I hope this documentation brings me some debugging karma.
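For reference, the equivalent inbound rule can also be added with the AWS CLI (a sketch; the security group ID and CIDR are placeholders for your own group and client IP):
# Allow PostgreSQL (TCP 5432) in from a single trusted client address
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr 203.0.113.5/32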
Go to AWS services, and in the security group, click on the security group ID. From the "Actions" button, click "Edit inbound rules", and then change the "Source" to "My IP".