Akka remote actors: filter connections by IP (Scala)

I'm trying to add security to my remote actors. I've set untrusted-mode:
http://doc.akka.io/docs/akka/snapshot/scala/remoting.html
Is it possible to add IP filtering, to allow connections only from a specific server? For example, I have one master and 10 slaves, and I want to allow only my master (a specific IP) to connect to my slaves.
Since the code is open source, anyone could just create a new instance of my master and connect to my real slaves. How can I make it secure?

Using IP filtering is not very secure as it's easy to fake an IP. Luckily Akka comes with secure transport support via SSL and secure cookie support.
A cookie is like an API key and will be required to establish the connection. SSL guarantees that the secure cookie cannot be stolen by eavesdropping. See this doc for an example.
I made a simple project that uses Akka remoting and SSL with a secure cookie. Try it out here. Read how to set up SSL certificate storage and such here.
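For reference, a rough sketch of what the remoting section of application.conf could look like with the SSL transport and a secure cookie enabled (key names are from classic Akka remoting and vary between versions; all paths, passwords and the cookie value below are placeholders):

akka {
  actor.provider = "akka.remote.RemoteActorRefProvider"
  remote {
    # expose only the SSL transport
    enabled-transports = ["akka.remote.netty.ssl"]
    netty.ssl {
      hostname = "slave-host"                    # placeholder
      port = 2552
      security {
        key-store = "/path/to/keystore.jks"      # placeholder
        key-store-password = "changeme"          # placeholder
        trust-store = "/path/to/truststore.jks"  # placeholder
        trust-store-password = "changeme"        # placeholder
        key-password = "changeme"                # placeholder
        protocol = "TLSv1.2"
        random-number-generator = "AES128CounterSecureRNG"
      }
    }
    # both master and slaves must present the same cookie value
    require-cookie = on
    secure-cookie = "0123456789ABCDEF"           # placeholder
  }
}

The linked sample project and the remoting documentation show how to generate the keystore and truststore files that these settings point at.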

Related

Postgresql requests proxied by HTTP server

I am using a mobile application that connects directly to the database instance (Postgres); as such, I have to keep the ports open for traffic generated from the internet (4G, mobile app).
This mobile app (QFIELD, the mobile version of QGIS) has a direct connection to the database. That is why the database is reachable from the internet on a public IP, but this is a critical issue for the security of the data and the requests that can be sent to the database.
I would like to proxy the requests so that the database is only available to local machines and not open for direct connections.
The mobile app would send the request to an HTTP URL, which would forward the request to the local IP and port; this way I would avoid having the database exposed on the internet.
Ideally, I would like to go from this app (which uses a postgres connection string to connect to the server) to an HTTP server that routes the request locally, as such:
APP connects to https://myproxy/postgres
Request is proxied to a local server
Can I do this with Apache2? Any ideas?
At the moment I cannot write a middleware that proxies requests from the APP to the local postgres.
If your application is expecting to connect directly to a PostgreSQL database and you don't want to change that then you need to connect to something that "speaks" PostgreSQL's client protocol.
You can place a proxy such as pgbouncer or pgpool in front of it, but they aren't a guarantee of greater security just by themselves. This is the same problem as with any proxy - it is just forwarding requests and responses to your actual server so any vulnerability is still exposed.
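If you do go the pgbouncer route, here is a minimal sketch of what the forwarding part of pgbouncer.ini might look like (the host, database name and file paths are placeholders; remember this only relays the PostgreSQL protocol and adds no security by itself):

[databases]
; forward the public-facing "appdb" to the internal PostgreSQL server
appdb = host=10.0.0.10 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; one easy restriction to apply at the proxy point
max_client_conn = 50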
What you can do is:
restrict the number of connections at the proxy point
restrict which users can connect non-locally to your PostgreSQL cluster
restrict where they can connect from to just your proxy
restrict those users permissions within the database(s)
That last point is particularly important - assume any user account your application uses can be abused maliciously. Restrict the account to prevent mass updating or deleting of data. Also take special care to restrict access to other users' data.
If I were forced to allow access like this, I would want one PostgreSQL user account per actual user at the very least. In practice I wouldn't let a production application get to this point.
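To make these points concrete, a sketch of the kind of lock-down meant here - the role name, database name and table are made up for the example, and the proxy IP in the pg_hba.conf line is a placeholder:

-- dedicated low-privilege role for the mobile app
CREATE ROLE app_user LOGIN PASSWORD 'changeme' CONNECTION LIMIT 20;

-- grant only what the app actually needs
REVOKE ALL ON DATABASE appdb FROM PUBLIC;
GRANT CONNECT ON DATABASE appdb TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT ON field_observations TO app_user;

-- pg_hba.conf: accept app_user only over SSL and only from the proxy host
-- hostssl  appdb  app_user  10.0.0.5/32  md5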

How to deploy a MongoDB sharded cluster with SSL/TLS only on mongos instances?

I have a MongoDB deployment for which we want to enable SSL, but only for external connections.
There are a number of reasons for not wanting SSL on internal communication. It adds unnecessary overhead, and we also don't want to expose the internal mongod instances to the internet - there shouldn't be any reason for them to even have an external IP. In this scenario, the mongos instances should use SSL for external communication with clients and no SSL for internal communication with the mongod instances.
Unfortunately, the documentation only discusses four simple SSL modes:
requireSSL : only uses SSL for all communication, including internal.
preferSSL : uses SSL for internal communication, but allows non-SSL traffic from clients. This is pretty much the opposite of what we want.
allowSSL : allows the use of SSL, but also allows non-SSL connections.
disabled : no SSL whatsoever.
None of these helps with our situation.
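For reference, this is roughly where that mode is chosen on a mongos - a sketch of the relevant part of a mongos config file (paths are placeholders; newer MongoDB releases spell these options net.tls.mode and tlsCertificateKeyFile instead):

net:
  port: 27017
  ssl:
    mode: preferSSL                     # one of: disabled, allowSSL, preferSSL, requireSSL
    PEMKeyFile: /etc/ssl/mongos.pem     # placeholder
    CAFile: /etc/ssl/ca.pem             # placeholder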
In our case it's even worse, because we need SSL to migrate data from Parse.com safely, so we need to create the certificates with a commonly trusted CA (say letsencrypt) instead of our own homemade root CA.
So how do I create a MongoDB deployment that uses SSL for the external world but not internally? Do I need a reverse proxy that does SSL termination and understands the mongodb:// protocol? Or is there some other way?

Need to run a cron job as encrypted

I need to set up a cron job to run a SOAP client. The customer insists that I connect to their web service (on an https address) from an https address. They insist that if I don't, their response to me can't be encrypted.
My first question is: is that true? I thought that as long as I'm connecting to their SOAP service over https, the response back would automatically be encrypted.
If that's true, how can I run a cron job "as https"? My site is on a LAMP setup with cPanel access.
Thank you in advance for your help!
Your customer's statement seems to be a little unclear as to what he/she specifically means by "... connecting from an https address", as there isn't any notion of an "https address" in the specs, and https URLs only seem to make sense in the context of the Request-URIs given in an https request.
Given this lack of clarity I'm only guessing. Nevertheless, it seems to me that your client's requirement is most probably not connected to the HTTP protocol itself but rather to how your TLS connection is established.
If your client is very sensitive about the security of his system - which, if he intends to offer RPC requests, might in fact be a very good idea - he might not want the whole world to be able to establish an encrypted connection to his machines and then rely on some secondary authentication mechanism once the connection has been established.
As most users of the public internet don't have certificates signed by a trusted authority, this feature isn't widely used in the open, but besides server authentication the TLS handshake protocol also provides a means of client authentication via client certificates (the relevant part being section 7 of RFC 5246; see https://www.rfc-editor.org/rfc/rfc5246#section-7).
In the absence of widely used client certificates, web services usually rely on first establishing an encrypted connection and then authenticating users with some kind of challenge-response test, such as querying for a username and password. Your client might want to additionally secure access to his machines by requiring a valid client certificate, or even - probably not the best idea - use it to replace a second authentication step like the one mentioned above.
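If it does turn out to be about client certificates, the cron part is straightforward: the job just has to present the certificate when it calls the service. A sketch, assuming curl is available on your LAMP host and the customer has issued you a certificate and key (every path and the URL below are placeholders):

#!/bin/sh
# call-soap.sh - call the customer's SOAP endpoint over https,
# authenticating with a client certificate
curl --silent --show-error \
     --cert /home/me/certs/client.pem \
     --key  /home/me/certs/client.key \
     --header "Content-Type: text/xml; charset=utf-8" \
     --data @/home/me/soap/request.xml \
     https://customer.example.com/soap/endpoint

# crontab entry (runs hourly):
# 0 * * * * /home/me/bin/call-soap.sh >> /home/me/logs/soap.log 2>&1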
Nevertheless, all of this is nothing but a few ideas I came up with given the riddle in your question.
Most probably the best idea is to just ask your client what he/she meant by "... connecting from an https address".

FTPS on Azure Worker role

I need to deploy an Azure Worker Role with an input endpoint on port 21 so that it can accept incoming FTP connections, so that I can connect to the worker role through an FTP client like FileZilla and access Azure blob storage.
For secured communication between the client and the server (Azure worker role) I need to implement the AUTH TLS/SSL command.
Can we support FTP over SSL/TLS - aka FTPS (FTP secure) - on an Azure Worker Role via socket programming (TcpListener and TcpClient)?
Regards,
Vivek.
If you make sure an FTP server is running in the Windows Azure Worker Role, you can certainly configure a TCP endpoint in the worker role set to use port 21 and then configure an SSL certificate over that TCP endpoint. Once the endpoints are properly configured in the worker role along with the SSL certificate bindings, and the application listening on those ports responds to incoming connections, you can make a secure FTP connection.
The bottom line is that you need to configure it correctly the way you want; the infrastructure will not prohibit you from doing so, but you have to make it happen correctly yourself.
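As an illustration of the endpoint/certificate part only (the role, endpoint and certificate names are made up; the AUTH TLS handling itself still has to be implemented in your socket code, for example by wrapping the accepted TcpClient stream in an SslStream):

<!-- ServiceDefinition.csdef (fragment) -->
<WorkerRole name="FtpsWorkerRole" vmsize="Small">
  <Endpoints>
    <!-- FTP control channel -->
    <InputEndpoint name="FtpControl" protocol="tcp" port="21" />
  </Endpoints>
  <Certificates>
    <!-- certificate the role loads from the machine store for its SslStream -->
    <Certificate name="FtpsCertificate" storeLocation="LocalMachine" storeName="My" />
  </Certificates>
</WorkerRole>

Note that a real FTPS server would also need additional endpoints for the passive-mode data ports.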

stunnel on Windows for IBM MQ connection

Does anyone have experience with, or thoughts about, securing MQ TCP communication channels using stunnel?
I am integrating with third-party software which has MQ support built in but cannot support SSL. So, to have some kind of security over TCP, we would like to use stunnel. Does anyone have thoughts on how to implement this, and any best practices?
I haven't used stunnel so I'll leave that part of the answer to another responder. With regard to WMQ, keep in mind that this will provide you with data privacy and data integrity over the stunnel link but will not give you channel-level services such as WMQ authentication. True, you will have some level of authentication on the stunnel connection itself, but anyone with a TCP route to the QMgr that does not arrive via stunnel will also be able to start that channel.
Your requirement for security obviously includes data privacy. If it also includes authentication and authorization, you might need to use something like BlockIP2 (from http://mrmq.dk) to filter incoming connections on that channel by IP address to ensure they arrive over the stunnel link. Of course, there is nothing to prevent someone at the remote end from specifying any channel name to connect to, so if you secure one channel, you need to secure them all - i.e. make sure that SYSTEM.DEF.* and SYSTEM.AUTO.* channels are disabled or that they use SSL and/or an exit to authenticate the inbound connection.
Finally, be aware that if WMQ is configured to accept the ID presented by the client then the connection has full administrative access and that includes remote code execution. To prevent this you must configure all inbound channels (RCVR, RQSTR, CLUSRCVR and SVRCONN) that are not administrative with a low-privileged ID in the channel's MCAUSER. For any channels that are intended for administrators, authenticate these with SSL. (Hopefully your 3rd party SW is an application and not an administrative tool! Any WMQ admin tool must support SSL or else don't use it!)
So by all means use stunnel to secure this link, just be sure to secure the rest of the QMgr or else anyone who can legitimately connect (or even anonymous remote users if you leave MCAUSER blank and aren't using SSL and/or exits) will just bypass the security or disable it.
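As a rough sketch of that hardening in MQSC terms (the APP.SVRCONN channel name and the user IDs are placeholders; adapt to whatever channels and low-privileged accounts exist on your system):

* dead-end the default channels so they cannot be used as a back door
ALTER CHANNEL(SYSTEM.DEF.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('nobody')
ALTER CHANNEL(SYSTEM.AUTO.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('nobody')

* the application channel arriving over the stunnel link runs under a
* low-privileged ID rather than whatever ID the client presents
ALTER CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('appuser')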
There's a copy of the IMPACT presentation Hardening WMQ Security at https://t-rob.net/links/ which explains all this in more detail.
Rob - I agree with you. That is exactly why we have MQIPT, which is much better. For stunnel with MQ, I have solved the problem as follows.
Keys - you need a .pem key (from the key manager you can create a .p12 and use OpenSSL to convert it to .pem).
Client side: download and install stunnel and put the following entries in the config file:
cert = XXX.pem
client = yes
[MQ]
accept = 1415
connect = DestinationIP:1415
Server Side:
cert = xxx.pem
client = no
[MQ]
accept = 1415
connect = MQIP:1415
Once you do this, all you have to do is call amqsputc with the queue name.