Please suggest a secure way to transfer data from one computer to another - sockets

I want computer "A" to be responsible for executing the command, and computer "B" to send the command to computer "A". I have some ideas on the implementation.
Way 1:
Computer "A" runs a while-true loop listening for computer "B"'s command; when a command is received, it is executed.
Way 2:
Use an FTP server that stores information about the command.
Computer "A" runs a while-true loop checking whether computer "B" has uploaded any new information about a command. If so, it reconstructs the command and executes it. After execution, the file on the FTP server is deleted and a copy is stored on computer "A".
Way 3:
This is similar to Way 2, but uses a database for storage. After the command is executed, it is marked as executed.
What is your suggestion about these three ways? Are they secure enough?

A generic way: ssh and scp.
Reliable, secure, database-specific: depends on the platform: Service Broker, Oracle AQ, MQSeries.
Not so good an idea: writing a socket program without knowing anything about security.
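As a sketch of the ssh suggestion above: computer B can invoke a command on computer A with the standard ssh client instead of a hand-rolled socket protocol, letting ssh handle authentication and encryption. Host and command here are placeholders, and key-based login is assumed to be set up.

```python
# Trigger a command on a remote machine via the ssh client.
# Assumes key-based authentication to the (hypothetical) host is configured.
import subprocess

def ssh_argv(host, command):
    # BatchMode=yes makes ssh fail fast instead of prompting for a password.
    return ["ssh", "-o", "BatchMode=yes", host, command]

def run_remote(host, command):
    return subprocess.run(
        ssh_argv(host, command),
        capture_output=True, text=True, timeout=30,
    )

# Example (placeholder host):
# result = run_remote("user@computer-a", "uptime")
# print(result.stdout)
```

For copying files rather than running commands, scp with the same key setup covers the FTP-style variants without the cleartext credentials.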

You're assuming a trust relationship without giving any clues as to how you know that the payload from computer B is benign. How do you plan to prevent computer B from sending a task that says "reformat your hard drive after plundering it for all bank accounts and passwords"?
Way 1 amounts to writing a socket listener on computer A and letting computer B connect to it. No security here.
FTP just saves you from having to write the transport protocol.
Database for persistence instead of file store - nothing new here.
None of your options have any security, so it's hard to say which one is more secure. FTP requires that the user know the URL of your FTP server and, if you aren't allowing anonymous access, a username and password. That makes 2 more secure than 1. 2 and 3 are equally (in)secure.
All can work, but all are risky. I see little or no security here.

Without more details, all I can suggest for security is to use SSL.
Using FTP or a database server will just add needless complexity, potentially without gaining any real value.
If you want a secure solution, you will need to carefully describe your environment, risks, and attackers.
Are you afraid of spoofing? Denial of service? Information disclosure? Interception?
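A minimal sketch of the SSL/TLS suggestion, assuming Python on both ends and a server that already has a valid certificate; host and port are placeholders:

```python
# Wrap the Way-1 socket connection in TLS using Python's ssl module.
import socket
import ssl

def connect_tls(host, port):
    # The default context verifies the server certificate and hostname.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

# Example (placeholder host):
# with connect_tls("computer-a.example", 8443) as conn:
#     conn.sendall(b"uptime\n")
```

This protects against eavesdropping and tampering on the wire; it does not by itself authenticate the client, which still needs certificates or some credential on top.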

1 has nothing to do with security.
2 uses FTP and not FTPS? Anyone and their grandparents can sniff the username/password. Do not use.
3 depends on how securely you connect to the database, on both ends. Also, how can you be sure that what's inserted into the database came from the party you trust?
What are you really implementing here? With all due respect, it sounds like you should pick up at least an introductory book on information security.

From your examples it seems that you aren't so much interested in securing against malicious attacks and bit manipulation; what you want is reliable delivery.
Have a look at a message queue solution, for example MSMQ.

It depends on what kind of security you need.
If it's guaranteed delivery: any approach that stores the message and confirms storage before deletion will do.
If it's about identifying the sender and the receiver: use certificates.
If it's about securing the line: encrypt the message.
All of this can be achieved using WCF if you're in the Microsoft world, and there are other libraries in the Linux world (you can use an HTTPS POST, for example).

Related

Secure way to access DB on Raspberry pi outside home network

I have a Postgres database installed on my Raspberry Pi that works fine locally within my home network. I would like to be able to access it from outside my home network. I've done some research, and from what I've seen, port forwarding on my router or using a service like localtunnel or ngrok seem like viable solutions.
However, my question is whether these open up any security risks on my home network. If not, then great, I can move forward with setting this up (I was leaning towards port forwarding on my router). But if there are concerns, what exactly are they, and what steps can I take to have a secure setup?
If you expose your database to the world with a weak password for a database superuser, that will definitely lower your security in a substantial way. Hackers routinely patrol for such weak settings and exploit them, mostly for cryptocurrency mining but also to add you to botnets. In those cases they don't care about your database itself, it is just a way in to get at your CPU/network connection. They might also probe for valuable information appearing in your database, in which case they don't even need to be a superuser.
If you always run the latest bugfix version, use a strong password (like the output of pwgen 20 -sy -1), and use SSL, or if you correctly use some other method of authentication and encryption, then it will lower security by only a minimal amount.
If you personally control every password, ensure they are strong, and test that they are configured correctly to be required for logon (e.g. intentionally enter one wrong once to make sure you get rejected), I wouldn't worry too much about the port forwarding giving bad guys access to the machine. If you care about people being able to eavesdrop on the data being sent back and forth, then you also need SSL.
Encrypted tunnels of course are another solution which I am not addressing.
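The pwgen 20 -sy -1 suggestion above can be approximated with Python's standard library; this is a rough stand-in (the alphabet choice is mine, not pwgen's exact behaviour):

```python
# Generate a 20-character password from letters, digits and punctuation
# using a cryptographically secure RNG (the secrets module).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def strong_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Example:
# print(strong_password())
```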

If I have authorization enabled, why is it dangerous to open up MongoDB to all remote IPs?

MongoDB by default only listens to traffic coming in from 127.0.0.1 (and port 27017). I'm the only one who can send traffic to that IP address, so this prevents random internet people from messing with my database and stealing data. Cool. But then I enable authorization by creating an admin user:
mongo
use admin
db.createUser(
  {
    user: "ADMINUSERNAME",
    pwd: "ADMINPASSWORD",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
exit
and setting security.authorization to "enabled" in /etc/mongod.conf.
Now, if I set net.bindIp in /etc/mongod.conf to 127.0.0.1,<serveripaddress>, and open up port 27017 with ufw allow 27017, what method could attackers use (other than brute force user/password stuff) to break in to my database?
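For reference, the two settings described above live in MongoDB's YAML configuration file; a sketch of the relevant fragment of /etc/mongod.conf (the bindIp placeholder stays as written):

```yaml
# /etc/mongod.conf (fragment)
net:
  port: 27017
  bindIp: 127.0.0.1,<serveripaddress>   # listen on loopback and the public IP
security:
  authorization: enabled
```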
Is the recommendation to have an IP white-list just an extra layer of security, or is there something that I'm missing here? Advantages that I can think of:
If an exploit is discovered in MongoDB, you still have an extra layer of defence.
If you have bugs in your code, or mess up something (e.g. accidentally add a user with a weak password), you've got that extra layer.
Safe from brute force user/password attacks - but assuming my password is 50 random ASCII characters long, this wouldn't be a problem, right?
Bad actors can't DDoS/flood the MongoDB server directly - but this is easy to solve in other ways I think (fail2ban or something like that).
So points #1 and #2 seem to be the only real problems - and I can definitely see the dangers there, but am I missing anything else?
Note: I don't think this question is suited to the security stackexchange site because it's a fairly simple program-specific question.
Edit: I was originally saying "authentication" when I meant "authorization".
I thought long about whether to answer the question here or mark it as off-topic, but since "DevOps" seems to be ubiquitous these days, an easily accessible answer might prevent serious damage.
Disclaimer: there are books written about the general topic and a whole industry of engineers concerned with it. Here, only a brief overview and some hints can be given. Furthermore, some topics are heavily simplified. Do not rely solely on the information given here.
Assumption as per best practice: An attacker knows at least as much about your system (network, software, OS etc) as you do.
So, let us recap (some of) the general risks.
Unless you monitor failed login attempts and set up automatic action after a few failed attempts from a specific client (fail2ban being the simplest example), one could brute-force your accounts and passwords.
Furthermore, unless you use TLS, your connections are prone to eavesdropping. When successful, an attacker knows any credentials sent to the server.
Using exploits, for example of the SSH server, an attacker might hijack your machine. This risk increases with every exposed service. This attack is by far the most dangerous, as your data will be compromised and the attacker might use your system to do Very Bad Things™ in your name and, in some jurisdictions, even on your legal account.
Whitelisting IPs is surely helpful, but a determined attacker might get around this via IP spoofing. And you have to assume that the attacker knows valid source and destination IPs. Furthermore, when one of the whitelisted IPs is (partially) hijacked, an attack might well originate from there.
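The fail2ban idea mentioned above boils down to counting recent failures per client and banning after a threshold. A toy in-memory version (thresholds are arbitrary; real deployments should use fail2ban itself):

```python
# Count recent failed logins per client IP; ban the IP after too many
# failures inside a sliding window.
import time
from collections import defaultdict

BAN_AFTER = 5        # failures within the window that trigger a ban
BAN_SECONDS = 600    # ban duration and window size

_failures = defaultdict(list)   # ip -> timestamps of recent failures
_banned_until = {}              # ip -> unix time the ban expires

def record_failure(ip, now=None):
    now = time.time() if now is None else now
    # Drop failures older than the window, then add the new one.
    _failures[ip] = [t for t in _failures[ip] if now - t < BAN_SECONDS]
    _failures[ip].append(now)
    if len(_failures[ip]) >= BAN_AFTER:
        _banned_until[ip] = now + BAN_SECONDS

def is_banned(ip, now=None):
    now = time.time() if now is None else now
    return _banned_until.get(ip, 0) > now
```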
Now there are three general ways of dealing with those risks:
Risk mitigation, such as proper patch management and proper firewall setup.
Intrusion prevention, such as fail2ban
Intrusion detection
The latter deserves a bit of explanation. It is considered best practice to assume that sooner or later a successful attack will be mounted against the system in question. It is imperative that a system's administrator can detect the intrusion in order to be able to take countermeasures. However, as soon as an attacker gets root privileges, all countermeasures can be undone. Therefore, it is imperative that an attacker can only acquire non-privileged access during the initial phases of an attack, so you can detect his attempts at privilege escalation. Conversely, this means that you have to make absolutely sure that the attacker cannot acquire root privileges during the initial phases of an attack (which is one of the many reasons why no exposed service should ever run as root).
Assuming that you do not control the network, you have to resort to what is called host based intrusion detection or HIDS for short. HIDS in turn belong to two categories: behaviour based HIDS and state based HIDS. OSSEC belongs to the latter category and is (relatively) easy to set up and maintain while offering a broad range of features.
On a different level, exposing a database service to the public internet is almost always a sure sign that something is wrong with the system design. Unless you specifically want to provide database services, there is no reason I can think of for doing so.
If you only want to expose the data: write a REST-/SOAP-/whatever-API for it. If your database servers are only accessible via the public internet for some strange reason: Use VPN or stunnel.
Hint: depending on the application you create or the jurisdiction you live in, exposing the database service to the public internet might violate rules, regulations, or even the law.
tl;dr: if you have to ask, you probably should not do it.

Encrypted Password Accessible via API Call, Is this Secure?

I am working through some security concepts right now, and I was curious whether this method has been tried and/or whether it is safe, taking into consideration that brute forcing is still possible.
Take for example a Microsoft WebAPI template in Visual Studio, where you access an endpoint using a "GET".
The Endpoint would be accessible by any user/application
The string value that a user/application would get from this endpoint would be the password they need, but encrypted using a "KeyValue"
After a TLS transmission of this encrypted value, the user/application would decrypt the string using their "KeyValue"
Is this a secure practice?
Thanks for indulging me and look forward to your responses.
EDIT: Added Further Clarification with Image to Help Illustrate
Suppose the following 2 Scenarios:
Communication between Server and Client
a. Your Server serves the Client application with an encrypted password.
b. The Client can request any password.
c. The passwords are encrypted with a shared Key that is known by both server and client application
As James K Polk already pointed out:
A knowledgeable attacker can and will analyse your deployed application and at some point will find your hardcoded decryption key ("KeyValue"). What prevents him from requesting every password that is stored on the server?
Rule of thumb here would be: "Do not trust the client side."
Communication between Server and Server
a. You have two server applications. Application A is acting as some kind of database server. Application B is your back end for a user application of some kind.
b. Application A serves passwords to any requester, not only server B, with no authentication whatsoever.
c. Confidentiality is guaranteed through a shared and hard-coded key.
I think you are trying to overcomplicate things, hoping that no one is able to piece together the puzzle.
Someone with enough time and effort might be able to get information about your server build and/or obtain the code of application B, which again reduces to scenario 1. Another point is that there are enough bots out there randomly scanning IPs and checking responses. Application A might be found, and even though they do not have the shared key, they might piece together the purpose of application A and make this server a priority target.
Is this a safe practice?
No. It is never a good idea to give away possibly confidential information for free, encrypted or not. You wouldn't let people freely download your database, would you?
What you should do
All authentication/authorization (for example a user login, which is what I expect is your reason for exchanging the passwords) should be done on the server side, since you're in control of that environment.
Since you didn't tell us what you're actually trying to accomplish I'd recommend you read up on common attack vectors and find out about common ways to mitigate these.
A few suggestions from me:
Communication between 2 End-points -> SSL/TLS
Authorization / Authentication
Open Web Application Security Project and their Top 10 (2017)
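As a sketch of the "do it server-side" advice above: instead of shipping encrypted passwords to clients, the server stores a salted, slow hash and verifies submitted passwords itself. The iteration count here is illustrative; tune it for your hardware.

```python
# Server-side password storage and verification with PBKDF2 (stdlib only).
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed (rainbow) tables.
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking the digest via timing.
    return hmac.compare_digest(candidate, digest)
```

With this design there is nothing password-shaped to hand out over a GET endpoint at all, which removes the shared-key problem entirely.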

Simplest server to server authentication

I have a microservice on a new server/VPS that will only ever be called via REST by a monolith app to perform some heavy lifting, and it will then post the operation results back to the monolith a few minutes later.
How should I protect these two endpoints? I think my main goal, for now, is just preventing someone who has found the server's address from being able to do anything.
Almost every solution I google seems like overkill/premature optimization.
Is it sufficient that I generate a long random token once on each machine and then just pass it in the headers and check its presence on the other end?
Do I even need SSL for this? As far as I understand, we need SSL encryption for clients that are trying to send sensitive data via wireless or unsafe shared networks.
What are the chances (is it even possible?) that somebody will eavesdrop on two DigitalOcean VPSs sending data via HTTP? Has it ever happened before?
Q: Is it sufficient that I generate a long random token once on each machine and then just pass it in the headers and check its presence on the other end?
A: Generally microservices are behind a gatekeeper/gateway (nginx, haproxy), so you can expose only the endpoints you want. In your case I would recommend creating a private network between the two VPSs and exposing your microservice on that internal IP.
Q: Do I even need SSL for this? As far as I understand, we need SSL encryption for clients that are trying to send sensitive data via wireless or unsafe shared networks.
A: No. If you use internal networks and don't expose the service to the public, then there is no need for SSL/TLS. If you were doing something at Tier 3/4, then you would need encryption for cross-datacenter communication.
Q: What are the chances (is it even possible?) that somebody will eavesdrop on two DigitalOcean VPSs sending data via HTTP? Has it ever happened before?
A: There are bots that scan for open ports on servers/computers and try to penetrate them with exploits. In all cases, always use a firewall like UFW/firewalld.
So let's say you have two servers with these microservices using the internal private network from your favorite provider:
VPS1 (ip = 10.0.1.50)
FooBarService:1337
BarFooService:7331
VPS2 (ip = 10.0.1.51)
AnotherMicroService:9999
Now both VPSs can access each other's services by simply calling the IP + port.
Good luck.
There are a few simple solutions you could use to authenticate both servers back and forth. The one I would recommend if you want to keep it simple, as you say, is Basic Auth. As long as you're using it over an SSL/HTTPS connection, it suffices as a super simple way to authenticate each end.
You state that your main goal is to protect these endpoints, but then ask whether SSL/HTTPS is even needed. If these servers are reachable from the web in any way, then I would say yes, your endpoints need to be protected, and if you're transmitting sensitive data, then you need to be sending it through a secure stream.
If you believe the data you're sending is not very sensitive, and it's likely that no one who knows these two endpoints will even know how to properly manipulate your data by sending fake requests, then sure, you don't need any of this, but then you assume the risk and responsibility for if and when it is ever exposed. Basic Auth is super easy, and with LetsEncrypt it's incredibly easy to obtain an SSL certificate for free. It's good experience, so you may as well try it out, protect these endpoints, and ensure that they're safe.
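The shared-token idea from the question can be sketched in a few lines: generate one long random token, install it on both machines out of band (env var or config file, never in source control), and compare it in constant time on every request. The HTTP header plumbing is left out; names here are illustrative.

```python
# Generate and check a long random shared token for server-to-server auth.
import hmac
import secrets

def new_token():
    # ~256 bits of randomness, URL-safe so it fits in an HTTP header.
    return secrets.token_urlsafe(32)

def token_matches(expected, presented):
    # hmac.compare_digest avoids leaking the token through timing differences.
    return hmac.compare_digest(expected.encode(), presented.encode())
```

Note that without TLS (or a truly private network), the token travels in cleartext and an eavesdropper who captures one request owns it, which is why the answers above pair the token with HTTPS or an internal network.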

Server for iPhone; continuous connection

OK, let's say I want to create a connection between my iPhone app and my server (I'd like to try to use GoDaddy servers for this) to serve real-time location data to users.
I've seen plenty of good stuff online about using sockets, streams, ASIHttpmessage, CFHTTPMessageRef, etc., but what I'm unclear about is how to set up a server that continuously serves real-time data to users (I believe you'd need a stream of data going to the user for this, not just a single HTTP request and response). How does one take a host like GoDaddy and run server code on it? I know you can set up a server like this using the terminal, but I don't have access to a command line or the ability to run this "server program" from my web host, as far as I know. Is there software I can download via my cPanel for this? Do I need a virtual private server and different hosting from GoDaddy, maybe?
Does anyone know how I can do this, or whether my understanding of this whole thing is wrong? Please keep in mind I need this in real time (or close to it). Please, educate me. I really just need a better understanding of how this works.