I have a client application which is distributed to multiple clients. Sometimes this application acts as a server for some processes. I want the communication to be over SSL. I want to embed the server certificate inside the application and publish it to multiple clients. Is this design a good idea?
Is there any real-world product example that uses this design?
You could possibly add TLS communication this way, but as I understand your question, all application instances would receive the same certificate and could thus impersonate each other. If someone extracts the private key for the certificate from one app, they can decrypt all the communication for all processes and applications. The mere fact that the key is distributed to multiple environments outside your control could justify revocation of the certificate at any time.
The design is not a good idea if you want proper TLS communication with a publicly trusted root. The processes and applications will likely have to communicate using untrusted certificates, possibly self-signed ones.
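If you do end up adding TLS this way, a safer variant is to have each installation generate its own key pair and self-signed certificate on first run, so instances at least cannot impersonate one another. A minimal sketch with the Python cryptography package (the common name is a placeholder):

    # Each installation runs this once and keeps its own key/certificate,
    # so extracting one instance's key compromises only that instance.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "instance-1234")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)            # self-signed: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    print(cert.subject)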
I am aware of certificate chains when validating a client certificate. Still, this either puts a lot of burden on the server administrator or restricts clients, which can be unfavorable when implementing a public OPC UA server.
An implementation of the client certificate validator that accepts all certificates for message encryption/signing is certainly possible. But would such an implementation be considered insecure in that regard?
If yes, how?
Yes, it is considered insecure.
Aside from the (hopefully) obvious use case, where certificates ensure you know exactly what client applications are allowed to connect to the server, certificates are also the first line of defense against malicious clients and are part of a "defense in depth" strategy.
A malicious actor that can't establish a secure channel with the server doesn't have much to work with. A malicious actor that can establish a secure channel can, e.g., open many connections, create many sessions (without activating them, potentially causing a DoS as server resources are used up), attempt to guess credentials, re-use default credentials that an application may ship with, etc...
Further... in the face of the recent CISA alert re: ICS/SCADA devices + OPC UA servers, you'd be a bit of a fool to willingly ship a less secure product for the sake of convenience.
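By contrast, a validator that accepts only certificates on an explicit trust list is cheap to implement. A minimal sketch over a plain TLS socket using Python's standard ssl module (OPC UA stacks expose their own trust-list APIs; the file names and port here are assumptions):

    import socket
    import ssl

    # Require a client certificate and accept only those on an explicit
    # trust list, instead of a validator that waves everything through.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED                      # no cert, no channel
    ctx.load_verify_locations(cafile="trusted_clients.pem")  # the trust list

    with socket.create_server(("0.0.0.0", 4840)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls:
            conn, addr = tls.accept()    # handshake fails for untrusted certs
            print("accepted", addr, conn.getpeercert()["subject"])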
I am working through some security concepts right now, and I was curious whether this method has been tried and/or whether it is safe, taking into consideration that brute-forcing is still possible.
Take for example a Microsoft WebAPI template in Visual Studio where you access an endpoint using a GET request.
1. The endpoint would be accessible by any user/application.
2. The string value that a user/application would get from this endpoint would be the password they need, but encrypted using a "KeyValue".
3. After a TLS transmission of this encrypted value, the user/application would decrypt the string using their "KeyValue".
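In code, the idea would look roughly like this (a sketch only; Fernet stands in for whichever cipher the "KeyValue" would be used with, and the values are made up):

    from cryptography.fernet import Fernet

    # The shared "KeyValue" baked into both the endpoint and the caller.
    key_value = Fernet.generate_key()

    # Server side: what the GET endpoint would hand out.
    encrypted_password = Fernet(key_value).encrypt(b"s3cret-password")

    # Client side: after the TLS transmission, decrypt with the KeyValue.
    print(Fernet(key_value).decrypt(encrypted_password))  # b's3cret-password'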
Is this a secure practice?
Thanks for indulging me and look forward to your responses.
EDIT: Added Further Clarification with Image to Help Illustrate
Suppose the following two scenarios:
1. Communication between Server and Client
a. Your Server serves the Client application with an encrypted password.
b. The Client can request any password.
c. The passwords are encrypted with a shared Key that is known by both server and client application
As James K Polk already pointed out:
A knowledgeable attacker can and will analyse your deployed application and at some point will find your hardcoded decryption key ("KeyValue"). What prevents him from requesting every password that is stored on the server?
Rule of thumb here would be: "Do not trust the client side."
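To make that concrete: once the hardcoded "KeyValue" has been pulled out of the shipped binary, harvesting every password is a trivial loop (a sketch; the endpoint and user names are hypothetical):

    import urllib.request
    from cryptography.fernet import Fernet

    # Hypothetical: the key recovered from the shipped client binary.
    extracted_key = b"<KeyValue pulled out of the binary>"

    for user in ["alice", "bob", "carol"]:    # enumerate known or guessed users
        blob = urllib.request.urlopen(
            f"https://api.example.com/password?user={user}").read()
        print(user, Fernet(extracted_key).decrypt(blob))    # plaintext password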
2. Communication between Server and Server
a. You have 2 server applications. Application A is acting as some kind of database server. Application B is your Back-End for a user application of some kind.
b. Application A serves passwords to any requester, not only Server B, with no authentication whatsoever.
c. Confidentiality is guaranteed through a shared and hard-coded Key.
I think you are trying to overcomplicate things hoping that no one is able to piece together the puzzle.
Someone with enough time and effort might be able to get information about your server configuration and/or obtain the code of Application B, which again reduces to scenario 1. Another point is that there are enough bots out there randomly scanning IPs to check responses. Application A might be found, and even though the attackers do not have the shared key, they might be able to piece together the purpose of Application A and make this server a priority target.
Is this a safe practice?
No. It is never a good idea to give away possibly confidential information for free. Encrypted or not. You wouldn't let people freely download your database would you?
What you should do
All authentication/authorization (for example a user login, which is what I expect is your reason for exchanging the passwords) should be done on the server side, since you're in control of that environment.
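For example, a login can be verified entirely server-side against a salted hash, so no password, encrypted or not, ever travels to the client. A minimal sketch with Python's standard library (for production, prefer a vetted scheme such as bcrypt or Argon2):

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        """Derive a salted hash to store instead of the password itself."""
        salt = salt if salt is not None else os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password, salt, stored):
        """Server-side login check; constant-time compare avoids timing leaks."""
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, stored)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess", salt, stored))                         # False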
Since you didn't tell us what you're actually trying to accomplish I'd recommend you read up on common attack vectors and find out about common ways to mitigate these.
A few suggestions from me:
Communication between 2 End-points -> SSL/TLS
Authorization / Authentication
The Open Web Application Security Project (OWASP) and their Top 10 (2017)
I have a microservice on a new server/VPS that will only ever be called via REST by a monolith app to perform some heavy lifting and then post the operation results back to the monolith a few minutes later.
How should I protect these two endpoints? I think my main goal, for now, is just preventing someone who found the server's address from being able to do anything.
Almost every solution I google seems like overkill/premature optimization.
Is it sufficient that I generate a long random token once on each machine and then just pass it in the headers and check its presence on the other end?
Do I even need SSL for this? As far as I understand, we need SSL encryption for clients that are trying to send sensitive data over wireless or unsafe shared networks.
What are the chances (is it even possible?) that somebody is going to eavesdrop between two DigitalOcean VPSs sending data via HTTP? Has it ever happened before?
Q: Is it sufficient that I generate a long random token once on each machine and then just pass it in the headers and check its presence on the other end?
A: Generally microservices sit behind a gatekeeper/gateway (nginx, HAProxy) so you can expose only the endpoints you want. In your case I would recommend creating a private network between the two VPSs and exposing your microservice on that internal IP.
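If you additionally want the shared-token check from your question, it is a few lines with the standard library (a sketch; the header name, port, and bind address are arbitrary):

    import hmac
    import secrets
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Generate once, then configure the same value on both machines.
    SHARED_TOKEN = secrets.token_urlsafe(32)

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            sent = self.headers.get("X-Service-Token", "")
            if not hmac.compare_digest(sent, SHARED_TOKEN):  # constant-time
                self.send_error(401)
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    HTTPServer(("10.0.1.50", 8080), Handler).serve_forever()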
Q: Do I even need SSL for this? As far as I understand, we need SSL encryption for clients that are trying to send sensitive data over wireless or unsafe shared networks.
A: No. If you use internal networks and don't expose the service to the public, then there is no need for SSL/TLS. If you were running something across Tier 3/4 datacenters, you would need encryption for cross-datacenter communication.
Q: What are the chances (is it even possible?) that somebody is going to eavesdrop between two DigitalOcean VPSs sending data via HTTP? Has it ever happened before?
A: There are bots that scan for open ports on servers/computers and try to penetrate them with exploits. In all cases always use a firewall like UFW/firewalld.
So let's say you have two servers with these microservices using the internal private network from your favorite provider:
VPS1 (ip = 10.0.1.50)
    FooBarService:1337
    BarFooService:7331
VPS2 (ip = 10.0.1.51)
    AnotherMicroService:9999
Now both VPSs can access each other's services by simply calling the IP + port.
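For example, from VPS2 a call to FooBarService is just (a sketch in Python; the path is made up):

    import urllib.request

    # From VPS2: call FooBarService on VPS1 over the private network.
    resp = urllib.request.urlopen("http://10.0.1.50:1337/do-work", timeout=5)
    print(resp.status, resp.read())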
Good luck.
There are a few simple solutions you could use to authenticate both servers back and forth. The one I would recommend if you want to keep it simple, as you say, is Basic Auth. As long as you're using it over an SSL/HTTPS connection, it suffices as a super simple way to authenticate each end.
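The check on the receiving end is tiny. A sketch in Python (the credentials are placeholders; real ones should come from configuration):

    import base64
    import hmac

    EXPECTED = base64.b64encode(b"service-user:long-random-password").decode()

    def is_authorized(auth_header):
        """Validate an 'Authorization: Basic <base64>' header value."""
        scheme, _, credentials = auth_header.partition(" ")
        return scheme == "Basic" and hmac.compare_digest(credentials, EXPECTED)

    print(is_authorized("Basic " + EXPECTED))        # True
    print(is_authorized("Basic d3Jvbmc6Y3JlZHM="))   # False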
You state it is your main goal to protect these endpoints, but then ask if SSL/HTTPS is even needed. If these servers are reachable from the web in any way, then I would say yes, your endpoints need to be protected, and if you're transmitting sensitive data, then you need to be sending it through a secure stream.
If you believe the data you're sending is not very sensitive, and it is unlikely that anyone who knows these two endpoints will even know how to properly manipulate your data by sending fake requests, then sure, you don't need any of this, but then you assume the risk and responsibility for if and when it ever is exposed. Basic Auth is super easy, and with Let's Encrypt it's incredibly easy to obtain an SSL certificate for free. It's good experience, so you may as well try it out, protect these endpoints, and ensure that they're safe.
For implementing a RESTful API via HTTP I need a way to secure communication (encryption of communication, prevention of man-in-the-middle and replay attacks).
The API is supposed to be used for communication between software PC clients (Windows, Linux), smart phones, hardware clients on the one hand and an embedded device (the server) on the other hand.
If I use HTTPS with one self-signed certificate that clients have embedded/stored somewhere (all embedded devices ever manufactured can use the same one, I think), I get all the benefits I want.
Now I have got one issue:
As the embedded devices are always accessed by IP address, the client side host check for the certificate is going to fail. Whatever is written in the certificate is NOT going to be the host that answers.
E.g. with libcurl I have to disable the check via
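    /* Hostname/IP check off; chain verification (CURLOPT_SSL_VERIFYPEER) stays on. */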
    curl_easy_setopt(curlEasyHandle, CURLOPT_SSL_VERIFYHOST, 0L);
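The same opt-out exists in other stacks, e.g. Python's standard ssl module (a sketch, assuming the device certificate ships with the client as embedded-device-cert.pem; the IP is a placeholder):

    import ssl
    import urllib.request

    # Trust exactly the certificate shipped with the client...
    ctx = ssl.create_default_context(cafile="embedded-device-cert.pem")
    ctx.check_hostname = False   # ...but skip the hostname/IP match

    resp = urllib.request.urlopen("https://192.0.2.10/api", context=ctx)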
This doesn't hurt too badly for self-written clients, but clients are also supposed to be written by 3rd-party developers. What I find awkward is that 3rd parties have to know that they have to disable the host check (and then actually do it...).
Also, I am not sure whether disabling this check is even possible with whatever HTTP/TLS lib 3rd parties are using.
A certificate issued for an IP address (if that is even possible?!) is not an option, as the IP address can be changed by the user of the device.
Is there a way for a certificate to be "host neutral"? Or is a part of my approach incorrect and I should do something differently? Or is there nothing that can be done about it and everybody implements it like this?
I'm writing an iPhone application that needs to send small bits of information (two strings of under 128 characters each, at a time, and this doesn't happen too frequently) to a server when users interact with it. I would like this information to remain confidential, so I'm thinking of some sort of encryption or secure connection would be necessary.
My question is about the server side of things. The server the iPhone app has to communicate with is written in Django and runs on lighttpd. What is the most appropriate way (or a standard way) of doing this? I was thinking HTTPS; I know that on the iPhone I can use ASIHTTPRequest to do a POST request, but I don't know what it requires on the server side. Do I need a certificate? How does the data get encrypted/secured? Are there any Django modules to help with this? Do I have to do something to configure lighttpd?
Would something like XML-RPC or JSON-RPC be simpler? Is it possible to secure such communication? At what level would that occur?
Any help would be much appreciated.
XML-RPC and JSON-RPC are only means to encapsulate your data in a form that is easy to transport. Your iPhone app can transform the Objective-C data using one of those formats, and your Django server app can transform the data back into Python objects.
Neither of these have anything to do with security.
Creating an HTTPS (SSL) connection encrypts all communication between the client (iPhone) and the server (Django). You will need to get a certificate for the server side. This indicates to the client that the server is who it claims to be. Your next line of research down this path should be about how to configure lighttpd to handle SSL traffic. Once lighttpd negotiates the SSL communication, your Django app will operate as it does for non-secured traffic.
This is your best choice.
If, for whatever reason, you don't want to use SSL, then you could find strong encryption libraries for both ends of the communication. The iPhone app could encrypt the data, send it over an HTTP connection and the Django app could decrypt it. For example, the pycrypto Python library implements strong encryption ciphers such as AES and Blowfish. You might be able to find an implementation of one of these ciphers written in Objective C.
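For instance, AES in CBC mode via pycrypto leaves key distribution, IV handling, padding, and message authentication entirely up to you (a rough sketch):

    import os
    from Crypto.Cipher import AES   # pycrypto (or its successor, pycryptodome)

    key = os.urandom(16)   # must somehow be shared safely with the iPhone app
    iv = os.urandom(16)    # must also travel alongside each message

    def pad(data):         # CBC needs 16-byte blocks: PKCS#7-style padding
        n = 16 - len(data) % 16
        return data + bytes([n]) * n

    ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(pad(b"two short strings"))
    plaintext = AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext)
    print(plaintext[:-plaintext[-1]])   # strip the padding again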
Did you notice that this is getting increasingly complex?
Go with SSL. It's the way security is done for HTTP-based communication.
Hmm it looks like this might be what you're after, have you seen it?
Setting up SSL for Lighttpd/Django
If I read that right, that setup allows your server to answer both HTTPS and HTTP requests(?)
Then if your whole app isn't going to be HTTPS, there's this SSL Middleware to help configure some paths as SSL and some not.
If you use HTTPS (SSL) on the server side, it shouldn't matter whether you use XML-RPC or JSON-RPC. All the data you transfer will be encrypted and secure.
I can only speak from our Rails application and nginx. I bought an SSL certificate from GoDaddy (very cheap), and nginx is set up to encrypt the content on the fly when it sends it out (Rails is not doing this itself). On the iPhone, ASIHTTPRequest will be responsible for decrypting the data. All other layers shouldn't be concerned about the encryption; you can send anything you want.
You might also be able to use a self-signed certificate. We decided on GoDaddy because we also use the SSL certificate for regular browsers, and those show a warning message to the user when they encounter a self-signed certificate, which obviously scares people away.