Simplest server-to-server authentication - REST

I have a microservice on a new server/VPS that will only ever be called via REST by a monolith app to perform some heavy lifting, and that will then post the operation results back to the monolith a few minutes later.
How should I protect these two endpoints? I think my main goal, for now, is just to prevent someone who has found the server's address from being able to do anything.
Almost every solution I google seems like overkill/premature optimization.
Is it sufficient to generate a long random token once on each machine, pass it in a header, and check its presence on the other end?
Do I even need SSL for this? As far as I understand, SSL encryption is needed for clients that send sensitive data over wireless or unsafe shared networks.
What are the chances (is it even possible?) that somebody will eavesdrop on two DigitalOcean VPSs exchanging data over HTTP? Has that ever happened before?

Q: Is it sufficient to generate a long random token once on each machine, pass it in a header, and check its presence on the other end?
A: Generally, microservices sit behind a gatekeeper/gateway (nginx, HAProxy) so that you expose only the endpoints you want. In your case I would recommend creating a private network between the two VPSs and exposing your microservice only on that internal IP.
Q: Do I even need SSL for this? As far as I understand, SSL encryption is needed for clients that send sensitive data over wireless or unsafe shared networks.
A: No. If you use an internal network and don't expose the service to the public, there is no need for SSL/TLS. If you were running something across Tier 3/4 datacenters, you would need encryption for cross-datacenter communication.
Q: What are the chances (is it even possible?) that somebody will eavesdrop on two DigitalOcean VPSs exchanging data over HTTP? Has that ever happened before?
A: There are bots that scan servers/computers for open ports and try to penetrate them with exploits. In any case, always use a firewall such as UFW or firewalld.
So let's say you have two servers running these microservices on the internal private network from your favorite provider:
VPS1 (ip = 10.0.1.50)
FooBarService:1337
BarFooService:7331
VPS2 (ip = 10.0.1.51)
AnotherMicroService:9999
Now both VPSs can reach each other's services simply by calling the internal IP + port.
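As a minimal sketch of what that can look like in code (assuming Python with Flask; the header name, route, and environment variable are invented around the example above), the microservice binds only to the private IP and additionally requires a long random shared token:

```python
# foobar_service.py - minimal sketch: bind FooBarService to the private IP only
# and require a long random shared token. Names and routes are illustrative.
import hmac
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Generate once, e.g. `python -c "import secrets; print(secrets.token_urlsafe(48))"`,
# and deploy the same value to both machines as an environment variable.
SHARED_TOKEN = os.environ["FOOBAR_SHARED_TOKEN"]

@app.before_request
def check_token():
    supplied = request.headers.get("X-Auth-Token", "")
    # Constant-time comparison avoids leaking the token through timing.
    if not hmac.compare_digest(supplied, SHARED_TOKEN):
        abort(401)

@app.route("/heavy-lifting", methods=["POST"])
def heavy_lifting():
    # ... do the actual work here ...
    return jsonify({"status": "accepted"})

if __name__ == "__main__":
    # Listen only on the internal/private address, never on 0.0.0.0.
    app.run(host="10.0.1.50", port=1337)
```

Combined with a firewall rule that only lets the other VPS's private IP reach that port, this already covers the "stranger found the address" threat from the question.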
Good luck.

There are a few simple solutions you could use to authenticate the two servers to each other. The one I would recommend if you want to keep it simple, as you say, is Basic Auth. As long as you're using it over an SSL/HTTPS connection, it suffices as a very simple way to authenticate each end.
You state that your main goal is to protect these endpoints, but then ask whether SSL/HTTPS is even needed. If these servers are reachable from the web in any way, then I would say yes, your endpoints need to be protected, and if you're transmitting sensitive data, then you need to send it over a secure channel.
If you believe the data you're sending is not very sensitive, and it's unlikely that anyone who knows these two endpoints could meaningfully manipulate your data by sending fake requests, then sure, you don't need any of this - but then you assume the risk and responsibility for if and when it is ever exposed. Basic Auth is super easy, and with Let's Encrypt it's incredibly easy to obtain an SSL certificate for free. It's good experience, so you may as well try it out, protect these endpoints, and make sure they're safe.
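For completeness, here is a minimal sketch of what the calling side could look like (assuming Python with the requests library; the hostname, credentials, and payload are invented for illustration):

```python
# monolith_client.py - sketch of calling the microservice with Basic Auth
# over HTTPS. The hostname, credentials and payload are placeholders.
import os

import requests

MICROSERVICE_URL = "https://worker.example.com/heavy-lifting"
BASIC_AUTH = ("monolith", os.environ["WORKER_BASIC_AUTH_PASSWORD"])

def submit_job(payload: dict) -> dict:
    response = requests.post(
        MICROSERVICE_URL,
        json=payload,
        auth=BASIC_AUTH,   # sent as an Authorization: Basic ... header
        timeout=30,
        verify=True,       # the default; rejects invalid/self-signed certificates
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(submit_job({"job": "resize-images", "count": 500}))
```

The receiving end checks the same credentials (ideally with a constant-time comparison) and returns 401 otherwise; the Let's Encrypt certificate takes care of the transport encryption so the credentials are never visible on the wire.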

Related

Encrypted Password Accessible via API Call, Is this Secure?

I am working through some security concepts right now and I was curious whether this method has been tried and/or whether it is safe, taking into consideration that brute forcing is still possible.
Take, for example, a Microsoft WebAPI template in Visual Studio where you access an endpoint using a GET.
The endpoint would be accessible by any user/application.
The string value that a user/application would get from this endpoint would be the password they need, but encrypted using a "KeyValue".
After a TLS transmission of this encrypted value, the user/application would decrypt the string using their "KeyValue".
Is this a secure practice?
Thanks for indulging me; I look forward to your responses.
EDIT: Added further clarification below to help illustrate.
Suppose the following 2 Scenarios:
1. Communication between Server and Client
a. Your server serves the client application with an encrypted password.
b. The client can request any password.
c. The passwords are encrypted with a shared key that is known by both the server and the client application.
As James K Polk already pointed out:
A knowledgeable attacker can and will analyse your deployed application and at some point will find your hardcoded decryption key ("KeyValue"). What prevents him from requesting every password that is stored on the server?
Rule of thumb here would be: "Do not trust the client side."
2. Communication between Server and Server
a. You have 2 server applications. Application A is acting as some kind of database server. Application B is your back end for a user application of some kind.
b. Application A serves passwords to any requester, not only Server B, with no authentication whatsoever.
c. Confidentiality is guaranteed through a shared and hard-coded key.
I think you are trying to overcomplicate things, hoping that no one is able to piece together the puzzle.
Someone with enough time and effort might be able to get information about your server setup and/or obtain the code of Application B, which again reduces to scenario 1. Another point is that there are enough bots out there randomly scanning IPs and checking responses. Application A might be found, and even though the attackers do not have the shared key, they might be able to piece together the purpose of Application A and make this server a priority target.
Is this a safe practice?
No. It is never a good idea to give away possibly confidential information for free, encrypted or not. You wouldn't let people freely download your database, would you?
What you should do
All authentication/authorization (for example a user login, which I expect is your reason for exchanging the passwords) should be done on the server side, since you're in control of that environment.
Since you didn't tell us what you're actually trying to accomplish, I'd recommend you read up on common attack vectors and find out about common ways to mitigate them.
A few suggestions from me:
Communication between 2 End-points -> SSL/TLS
Authorization / Authentication
Open Web Application Security Project and their Top 10 (2017)
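To make the "do it on the server side" advice concrete, here is a minimal sketch (Python standard library only; the function names and values are invented for illustration) of verifying a login on the server instead of handing out encrypted passwords: the server stores only a salted hash and compares it against whatever the client submits.

```python
# server_side_login.py - sketch: verify credentials on the server instead of
# shipping (even encrypted) passwords to clients. Standard library only.
import hashlib
import hmac
import secrets
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plain password is never stored."""
    salt = salt if salt is not None else secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, stored_digest)

if __name__ == "__main__":
    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False
```

The client never receives any password material at all; it only learns whether its attempt succeeded.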

Restrict REST API to Mobile App Only - Proposed Method

I have read a lot about how you can't restrict your public REST API to only your mobile application, but I have an idea and I want opinions on it:
Variable App Key Method
Mobile App
Get the IP address of the current connection.
Use a secret algorithm to generate a hashed AppKey from the IP address.
Send the AppKey with each API request.
Server Side
Check the IP address of the incoming request.
Generate an AuthKey from that IP address using the same secret algorithm.
Compare the AuthKey with the AppKey; if they match, then you know that your application is talking to you, because only the application knows the secret algorithm.
When the IP address changes:
On the mobile app, regenerate the AppKey using the new IP address.
The server side will always generate the same key, because it depends on the IP address of the request.
The main advantage of this is that the AppKey will always change, which is better than hardcoding one application key inside the code, which can easily be stolen by reading request headers. And even if you stole the AppKey from a user, you would have to be on the same IP address from which that key was generated.
Any thoughts?
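To make the proposal concrete, here is roughly what both sides would compute (a Python sketch; the "secret algorithm" is modelled here as an HMAC with an embedded key, and the key and IP values are invented). Note where the key has to live - inside the app - which is exactly the weakness the answer below focuses on:

```python
# appkey_sketch.py - rough sketch of the proposed "variable AppKey" scheme.
# The APP_SECRET would have to be embedded in the mobile app binary, so anyone
# who decompiles the app can reproduce the "secret algorithm".
import hashlib
import hmac

APP_SECRET = b"embedded-in-the-app-binary"  # hypothetical, and not actually secret

def app_key_for_ip(ip_address: str) -> str:
    """The 'secret algorithm': derive the AppKey from the current IP address."""
    return hmac.new(APP_SECRET, ip_address.encode(), hashlib.sha256).hexdigest()

# Client side: compute the key from the current public IP and send it as a header.
client_key = app_key_for_ip("203.0.113.7")

# Server side: recompute from the request's source IP and compare.
server_key = app_key_for_ip("203.0.113.7")
assert hmac.compare_digest(client_key, server_key)
```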
The "secret" algorithm would have to be in the app... given to everybody. It is not secret at all. Security by obscurity is bad anyway, you should not have supposedly secret algorithms, because it's hopeless to keep them confidential. In this case the algorithm is trivially revealed.
Also refer to Schneier's law. :)
Edit:
So theoretically, this can never be secure, because any algorithm you put into the app needs to run on the client, and hence can be analysed and decompiled. But one can argue that it doesn't need to be theoretically secure; it is OK if it's "secure enough", i.e. the risk is low enough, because, for example, the effort needed to get to the algorithm is too great.
But then consider the two possible options:
This is a really cool API that everybody totally wants to build a client for, or there is at least one such party. In this case, no effort is too much, and the algorithm will be "broken", that is, it will be decompiled and implemented in a different client.
People don't care all that much about this API, it is not very important for anybody else to build a client. In this case they probably won't build a client anyway, and much simpler methods can be used to prevent other clients from popping up with a reasonable success.
But obviously, it's your choice; an approach similar to what you described does raise the bar somewhat - just don't expect it to be actually secure, because it will not be.

Microservice, amqp and service registry / discovery

I'm studying microservices architecture and I'm wondering about something.
I'm quite okay with using (back-end) service discovery to make requests to REST-based microservices: I need to know where the service is (or at least the front of the server cluster) to make requests, so it makes sense to be able to discover an ip:port in that case.
But I was wondering what the aim of using a service registry / discovery could be when dealing with AMQP only (without any HTTP calls possible)?
I mean, using AMQP is just like saying "I need that, and I expect somebody to answer me"; I don't have to know which server sent me back the response.
So what is the aim of using a service registry / discovery with an AMQP-based microservice?
Thanks for your help.
AMQP (any MOM, actually) provides a way for processes to communicate without having to mind actual IP addresses, communication security, routing, among other concerns. That does not necessarily mean that any process can trust, or even has any information about, the processes it communicates with.
Message queues do solve half of the process: how to reach the remote service. But they do not solve the other half: which service is the right one for me. In other words, which service:
has the resources I need
can be trusted (is hosted on a reliable server, has a satisfactory service implementation, is located in a country where the local laws are compatible with your requirements, etc)
charges what you want to pay (although people rarely discuss cost when it comes to microservices)
will be there during the whole time window needed to process your service -- keep in mind that servers are becoming more and more volatile. Some servers are actually containers that can last for a couple minutes.
Those two problems are largely independent. To solve the second kind of problem, you have resource brokers in Grid computing. There is also resource allocation in order to make sure that the last item above is correctly managed.
There are some alternative strategies such as multicasting the intention to use a service and waiting for replies with offers. You may have reverse auction in such a case, for instance.
In short, the rule of thumb is that if you do not have a priori knowledge of which service you are going to use (hardcoded or in some configuration file), your agent will have to negotiate, which includes dynamic service discovery.
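As a small illustration of the "I need that, and I expect somebody to answer me" pattern from the question, here is a minimal RPC-style request over AMQP (a sketch assuming Python with the pika client, a broker on localhost, and a worker consuming a queue named rpc_queue - all of these are assumptions, not from the original post). Nothing in it tells you which worker will answer or whether that worker satisfies the criteria listed above; that is exactly the gap a registry/broker fills.

```python
# amqp_rpc_client.py - sketch: fire a request into a well-known queue and wait
# for whichever worker answers. Assumes a broker on localhost and a worker
# consuming from "rpc_queue" (both hypothetical).
import uuid

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Exclusive, auto-named queue for the reply.
reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
correlation_id = str(uuid.uuid4())
response = None

def on_reply(ch, method, properties, body):
    global response
    if properties.correlation_id == correlation_id:
        response = body

channel.basic_consume(queue=reply_queue, on_message_callback=on_reply, auto_ack=True)

channel.basic_publish(
    exchange="",
    routing_key="rpc_queue",
    properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=correlation_id),
    body=b'{"task": "resize", "id": 42}',
)

while response is None:  # drive the I/O loop until some worker replies
    connection.process_data_events(time_limit=1)

print("got a reply from some worker:", response)
connection.close()
```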

Multiple service connections vs internal routing in MMO

The server consists of several services with which a user interacts: profiles, game logic, physics.
I heard that it's a bad practice to have multiple client connections to the same server.
I'm not sure whether I will use UDP or TCP.
The services are real-time; they should reply as fast as possible, so I don't want to include any additional rerouting unless there are really important reasons for it. So are there any reasons to reroute traffic through one external endpoint service to specific internal services in my case?
This seems to be multiple questions in one package. I will try to answer the ones I can identify as separate...
UDP vs TCP: You're saying "real-time", which usually means UDP is the right choice. However, that means having to deal with lost packets and possible re-ordering of packets. But using UDP leaves a couple of possible delay-decreasing tricks open.
Multiple connections from a single client to a single server: This consumes resources (end-points, as it were) on both the client (probably ignorable) and on the server (possibly a problem, possibly ignorable). The advantage of using separate connections for separate concerns (profiles, physics, ...) is that when you need to separate these onto separate servers (or server farms), you don't need to update the clients, they just need to connect to other end-points, using code that's already tested.
"Re-router" (or "load balancer") needed: Probably not going to be an issue initially. However, it will probably become an issue later. Depending on your overall design and server OS, using UDP may actually become an asset here. UDP packet arrives at the load balancer, dispatched to the right backend and that could then in theory send back a reply with the source IP of the load balancer.
An alternative would be to have a "session broker". The client makes an initial connection to a well-known endpoint and says "I am a client, tell me where my profile, physics, what-have-you servers are"; the broker considers the current load, possibly the location of the client, and anything else that makes sense, and the client then connects to the relevant backends on its own. The downside of this is that it's harder (not impossible, but harder) to silently migrate an ongoing session to a new backend; with a load balancer in the way, that can be done essentially transparently.
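A session broker of the kind described can be very small. Here is a rough sketch (Python standard library; the service names, addresses, and port are invented) of the "tell me where my servers are" handshake, with the load/location logic left as a placeholder:

```python
# session_broker.py - rough sketch of a session broker: the client asks one
# well-known endpoint where its backends are. Names and addresses are invented.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def pick_backends(client_ip: str) -> dict:
    # Placeholder for real logic: current backend load, client location, etc.
    return {
        "profiles": "10.0.1.50:4000",
        "game":     "10.0.1.51:4001",
        "physics":  "10.0.1.52:4002",
    }

class BrokerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(pick_backends(self.client_address[0])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The single well-known endpoint the game client connects to first.
    HTTPServer(("0.0.0.0", 8080), BrokerHandler).serve_forever()
```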

How secure is it to call "secret" URLs in an iOS app?

We want to use a web service in our app which obviously requires to call a URL. It's not HTTPS, just plain old HTTP, using NSURLConnection.
The problem is: this web service is VERY expensive, and every thousand calls costs us real money. The fear is that someone could figure out which URL we call and then misuse it, letting the costs explode. There is no way for us to track whether a call to that web service was legitimate.
We're calculating based on how many apps we sell, multiplied by an assumption of how often that app will be used per user in average. We have some good statistics on which we base our assumptions.
Are there known ways of figuring out which URL an app is calling on the Internet to retrieve information?
You could easily use a network sniffer while the phone is on WiFi to figure out this information. It sounds like it is actually critical that you use SSL with some sort of secure token in the URL.
If this is not an option, perhaps you can provide your own proxy service that uses SSL and security tokens. A proxy also grants the ability to throttle requests and block users known to be malicious. Throttling puts an upper bound on the expense each user can incur within a given time interval. Another benefit of a proxy is that it allows you to gather statistics and measure the costs incurred by different users, facilitating malicious-user detection and business planning. A proxy could also save you some money if the service behind it is stateless, by adding a cache that eliminates a lot of expensive calls.
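A very small sketch of that kind of fronting proxy (assuming Python with Flask and requests; the upstream URL, token, and limits are invented for illustration) with a token check, a crude in-memory throttle, and a cache:

```python
# fronting_proxy.py - sketch: your own endpoint in front of the expensive web
# service, adding a per-app token check, a crude rate limit, and a cache.
# The token value, limits, and upstream URL are placeholders.
import time

import requests
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

UPSTREAM = "https://expensive-provider.example.com/lookup"
VALID_TOKENS = {"token-issued-to-the-ios-app"}
MAX_CALLS_PER_HOUR = 100

_calls = {}   # token -> list of recent request timestamps
_cache = {}   # query -> (fetched_at, payload)

@app.route("/lookup")
def lookup():
    token = request.headers.get("X-App-Token", "")
    if token not in VALID_TOKENS:
        abort(401)

    # Crude in-memory throttle: at most MAX_CALLS_PER_HOUR per token.
    now = time.time()
    recent = [t for t in _calls.get(token, []) if now - t < 3600]
    if len(recent) >= MAX_CALLS_PER_HOUR:
        abort(429)
    _calls[token] = recent + [now]

    query = request.args.get("q", "")
    cached = _cache.get(query)
    if cached and now - cached[0] < 3600:   # serve a cached result for 1 hour
        return jsonify(cached[1])

    upstream = requests.get(UPSTREAM, params={"q": query}, timeout=10)
    upstream.raise_for_status()
    payload = upstream.json()
    _cache[query] = (now, payload)
    return jsonify(payload)
```

Served over TLS (for example with a Let's Encrypt certificate), the token itself is no longer sniffable either, which addresses the WiFi-sniffing concern above.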
If the Web service is not encrypted, it would be trivial to use a proxy to intercept the Web requests made by the phone. If the expensive Web service does not offer at least some form of basic authentication, I would seriously reconsider including its URL in a public app.
Using plain URLs is a sure way of letting script kiddies run you out of business. If there is no way for you to track if a call to the expensive web service was legitimate, set up your own web service that fronts the real web service to make sure that your own web service can verify the legitimacy of the call before forwarding the request to the real web service.
Yes, there's plenty of ways to do this. For one example, hook up the iPhone to a wifi network, in which the router has a transparent proxy. Examine the proxy's logs. You'll see all URLs. Depends how determined your users are, but this is rather easy.
Ignoring the fact that people who jailbreak their devices could possibly look at your application, I believe it is possible to examine traffic like any other device (laptop, tablet, etc.) if someone was sniffing traffic over a WiFi hotspot using applications such as WireShark. However, I doubt there would be much risk of this over a cellular 3G network.
Good question.
As many have said, yes, it's easy to figure out the URLs your app requests.
Note about HTTPS:
If you use HTTPS, you are in much better shape: over HTTPS the URL path and query string are encrypted, so people cannot see the query string parameters. For example, if your URL was https://somewebsite.com?uid=mylogin&pass=mypass, they definitely won't be able to see "uid=mylogin&pass=mypass"; at most they can see the IP address and the hostname (via DNS/SNI), not the full URL. (see https://serverfault.com/questions/186445/can-an-attacker-sniff-data-in-a-url-over-https)
Sidenote:
It might be safe to assume that Apple performs some sort of HTTP request diagnostics when they review your app -- which would make sense, because it's in their best interest to try to figure out what your app does from many angles.