Make server inaccessible to anything but REST requests

Is there a way to make a machine that is connected to the internet unreachable by any means except REST requests? For background to my question:
I have a really nice REST server/client project with encrypted communication, in which the weak spot of the encryption scheme is the code itself. Without seeing the code it is effectively impossible to break the encryption, but with access to the code it would probably be possible to work out how to decrypt intercepted communication.
Because of this, I would like the server code, at least, to be as well protected as possible. Is there a way to make an internet-connected machine unreachable by anything except REST requests, to remove any hope of an attacker gaining access to the code or the data that the server is serving up? I'm open to the idea of a bare-metal solution if that would make for the safest system; the server's only job will be to serve information requested via REST and nothing else.
(I'm aware this still leaves the client code as a weak spot, but I'm taking this one step at a time. Protecting the client code is presumably a much bigger problem, and probably impossible, as I intend to eventually distribute the client executables.)

Related

Low-connectivity protocols or technologies

I'm trying to improve the reliability of a server-app-website architecture that another programmer developed.
At the moment, Android smartphones open a TCP connection to a server component to exchange data. The server takes the data, writes it into a database, and another user can then view it through a website. The problem is that the smartphones are very often in locations where connectivity is really bad. As a consequence, the smartphones lose the TCP connection and find it hard to reconnect. My question is whether there are any protocols lightweight or tolerant enough of bad connectivity that the data exchange could work better or more reliably.
For example, I was thinking about replacing the raw TCP interface with a RESTful API, but I don't really know how well REST works in this scenario, as I don't have any experience in this area.
Maybe useful to know for answering this question: the server component is written in C#, and the connecting components are Android smartphones.
Please understand that I'm not adding any code to this question because, in my opinion, it is a purely theoretical question.
Thank you in advance!
REST runs over HTTP, which runs over TCP, so it would have the same connectivity issues.
Moving up the stack to the application layer, you could think in terms of 'interference'. I quite often have to use technical equipment in remote areas with limited reception, and it reminds me of trying to communicate in a storm. If you're trying to get someone to do something in a storm where they can hardly hear you and the words get blown away (dropped signal), you don't read them the manual on how to fix something; you shout key words such as 'handle', 'pull', 'pull', 'PULL', 'ok'. The information reaches them in small bursts that you can repeat (pull, what? pull, eh? PULL! oh, righto!).
Can you redesign the communication between the Android app and the server so that the server recognises key 'words' with corresponding data and builds up the request over a period of time? If each burst is idempotent, receiving it again does not alter a request that has already been recorded (pull, PULL!), and over time the Android app can send and receive smaller chunks of the request. If the signal stays up, just keep sending. If it goes down, note which parts of the request haven't been sent and retry them when the signal comes back.
So you're sending the request jigsaw-style, but the server knows how to reassemble the pieces in the right order. A STOP word at the end tells the server the request is complete and it can go to work on it. Until that word arrives, the server stores the incomplete request, or discards it if no more data comes in.
If the server responds to the first request chunk with an id, the app can use that id to fetch the response, retrying until the full response comes back, at which point the server can remove the response from its jigsaw cache. A fair amount of work, though.
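For illustration, here is a minimal client-side sketch of that jigsaw scheme in Python. The endpoint URLs, the client-generated request id, and the use of the requests library are all assumptions for the sake of the example, not part of the original design; the real app is on Android and the server is C#, so treat this purely as a shape of the idea.

```python
import time
import requests  # third-party: pip install requests

API = "https://example.com/api"   # hypothetical base URL
REQUEST_ID = "req-42"             # client-generated id, so retries stay idempotent

def send_chunk(seq, payload, retries=10, timeout=5):
    """PUT one small chunk and retry until the server acknowledges it."""
    url = f"{API}/requests/{REQUEST_ID}/chunks/{seq}"
    for attempt in range(retries):
        try:
            r = requests.put(url, data=payload, timeout=timeout)
            if r.status_code in (200, 201, 204):
                return True       # stored now or already stored -- same outcome
        except requests.RequestException:
            pass                  # signal dropped mid-transfer; try again
        time.sleep(min(2 ** attempt, 30))   # back off before the next attempt
    return False

chunks = [b"pull", b"handle", b"ok"]        # the request, split into small bursts
for seq, chunk in enumerate(chunks):
    if not send_chunk(seq, chunk):
        raise RuntimeError(f"chunk {seq} never got through")

# The STOP word: tell the server the jigsaw is complete and it can process it.
requests.post(f"{API}/requests/{REQUEST_ID}/complete", timeout=5)
```

Because each chunk is addressed by its sequence number, retransmitting one that already arrived changes nothing on the server, which is what makes the retry loop safe.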

Simplest server-to-server authentication

I have a microservice on a new server/VPS that will only ever be called via REST by a monolith app to perform some heavy lifting, and it then posts the results back to the monolith a few minutes later.
How should I protect these two endpoints? I think my main goal, for now, is just to prevent someone who has found the server's address from being able to do anything.
Almost every solution I google seems like overkill/premature optimization.
Is it sufficient to generate a long random token once on each machine, pass it in a header, and check for its presence on the other end?
Do I even need SSL for this? As far as I understand, SSL encryption is needed for clients sending sensitive data over wireless or unsafe shared networks.
What are the chances (is it even possible?) that somebody would eavesdrop on two DigitalOcean VPSs exchanging data over HTTP? Has that ever happened before?
Q: Is it sufficient to generate a long random token once on each machine, pass it in a header, and check for its presence on the other end?
A: Generally, microservices sit behind a gatekeeper/gateway (nginx, HAProxy) so that you expose only the endpoints you want. In your case I would recommend creating a private network between the two VPSs and exposing your microservice on that internal IP.
Q: Do I even need SSL for this? As far as I understand, SSL encryption is needed for clients sending sensitive data over wireless or unsafe shared networks.
A: No. If you use internal networks and don't expose the service to the public, there is no need for SSL/TLS. If you were doing something at Tier 3/4, you would need encryption for cross-datacenter communication.
Q: What are the chances (is it even possible?) that somebody would eavesdrop on two DigitalOcean VPSs exchanging data over HTTP? Has that ever happened before?
A: There are bots that scan for open ports on servers/computers and try to penetrate them with exploits. In any case, always use a firewall such as UFW or firewalld.
So let's say you have two servers with these microservices using the internal private network from your favorite provider:
VPS1 (ip = 10.0.1.50)
FooBarService:1337
BarFooService:7331
VPS2 (ip = 10.0.1.51)
AnotherMicroService:9999
Now both VPSs can access each other's services simply by calling the internal IP + port.
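As a rough sketch of the key point (in Python's standard library rather than whatever your services are actually written in, and reusing the example addresses above): bind the service to the private-network address instead of 0.0.0.0, so it simply is not listening on the public internet.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Bind to the VPS's private-network address from the example above (10.0.1.50),
# not 0.0.0.0, so the service is unreachable from the public internet.
PRIVATE_IP = "10.0.1.50"
PORT = 1337  # FooBarService

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

if __name__ == "__main__":
    HTTPServer((PRIVATE_IP, PORT), Handler).serve_forever()
```

VPS2 would then call http://10.0.1.50:1337/ over the private network, while a firewall on the public interface keeps the port closed to everyone else.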
Good luck.
There are a few simple solutions you could use to authenticate the two servers to each other. The one I would recommend, if you want to keep it simple as you say, is Basic Auth. As long as you use it over an SSL/HTTPS connection, it is a perfectly adequate, super simple way to authenticate each end.
You state that your main goal is to protect these endpoints, but then ask whether SSL/HTTPS is even needed. If these servers are reachable from the web in any way, then yes, your endpoints need to be protected, and if you're transmitting sensitive data, it needs to travel over a secure stream.
If you believe the data you're sending is not very sensitive, and it is unlikely that anyone who finds these two endpoints would know how to manipulate your data by sending fake requests, then sure, you don't need any of this; but then you assume the risk and responsibility for if and when it is exposed. Basic Auth is super easy, and with Let's Encrypt it's incredibly easy to obtain an SSL certificate for free. It's good experience, so you may as well try it out, protect these endpoints, and make sure they're safe.
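As a sketch of the calling side (hypothetical URL and credentials, shown in Python with the requests library rather than whatever your monolith is written in), Basic Auth over HTTPS is literally one extra argument:

```python
import requests  # pip install requests

# Hypothetical endpoint and credentials -- in practice load these from
# environment variables or a secrets store, never hard-code them.
MONOLITH_CALLBACK = "https://monolith.example.com/api/results"
USER, PASSWORD = "microservice", "long-random-secret"

def post_results(payload: dict) -> bool:
    # Basic Auth is only acceptable because the URL is https:// --
    # over plain http the credentials travel base64-encoded, i.e. readable.
    r = requests.post(MONOLITH_CALLBACK, json=payload,
                      auth=(USER, PASSWORD), timeout=10)
    return r.status_code == 200
```

The receiving end just checks the Authorization header against the same credentials and rejects anything else with a 401.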

Socket has timed out

We are trying to access our web app (through the web server, IHS). When we use HTTP we are fine; HTTPS also works and submits the requests, but we repeatedly observe a socket timeout exception after some requests have been processed, after which request processing resumes again. We have previously tested the application under a fairly large concurrent load over HTTPS, so in this case we are not sure why we are getting this error.
Oh boy, this can be due to thousands of different things. I would suggest a layered analysis approach, starting with the web server logs: make sure the requests are reaching your web server and find out what is happening to the ones that time out. You could be facing anything from network latency to a resource-bound host, contention, or who knows what; it all depends on your application's design.
Start by checking the network layer. If you provide some more information, maybe I can help you further.
Also check the HTTP and HTTPS timeout configurations on your web server.

Faulty-connection Proof File Transfer Protocol?

I frequently do website development live over an FTP connection. That is to say, I use a code editor with a built in FTP window and push/pull files to work on them, upload the changes, etc. This is mostly because it's unreasonable to try to create a local development server, and I use too many computers for that to be practical anyway without a lot of work.
My trouble is that the internet connection at our home is not exactly... stable. It's fast and mostly reliable, but it has a tendency to glitch far more often than any other connection I've worked on (it's wireless DSL), and as a result dropped connections are far too frequent. (It's about as reliable as AT&T is with phone calls in that regard.) When working with FTP, I find that if the connection drops mid-transfer, it can be difficult to recover. First of all, when the connection is dropped, a blank file gets saved to the server (how is that helpful?), completely breaking the page I was working on, and the icing on the cake is that, depending on the timing, vsftpd gets itself stuck in a timeout and I have to SSH in and restart it before I can access that file again.
This process has been beneficial only in that it's taught me some client-side data-protection techniques to keep the server from eating my recent changes if a dropped connection hangs or crashes my client. Overall, though, it's a pretty broken situation, and I'm surprised I get any work done at all.
Long, long context, I know, but my question is this: is there a file transfer protocol that is designed to handle 'flaky' connections like mine? I'd imagine that, for example, trying to transfer files over a tethered 3G connection would yield the same results, especially while travelling. It seems like FTP and SFTP both rely on a persistent connection, and can deal with dropped packets but not with losing the entire socket through a reconnect. It seems to me that a file transfer daemon should be able to store the state of the user interacting with it, detect failed transfers, and be ready to 'resume' if the user reconnects within a reasonable amount of time.
Thanks if anyone knows anything. I'm seriously considering trying to write such a protocol myself (I've had a lot of success coding the Ajax on my page to handle faulty connections, for example), but I don't want to dive in if there's already a solution available.
You want rsync. If the connection drops, you just repeat the command and it picks up right where it left off. Built-in error checking and everything. It works over SSH, and a Windows client exists. Somebody has probably written a GUI front end.
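If you want to script the "just repeat the command" part, a rough sketch (hypothetical local path and host) is to run rsync in a retry loop; the --partial flag keeps half-transferred files around so the repeated run resumes rather than starting over:

```python
import subprocess
import time

# Hypothetical source directory and destination host.
CMD = [
    "rsync", "-avz", "--partial",   # --partial keeps partly-sent files for resume
    "-e", "ssh",                    # tunnel over SSH
    "./site/", "user@example.com:/var/www/site/",
]

while True:
    if subprocess.run(CMD).returncode == 0:
        break                       # transfer finished cleanly
    print("connection dropped, retrying in 10 seconds...")
    time.sleep(10)
```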
BitTorrent works well with flaky connections. I hear that it is fast, too!

What is a RESTful way of monitoring a REST resource for changes?

If there is a REST resource that I want to monitor for changes or modifications from other clients, what is the best (and most RESTful) way of doing so?
One idea I've had is to provide specific resources that keep the connection open, rather than returning immediately, if the resource does not (yet) exist. For example, given the resource:
/game/17/playerToMove
a "GET" on this resource might tell me that it's my opponent's turn to move. Rather than continually polling this resource to find out when it's my turn to move, I might note the move number (say 5) and attempt to retrieve the next move:
/game/17/move/5
In a "normal" REST model, it seems a GET request for this URL would return a 404 (Not Found) error. However, if the server instead kept the connection open until my opponent played his move, i.e.:
PUT /game/17/move/5
then the server could return the contents that my opponent PUT into that resource. This would both provide me with the data I need and act as a notification that my opponent has moved, without requiring polling.
Is this sort of scheme RESTful? Or does it violate some sort of REST principle?
Your proposed solution sounds like long polling, which could work really well.
You would request /game/17/move/5 and the server would not send any data until move 5 has been completed. If the connection drops, or you get a timeout, you simply reconnect until you get a valid response.
The benefit of this is that it's very quick: as soon as the server has new data, the client gets it. It's also resilient to dropped connections, and it works even if the client is disconnected for a while (you could request /game/17/move/5 an hour after the move was made and get the data instantly, then move on to move/6, and so on).
The issue with long polling is that each "poll" ties up a server thread, which quickly breaks servers like Apache (it runs out of worker threads and can't accept other requests). You need a specialised web server to serve the long-polling requests. The Python module Twisted (an "event-driven networking engine") is great for this, but it's more work than regular polling.
In answer to your comment about Jetty/Tomcat: I don't have any experience with Java, but it seems they both use a pool-of-worker-threads system similar to Apache's, so they will have the same problem. I did find this post, which seems to address exactly this problem (for Tomcat).
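For what it's worth, the client side of the long poll is just a reconnect loop. This is a minimal Python sketch using the requests library and the URL scheme from the question; the host name is made up.

```python
import time
import requests  # pip install requests

def wait_for_move(game_id: int, move_no: int):
    url = f"https://example.com/game/{game_id}/move/{move_no}"  # host is made up
    while True:
        try:
            # The server holds this request open until the move actually exists.
            r = requests.get(url, timeout=60)
            if r.status_code == 200:
                return r.json()              # opponent's move has been PUT
            time.sleep(2)                    # server chose to 404 instead; poll politely
        except (requests.exceptions.Timeout,
                requests.exceptions.ConnectionError):
            pass                             # nothing yet, or dropped line -- reconnect

move = wait_for_move(17, 5)
```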
I'd suggest a 404 if your intended client is a web browser, as keeping the connection open can actively block other requests from that client to the same domain. It's up to the client how often to poll.
2021 Edit: The answer above was written in 2009, for context.
Today, I would suggest using a WebSocket interface with push notifications.
Alternatively, following the suggestion above, I might hold the connection for 500-1000 ms and check twice on the server before returning the 404, to reduce the overhead of the client opening multiple connections.
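A minimal sketch of that "check twice, then 404" idea, assuming a Flask server and an in-memory dict standing in for the real move store (both assumptions, not anything from the question):

```python
import time
from flask import Flask, jsonify, request, abort  # pip install flask

app = Flask(__name__)
moves = {}   # (game_id, move_no) -> move data; stand-in for the real store

@app.route("/game/<int:game_id>/move/<int:move_no>", methods=["PUT"])
def put_move(game_id, move_no):
    moves[(game_id, move_no)] = request.get_json()   # opponent submits the move
    return "", 204

@app.route("/game/<int:game_id>/move/<int:move_no>", methods=["GET"])
def get_move(game_id, move_no):
    # Check, wait roughly half a second, check once more, then give up with a 404.
    for attempt in range(2):
        if (game_id, move_no) in moves:
            return jsonify(moves[(game_id, move_no)])
        if attempt == 0:
            time.sleep(0.5)
    abort(404)   # the client decides how soon to poll again
```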
I found this article proposing a new HTTP header, "When-Modified-After", that essentially does the same thing: the server waits and keeps the connection open until the resource has been modified after the supplied time.
I prefer a version-based approach over a timestamp-based approach, since it's less prone to race conditions and gives you a little more information about what it is you're retrieving. Any thoughts on this approach?
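To make the contrast concrete, here is roughly what the two styles look like from the client (Python with the requests library; the host is made up, and "When-Modified-After" is the header proposed in that article, not a standard one):

```python
import requests  # pip install requests

BASE = "https://example.com"     # hypothetical host

# Timestamp-based: the proposed "When-Modified-After" header asks the server
# to hold the connection until the resource changes after the given time.
r1 = requests.get(f"{BASE}/game/17/playerToMove",
                  headers={"When-Modified-After": "Sat, 29 Oct 2022 19:43:31 GMT"},
                  timeout=60)

# Version-based: the move number itself is the version, so there is no clock
# to race against -- you simply ask for the next move you have not yet seen.
r2 = requests.get(f"{BASE}/game/17/move/5", timeout=60)
```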