Imagine an application with multiple RESTful servers, each holding different resources.
When a client requests a resource, a blocking call is currently made: the request is relayed from server to server until the resource is found on one of them, which is very time-consuming. The clients run in a constrained environment, while the servers are moderately powerful.
Is there a way to build a REST resource lookup service that avoids these long blocking calls?
In the happy flow, the client should know where to look for a resource without any relaying. So build the logic for locating a resource into the client.
Solution 1:
Client A has a list of all the resource servers and a directory telling it which resource is on which server.
Solution 2:
Client A does not know anything, so it queries proxy server B, which does the lookup. Server B has a directory mapping each resource to a specific server.
Server B then queries resource server C/D/E/F etc., depending on the resource, and that server responds to B.
Server B sends the requested resource back to Client A.
Update 1: Since you do not have control over your clients, go with solution 2, where B acts as a client in relation to your resource servers. As stated before, either use a dictionary where each specific resource points to a particular server, or use consistent hashing. Since I do not know what language you are using, I have no idea whether there is an existing library for you to use, but there are so many that one will probably fit your needs.
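As a minimal sketch of the lookup logic in solution 2 (all resource names and server URLs here are made up for illustration): an explicit dictionary covers the directory case, and hashing the resource name onto the server list is the simplest hash-based fallback. Real consistent hashing would use a ring so that adding a server only remaps a fraction of resources.

```python
import hashlib

# Hypothetical resource servers (C/D/E in the answer above).
SERVERS = ["http://c.example", "http://d.example", "http://e.example"]

# Explicit directory: resource name -> server holding it.
DIRECTORY = {
    "invoice-42": "http://c.example",
    "user-7": "http://d.example",
}

def server_for(resource: str) -> str:
    """Return the server that should hold `resource`."""
    if resource in DIRECTORY:  # explicit mapping wins
        return DIRECTORY[resource]
    # Fallback: deterministic hash-based placement onto the server list.
    digest = int(hashlib.sha256(resource.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]
```

Server B would run this lookup and then forward the request to the selected server, so the client never has to relay anything itself.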
My customer has 2 Windows Server 2019.
On both of them, an instance of a SOAP Web Service is running.
URLs:
https://host1.domainname.com/SOAPService
and
https://host2.domainname.com/SOAPService
Now, the requirement of the customer is to provide a single, unique URL that the clients can use to consume the SOAP WebService(s).
I read through several websites and, if I got it right, I need a tool called a "reverse proxy"... Using this tool, clients can access the webservice through a URL such as https://host.domainname.com/SOAPService and the tool will automatically route the request to an available webservice instance.
Correct?
I also have an architectural question:
On which machine do I have to run such a reverse proxy?
Is it on host1 or host2 or do I need a dedicated machine (like a supervisor)?
If it is a dedicated machine, how can I achieve high availability of this reverse proxy? E.g. is it possible to run two reverse proxies in parallel on different machines? Which tool supports this?
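As a sketch, assuming nginx as the reverse proxy (HAProxy or IIS ARR would work similarly), a minimal configuration that exposes a single URL and balances across the two backends could look like this; the certificate paths are placeholders:

```nginx
upstream soap_backends {
    server host1.domainname.com:443;
    server host2.domainname.com:443;  # round-robin; add `backup` for failover-only
}

server {
    listen 443 ssl;
    server_name host.domainname.com;

    # Placeholder paths -- use your real certificate for host.domainname.com.
    ssl_certificate     /etc/nginx/certs/host.domainname.com.crt;
    ssl_certificate_key /etc/nginx/certs/host.domainname.com.key;

    location /SOAPService {
        proxy_pass https://soap_backends;
        proxy_set_header Host $host;
    }
}
```

For high availability of the proxy itself, a common pattern is to run two such proxy instances on separate machines and move a floating IP between them with keepalived (VRRP), so clients always reach whichever proxy is alive.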
Thanks
When watching Kubernetes resources for changes, what exactly is happening under the hood? Does the HTTP connection suddenly change to a wss connection?
To solve a problem of too many requests to the kube-apiserver I am rewriting some code to what I think is more of an operator pattern.
In our multi-tenant microservice architecture all services use the same library to look up connection details to tenant-specific DBs. The connection details are saved in secrets within the same namespace as the application. Every tenant DB has its own secret.
So on every call all secrets with the correct label are read and parsed for the necessary DB connection details. We have around 400 services/pods...
My idea: instead of reading all secrets on every call, create a cache and update the cache every time a relevant secret changes, via a watcher.
My concerns: am I just replacing the http requests with equally expensive websockets? As I understand I will now have an open websocket connection for every service/pod, which still is 400 open connections.
Would it be better to have a proxy service watch the secrets (kube-apiserver requests) and have all services query that service for connection details (intranet requests, kube-apiserver unrelated)?
From the sources:
// ServeHTTP serves a series of encoded events via HTTP with Transfer-Encoding: chunked
// or over a websocket connection.
It pretty much depends on the client which protocol is used (either chunked http or ws), both of them having their cost, which you'll have to compare to your current request frequency.
You may be better off with a proxy cache that either watches or polls at regular intervals, but that depends a lot on your application.
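As an illustration of the watch-driven cache idea, the event-handling core is just a dict keyed by secret name. The class and field names here are made up; in real code a long-lived watch from a Kubernetes client library would feed `handle_event` with the ADDED/MODIFIED/DELETED events the watch API delivers:

```python
class SecretCache:
    """In-memory cache of secrets, kept current by watch events.

    Watch events from the Kubernetes API carry a type of ADDED,
    MODIFIED or DELETED plus the object itself; applying them in
    order keeps this cache consistent with the cluster state.
    """

    def __init__(self):
        self._secrets = {}

    def handle_event(self, event_type: str, name: str, data: dict) -> None:
        if event_type in ("ADDED", "MODIFIED"):
            self._secrets[name] = data
        elif event_type == "DELETED":
            self._secrets.pop(name, None)

    def get(self, name: str):
        """Local lookup -- no apiserver round trip per call."""
        return self._secrets.get(name)
```

With this in every service, per-call lookups become local dict reads, but you still hold one open watch connection per pod; moving the cache into a single proxy service, as the question suggests, is what reduces the connection count to the apiserver itself.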
I got a webservice endpoint and I am stumped on how to correctly consume it.
It seems to be a parameterized exe file which returns an XML reply.
There is no documentation.
I am used to soap, wcf and rest but this is completely unknown to me, has anyone a guide or a best case how to implement such a service?
I can consume it with an HTTP GET but there are some questions left for me:
I know the questions are quite broad... But I could not find anything about it in the interwebz.
Is there a secure way to publish exe files as webservice?
Are there any critical downsides implementing such an interface?
Am I making a fool of myself and this is just an alias?
Example Url:
http://very.exhausting.company/Version/SuperStrange.exe?parameter=String
Web servers
What you call a webservice endpoint is nothing else than a web server listening on some host (normally 0.0.0.0) and some port on a physical or virtual machine and responding with some HTTP response to HTTP requests sent to that host, port and URIs that the web server cares to process.
Any web server is itself an application or a static or dynamic component of an application as the following examples illustrate:
JBoss, Glassfish, Tomcat etc. are applications, known as application servers, into which containers/servlets/plugins implementing web servers and corresponding endpoints are deployed. These listen on some port, exposing a generic web server that routes requests to those containers and their servlets;
a fat jar started with java -jar on a JVM which deploys a vert.x verticle featuring a vert.x HttpServer listening on some port is nothing else than a web server;
an interpreter such as node.js parsing and executing JavaScript code based on the express module will most likely deploy a web server on some port;
finally, a statically or dynamically linked application written in languages such as C++ or Go can expose a web server listening on some port.
All of the above cases feature different deployment mechanisms, but what they deploy is essentially the same: a piece of software that listens for HTTP requests on some port, executes some logic based on request and returns HTTP responses to the caller.
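To make that common core concrete, here is a minimal sketch using only Python's standard library: the entire "web server" is a process that listens on a port and answers HTTP requests with a response, exactly like each of the deployments listed above. The XML payload is a placeholder.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Logic based on self.path and query parameters would go here;
        # this sketch always returns the same fixed XML reply.
        body = b"<reply>hello</reply>"
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the sketch

def run(port: int = 8080) -> None:
    """Listen on `port` and serve requests forever."""
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Whether this loop lives inside an application server, a JVM, node.js, or a standalone exe is purely a packaging detail.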
Your Windows exe file is most likely a statically linked application that provides a web server.
Protocols
So we know you have a web server, as it reacts to an HTTP GET. How does it relate to REST, SOAP etc.? TCP is the low level, HTTP is layered on top of that, and your server supports it; REST, SOAP and everything else that you mention are higher level protocols based, among others, on HTTP. So all you know is that your application (web server) supports HTTP, but you do not know which higher level data exchange protocol it implements. It definitely implements some, at least a custom one that its author came up with to exchange data between a client and this application.
You can try to reverse engineer it, but it is not clear how you would find out about all possible endpoints, arguments, payload structures, accepted headers etc. Essentially, you have a web server publishing some sort of an API, but there is no generic way of telling what that API is.
Security
The world around you does not have to know how the API is published. You can put any of the above four web server implementations behind exactly the same firewall, or behind a reverse proxy with SSL termination exposing just one host and port over SSL. So there is no difference in security, with respect to the outside world, whether you deploy it as an exe or as a WAR into JBoss. This is not to say that your exe file is secure: depending on how it is implemented it may allow all sorts of attacks, but again, this is equally true for any mechanism.
So, generally, HTTP methods like PUT and DELETE are considered to be insecure.
However, it is recommended to use the PUT and DELETE methods for RESTful APIs.
Why are these methods, PUT and DELETE, not considered insecure for RESTful APIs?
TL;DR
They are considered insecure because a web server's default behavior would directly impact files on the server's filesystem -- allowing executable-code attacks.
A RESTful service doesn't (have to) create files based on the original request.
Internal / firewalled / proxied
An internal API -- is protected by the fact that it's in a private LAN. It is only accessible to other internal (trusted) tools.
Similarly a firewalled internal or external API only accepts requests from certain IPs (trusted servers).
A proxy server can handle encryption and user authentication as well as authorization and then forward the request to the RESTful service.
But still what are the security risks?
If PUT would create executable files on the server that would be very insecure** -- because of the risk of code injection / executable injection...
...but when receiving PUT or DELETE operations we're not talking about file-management per se. We're talking about a specific handler code which analyses the request and does whatever you told it to do with the data (eg.: puts it into a database).
**Especially since after you execute HTTP PUT on a resource (in a RESTful context) one would expect to have access to execute HTTP GET on that same resource (meaning the resource would be directly accessible).
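To illustrate the distinction, here is a sketch of a RESTful PUT/DELETE handler using only Python's standard library; the `STORE` dict stands in for a database. The request body is treated purely as data handed to handler code: nothing is ever written to the filesystem, so there is no uploaded file for an attacker to later execute.

```python
from http.server import BaseHTTPRequestHandler

STORE = {}  # stands in for a database table, keyed by resource path

class RestHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # The payload is just data: parse/validate it and store it.
        # No file is created on the server's filesystem.
        STORE[self.path] = body
        self.send_response(204)
        self.end_headers()

    def do_DELETE(self):
        existed = STORE.pop(self.path, None) is not None
        self.send_response(204 if existed else 404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet
```

Contrast this with a web server whose default PUT behavior writes the body to `htdocs/` -- that is the configuration the "insecure" warnings are really about.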
I was recently working quite a lot on SOAP web services and one question bothers me in that context. What would be better?
A. Get the WSDL and store it locally on client side and then only make calls to the service
if server keeps backward compatibility the client will still work with the old WSDL even when server side provided new version (of service and WSDL).
you are not able to get the endpoint URL from the WSDL, so if the service endpoint location has changed (but the WSDL has not) you need to reconfigure the client.
no additional call to the server
B. Use the WSDL location as a remote resource (HTTP) and download the WSDL each time a client instance is created?
What are some pros and cons?
Which is better depends on your setup and your needs, but personally I would prefer having the WSDL locally, inside the client, for these reasons:
no extra call to the server to get the WSDL (as you mentioned);
if server keeps backward compatibility the local WSDL will still be OK to use (as you mentioned);
if the service WSDL changes in an incompatible way and your client suddenly starts to fail you still have the old WSDL locally and you can compare it with the new one to see what's different.
The following point is usually not an issue:
you are not able to get the endpoint URL from the WSDL, so if the service endpoint location has changed (but the WSDL has not) you need to reconfigure the client.
The endpoint URL in the WSDL is not always correct and, even if it were, you normally have the WSDL accessible at the same URL as the service by just sticking a ?wsdl parameter after it, so if the location changes you won't find the service, but you won't find the WSDL either. The service endpoint URL needs to be configurable in your client anyway.
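As a tiny illustration of that convention (the URLs are hypothetical), deriving the WSDL location from the configurable service endpoint keeps the two in sync automatically when the endpoint moves:

```python
def wsdl_url(service_url: str) -> str:
    """Return the conventional WSDL location for a SOAP service endpoint:
    the service URL itself with a ?wsdl query parameter appended."""
    return service_url.rstrip("/") + "?wsdl"
```

The client then only needs one configurable value, the service endpoint, and the locally stored WSDL is used for everything except that address.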