So, generally HTTP methods like PUT and DELETE are considered insecure.
However, using PUT and DELETE is recommended for RESTful APIs.
Why are PUT and DELETE not considered insecure for RESTful APIs?
TL;DR
They are considered insecure because a web server's default behavior for these methods would directly impact files on the server's filesystem -- allowing executable-code attacks.
A RESTful service doesn't (have to) create files based on the original request.
Internal / firewalled / proxied
An internal API is protected by the fact that it lives in a private LAN: it is only accessible to other internal (trusted) tools.
Similarly a firewalled internal or external API only accepts requests from certain IPs (trusted servers).
A proxy server can handle encryption and user authentication as well as authorization and then forward the request to the RESTful service.
But still what are the security risks?
If PUT created executable files on the server, that would be very insecure** -- because of the risk of code injection / executable injection...
...but when handling PUT or DELETE operations we're not talking about file management per se. We're talking about specific handler code which analyses the request and does whatever you told it to do with the data (e.g. puts it into a database).
**Especially since, after you execute HTTP PUT on a resource (in a RESTful context), one would expect to be able to execute HTTP GET on that same resource (meaning the resource would be directly accessible).
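The handler pattern described above can be sketched in a few lines. This is a minimal illustration only: the in-memory dict stands in for a real database, and handle_request is a hypothetical dispatcher, not any particular framework's API.

```python
# Minimal sketch of a RESTful PUT/DELETE handler. Assumptions: the in-memory
# dict stands in for a real database, and handle_request is a hypothetical
# dispatcher, not any framework's actual API.
store = {}

def handle_request(method, resource_id, body=None):
    """Route PUT/DELETE to data operations instead of touching the filesystem."""
    if method == "PUT":
        store[resource_id] = body           # e.g. an upsert into a database
        return 201, store[resource_id]
    if method == "DELETE":
        if resource_id in store:
            del store[resource_id]
            return 204, None
        return 404, None
    if method == "GET":                     # a PUT resource is then GET-able
        if resource_id in store:
            return 200, store[resource_id]
        return 404, None
    return 405, None                        # method not allowed
```

The point is that no file is ever written: the request body only ever reaches whatever storage the handler chooses.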
Initial disclosure:
I’m new to nginx and reverse proxy configuration in general.
Background
I have a Swagger-derived, FOSS, https-accessible REST API [written by another party] running on a certain port of an EC2 CentOS 7 instance behind an nginx 1.16.1 reverse proxy (to path https://foo_domain/bar_api/); for my purposes, this API needs to be reachable from a broad variety of services not all of which publish their IP ranges, i.e., the API must be exposed to traffic from any IP.
Access to the API’s data endpoints (e.g., https://foo_domain/bar_api/resource_id) is controlled by a login function located at
https://foo_domain/bar_api/foobar/login
supported by token auth, which is working fine.
Problem
However, the problem is that an anonymous user is able to GET
https://foo_domain/bar_api
without logging in, which results in potentially sensitive data about the API server configuration being returned, such as the API’s true port, server version, some of the available endpoints and parameters, etc. This is not acceptable for the purpose, from a security standpoint.
Question
How do I prevent anonymous GET requests to the /bar_api/ endpoint, while allowing login and authenticated data requests to endpoints beyond /bar_api/ to proceed unhindered? Or, otherwise, how do I prevent any data from being returned upon such requests?
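One hedged sketch of how this can look in nginx (the upstream address and port are assumptions, not taken from your setup): exact-match `location =` blocks take precedence over the prefix block, so the API root can be blocked while deeper paths still reach the backend.

```nginx
# Sketch only -- upstream address/port are assumptions.
location = /bar_api/ {
    return 404;                           # hide the API root from anonymous probing
}
location = /bar_api {
    return 404;                           # same, without the trailing slash
}
location /bar_api/ {
    proxy_pass https://127.0.0.1:8443;    # the API's real port (assumed)
}
```

Returning 404 rather than 403 avoids confirming that the endpoint exists at all.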
I have a webservice endpoint and am stumbling over how to implement it correctly.
It seems to be a parameterized exe file which returns an XML reply.
There is no documentation.
I am used to SOAP, WCF and REST, but this is completely unknown to me. Does anyone have a guide or best practice for implementing such a service?
I can consume it with an HTTP GET, but some questions remain:
I know the questions are quite broad... but I could not find anything about it online.
Is there a secure way to publish exe files as webservice?
Are there any critical downsides implementing such an interface?
Am I making a fool of myself, and this is just an alias?
Example Url:
http://very.exhausting.company/Version/SuperStrange.exe?parameter=String
Web servers
What you call a webservice endpoint is nothing other than a web server listening on some host (normally 0.0.0.0) and some port of a physical or virtual machine, and responding with HTTP responses to HTTP requests sent to that host, port, and the URIs the web server cares to process.
Any web server is itself an application or a static or dynamic component of an application as the following examples illustrate:
JBoss, Glassfish, Tomcat, etc. are applications, known as application servers, into which containers/servlets/plugins implementing web servers and corresponding endpoints are deployed. These listen on some port, exposing generic web servers that route requests to those containers and their servlets;
a fat jar started with java -jar on a JVM which deploys a vert.x verticle featuring a vert.x HttpServer listening on some port is nothing else than a web server;
an interpreter such as node.js parsing and executing JavaScript code based on the express module will most likely deploy a web server on some port;
finally, a statically or dynamically linked application written in languages such as C++ or Go can expose a web server listening on some port.
All of the above cases feature different deployment mechanisms, but what they deploy is essentially the same: a piece of software that listens for HTTP requests on some port, executes some logic based on the request, and returns HTTP responses to the caller.
Your Windows exe file is most likely a statically linked application that provides a web server.
Protocols
So we know you have a web server, since it reacts to an HTTP GET. How does it relate to REST, SOAP, etc.? Effectively, REST, SOAP, etc. are higher-level protocols: TCP is the low-level protocol, HTTP is built on top of it, and your server supports HTTP. REST, SOAP and everything else you mention are higher-level protocols based, among other things, on HTTP. So all you know is that your application (web server) speaks HTTP; you do not know which higher-level data-exchange protocol it implements. It definitely implements some protocol, at least a custom one that its author came up with to exchange data between a client and this application.
You can try to reverse engineer it, but it is not clear how you would find out about all possible endpoints, arguments, payload structures, accepted headers, etc. Essentially, you have a web server publishing some sort of API, but there is no generic way of telling what that API is.
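To probe such an endpoint, you can build the parameterized URL and pick apart whatever XML comes back. A sketch using only the Python standard library (the reply structure shown in the usage below is an assumption, since the service is undocumented; the URL is the example from the question):

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Example endpoint from the question; not a real service.
BASE = "http://very.exhausting.company/Version/SuperStrange.exe"

def build_url(**params):
    """Build the parameterized GET URL, with proper percent-encoding."""
    return BASE + "?" + urllib.parse.urlencode(params)

def parse_reply(xml_text):
    """Collect whatever top-level fields the (undocumented) XML reply carries."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}
```

For instance, `build_url(parameter="String")` reproduces the question's example URL, and `parse_reply("<Reply><Status>OK</Status></Reply>")` yields a dict of the reply's fields -- from there, reverse engineering is trial and error.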
Security
The world around you does not have to know how the API is published. You can put any of the above four web server implementations behind exactly the same firewall, or behind a reverse proxy with SSL termination exposing just one host and port over SSL. So with respect to the outside world there is no difference in security whether you deploy it as an exe or as a war into JBoss. This is not to say that your exe file is secure: depending on how it is implemented it may allow all sorts of attacks, but again, that is equally true for any of these mechanisms.
I am sending a Basic Auth POST request to the Neo4j REST endpoint
x.x.x.85:7474/db/data/transaction/commit
I am using Unity WWW at x.x.x.15, which requires crossdomain.xml to be present at x.x.x.85:7474/crossdomain.xml. Where and how should I make crossdomain.xml available at the desired location?
You can't add arbitrary resources to be served by Neo4j.
You could put it behind an HTTP server with reverse-proxy capabilities (Apache HTTP Server, nginx) to serve the file and proxy the rest of the requests to Neo4j.
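A hedged sketch of that setup in nginx (the public port and file path are assumptions; x.x.x.85:7474 is the Neo4j address from the question):

```nginx
# Sketch only -- public port and document root are assumptions.
server {
    listen 7475;                        # public-facing port (assumed)

    location = /crossdomain.xml {
        root /var/www/unity;            # directory holding your crossdomain.xml (assumed)
    }

    location / {
        proxy_pass http://x.x.x.85:7474;   # everything else goes to Neo4j
    }
}
```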
However, the real question is whether you should expose your database directly to a client browser (which is the reason you need a crossdomain file in the first place), since the browser could then send any query, including MATCH (n) DETACH DELETE n, a.k.a. the new DROP TABLE (or DROP DATABASE).
I have implemented support for SAML SSO to have my application act as the Service Provider, using the Spring Security SAML Extension. I was able to integrate my SP with different IDPs. So for example I have HostA, HostB, and HostC, each running a different instance of my application. I had an SP metadata file specified for each host and set the AssertionConsumerServiceURL to the URL of that host (e.g. https://HostA.com/myapp/saml/sso). I added each metadata file to the IDP, tested all of them, and it is working fine.
However, my project also supports high availability via an IBM HTTP Server configured for load balancing. In this case the HTTP Server is configured with the hosts (A, B, C) used for load balancing, and the user accesses my application using the URL of the HTTP server: https://httpserver.com/myapp/
If I defined one SP metadata file, specified the URL of the HTTP Server in the AssertionConsumerServiceURL (https://httpserver.com/saml/sso), and changed my implementation to accept assertions targeted at my HTTP Server, what would be the outcome of this scenario:
User accesses the HTTP Server, which dispatches the user to HostA (behind the scenes).
My SP application on HostA sends a request to the IDP for authentication.
The IDP sends back the response to my HTTP Server at https://httpserver.com/saml/sso.
Will the HTTP Server redirect to HostA, so that the URL becomes https://HostA.com/saml/sso?
Thanks.
When deploying the same instance of an application in clustered mode behind a load balancer, you need to instruct the back-end applications about the public URL of the HTTP server (https://httpserver.com/myapp/) behind which they are deployed. You can do this using the SAMLContextProviderLB (see more in the manual). But you seem to have already performed this step successfully.
Once your HTTP Server receives a request, it forwards it to one of your hosts at a URL such as https://HostA.com/saml/sso, and usually also provides the original URL as an HTTP header. The SAMLContextProviderLB makes the SP application think that the real URL was https://httpserver.com/saml/sso, which lets it pass all the SAML security checks related to the destination URL.
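For reference, a SAMLContextProviderLB bean along the lines of the Spring Security SAML manual, filled in with the public URL from your scenario (the concrete values are assumptions based on your description):

```xml
<!-- Sketch: values assumed from the scenario, adjust to your deployment -->
<bean id="contextProvider"
      class="org.springframework.security.saml.context.SAMLContextProviderLB">
    <property name="scheme" value="https"/>
    <property name="serverName" value="httpserver.com"/>
    <property name="serverPort" value="443"/>
    <property name="includeServerPortInRequestURL" value="false"/>
    <property name="contextPath" value="/myapp"/>
</bean>
```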
As the back-end applications store state in their HttpSessions make sure to do one of the following:
enable sticky sessions on the HTTP server (so that related requests are always directed to the same server)
make sure to replicate HTTP session across your cluster
disable checking of response ID by including bean EmptyStorageFactory in your Spring configuration (this option also makes Single Logout unavailable)
Imagine an application where multiple RESTful servers exist with different resources.
When a client makes a resource request, currently a blocking call is made such that the request is relayed from server to server until the resource is found on some server, which is very time-consuming. The clients all run in a constrained environment, while the servers are moderately powerful.
Is there a way to provide a REST resource-lookup service to avoid long blocking calls?
The client should know where to look for a resource without any relaying in the happy flow, so build the logic for locating a resource into the client.
Solution 1:
Client A has a list of all the resource servers and a directory telling it which resource is on which server.
Solution 2:
Client A does not know anything, so it queries proxy server B, which does the lookup. Server B has a directory mapping each resource to a specific server.
Server B then queries resource server C/D/E/F etc., depending on the resource, and they respond to Server B.
Server B sends the requested resource to Client A.
Update 1: Since you do not have control over your clients, go with solution 2, where B acts as a client in relation to your resource servers. As stated before, either use a dictionary where each specific resource points to a particular server, or use consistent hashing. Since I do not know what language you are using, I have no idea whether there is an existing library for you to use, but there are many out there, so one will probably fit your needs.
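The lookup in solution 2 can be backed by a static dictionary or, to avoid maintaining one, by consistent hashing. A minimal sketch in Python (server names are placeholders; a production library would add weighting and failure handling):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map each resource id to one resource server without a per-resource directory."""

    def __init__(self, servers, replicas=100):
        self._keys = []   # sorted ring positions
        self._ring = {}   # ring position -> server name
        for server in servers:
            for i in range(replicas):       # virtual nodes smooth the distribution
                h = self._hash("%s#%d" % (server, i))
                self._ring[h] = server
                bisect.insort(self._keys, h)

    @staticmethod
    def _hash(value):
        return int(hashlib.sha256(value.encode()).hexdigest(), 16)

    def server_for(self, resource_id):
        """First ring position clockwise from the resource's hash wins."""
        h = self._hash(resource_id)
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._ring[self._keys[idx]]
```

Server B hashes the requested resource id and forwards directly to the owning server; when a server is added or removed, only a fraction of the resources move, unlike with a naive `hash(id) % n` scheme.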