Zuul RibbonRoutingFilter Content-Encoding gzip response - spring-cloud

In our setup, we access a Node.js server through Zuul and Sidecar.
When we request JavaScript resources with an Accept-Encoding: gzip header, Node.js returns the compressed file with a Content-Encoding: gzip header. But if the same request is routed through Zuul, we get a decompressed response.
Based on our analysis we found the following:
When the request is forwarded to downstream systems based on service id, Zuul uses the Ribbon load-balanced routing filter. In this flow, the Apache HTTP client's ResponseContentEncoding interceptor removes the following headers from the response:
- Content-Length
- Content-Encoding
- Content-MD5
Because of that, the content is automatically decompressed in Zuul before being sent to the caller.
When the request is forwarded to downstream systems based on URL, Zuul uses the SimpleHostRoutingFilter. In this flow, disableContentCompression() is called while building the HTTP client, so the content is sent to the caller without being decompressed.
Is there a specific reason for not using disableContentCompression on the Ribbon load-balanced routing filter route, and is there a workaround to resolve this?
Environment:
Spring Cloud version: Dalston.SR2
Spring Boot: 1.5.4.RELEASE
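
A possible workaround (a sketch only, assuming the Ribbon/Zuul Apache HTTP client configuration honors a user-provided CloseableHttpClient bean; that is the case in later Spring Cloud releases and is worth verifying on Dalston.SR2) is to supply an HTTP client built with disableContentCompression():

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GzipPassThroughHttpClientConfig {

    // Assumption: this bean replaces the auto-configured client, so responses
    // keep their Content-Encoding/Content-Length headers instead of being
    // transparently decompressed by ResponseContentEncoding.
    @Bean
    public CloseableHttpClient apacheHttpClient() {
        return HttpClients.custom()
                .disableContentCompression()
                .build();
    }
}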

Related

Vertx reverse proxy redirect handling

I'm pretty new to Vert.x; I'm building a reverse proxy on Quarkus.
I need to handle a redirect response from my Apache server in my Quarkus reverse proxy, so that my client does not get redirected directly to the Apache server (bypassing the proxy).
The resource is located at custom.url/myResource/index.php.
My reverse proxy is running on localhost:8080
Basically what happens is:
The browser sends a GET request to localhost:8080/myResource. Quarkus is listening on 8080, so it receives the request, remaps the URL to custom.url/myResource and forwards it to Apache.
Apache creates a redirect response because a slash was missing at the end of the URL, so it sends a 301 response with the Location header set to custom.url/myResource/ (with the slash at the end) to the Quarkus reverse proxy.
Quarkus forwards the redirect response (301 custom.url/myResource/) to the client, so the client makes a GET call straight to custom.url/myResource/, bypassing the reverse proxy.
This behavior is not acceptable, since I can't allow the client to know the address of my backend service.
Code snippet
Route route = this.proxyRouter.route(method, path)
    .handler(CorsHandler.create("*"))
    .handler(LoggerHandler.create())
    .handler(ctx -> { /* need a handler here to handle this behaviour */ })
    .handler(ProxyHandler.create(myProxy));
What I have to do is basically set the Location header of the response to the correct path, including the slash.
I tried to take the hostname from request.absoluteURI() and the path (with the slash) from the response Location, and merge them together:
request URI: localhost:8080/myResource -> localhost:8080 (1)
response Location: custom.url/myResource/ -> /myResource/ (2)
Merging (1) and (2) gives the wanted Location header: localhost:8080/myResource/
Logically this works, but I don't know where, or whether, I can do this inside the handler, or if I need to do it some other way. I tried to implement this logic inside the handler, but I was only able to get the request URI; there was no way to reach the 301 response.
Any help would be appreciated.
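
One way to express that idea is to fill in the placeholder handler from the snippet above so it rewrites the Location header just before the response is flushed. This is only a sketch: it reuses the objects from the snippet, and assumes that Vert.x Web's RoutingContext.addHeadersEndHandler fires after ProxyHandler has copied the backend's 301 onto the response, and that the backend sends an absolute Location URL.

Route route = this.proxyRouter.route(method, path)
    .handler(CorsHandler.create("*"))
    .handler(LoggerHandler.create())
    .handler(ctx -> {
        // Invoked just before the response headers are written to the client.
        ctx.addHeadersEndHandler(v -> {
            String location = ctx.response().headers().get("Location");
            if (location != null) {
                // Keep only the path ("/myResource/") and prepend the host the
                // client actually called (e.g. localhost:8080).
                String pathOnly = location.replaceFirst("^https?://[^/]+", "");
                ctx.response().headers().set("Location",
                        ctx.request().scheme() + "://" + ctx.request().host() + pathOnly);
            }
        });
        ctx.next();
    })
    .handler(ProxyHandler.create(myProxy));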

Custom endpoint path for AWS API Gateway WebSocket

I have created an API Gateway with the WebSocket protocol.
After I deploy the API, I get a WebSocket URL and a connection URL, e.g.:
WebSocket URL: wss://xxxx.execute-api.us-west-2.amazonaws.com/test
Connection URL: https://xxxx.execute-api.us-west-2.amazonaws.com/test/#connections
Now everything is fine, I am able to connect to the API, and send and receive messages.
But when I try to connect to a different path, I get an HTTP 403 error.
For example, if I try to connect to wss://xxxx.execute-api.us-west-2.amazonaws.com/test/some/path, I get a 403 error.
Is it possible to configure API gateway in such a way that it accepts connections to all paths and passes on the path, i.e. /some/path in my case, to the $connect route handler?
This is not yet supported by AWS. See the article and comments here: https://medium.com/#lancers/using-parameter-mapping-in-websocket-api-67b414376d5e
There is a workaround using an additional server; the author of the article proposes the following:
you may put your own server that accepts a URI with path parameters, then return 302 to redirect the client to the WebSocket API endpoint with a query string instead.
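
A minimal sketch of such a redirect front door in plain Java (the port, the API endpoint, and the query-parameter name path are all assumptions; note that not every WebSocket client follows a 302 during the handshake automatically):

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URLEncoder;

// Hypothetical redirect server: turns a request path such as /some/path into a
// query string that the $connect route of the WebSocket API can read.
public class WsRedirectServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath(); // e.g. /some/path
            String target = "wss://xxxx.execute-api.us-west-2.amazonaws.com/test"
                    + "?path=" + URLEncoder.encode(path, "UTF-8");
            exchange.getResponseHeaders().set("Location", target);
            exchange.sendResponseHeaders(302, -1); // no response body
            exchange.close();
        });
        server.start();
    }
}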

How to make Feign support HTTP requests outside Eureka?

Feign currently does load balancing through Eureka; now I want to make calls outside the Eureka system.
For example, I want to invoke a plain URL directly (like an ordinary HTTP request from an HTML page). How can Feign support load balancing for this without using nginx to forward the requests?
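
For reference, a sketch of a Feign client that targets a fixed external URL via the url attribute (the client name, URL, and path here are made up):

import org.springframework.cloud.netflix.feign.FeignClient; // package used in the Dalston line
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// Hypothetical client: the url attribute bypasses Eureka and calls the host directly.
@FeignClient(name = "external-api", url = "https://api.example.com")
public interface ExternalApiClient {

    @RequestMapping(method = RequestMethod.GET, value = "/some/resource")
    String fetchResource();
}

If client-side load balancing across several non-Eureka hosts is still needed, Ribbon can instead be given a static server list for that client, e.g. external-api.ribbon.listOfServers=host1:8080,host2:8080 (plus external-api.ribbon.NIWSServerListClassName=com.netflix.loadbalancer.ConfigurationBasedServerList when Eureka is on the classpath), and the url attribute is then left out.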

How and where do I add crossdomain.xml to Neo4j so that it is available at localhost:7474/crossdomain.xml?

I am sending a Basic Auth POST request to the Neo4j REST endpoint
x.x.x.85:7474/db/data/transaction/commit
I am using Unity WWW at x.x.x.15, which requires crossdomain.xml to be present at x.x.x.85:7474/crossdomain.xml. Where and how should I place crossdomain.xml at the desired location?
You can't add arbitrary resources to be served by Neo4j.
You could put it behind an HTTP server with reverse proxy capabilities (Apache HTTP Server, nginx) to serve the file and proxy the rest of the requests to Neo4j.
However, the real question is whether you should be exposing your database directly to a client browser (which is the reason you need a crossdomain file), since it could send any query, including MATCH (n) DETACH DELETE n, a.k.a. the new DROP TABLE (or DROP DATABASE).

Behaviour of SAML when an HTTP Server is used for high availability

I have implemented SAML SSO support, using the Spring Security SAML Extension, to have my application act as the Service Provider, and I was able to integrate my SP with different IDPs. For example, I have HostA, HostB, and HostC, each running a different instance of my application. I had an SP metadata file for each host and set the AssertionConsumerServiceURL to the URL of that host (e.g. https://HostA.com/myapp/saml/sso). I added each metadata file to the IDP, tested all of them, and everything works fine.
However, my project also supports high availability through an IBM HTTP Server configured for load balancing. In this case the HTTP Server is configured with hosts A, B, and C for load balancing, and the user accesses my application through the URL of the HTTP server: https://httpserver.com/myapp/
If I define one SP metadata file with the AssertionConsumerServiceURL set to the URL of the HTTP Server (https://httpserver.com/saml/sso) and change my implementation to accept assertions targeted at my HTTP Server, what will be the outcome of this scenario?
The user accesses the HTTP Server, which dispatches the user to HostA (behind the scenes).
My SP application on HostA sends a request to the IDP for authentication.
The IDP sends the response back to my HTTP server as https://httpserver.com/saml/sso.
Will the HTTP Server redirect to HostA, so that it ends up as https://HostA.com/saml/sso?
Thanks.
When deploying the same application in a clustered mode behind a load balancer, you need to instruct the back-end applications about the public URL on the HTTP server (https://httpserver.com/myapp/) behind which they are deployed. You can do this using the SAMLContextProviderLB (see more in the manual), but you seem to have already performed this step successfully.
Once your HTTP Server receives a request, it will forward it to one of your hosts at a URL such as https://HostA.com/saml/sso, and usually it will also provide the original URL as an HTTP header. The SAMLContextProviderLB will make the SP application think that the real URL was https://httpserver.com/saml/sso, which will make it pass all the SAML security checks related to the destination URL.
As the back-end applications store state in their HttpSessions, make sure to do one of the following:
enable sticky sessions on the HTTP server (so that related requests are always directed to the same server)
make sure the HTTP session is replicated across your cluster
disable checking of the response ID by including the bean EmptyStorageFactory in your Spring configuration (this option also makes Single Logout unavailable); see the sketch after this list
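
A rough Java-config sketch of the load-balancer-aware context provider, with the optional EmptyStorageFactory from the last point set through its storageFactory property (the values mirror the public URL from the question; adapt them, and the wiring, if your Spring SAML setup is XML-based):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.saml.context.SAMLContextProviderLB;
import org.springframework.security.saml.storage.EmptyStorageFactory;

@Configuration
public class SamlLoadBalancerConfig {

    // Makes the SP build and validate SAML URLs against the load balancer's
    // public address rather than the individual HostA/HostB/HostC addresses.
    @Bean
    public SAMLContextProviderLB contextProvider() {
        SAMLContextProviderLB provider = new SAMLContextProviderLB();
        provider.setScheme("https");
        provider.setServerName("httpserver.com");
        provider.setServerPort(443);
        provider.setIncludeServerPortInRequestURL(false);
        provider.setContextPath("/myapp");

        // Optional (last point above): stop storing SAML messages in the
        // HttpSession so response checks work without session replication.
        // As noted, this also makes Single Logout unavailable.
        provider.setStorageFactory(new EmptyStorageFactory());
        return provider;
    }
}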