Browser can't load SBA's static resources (http instead of https) - spring-boot-admin

I deployed Spring Boot Admin on a Kubernetes cluster where it gets accessed via TLS/HTTPS. The problem is that the SBA frontend seems to load its static resources via http instead of https, which is blocked by the browser.
Locally, everything works as it should. In fact, the resources are reachable via https (I can verify this by double-clicking on a failed request, which then loads fine).
The problem seems to be that SBA sets a base path with an 'http' prefix instead of 'https'.
So how can I force the frontend to load its static resources via HTTPS, if available?
PS: I already set server.use-forward-headers: true, but that didn't change anything...

I was able to resolve this by setting the following Spring properties:
server:
  forward-headers-strategy: framework # 'native' may work also
  use-forward-headers: true

Related

AWS Classic Load Balancer + EC2: web API requests return 404

I have an AWS EC2 Jira instance running behind an AWS Classic Load Balancer. The site loads fine in the browser, but all API requests return 404 for some reason. It is not a Jira 404, but a generic 404 response with no body and minimal headers. The only useful response header seems to be Server: nginx.
I've tried whitelisting my client IP, opening up all ports, sending requests to the LB and directly to the instance with proper security group settings, etc., but the same 404 response is returned. I'm using Postman to test the API. I noticed that when I load the EC2 instance directly in the browser, it redirects to the load balancer.
Returns 200 with HTML (basic auth works, too):
GET http://jira (home page)
Returns 404:
GET http://jira/rest/api/2/issue/ticket-num (or any other /rest/ endpoint)
Where should I start looking to debug this 404 issue? I feel like I'm missing something basic. I'm not seeing any Jira configuration for setting up its REST API. I feel like perhaps it's a server configuration issue, although I've never come across manual web server configuration while installing Jira, so maybe it's on the AWS side?
EDIT: Still waiting to get SSH access to the instance, so I'll update as I get more info and access.
These HTTP 404 responses with a very limited set of headers could be coming from the default (bottom) rule in the ELB. I experienced a similar issue and got HTTP 404 because, in one of the ELB rules, I used a path condition instead of a host-header condition and put the host domain name there. So the rule never matched, and the default rule returned 404 because no such path exists on the instance.
I would recommend trying the 'Redirect to' or 'Return fixed response' options on the default rule to check whether requests are falling through to it.
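For example, here is a minimal sketch using the AWS CLI (this assumes an Application Load Balancer listener; the listener ARN is a placeholder) that makes the default rule return a recognizable fixed response, so you can tell when a request falls through to it:
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:region:account:listener/app/my-lb/xxxx/yyyy \
  --default-actions '[{"Type": "fixed-response", "FixedResponseConfig": {"StatusCode": "404", "ContentType": "text/plain", "MessageBody": "hit the default rule"}}]'
If the empty 404s are then replaced by this body, the requests are not matching any forwarding rule.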

Advanced tweak on undertow-handlers.conf for HTTP to HTTPS redirect

I use WildFly behind an AWS load balancer. I want the Undertow server in WildFly to redirect http traffic to https, and I can do this mostly successfully with the following line placed in undertow-handlers.conf:
equals('http', %{i,X-Forwarded-Proto}) -> redirect(https://app.server.com%U)
Thanks to these folks for getting me this far! Now here's my desired tweak. Sometimes I run my web application behind a testing load balancer using 'dev.server.com' and sometimes I run it behind a production load balancer using 'app.server.com.' Currently, I have to remember to manually edit undertow-handlers.conf any time I switch balancers. I'm hoping there is a way to change the hard-coded 'dev' and 'app' to something mechanical. Is there a way to tell Undertow to just use the domain name that was originally requested?
Thanks.
Thankfully the Undertow configuration gives you access to the request headers via Exchange Attributes, which you're already using to access the X-Forwarded-Proto header. So the solution is to simply use the Host header from the request, like so:
equals('http', %{i,X-Forwarded-Proto}) -> redirect(https://%{i,Host}%U)
If you want to keep it as part of the deployment, try using %h in the redirect expression. For example:
equals('http', %{i,X-Forwarded-Proto}) -> redirect(https://%h%U)
Another option would be to configure the server to handle the redirect for you. The CLI commands would look something like the following, assuming the default ports of 8080 for HTTP and 8443 for HTTPS:
/subsystem=undertow/configuration=filter/rewrite=http-to-https:add(redirect=true, target="https://%h:8443%U")
/subsystem=undertow/server=default-server/host=default-host/filter-ref=http-to-https:add(predicate="equals(%p, 8080)")
You can see all the possible exchange attributes in the Undertow documentation.

WS Federation (single sign on) module - redirect issue when using SSL offloading

We have a site that we are trying to configure as a client in an SSO scenario, using WS-Federation and SAML.
Our site sits behind a load balancer that does SSL offloading - the connection to the balancer is over https, but it is decrypted and forwarded (internally) to the actual site over http on port 81.
Somewhere, the WS-Federation module is attempting to redirect us, but it builds the URL from the port and protocol of the incoming connection to the website:
We request:
https://www.contoso.com/application
and are getting redirected to:
http://www.contoso.com:81/Application
Which doesn't work as the load balancer (correctly) won't respond on this port.
And it seems to be related to the casing of the virtual directory. Browsing to
https://www.contoso.com/Application
seems to work without issue.
(Note for completeness, attempting to browse to http://www.contoso.com/Application with no port will correctly redirect us to the SSL secured URL).
I am trying to find out:
a) Where this redirect is happening in the pipeline and
b) How to configure it to use the correct external address.
If anybody is able to point me in the right direction, I would very much appreciate it.
EDIT 14:19: It seems to be either the WsFederationAuthenticationModule or the SessionAuthenticationModule. These do a case-sensitive comparison of the incoming URL against what they expect and redirect otherwise:
https://brockallen.com/2013/02/08/beware-wif-session-authentication-module-sam-redirects-and-webapi-services-in-the-same-application/
So that seems to be what is happening; it's now a matter of getting the site to behave nicely and redirect to the correct external URL.
The following seems to be related and ultimately points to the culprit in the default CookieHandler:
Windows Identity Foundation and Port Forwarding
Looking at that code decompiled in Visual Studio, it compares HttpContext.Current.Request.Url against the targetUrl and redirects to the expected 'cased' version otherwise (in our case, including the errant port number).
It would seem that explicitly setting the path attribute of the cookie fixes this issue. Either an empty string or the virtual directory name seems to work:
<federationConfiguration>
  <cookieHandler requireSsl="true" name="ContosoAuth" path="/Application/"/>
  <wsFederation passiveRedirectEnabled="true" issuer="https://adfsSite" realm="https://www.contoso.com/Application/" reply="https://www.contoso.com/Application/Home" requireHttps="true"/>
</federationConfiguration>

Azure Websites: socket reading port 80 returns 404 error

I have a PHP script that runs perfectly when requested by the browser (example):
http://www.kwiksher.com/k3Serial.php?key="XXXXX"
In this case, I get the information of a user with the key XXXXX, which is the expected behavior.
However, inside my Photoshop plugin, I must call it via a socket, forcing a port in the connection:
http://www.kwiksher.com:80/k3Serial.php?key="XXXXX"
Doing that, I get the content of Azure's default 404 page (it is not even my customized 404 page).
If I use the same call (with the port added to the domain) in a browser, it works fine as well.
Any idea how to fix this? I tried flushing DNS on my machine as well, without success.
Thanks a lot,
Alex
It's likely that the socket library isn't speaking HTTP and therefore isn't sending a Host header, so the web tier on Azure can't figure out which website it should serve the content from.
As you're using this from a plug-in, perhaps try using the default hostname issued by Azure instead of a custom domain.
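To illustrate why the Host header matters, here is a minimal sketch (Node.js/TypeScript, not the plugin's own socket library, so treat the details as assumptions) of a raw-socket request that includes the Host header the Azure front end uses to route to the right site:
import * as net from "net";

// Connect to the shared Azure front end on port 80 and speak HTTP by hand.
const socket = net.connect(80, "www.kwiksher.com", () => {
  socket.write(
    "GET /k3Serial.php?key=XXXXX HTTP/1.1\r\n" +
    "Host: www.kwiksher.com\r\n" + // without this line the front end cannot pick the site and answers with its generic 404
    "Connection: close\r\n" +
    "\r\n"
  );
});

socket.on("data", (chunk) => process.stdout.write(chunk.toString()));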

How to use S3 as static web page and EC2 as REST API for it together? (AWS)

With AWS services we have a web application running from an S3 bucket and accessing data through a REST API behind a Load Balancer (a set of Node.js applications running on EC2 instances).
Currently we have the URLs specified as follows:
API Load Balancer: api.somedomain.com
Static Web App on S3: somedomain.com
But this setup brought us a set of problems, since requests become cross-origin (CORS). We could work around CORS with special headers, but that doesn't work with all browsers.
What we want to achieve is running the API on the same domain but under a different path:
API Load Balancer: somedomain.com/api
Static Web App on S3: somedomain.com
One of the ideas was to attach the API Load Balancer to the CDN and forward all requests to the Load Balancer if they come in on the "/api/*" path. But that doesn't work, since our API uses not only HEAD and GET requests, but also POST, PUT, and DELETE.
Another idea is using a second EC2 instance instead of the S3 bucket to host the website (using a web server like nginx or Apache). But that adds too much overhead when everything is already in place (S3 static content hosting). Also, in that scenario we wouldn't get the performance benefits of Amazon CloudFront.
So, could you recommend how to combine the Load Balancer and S3 so they run on the same domain but with different paths? (API on somedomain.com/api and web app on somedomain.com)
Thank you!
You can't have an EC2 instance and an S3 bucket with the same host name. Consider what happens when a web browser makes a request to that host name. DNS resolves it to an IP address (or addresses) and the packets of the request are delivered to that address. The address either terminates at the EC2 instance or the S3 bucket, not both.
As I understand your situation, you have static web pages hosted on S3 that include JavaScript code that makes various HTTP requests to the EC2 instance. If the S3 web pages are on a different host than the EC2 instance then the same origin policy will prevent the browser from even attempting some of the requests.
The only solutions I can see are:
Make all requests to the EC2 instance, with it fetching the S3 contents and delivering it to the browser whenever a web page is asked for.
Have your JavaScript use iframes and change the document.domain in the web pages to a common parent origin. For example, if your web pages are at www.example.com and your EC2 instance is at api.example.com, the JavaScript would change document.domain to just example.com and the browser would permit iframes from www.example.com to communicate with api.example.com.
Bite the bullet and use CORS. It's really not hard, and it's supported in all remotely recent browsers (IE 8 and 9 do it, but not in a standard way).
The first method is no good, because in that case you might as well not use S3 at all.
The second case should be okay for you. It should work in any browser, because it's not really CORS. So no CORS headers are needed. But it's tricky.
The third, CORS, approach should be just fine. Your EC2 instance just has to return the proper headers telling web pages from the S3 bucket that it's safe for them to talk to the EC2 instance.
Just to add one more bit to the answer: if we go with the CORS approach and the preflight requests add overhead to the server and to network bandwidth, we may also consider adding the Access-Control-Max-Age header to the CORS response.
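As a rough sketch of what this could look like on the Node.js API tier (the origin, port, and handler are illustrative assumptions, not the poster's actual setup):
import * as http from "http";

const server = http.createServer((req, res) => {
  // Allow the static site served from S3/CloudFront to call this API cross-origin.
  res.setHeader("Access-Control-Allow-Origin", "https://somedomain.com");
  res.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
  res.setHeader("Access-Control-Max-Age", "86400"); // let browsers cache the preflight for a day

  if (req.method === "OPTIONS") {
    // Answer the preflight request without touching application logic.
    res.writeHead(204);
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
});

server.listen(3000);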