How to configure JBoss EAP (6.4.x) for a combined HTTP/HTTPS reverse proxy?

Our application runs on JBoss EAP 6.4. Our development setup provides a JBoss instance running in HTTP mode on port 8080 and a reverse proxy with both an HTTP (port 9090) and an HTTPS (port 9443) endpoint to help test different scenarios.
A problem arises when I try to use the "current" URL by injecting UriInfo into my request handlers. The scheme part of the URI it returns always depends on the scheme attribute of the connector in standalone.xml, not on the scheme that was actually used. For example, if I call https://localhost:9443 and http://localhost:9090 while the connector's scheme is set to https, both URLs come back as HTTPS, i.e. https://localhost:9443 but also https://localhost:9090. If I switch the connector's scheme to http, both URLs change to HTTP. Needless to say, X-Forwarded-Proto is also ignored.
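Roughly, my handlers look like this (a simplified, hypothetical JAX-RS resource; the path and class names are made up):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.UriInfo;

// Hypothetical resource illustrating the problem: the scheme of the returned
// URI follows the connector's "scheme" attribute, not the scheme the client used.
@Path("/whoami")
public class WhoAmIResource {

    @GET
    public String currentUri(@Context UriInfo uriInfo) {
        // expected: https://localhost:9443/... when called via the HTTPS proxy endpoint,
        //           http://localhost:9090/... when called via the HTTP proxy endpoint
        return uriInfo.getRequestUri().toString();
    }
}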
Is there a way to make JBoss behave more like most other application servers, i.e. without making any assumptions about the environment it runs in, especially reverse proxies and load balancers?

RemoteIpValve should do everything you need.
Source code from the JBossWeb 7.5.20 (EAP 6.4.20):
http://anonsvn.jboss.org/repos/jbossweb/tags/JBOSSWEB_7_5_20_FINAL/src/main/java/org/apache/catalina/valves/RemoteIpValve.java
Here's more readable documentation at the upstream Apache Tomcat 7.0 project website:
https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/valves/RemoteIpValve.html
The minimum config in your case would be the following global valve configuration in the web subsystem:
<valve name="remoteip-valve" module="org.jboss.as.web" class-name="org.apache.catalina.valves.RemoteIpValve">
<param param-name="protocolHeader" param-value="X-Forwarded-Proto"/>
</valve>
This would set the scheme based on the value of the X-Forwarded-Proto header.
For https it would also set the secure flag to true and port to 443.
Since you seem to require the HTTPS port to be set to 9443, you can do that via the additional httpsServerPort parameter (and I think you'll also need to set httpServerPort to 9090, as you mention above, because the RemoteIpValve would otherwise override it to 80), e.g.
<valve name="remoteip-valve" module="org.jboss.as.web" class-name="org.apache.catalina.valves.RemoteIpValve">
<param param-name="protocolHeader" param-value="X-Forwarded-Proto"/>
<param param-name="httpServerPort" param-value="9090"/>
<param param-name="httpsServerPort" param-value="9443"/>
</valve>
And you can do more with that valve if you need to; just check the documentation for more details.
It's also briefly described for example here (RH login required): https://access.redhat.com/solutions/629863
BTW, if you were able to use the AJP protocol (from the proxy to the application server) instead, none of this would be needed: AJP is designed for these cases and all the required information is passed on to the application server pretty much transparently.

Related

Wildfly HTTP-only redirect

Quick and dirty fix needed here if possible...
We've been running a bunch of REST services on a Wildfly installation for a few years. The server isn't for public use -- on the main https://ourserver.com page we have a redirect which points wandering users to our main website. It's a very simple standalone config.
But the server has always been HTTPS only. And now thanks to a domain reshuffle, we need to make it possible for users who go to http://ourserver.com without SSL to hit the redirect to ourserver.net. So we basically need to expose just the welcome-content directory on this server over the "http" interface (which was previously firewalled off), while not letting non-SSL users reach any of the webservice subdirectories.
What's the simplest way to ensure that accessing any URL via plain HTTP gets redirected?
You can try adding a filter to standalone.xml under the urn:jboss:domain:undertow:4.0 subsystem.
e.g. for http (8080) to https (8443), it would be:
<filter-ref name="http-to-https" predicate="equals(%p,8080)"/>
Then, under the filters tag, add the filter itself:
<rewrite name="http-to-https" target="https://myhttpsurl.com:8443%U" redirect="true"/>

Exe as Webservice Endpoint

I have a webservice endpoint and I am stumbling over how to implement it correctly.
It seems to be a parameterized exe file which returns an XML reply.
There is no documentation.
I am used to SOAP, WCF and REST, but this is completely unknown to me. Does anyone have a guide or a best practice for how to implement such a service?
I can consume it with an HTTP GET, but there are some questions left:
I know the questions are quite broad... but I could not find anything about this in the interwebz.
Is there a secure way to publish exe files as webservices?
Are there any critical downsides to implementing such an interface?
Am I making a fool of myself and this is just an alias?
Example Url:
http://very.exhausting.company/Version/SuperStrange.exe?parameter=String
Web servers
What you call a webservice endpoint is nothing more than a web server listening on some host (normally 0.0.0.0) and some port of a physical or virtual machine, and responding with some HTTP response to HTTP requests sent to that host, port and the URIs that the web server cares to process.
Any web server is itself an application, or a static or dynamic component of an application, as the following examples illustrate:
JBoss, GlassFish, Tomcat etc. are applications, known as application servers, into which containers/servlets/plugins implementing web servers and the corresponding endpoints are deployed. These listen on some port, exposing generic web servers that route requests to those containers and their servlets;
a fat jar started with java -jar on a JVM which deploys a vert.x verticle featuring a vert.x HttpServer listening on some port is nothing else than a web server;
an interpreter such as node.js parsing and executing JavaScript code based on the express module will most likely deploy a web server on some port;
finally, a statically or dynamically linked application written in languages such as C++ or Go can expose a web server listening on some port.
All of the above cases feature different deployment mechanisms, but what they deploy is essentially the same: a piece of software that listens for HTTP requests on some port, executes some logic based on the request and returns HTTP responses to the caller.
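To make the vert.x case from the list concrete, a minimal sketch (assuming vert.x 3.x on the classpath; started with plain java, it is nothing more than a web server on port 8080):

import io.vertx.core.Vertx;

public class MinimalHttpServer {
    public static void main(String[] args) {
        // create an HTTP server, attach a request handler and listen on a port -
        // conceptually the same thing your exe file does
        Vertx.vertx().createHttpServer()
            .requestHandler(req -> req.response()
                .putHeader("Content-Type", "text/plain")
                .end("hello from a web server"))
            .listen(8080);
    }
}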
Your Windows exe file is most likely a statically linked application that provides a web server.
Protocols
So we know you have a web server, because it reacts to an HTTP GET. How does it relate to REST, SOAP etc.? Effectively, REST, SOAP etc. are higher-level protocols. TCP is the low-level protocol, HTTP sits on top of it, and that is what your server supports. REST, SOAP and everything else you mention are higher-level protocols that are based, among others, on HTTP. So all you know is that your application (web server) speaks HTTP, but you do not know which higher-level data exchange protocol it implements. It definitely implements some, at least a custom one that its author came up with to exchange data between a client and this application.
You can try to reverse engineer it, but it is not clear how you would find out about all possible endpoints, arguments, payload structures, accepted headers etc. Essentially, you have a web server publishing some sort of an API, but there is no generic way of telling what that API is.
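For instance, about all you can do without documentation is inspect what a known call returns, e.g. using the example URL from the question:

# -v shows the status line, the response headers (server, content type) and the XML body -
# roughly everything you can learn about this API without further documentation
curl -v "http://very.exhausting.company/Version/SuperStrange.exe?parameter=String"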
Security
The world around you does not have to know how the API is published. You can put any of the above four web server implementations behind exactly the same firewall or a reverse proxy with SSL termination, exposing just one host and port over SSL. So with respect to the outside world there is no difference in security whether you deploy it as an exe or as a war into JBoss. This is not to say that your exe file is secure: depending on how it is implemented it may allow all sorts of attacks, but again, that is equally true for any of these mechanisms.
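As an illustration, a hypothetical haproxy front end terminating SSL in front of the exe-based web server (addresses and certificate path are made up):

frontend public
    bind *:443 ssl crt /etc/haproxy/site.pem
    default_backend exe_app

backend exe_app
    # the exe web server itself only ever sees plain HTTP on an internal address
    server exe1 10.0.0.5:8080 check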

Use haproxy as a reverse proxy with an application behind Internet proxy

I need to integrate several web applications, on-premise and off-site, under a common internally hosted URL. The on-premise applications are in the same data center as the haproxy, but the off-site applications can only be reached via an HTTP proxy, because the server on which haproxy is running has no direct Internet access. Therefore I have to use an HTTP Internet proxy; SOCKS might be an option too.
How can I tell haproxy that a backend can only be reached via a proxy?
I would rather not use an additional component like socksify / proxifier / proxychains / tsocks / ... because this introduces additional overhead.
This picture shows the components involved in the setup:
When I run this on a machine with direct Internet connection I can use this config and it works just fine:
frontend main
    bind *:8000
    acl is_extweb1 path_beg -i /policies
    acl is_extweb2 path_beg -i /produkte
    use_backend externalweb1 if is_extweb1
    use_backend externalweb2 if is_extweb2

backend externalweb1
    server static www.google.com:80 check

backend externalweb2
    server static www.gmx.net:80 check
(Obviously these are not the URLs I am talking to, this is just an example)
Haproxy is able to check the external applications and routes traffic to them:
In the safe environment of the company I work at I have to use a proxy, and haproxy is unable to connect to the external applications.
How can I enable haproxy to use those external web application servers behind an HTTP proxy (no authentication needed) while providing access to them through a common HTTP page / via a browser?
How about using delegate ( http://delegate.org/documents/ ) for this, just as an idea? The chain would be haproxy -> delegate -> your proxy, with delegate started like:
delegate -f -vv -P127.0.0.1:8081 PROXY=<your-proxy>
http://delegate9.org/delegate/Manual.shtml?PROXY
I know it's not that elegant but it could work.
I have tested this setup with a local squid and this curl call:
echo 'GET http://www.php.net/' | curl -v telnet://127.0.0.1:8081
The curl call simulates the haproxy tcp call.
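With delegate listening locally, the backends in the haproxy config from the question would then point at the delegate instance instead of the external hosts, roughly (untested sketch):

backend externalweb1
    # delegate on 127.0.0.1:8081 forwards the request via the corporate proxy
    server static 127.0.0.1:8081 check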
I was intrigued to make it work, but I really could not find anything in the haproxy documentation, so I googled a bit and found that nginx might do the trick. It didn't for me, and after a bit more googling I ended up finding a configuration for Apache that works.
Here is the important part:
Listen 80
SSLProxyEngine on
ProxyPass /example/ https://www.example.com/
ProxyPassReverse /example/ https://www.example.com/
ProxyRemote https://www.example.com/ http://corporateproxy:port
ProxyPass /google/ https://www.google.com/
ProxyPassReverse /google/ https://www.google.com/
ProxyRemote https://www.google.com/ http://corporateproxy:port
I'm quite sure there should be a way to translate this configuration to nginx and even to haproxy... if I manage to find the time I will update the answer with my findings.
For Apache to work you should also enable a few modules; I put up a GitHub repository with a basic Docker configuration that showcases this, feel free to have a look at it to see the full working configuration.
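The modules in question are, at minimum, mod_proxy, mod_proxy_http and (since the backends are https) mod_ssl; the LoadModule lines typically look like this, with paths depending on the distribution:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule ssl_module modules/mod_ssl.so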

Advanced Tweak on Undertow-handlers.conf for http https redirect

I use WildFly behind an AWS load balancer. I want the Undertow server in WildFly to redirect http traffic to https, and I can do this mostly successfully with the following line placed in undertow-handlers.conf:
equals('http', %{i,X-Forwarded-Proto}) -> redirect(https://app.server.com%U)
Thanks to these folks for getting me this far! Now here's my desired tweak. Sometimes I run my web application behind a testing load balancer using 'dev.server.com' and sometimes I run it behind a production load balancer using 'app.server.com.' Currently, I have to remember to manually edit undertow-handlers.conf any time I switch balancers. I'm hoping there is a way to change the hard-coded 'dev' and 'app' to something mechanical. Is there a way to tell Undertow to just use the domain name that was originally requested?
Thanks.
Thankfully the Undertow configuration gives you access to the request headers via Exchange Attributes, which you're already using to access the X-Forwarded-Proto header. So the solution is simply to use the Host header from the request, like so:
equals('http', %{i,X-Forwarded-Proto}) -> redirect(https://%{i,Host}%U)
If you want to keep it as part of the deployment, try using %h in the redirect expression. For example:
equals('http', %{i,X-Forwarded-Proto}) -> redirect(https://%h%U)
Another option would be to configure the server to handle the redirect for you. The CLI commands would look something like the following assuming the default ports of 8080 for http and 8443 for https.
/subsystem=undertow/configuration=filter/rewrite=http-to-https:add(redirect=true, target="https://%h:8443%U")
/subsystem=undertow/server=default-server/host=default-host/filter-ref=http-to-https:add(predicate="equals(%p, 8080)")
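For reference, those two commands produce roughly the following XML in the undertow subsystem of standalone.xml (default server and host names assumed):

<host name="default-host" alias="localhost">
    <filter-ref name="http-to-https" predicate="equals(%p, 8080)"/>
</host>
<filters>
    <rewrite name="http-to-https" redirect="true" target="https://%h:8443%U"/>
</filters>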
You can see all the possible exchange attributes in the Undertow documentation.

WebSphere, sendRedirect and HTTPS

Environment: WebSphere App Server / WebSphere Portal 7, fronted by IBM IHS/Apache httpd using was_ap20_module / mod_was_ap20_http.
I have a servlet or JSP page with a redirect like
response.sendRedirect("/wps/myportal/....")
The generated redirect ends up with the right host and port for the IHS/Apache endpoint but the wrong protocol: it is http instead of https.
For example, if IHS/Apache is listening on https://myserver.com and WAS is on http://192.168.12.34:12345 (all ports/hosts fake), then my redirect comes back as http://myserver.com - correct host and port but wrong protocol.
How does WebSphere figure out the right host/port to use but not the protocol? How can I force the desired behavior?
Use Apache mod_headers to add a custom header before the request is forwarded to WebSphere. In WebSphere, set the httpsIndicatorHeader property to the name of that custom header; WebSphere will then know to switch to https.
http://www.ibmconnections.org/wordpress/index.php/tag/was-ssl-http-https/
http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=%2Fcom.ibm.websphere.express.doc%2Finfo%2Fexp%2Fae%2Frweb_custom_props.html
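As a sketch of the two pieces (the header name below is made up; httpsIndicatorHeader is the property described in the links above):

# IHS/Apache side, inside the SSL virtual host (requires mod_headers to be loaded)
RequestHeader set X-Custom-HTTPS-Indicator true

On the WebSphere side, set the httpsIndicatorHeader custom property to X-Custom-HTTPS-Indicator, so that requests carrying this header are treated as having arrived over https.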