Is it possible to set up a reverse proxy in ISPConfig?
I tried this setting on a subdomain, but I only receive an error 500.
The /var/www/influxdb2.*******.***/log/error.log says the following:
==> error.log <==
[Fri Jan 01 21:24:15.963158 2021] [proxy:warn] [pid 30333] [client ***.***.***.***:59356] AH01144: No protocol handler was valid for the URL /favicon.ico (scheme 'http'). If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule., referer: https://influxdb2.*******.***/
For me, the proxy_http mod was missing.
Enable it via sudo a2enmod proxy_http and restart your Apache with systemctl restart apache2 (thanks to https://serverfault.com/questions/773449/no-protocol-handler-valid-for-the-url-with-httpd-mod-proxy-balancer).
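In shell form (assuming the Debian/Ubuntu module layout that ISPConfig setups typically use; a2enmod should pull in the base proxy module as a dependency):
# Enable the missing proxy submodule and restart Apache so it gets loaded.
sudo a2enmod proxy_http
sudo systemctl restart apache2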
Also note that the "redirect type" setting sometimes seems to reset itself to "none" on saving (or at least does not display the correct value on loading the page as of ISPConfig 3.2.1). So double check that setting if something does not work.
For the "Domain" tab, settings are pretty straightforward. Just enter your domain and probably enable Let's Encrypt.
Note that ISPConfig seems to use mod_rewrite for proxying here. The Apache 2 documentation on mod_rewrite states that the ProxyPass directive of mod_proxy should be preferred over rewrite-based proxying. So if anything breaks with some applications, this might be a starting point for further investigation (it worked for me for reverse proxying to the HTTP endpoint of InfluxDB 2.0.3 at http://localhost:8086).
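For reference, a rough sketch of what the equivalent mod_proxy configuration could look like - this is an assumption on my part, not what ISPConfig actually generates, and the backend URL matches my InfluxDB setup:
<IfModule mod_proxy.c>
    # Forward all requests for this vhost to the local InfluxDB HTTP endpoint
    # and rewrite redirect headers coming back from it.
    ProxyPreserveHost On
    ProxyPass        "/" "http://localhost:8086/"
    ProxyPassReverse "/" "http://localhost:8086/"
</IfModule>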
I have the following allowed redirect URI set for my client: exp://192.168.2.212:19000
After a code exchange using the following URL:
GET /auth/realms/xxxxx/protocol/openid-connect/auth?code_challenge=m71Cl...D4hw&redirect_uri=exp%3A%2F%2F192.168.2.212%3A19000&client_id=3B03...
X-Forwarded-For: 178.84.x.x
X-Forwarded-Host: oidc.production.my.domain.com
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: 09918a799a23
X-Real-Ip: 178.84.x.x
I get an HTTP/1.1 302 Found with the following Location field:
Location: exp://192.168.2.212?state=T0pvzPyHF6&session_state=b1cf16ad-b.....
The port is missing. My (Expo) client in the Android emulator then barfs about not being able to connect to 192.168.2.212 on port 80. Naturally.
I am using the Docker Hub images, version 11.0.0.
How can I prevent this? Is it a bug?
(The iOS version of my app uses a different redirect_uri (exp://127.0.0.1:19000), but although Keycloak strips the port there as well and it receives a Location: exp://127.0.0.1?state=T0p... it does connect to port 19000 and works fine for some reason.)
EDIT: Note that authentication works fine on iOS, and I run exactly the same Keycloak settings in iOS as Android (It's a React Native application).
Keycloak logs no error, and the following debug message:
13:24:33,365 DEBUG [org.keycloak.events] (default task-47) type=LOGIN, realmId=neemop, clientId=3B03FD35, userId=28619cd3-c51d-4756-9d06-fb47********, ipAddress=178.84.x.x, auth_method=openid-connect, auth_type=code, response_type=code, redirect_uri=exp://192.168.2.212:19000, consent=no_consent_required, code_id=a0faa4d4-6826-4c2f-9243-*******, response_mode=query, username=ron.arts#mydomain.com, authSessionParentId=a0faa4d4-6826-4c2f-9243-*******, authSessionTabId=-Pn******
shows that the redirect_uri is parsed correctly. It's just that in the actual HTTP response the Location: header omits the port, which IMHO should not happen.
Seems like a bug: https://issues.redhat.com/browse/KEYCLOAK-9405?_sscc=t
Tested on 12.0.4 and it still occurs. It appears to be an issue with any non-http(s) protocol.
Another bug has been submitted to the Keycloak team:
https://issues.redhat.com/browse/KEYCLOAK-17141
A fix is available in Keycloak version >= 13.0.0.
I have a CGI script to load publications from BibBase:
#!/usr/bin/perl
use LWP::UserAgent;

# Fetch the rendered publication list from BibBase, advertising whatever
# content encodings this LWP install can decode, and pass the HTML through.
my $url = 'https://bibbase.org/show?bib=http://www.example.com/pubs.bib';
my $ua = LWP::UserAgent->new;
my $can_accept = HTTP::Message::decodable;
my $response = $ua->get($url, 'Accept-Encoding' => $can_accept);

# Emit the CGI header, then the decoded response body.
print "Content-type: text/html\n\n";
print $response->decoded_content;
(This is copied from BibBase with the exception that the URL is hard-coded.)
I have three webservers running RHEL7 and Apache 2.4 that are configured the same way by Puppet. On all three I can run the script on the command line and get the expected results:
[root#server1 cgi-bin]# ./bibbase_proxy2.cgi | head
Content-type: text/html
<img src="//bibbase.org/img/ajax-loader.gif" id="spinner" style="display: none;" alt="Loading.." />
<div id="bibbase">
<script type="text/javascript">
var bibbase = {
params: {"bib":"http://www.example.com/pubs.bib","host":"bibbase.org"},
When I try to run the script with CGI, I get three different results:
Server1
Unrecognised protocol tcp at /usr/share/perl5/LWP/Protocol/http.pm line 31.
Server2
Can't connect to bibbase.org:443 System error at /usr/share/perl5/LWP/Protocol/http.pm line 51.
Server3
No HTTP output, and the error log says AH01215: Out of memory!
I can't find anything different between the three servers and I can't figure out why the script works fine on the command line and doesn't work when run as a CGI.
I have SELinux in permissive mode and it is logging the outgoing request, so I know the script gets that far:
type=AVC msg=audit(1532465859.921:331235): avc: denied { name_connect } for pid=161178 comm="perl" dest=80 scontext=system_u:system_r:httpd_sys_script_t:s0 tcontext=system_u:object_r:http_port_t:s0 tclass=tcp_socket
For testing, I have set SELinux to disabled and restarted the server.
SE-Linux denied the TCP connection.
avc: denied { name_connect }
The default access controls for networking by SELinux are based on the labels assigned to TCP and UDP ports and sockets. For instance, the TCP port 80 is labeled with http_port_t (and class tcp_socket). Access towards this port is then governed through SELinux access controls, such as name_connect and name_bind.
When an application is connecting to a port, the name_connect permission is checked. However, when an application binds to the port, the name_bind permission is checked.
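For completeness: the usual way to let CGI scripts under httpd make outbound connections on RHEL 7 is the corresponding SELinux boolean (a sketch; check the exact boolean names on your system with getsebool -a):
# Show the current value of the boolean governing outbound connections
# from the httpd / httpd_sys_script_t domains.
getsebool httpd_can_network_connect
# Allow such connections persistently (-P survives reboots), then retry.
setsebool -P httpd_can_network_connect on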
Permissive mode or not, Perl is acting like it was denied a TCP connection. Unrecognised protocol tcp means getprotobyname("tcp") failed inside IO::Socket::IP. That's very, very unusual. One of the ways that can happen is via exactly that SELinux denial.
I'm no SELinux expert, but according to Red Hat and Gentoo, some SELinux-aware applications will ignore the global permissive setting and go it alone. Apache on RHEL 7 appears to be one of them: it seems to have its own domain, which must be set permissive separately.
On all three I can run the script on the command line and get the expected results:
There are two reasons for that, and they both have to do with users.
When you run the program you're running as your own user with your own configuration, permissions, and environment variables. In fact, you ran it as root which usually bypasses restrictions. When it runs on the server it runs as a different user, probably the web server user with severe restrictions.
In order to do a realistic test, you need to run it as the same user the web server will. You can use sudo -u for this. For example, if the user is apache...
sudo -u apache ./bibbase_proxy2.cgi
BTW, do not test software as root! Not only is it not going to give you sensible results, but if there's a bug in the software there are no safeguards preventing it from wrecking your system.
The second problem is #!/usr/bin/env perl. That means to run whatever perl is in your PATH. PATH will be different for different users. Running ./bibbase_proxy2.cgi may run with one Perl on the command line and a different one via the web server.
In a server environment, use a hard coded path to Perl like #!/usr/bin/perl.
We tested by rewriting the same script in Python and PHP. Both of them showed errors which pointed us in the right direction.
Python urllib2 produced the error:
<class 'urllib2.URLError'>: <urlopen error [Errno 16] Device or resource busy>
args = (error(16, 'Device or resource busy'),)
errno = None
filename = None
message = ''
reason = error(16, 'Device or resource busy')
strerror = None
PHP (run as CGI) wouldn't even start:
[Wed Jul 25 15:24:52.988582 2018] [cgi:error] [pid 10369] [client 172.28.6.200:44387] AH01215: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/curl.so' - libssh2.so.1: failed to map segment from shared object: Cannot allocate memory in Unknown on line 0
[Wed Jul 25 15:24:52.988980 2018] [cgi:error] [pid 10369] [client 172.28.6.200:44387] AH01215: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/dba.so' - libtokyocabinet.so.9: failed to map segment from shared object: Cannot allocate memory in Unknown on line 0
---- Similar lines for all extensions. ----
It appears that RLimitMEM blocks access to shared memory, and that is required for opening sockets. I can't find any documentation on this, but removing that line makes it work.
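For illustration, a hedged sketch of the kind of vhost excerpt involved - the path and the limit value are placeholders, not taken from my actual configuration:
<Directory "/var/www/cgi-bin">
    Options +ExecCGI
    # RLimitMEM 200000000
    # ^ commented out: with this memory limit in place the CGI children could
    #   not map shared libraries or open sockets, producing the errors above.
</Directory>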
My server configuration is a TYPO3 installation, version 6.2.31, combined with a reverse proxy. The system is running fine with HTTP.
When we try to switch to HTTPS we get this error message in the backend:
"Connection Problem
Sorry, but an error occurred while connecting to the server. Please check your network connection."
And the page tree is not loading.
When switching back to HTTP, everything works again.
Our settings:
[SYS][reverseProxyIP] = (IP of our reverse Proxy)
[SYS][reverseProxyHeaderMultiValue] = last
[SYS][reverseProxySSL] = *
What I tried:
Deactivated all extensions apart from the system extensions
No entry in the syslog (error reporting is set to development)
No entries in the server logs
lockSSL set to 3 in the Install Tool results in never-ending 302 redirects
lockSSL with option 2 results in this error message:
Fatal error: Uncaught exception 'RuntimeException' with message 'TYPO3 Backend not accessed via SSL: TYPO3 Backend is configured to only be accessible through SSL. Change the URL in your browser and try again.' in /srv/httpd/sites/fland_ww1/typo3_src-6.2.31/typo3/sysext/core/Classes/Core/Bootstrap.php:897 Stack trace: #0 /srv/httpd/sites/fland_ww1/typo3_src-6.2.31/typo3/init.php(54): TYPO3\CMS\Core\Core\Bootstrap->checkSslBackendAndRedirectIfNeeded() #1 /srv/httpd/sites/fland_ww1/typo3_src-6.2.31/typo3/index.php(21): require('/srv/httpd/site...') #2 {main} thrown in /srv/httpd/sites/fland_ww1/typo3_src-6.2.31/typo3/sysext/core/Classes/Core/Bootstrap.php on line 897
It seems that some requests, e.g. for the page tree, are made without SSL - AJAX calls, I presume - but I don't have a clue how to debug it.
Any ideas?
Thanks!
I have the same version at a customer and with a load balancer / proxy.
The only difference is [SYS][reverseProxyHeaderMultiValue] = first.
Also, [BE][lockSSL] = 1 is set.
Maybe it helps?
This thread is quite old, but because many people still read it, I will try an answer. We were able to solve the problem (once again, in a different installation) with the following settings:
[SYS][reverseProxyIP] = (IP of our reverse Proxy)
[SYS][reverseProxyHeaderMultiValue] = first
[SYS][reverseProxySSL] = *
AND - this is important - changes in the server config too:
RequestHeader set X-Forwarded-Proto "https"
SetEnv proxy-nokeepalive 1
SetEnv proxy-initial-not-pooled 1
I assume the decisive one was the first:
RequestHeader set X-Forwarded-Proto "https"
So in the end the problem was in the server config.
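For reference, a hedged sketch of how those directives might sit in the SSL-terminating reverse-proxy vhost (host names and the backend URL are placeholders; RequestHeader needs mod_headers):
<VirtualHost *:443>
    ServerName www.example.org
    SSLEngine on
    # Tell the TYPO3 backend that the original request came in via HTTPS,
    # and avoid pooled / keep-alive connections to the backend.
    RequestHeader set X-Forwarded-Proto "https"
    SetEnv proxy-nokeepalive 1
    SetEnv proxy-initial-not-pooled 1
    ProxyPreserveHost On
    ProxyPass        "/" "http://backend.internal/"
    ProxyPassReverse "/" "http://backend.internal/"
</VirtualHost>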
Usually I try to resolve issues by myself, but in this case I am lost ;-)
I have installed SuiteCRM 7.8.2 on my server (managed with Plesk Onyx).
Everything works great except one thing:
When I am trying to save a PDF template or an email template, I get a 403 error (Forbidden access).
Things I have already done:
Trying chmod 777 for all files and folders of SuiteCRM => Not working
Changing permissions in config.php => Not working
Quick Repair => Not working
Deleting the cache folder => Not working
Hitting my laptop => Not working ... grrr..
I have no access to more information; in the browser console I can see that SuiteCRM tries to send a POST request to index.php and index.php answers with a 403 error. There is nothing in the log file in debug mode...
I don't have any more ideas...
Thank you.
Rémi.
Solved:
I have looked in "/var/www/vhosts/system/YOUR-DOMAIN.COM/logs":
[Sun Apr 02 21:34:58.173943 2017] [:error] [pid 29185] [client 82.227.112.246] ModSecurity: Access denied with code 403 (phase 2). Match of "rx ((?:submit(?:\\+| )?(request)?(?:\\+| )?>+|<<(?:\\+| )remove|(?:sign ?in|log ?(?:in|out)|next|modifier|envoyer|add|continue|weiter|account|results|select)?(?:\\+| )?>+)$|^< ?\\??(?: |\\+)?xml|^> ?$)" against "ARGS:sample" required. [file "/etc/apache2/modsecurity.d/rules/tortix/modsec/50_plesk_basic_asl_rules.conf"] [line "308"] [id "350147"] [rev "143"] [msg "Protected by Atomicorp.com Basic Non-Realtime WAF Rules: Potentially Untrusted Web Content Detected"] [data ""] [severity "CRITICAL"] [hostname "XXXXXXXX"] [uri "/SuiteCRM/index.php"] [unique_id "WOFSYtX2OSwAAHIBsoAAAAAF"]
It's the ModSecurity firewall!
So I have disabled rule 350147 in ModSecurity (https://docs.plesk.com/en-US/12.5/administrator-guide/73383/ + "Switching off Rules").
It works!
Thanks to UFHH01, I love you ;-)
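If you prefer to keep the rest of the rule set and only drop that one rule, the same thing can be done directly in the Apache/ModSecurity configuration - a sketch, assuming mod_security2; in Plesk the UI route linked above is the supported way:
<IfModule security2_module>
    # Disable only the single Atomicorp rule that matched the template content.
    SecRuleRemoveById 350147
</IfModule>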
I cannot run Zend projects containing Zend_Session classes on WAMP.
After checking httpd's error log, I found this entry, and other errors, all connected with the loading of Zend_Session.
[ssl:warn] [pid 5340:tid 216] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]
I've tried to open another project which doesn't contain any Zend_Session and it works. How could I solve this, in order to be able to include Zend_Session classes within my projects and successfully run them with WAMP?
This is a problem with your Apache SSL configuration.
Configure your SSL module as below:
<IfModule ssl_module>
SSLSessionCache "shmcb:C:/wamp/bin/apache/Apache2.2.17/logs/ssl_scache(512000)"
SSLSessionCacheTimeout 300
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>
Maybe you should also read the SSLSessionCache documentation.
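One more note, as an assumption: on Apache 2.4 the shmcb session cache provider lives in its own module, so it has to be loaded before SSLSessionCache "shmcb:..." is accepted (on Apache 2.2, as in the path above, it is built into mod_ssl):
# httpd.conf (Apache 2.4): load the shmcb socache provider used by
# SSLSessionCache, then include the SSL configuration that references it.
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
Include conf/extra/httpd-ssl.conf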