Error running projects containing Zend_Session classes on WAMP

I cannot run Zend projects containing Zend_Session classes on WAMP.
After checking httpd's error log, I found this entry, along with other errors all connected with loading Zend_Session:
[ssl:warn] [pid 5340:tid 216] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]
I've tried to open another project which doesn't contain any Zend_Session and it works. How can I solve this, so that I can include Zend_Session classes in my projects and run them successfully with WAMP?

This is a problem with your Apache SSL configuration.
Configure your SSL module as below:
<IfModule ssl_module>
SSLSessionCache "shmcb:C:/wamp/bin/apache/Apache2.2.17/logs/ssl_scache(512000)"
SSLSessionCacheTimeout 300
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>
Maybe you should also read the SSLSessionCache documentation.
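After editing the configuration, it can help to syntax-check and restart Apache from the shell. A minimal sketch; the path is taken from the SSLSessionCache line above, so adjust it to your Apache version:
C:/wamp/bin/apache/Apache2.2.17/bin/httpd.exe -t          # syntax-check the config
C:/wamp/bin/apache/Apache2.2.17/bin/httpd.exe -k restart  # or restart via the WAMP tray icon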


How do I set up a reverse proxy in ISPConfig?

Is it possible to set up a reverse proxy in ISPConfig?
I tried this setting on a subdomain, but I only receive an error 500.
The /var/www/influxdb2.*******.***/log/error.log says the following:
==> error.log <==
[Fri Jan 01 21:24:15.963158 2021] [proxy:warn] [pid 30333] [client ***.***.***.***:59356] AH01144: No protocol handler was valid for the URL /favicon.ico (scheme 'http'). If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule., referer: https://influxdb2.*******.***/
For me, the proxy_http mod was missing.
Enable it via sudo a2enmod proxy_http and restart Apache with systemctl restart apache2 (thanks to https://serverfault.com/questions/773449/no-protocol-handler-valid-for-the-url-with-httpd-mod-proxy-balancer).
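In shell form (Debian/Ubuntu module tooling, as used above; a2enmod should pull in the base proxy module as a dependency, but enabling it explicitly does no harm):
sudo a2enmod proxy proxy_http
sudo systemctl restart apache2
apache2ctl -M | grep -i proxy   # verify both modules are now loaded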
Also note that the "redirect type" setting sometimes seems to reset itself to "none" on saving (or at least does not display the correct value on loading the page as of ISPConfig 3.2.1). So double check that setting if something does not work.
For the "Domain" tab, settings are pretty straightforward. Just enter your domain and probably enable Let's Encrypt.
Note that this seems to use mod_rewrite for proxying. The Apache 2 documentation on mod_rewrite states that ProxyPass from mod_proxy should be preferred instead. So if anything breaks with some applications, this might be a starting point for further investigation (it worked for me for reverse proxying to the HTTP endpoint of InfluxDB 2.0.3 at http://localhost:8086).
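For reference, a minimal mod_proxy setup under these assumptions (the conf file name is hypothetical, the InfluxDB port is the one mentioned above, and on a real ISPConfig site the directives would normally go into the vhost rather than a global conf file):
sudo tee /etc/apache2/conf-available/influxdb-proxy.conf >/dev/null <<'EOF'
ProxyPreserveHost On
ProxyPass        / http://localhost:8086/
ProxyPassReverse / http://localhost:8086/
EOF
sudo a2enconf influxdb-proxy
sudo systemctl reload apache2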

LWP Won't Run in CGI Script

I have a CGI script to load publications from BibBase:
#!/usr/bin/perl
use LWP::UserAgent;
my $url = 'https://bibbase.org/show?bib=http://www.example.com/pubs.bib';
my $ua = LWP::UserAgent->new;
my $can_accept = HTTP::Message::decodable;
my $response = $ua->get($url, 'Accept-Encoding' => $can_accept);
print "Content-type: text/html\n\n";
print $response->decoded_content;
(This is copied from BibBase with the exception that the URL is hard-coded.)
I have three webservers running RHEL7 and Apache 2.4 that are configured the same way by Puppet. On all three I can run the script on the command line and get the expected results:
[root@server1 cgi-bin]# ./bibbase_proxy2.cgi | head
Content-type: text/html
<img src="//bibbase.org/img/ajax-loader.gif" id="spinner" style="display: none;" alt="Loading.." />
<div id="bibbase">
<script type="text/javascript">
var bibbase = {
params: {"bib":"http://www.example.com/pubs.bib","host":"bibbase.org"},
When I try to run the script with CGI, I get three different results:
Server1
Unrecognised protocol tcp at /usr/share/perl5/LWP/Protocol/http.pm line 31.
Server2
Can't connect to bibbase.org:443 System error at /usr/share/perl5/LWP/Protocol/http.pm line 51.
Server3
No HTTP output, and the error log says AH01215: Out of memory!
I can't find anything different between the three servers and I can't figure out why the script works fine on the command line and doesn't work when run as a CGI.
I have SELinux in permissive mode and it is logging the outgoing request, so I know the script gets that far:
type=AVC msg=audit(1532465859.921:331235): avc: denied { name_connect } for pid=161178 comm="perl" dest=80 scontext=system_u:system_r:httpd_sys_script_t:s0 tcontext=system_u:object_r:http_port_t:s0 tclass=tcp_socket
For testing, I have set SELinux to disabled and restarted the server.
SELinux denied the TCP connection.
avc: denied { name_connect }
The default access controls for networking by SELinux are based on the labels assigned to TCP and UDP ports and sockets. For instance, the TCP port 80 is labeled with http_port_t (and class tcp_socket). Access towards this port is then governed through SELinux access controls, such as name_connect and name_bind.
When an application is connecting to a port, the name_connect permission is checked. However, when an application binds to the port, the name_bind permission is checked.
Permissive mode or not, Perl is acting like it was denied a TCP connection. Unrecognised protocol tcp means getprotobyname("tcp") failed inside IO::Socket::IP. That's very, very unusual. One of the ways that can happen is via exactly that SELinux denial.
I'm no SELinux expert, but according to Red Hat and Gentoo, some SELinux-aware applications will ignore the global permissive setting and go it alone. RHEL 7 Apache appears to be one of them: it appears to have its own domain, which must be set permissive.
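On RHEL 7 this can be inspected, and if appropriate relaxed, from the shell. A sketch, assuming the stock boolean name governing outbound connections from httpd:
semanage port -l | grep -w http_port_t           # show which ports carry the label
getsebool httpd_can_network_connect              # is outbound TCP currently allowed?
sudo setsebool -P httpd_can_network_connect on   # allow it persistently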
On all three I can run the script on the command line and get the expected results:
There are two reasons for that, and they both have to do with users.
When you run the program yourself, you're running as your own user with your own configuration, permissions, and environment variables. In fact, you ran it as root, which usually bypasses restrictions. When it runs on the server, it runs as a different user, probably the web server user, with severe restrictions.
In order to do a realistic test, you need to run it as the same user the web server does. You can use sudo -u for this. For example, if the user is apache...
sudo -u apache ./bibbase_proxy2.cgi
BTW, do not test software as root! Not only will it not give you sensible results, but if there's a bug in the software there are no safeguards preventing it from wrecking your system.
The second potential problem is a shebang like #!/usr/bin/env perl, which runs whatever perl is first in your PATH. PATH differs between users, so ./bibbase_proxy2.cgi may run with one Perl on the command line and a different one via the web server.
In a server environment, use a hard-coded path to Perl, as your script already does with #!/usr/bin/perl.
We tested by rewriting the same script in Python and PHP. Both of them produced errors which pointed us in the right direction.
Python urllib2 produced the error
<class 'urllib2.URLError'>: <urlopen error [Errno 16] Device or resource busy>
args = (error(16, 'Device or resource busy'),)
errno = None
filename = None
message = ''
reason = error(16, 'Device or resource busy')
strerror = None
PHP (run as CGI) wouldn't even start:
[Wed Jul 25 15:24:52.988582 2018] [cgi:error] [pid 10369] [client 172.28.6.200:44387] AH01215: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/curl.so' - libssh2.so.1: failed to map segment from shared object: Cannot allocate memory in Unknown on line 0
[Wed Jul 25 15:24:52.988980 2018] [cgi:error] [pid 10369] [client 172.28.6.200:44387] AH01215: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/dba.so' - libtokyocabinet.so.9: failed to map segment from shared object: Cannot allocate memory in Unknown on line 0
---- Similar lines for all extensions. ----
It appears that RLimitMEM restricts shared memory, which is required for opening sockets. I can't find any documentation to confirm this, but removing that line makes it work.
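A quick way to locate the directive before commenting it out (RHEL config layout assumed):
grep -rn 'RLimitMEM' /etc/httpd/
sudo apachectl graceful   # reload after editing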

Apache - Mod Perl - Unknown Authz provider 'access'

I am trying to set up and run an old web application (written in 2010) in a new Linux environment. The Apache server does not start because of the error Unknown Authz provider: access, caused by the configuration given below.
<Directory /srv/webapp>
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
SetOutputFilter DEFLATE
ExpiresActive On
ExpiresDefault "3 Months"
AuthType security::AuthCookieHandler
AuthName Maxio
PerlAuthenHandler security::AuthCookieHandler->authenticate
PerlAuthzHandler security::AuthCookieHandler->authorize
require access
</Directory>
I couldn't find any documentation for this, or any Apache module that defines access, but security::AuthCookieHandler has
sub access
{
...
...
}
I understand that this is mod_perl-based authentication, but I haven't worked with this before. Apache starts if this authentication is disabled, and the application loads in the browser.
So the questions are:
Is require access supposed to get the return value from sub access?
If so, why is sub access not visible to the configuration?
If not, what is access here?
After researching for a few hours, I found out that this is because of changes in recent versions of Apache and mod_perl.
From the Apache-AuthCookie documentation and Apache 2.4 porting notes, I learned that Apache 2.4 needs mod_perl version 2.0.9 or higher.
Also, a custom authz provider has to be registered using PerlAddAuthzProvider. I was able to solve my issue by adding:
PerlAddAuthzProvider access security::AuthCookieHandler->access
...
...
<Directory /srv/webapp>
...
...
require access
</Directory>
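To confirm the mod_perl requirement is met, the installed version can be queried from the shell (a sketch, assuming mod_perl2.pm is on the system perl's @INC):
perl -Mmod_perl2 -e 'print "$mod_perl2::VERSION\n"'   # should print 2.0.9 or higher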

Installation error typo3-neos - 500 internal server error

I have downloaded typo3-neos using php c:/xampp/Composer/bin/composer.phar create-project --dev --stability alpha typo3/neos-base-distribution TYPO3-Neos-1.0-alpha
My httpd.conf is:
<VirtualHost *:80>
ServerName neos.demo
DocumentRoot c:/xampp/htdocs/Typo3-Neos/Web/
SetEnv APPLICATION_ENV "development"
<Directory c:/xampp/htdocs/Typo3-Neos/Web/>
DirectoryIndex index.php
AllowOverride FileInfo Options=MultiViews
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
and the hosts file entry is: 127.0.0.1 neos.demo
I am getting the following 500 Internal Server Error (a snippet):
1355480641: Execution of subprocess failed with exit code 1 without any further output.
(Please check your PHP error log for possible Fatal errors)
More information
TYPO3\Flow\Core\Booting\Exception\SubProcessException thrown in file
C:\xampp\htdocs\TYPO3-Neos\Packages\Framework\TYPO3.Flow\Classes\TYPO3\Flow\Core\Booting\Scripts.php in line 532.
Reference code: 201310091327354b04b0
As the error stack is quite long, I have split a screenshot of the complete error page into three parts (error1.png, error2.png, error3.png), attached here.
How can this be solved?
After setting your system up, start Neos by visiting http://neos.demo/setup first.
I was having the same issue on my Mac machine after a successful installation. The problem was that my PHP installation was not linked correctly to the php binary, although it was set correctly in /usr/bin/php and "active".
So make sure /opt/local/etc/select/php/current points to a valid PHP installation, using the command "sudo port select php php54" (for PHP 5.4); a quick verification is sketched below.
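A minimal sketch of that check, assuming the MacPorts layout described above:
ls -l /opt/local/etc/select/php/current   # where does the selection point?
which php && php -v                       # which binary is actually used?
sudo port select php php54                # re-select if it is stale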
I solved this error by setting these lines in neos\Packages\Framework\TYPO3.Flow\Configuration\Settings.yaml:
TYPO3:
  Flow:
    core:
      phpBinaryPathAndFilename: 'C:/path/to/php.exe'
      subRequestPhpIniPathAndFilename: '/path/to/your/php.ini'
This error occurs because TYPO3 Flow may not be able to find the php binary and php.ini file on the server.
For more help, follow this link: http://wiki.typo3.org/Exception/Flow/1355480641

How to make browser stop caching GWT nocache.js

I'm developing a web app using GWT and am seeing a crazy problem with caching of the app.nocache.js file in the browser even though the web server sent a new copy of the file!
I am using Eclipse to compile the app, which works in dev mode. To test production mode, I have a virtual machine (Oracle VirtualBox) with a Ubuntu guest OS running on my host machine (Windows 7). I'm running lighttpd web server in the VM. The VM is sharing my project's war directory, and the web server is serving this dir.
I'm using Chrome as the browser, but the same thing happens in Firefox.
Here's the scenario:
The web page for the app is blank. According to Chrome's "Inspect Element" tool, this is because it is trying to fetch 6E89D5C912DD8F3F806083C8AA626B83.cache.html, which doesn't exist (404 Not Found).
I check the war directory, and sure enough, that file doesn't exist.
The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response)
However, if I open the app.nocache.js in the browser, the JavaScript still refers to 6E89D5C912DD8F3F806083C8AA626B83.cache.html! That is, even though the web server sent a new app.nocache.js, the browser seems to have ignored it and kept using its cached copy!
Go to Google->GWT Compile in Eclipse. Recompile the whole thing.
Verify in the war directory that the app.nocache.js was overwritten and has a new timestamp.
Reload the page from Chrome and verify once again that the server sent a 200 OK response to the app.nocache.js.
The browser once again tries to load 6E89D5C912DD8F3F806083C8AA626B83.cache.html and fails. The browser is still using the old cached copy of app.nocache.js.
I made absolutely certain in the war directory that nothing refers to 6E89D5C912DD8F3F806083C8AA626B83.cache.html (via find and grep).
What is going wrong? Why is the browser caching this nocache.js file even when the server is sending it a new copy?
Here is a copy of the HTTP request/response headers when clicking reload in the browser. In this trace, the server content hasn't been recompiled since the last GET (but note that the cached version of nocache.js is still wrong!):
Request URL:http://192.168.2.4/xbts_ui/xbts_ui.nocache.js
Request Method:GET
Status Code:304 Not Modified
Request Headers
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:192.168.2.4
If-Modified-Since:Thu, 25 Oct 2012 17:55:26 GMT
If-None-Match:"2881105249"
Referer:http://192.168.2.4/XBTS_ui.html
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Response Headers
Accept-Ranges:bytes
Content-Type:text/javascript
Date:Thu, 25 Oct 2012 20:27:55 GMT
ETag:"2881105249"
Last-Modified:Thu, 25 Oct 2012 17:55:26 GMT
Server:lighttpd/1.4.31
The best way to avoid browser caching is to set the expiration time to now and add the max-age=0 and must-revalidate controls.
This is the configuration I use with Apache httpd:
ExpiresActive on
<LocationMatch "nocache">
ExpiresDefault "now"
Header set Cache-Control "public, max-age=0, must-revalidate"
</LocationMatch>
<LocationMatch "\.cache\.">
ExpiresDefault "now plus 1 year"
</LocationMatch>
Your configuration for lighttpd should be:
server.modules = (
"mod_expire",
"mod_setenv",
)
...
$HTTP["url"] =~ "\.nocache\." {
setenv.add-response-header = ( "Cache-Control" => "public, max-age=0, must-revalidate" )
expire.url = ( "" => "access plus 0 days" )
}
$HTTP["url"] =~ "\.cache\." {
expire.url = ( "" => "access plus 1 years" )
}
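Either way, it is worth verifying the headers actually served for the bootstrap script, e.g. with curl (URL taken from the trace in the question):
curl -sI http://192.168.2.4/xbts_ui/xbts_ui.nocache.js | grep -iE '^(cache-control|expires|etag)'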
We had a similar issue. We found out that the timestamp of the nocache.js was not updated by the GWT compile, so we had to touch the file on build. We also applied the fix from @Manolo Carrasco Moñino. I wrote a blog post about this issue: http://programtalk.com/java/gwt-nocachejs-cached-by-browser/
We are using version 2.7 of GWT as the comment also points out.
There are two straightforward solutions (the second is a modified version of the first):
1) Rename your *.html file which references *.nocache.js, e.g. MyProject.html to MyProject.jsp.
Now find the reference to your *.nocache.js script in the renamed MyProject.jsp:
<script language="javascript" src="MyProject/MyProject.nocache.js"></script>
Add a dynamic variable as a parameter to the JS file; this ensures the actual contents are returned from the server every time. For example:
<script language="javascript" src="MyProject/MyProject.nocache.js?dummyParam=<%= "" + new java.util.Date().getTime() %>"></script>
Explanation: dummyParam serves no purpose in itself BUT gets us our intended result, i.e. the server returns a 200 response instead of a 304.
Note: if you use this technique, you will need to make sure you are pointing to the right .jsp file for loading your application (before this change you were loading your app using the HTML file).
2) If you don't want to use the JSP solution and want to stick with your HTML file, you will need JavaScript to dynamically add a unique parameter value on the client side when loading the nocache file. I am assuming that should not be a big deal for you, given the solution above.
I have used the first technique successfully; I hope this helps.
The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response)
I wouldn't rely on this. I've seen a bit of strange behaviour in Chrome's dev tools with the network tab in combination with caching (at least, it's not 100% transparent for me). In case of doubt, I usually still consult Firebug.
So Chrome is probably still using the old version. It may have decided long ago that it will never have to reload the resource again. Clearing the cache should resolve this. Then make sure to set the correct caching headers before reloading the page; see e.g. Ideal HTTP cache control headers for different types of resources.
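A quick way to rule the browser out entirely is to fetch the file fresh from the server and list which .cache.html permutation it actually references (URL taken from the question's trace):
curl -s -H 'Cache-Control: no-cache' http://192.168.2.4/xbts_ui/xbts_ui.nocache.js | grep -oE '[0-9A-F]{32}\.cache\.html' | sort -u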
Open the page in incognito mode just to get rid of the cache issue and unblock yourself.
You still need to configure the cache time as mentioned in other answers.
After unsuccessfully trying to prevent caching via Apache, I created a bash script that root runs every minute in a cron job on my Linux Tomcat server.
#!/bin/bash
#
# Touches GWT nocache.js files in the Tomcat web app directory to prevent caching.
# Execute this script every minute in a root cron job.
#
cd /var/lib/tomcat7/webapps
find . -name '*nocache.js' | while read -r file; do
logger "Touching file '$file'"
touch "$file"
done
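To install it, a root crontab entry along these lines runs the script every minute (the script path is hypothetical):
* * * * * /usr/local/bin/touch_nocache.sh >/dev/null 2>&1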