I am using an Activator project with Play 2.4.2. Just for testing, I deployed a bare project that only listens on port 80 or 9000 and returns Ok("abc").
But when I check the output of
$ sudo lsof -i | wc -l
the number increases gradually over time, and after a while, say 24-48 hours, the server crashes with a "too many open files" exception.
I also tested with Apache Bench; after the benchmark completes, some connections are still open and never close.
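In case it helps, a more targeted check than counting lsof output is grouping the sockets on the Play port (9000 here) by TCP state:
# Count sockets per TCP state on the app port; a growing pile of ESTABLISHED
# or CLOSE_WAIT entries means connections are never being closed.
ss -tan 'sport = :9000' | awk 'NR > 1 {print $1}' | sort | uniq -c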
Can someone please help?
There was some debate around this issue when I was working with Play Framework a while back.
First, verify whether your client is asking for the connection to be kept alive. In that case Play will honor the client and keep the connection open. See this discussion. The takeaway from that discussion was that Play can handle a lot of requests, which is questionable if you think about DoS attacks.
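A quick way to see what is actually negotiated is to look at the headers on the wire, for example with curl (a sketch; the host and port are placeholders):
# HTTP/1.1 keeps connections alive by default; look for an explicit
# Connection header in either direction ("> " = request, "< " = response).
curl -v -o /dev/null http://localhost:9000/ 2>&1 | grep -i 'connection'
# Ask the server to close the connection after the response:
curl -v -o /dev/null -H 'Connection: close' http://localhost:9000/ 2>&1 | grep -i 'connection'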
The other thing: there seem to be options to close the connection from the action via a header, but I have never tried those. See this. I am not able to pull up any documentation on this option at the moment.
Edit: it seems to be mentioned in the 2.2 highlights.
First, I've finally found out what the problem was, but I still decided to write this question + answer for others (because I spent 6 hours on this issue).
So, what's the problem...
I have a Cloud Foundry app (on public Bluemix) based on binary-buildpack. Two days ago, everything was OK. But not since yesterday. My app crashed (probably during restaging or something similar) and never started again. I tried to push the app again and still the same result. Really frustrating...
Something about the backend... There is a shell script in my instance that runs one binary application. Generally, the application should connect to a database server (also on public Bluemix).
The problem: every time I tried to start the app, it crashed immediately. This is what I found in the logs: dial tcp: lookup databaseserverdomain.com on 0.0.0.0:53: server misbehaving.
There are a couple of similar problems on StackOverflow but no answer that would be helpful for me.
So, the error means that something went wrong with the TCP connection. OK, but what exactly? That's the question I'm going to answer myself...
Sounds like your binary isn't capable of properly handling connection problems. I would rather fix that part, since I guess it will crash anyway whenever there is a connection issue.
The solution was actually simple...
I edited my shell script and added ping google.com -count 3 before launching the application, to test whether there is a stable network connection. This worked.
The application got about 2 extra seconds, and that was enough for the network/router/whatever to establish the connection.
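If you want something a bit more deterministic than a blind ping, a small wait loop in the start script does the same job. A minimal sketch (the DB host is taken from the error above; the port and binary name are placeholders you'd adjust):
#!/bin/sh
# Wait up to ~30 seconds for the database host to resolve and accept
# TCP connections before launching the app.
DB_HOST="databaseserverdomain.com"   # host from the error message above
DB_PORT=3306                         # assumption: use your database's real port
i=0
while [ "$i" -lt 30 ]; do
  if nc -z -w 1 "$DB_HOST" "$DB_PORT" 2>/dev/null; then
    break
  fi
  i=$((i + 1))
  sleep 1
done
exec ./my-app                        # placeholder for the actual binary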
Hmm.. It seems that there is something wrong with network routing on Cloud Foundry/Bluemix since yesterday.
When a login script is executed with a few users, I don't see the connection reset problem, whereas when the same script is run with 100 users, "java.net.SocketException: Connection reset" starts being thrown for the very first link.
What I don't understand is: if there is a connection problem, shouldn't it show the same error for a single user or a few users as well?
This means that your server is rejecting connections because it is either overloaded or misconfigured.
It is normal that you don't face it with 1 user but do face it with 100; this is exactly what load testing brings, i.e. simulating real traffic on your server.
It might be the case described in the "Connection Reset since JMeter 2.10?" wiki page.
If you are absolutely sure that your server is not overloaded and is configured to accept 100+ connections (defaults are good for development, not for production; they need to be tweaked), you can try to work around it as follows:
In the user.properties file, add the following two lines:
httpclient4.retrycount=1
hc.parameters.file=hc.parameters
In the hc.parameters file, add the following line:
http.connection.stalecheck$Boolean=true
Both files live in JMeter's bin folder.
You need to restart JMeter to pick the properties up.
The above instructions apply to the HttpClient4 implementation, so make sure you use it. The fastest and easiest way to set the HttpClient4 implementation for all HTTP Request samplers is via HTTP Request Defaults.
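Independently of the JMeter-side workaround, it is worth confirming on the server whether connections really are being dropped under load. A rough sketch on Linux (port 8080 is a placeholder for your application's port):
# Count the connections currently held on the application port while the 100-user test runs.
ss -tan '( sport = :8080 )' | wc -l
# Kernel counters for a full listen/accept queue; if these grow during the test,
# the server side is dropping connections.
netstat -s | grep -i -E 'overflowed|SYNs to LISTEN'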
I can't figure this one out. I can't connect to a server using MySQL Workbench; I have tried every kind of connection method. The error message I get is:
Failed to Connect to MySQL at 127.0.0.1:3306 with user root
Invalid for this platform protocol requested (MYSQL_PROTOCOL_SOCKET)
I ran into the same problem. In my case I originally created the connection with the "Local Socket/Pipe" option selected in the "Connection Method" drop-down. Trying to switch back to "Standard (TCP/IP)" did not work and caused the error mentioned by the OP. I had to delete the connection and start over, selecting "Standard (TCP/IP)" from the start. The connection was successful after that.
To solve this problem, check the "Others" field in the Advanced tab.
If the connection was stored with a socket option, you will find a "socket=." entry (or something similar).
Delete it.
e.g. http://prntscr.com/k63pua
This is a very unusual error message which I haven't seen before, especially on Windows. It probably has to do with how the server is installed. As a newbie, your best choice would definitely be to use the Windows Installer for all required parts. This will install the server properly too.
By using XAMPP you are on your own to check whether a server is installed and running as a service, as well as whether it is configured properly. For troubleshooting, watch my video on YouTube where I try to explain the most common pitfalls for beginners.
Note: you can open the connection without actually being connected. In that case MySQL Workbench allows you to do all those things that don't require a valid server connection, e.g. log file viewing, config file editing, service start/stop, etc. Use this to check your server's configuration. Make sure it accepts TCP/IP connections (there's also a short section about this in the video).
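If you want to verify TCP/IP connectivity outside of Workbench, the command-line client forces TCP when given an IP address and port (a quick check, assuming the MySQL client tools are installed):
# Connecting to 127.0.0.1 with an explicit port uses TCP/IP, never the local
# socket/named pipe, so this tells you whether the server is listening on 3306.
mysql -h 127.0.0.1 -P 3306 -u root -p -e "SELECT 1;"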
Update:
Downvoter, please add a comment why you think my answer is bad.
Re-reading the error message, I got another idea: could it be that you used a local socket/named pipe for the connection? If so, try normal TCP/IP.
Is there a way to take recorded real network traffic to a web server, e.g. from web server logs (Apache), and replay this traffic to either profile a web application (in Perl) under real load, or to benchmark and compare the speed of different implementations before choosing one or the other?
If it matters, the web app is written in Perl and runs under plain CGI, FastCGI, mod_perl (via ModPerl::Registry), and PSGI (via Plack::App::WrapCGI).
Crossposted to Pro Webmasters
Similar questions on Server Fault:
How can I replay Apache access logs back at my servers to do real world load testing?
A quick Google search turned up an interesting blog entry, with useful follow-up comments, at http://www.igvita.com/2008/09/30/load-testing-with-log-replay/. A commenter also mentioned Tsung by Process-One, which allows recording sessions in real time, with the obvious note that you should be able to replay them back. That doesn't help so much with existing Apache access logs, though.
I was in this situation recently. I figured that if I dumped TCP traffic with tcpdump, I could rewrite the destination of the packets and then replay them against the new app servers. So I started out with something like this:
tcpdump -i eth1 -s 0 -w - dst host <source_ip> and port 80 | \
tcprewrite --mtu-trunc --infile=- --outfile=- \
--dstipmap=<source_ip>:<destination_ip> | \
tcpslice -w - - | tcpreplay --intf1=eth1 -
It did not work for various reasons, so I started digging some more and found Gor: a small Go project by Leonid Bugaev from Granify, written for exactly what we wanted to accomplish here.
This is how we ended up using Gor: http://devblog.springest.com/testing-big-infrastructure-changes-at-springest/
We have a Chef cookbook for it as well: https://github.com/Springest/gor-chef
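For reference, the basic Gor workflow is to listen for raw traffic on the production box and mirror it to the test host. The invocation looks roughly like this (treat it as a sketch: flags vary between Gor versions, and the staging hostname is a placeholder):
# On the production server: capture HTTP traffic on port 80 and replay it
# live against the staging environment.
gor --input-raw :80 --output-http "http://staging.example.com"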
Hope this helps.
The short answer was given on the other site.
The longer answer is that you can't: you will be missing request headers and POST bodies.
Here's a simple Perl way to record real HTTP traffic and play it back:
http://patrick.net/sprocket/rwt.html
If only GET requests are needed and there is no session-tracking implemented via query parameters, then this is possible.
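For example, a crude replay of only the GET lines from an Apache access log could look like this (a sketch assuming the common/combined log format; the test host is a placeholder):
# Pull the request paths of GET requests out of the access log and
# fire them one by one at the test server.
awk '$6 == "\"GET" {print $7}' access.log | \
  while read -r path; do
    curl -s -o /dev/null "http://test.example.com${path}"
  done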
One question: do you want to do it this way because (1) you want to emulate real-world distribution of traffic among your pages or (2) there are too many pages to even consider building any sort of test scripts?
I recently discovered that Zend_Session's DbTable SaveHandler is implemented in a way that is not well optimized for high performance, so I've been investigating switching over to Memcache for session management.
I found a decent pattern/class for changing the Zend_Session SaveHandler in my bootstrap from DbTable to Memcache here and added it into my web app.
In my bootstrap, I changed the SaveHandler like so:
FROM:
Zend_Session::setSaveHandler(new Zend_Session_SaveHandler_DbTable($config));
TO:
Zend_Session::setSaveHandler(new MyApp_Session_SaveHandler_Memcache(Zend_Registry::get("cache")));
So, my session init looks like this:
Zend_Loader::loadClass('MyApp_Session_SaveHandler_Memcache');
Zend_Session::setSaveHandler(new MyApp_Session_SaveHandler_Memcache(Zend_Registry::get("cache")));
Zend_Session::start();
// set up session space
$this->session = new Zend_Session_Namespace('MyApp');
Zend_Registry::set('session', $this->session);
As you can see, the class provided on that site integrates quickly, with a simple loadClass and SaveHandler change in the bootstrap, and it works in my local dev env without errors (the web app and Memcache are on the same system).
I also tested my web app hosted in my local dev env against a remote Memcache server in PROD, to see how it performs over the wire, and it also appears to work okay.
However, in my staging environment (which mimics production), my Zend app is hosted on server1 and Memcache is hosted on server2, and it seems that nearly every other request completely bombs out with specific error messages.
The error information I capture includes the message "session has already been started by session.auto-start or session_start()", and a second, related message indicates that Zend_Session::start() got a Connection Refused, with "Error #8 MemcachePool::get()" implicated on line 180 of the framework file ../Zend/Cache/Backend/Memcached.php.
I have confirmed that my php.ini has session.auto_start set to 0, and the only instance of Zend_Session::start() in my code is in my bootstrap. Also, I init my Cache, Db and Helpers before I init my Session (to make sure my Zend_Registry::get("cache") argument for instantiating my new SaveHandler is valid).
I found only about two valuable resources on how to successfully employ Memcache for Zend_Session, and I have also reviewed ZF's Zend_Cache_Backend and Zend_Session "Advanced Usage" docs, but I haven't been able to identify why I get this error using Memcache or why it won't work consistently with a dedicated/remote Memcache server.
Does anyone understand this problem?
Does anyone have experience with solving this problem?
Does anyone have Memcache working in their ZF web app for session management in a way that they can recommend?
Please be sure to include any/all Zend_Session and/or Zend_Cache configurations you made or other trickeration or witchcraft you used to get this working.
Thanks!
This one just nearly exploded my head.
First, sorry for the book of a question...I wanted to paint a complete picture of the situation. Unfortunately, I missed a few key details which my wonderful coworker found.
So, once you install it, most likely when you are just starting to test out the daemon, you will do this:
root# memcached -d -u nobody -m 512 -l 127.0.0.1 -p 11211
This command starts memcached with 512 MB of memory, bound to localhost and listening on the default port 11211.
Did you see what I did there? That means it's set to only process requests sent to the LOOPBACK network interface.
ugh
My problem was, I couldn't get my web app to work with a REMOTE memcached server.
So, when you actually want to fire up your memcached server to accept requests from remote systems, you execute something like the following:
root# memcached -d -u nobody -m 512 -l 192.168.0.101 -p 11211
This fixed my problem. It starts the memcached daemon with 512 MB of memory, bound to IP 192.168.0.101 and listening on the default port 11211.
Now, any requests SENT to that IP and Port will be accepted by the server and handled as you might expect.
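Two quick checks that would have caught this earlier (assuming netstat and netcat are available): confirm which address the daemon is actually bound to, and confirm it is reachable from the web server.
# On the memcached box: which address/port is the daemon listening on?
netstat -tlnp | grep 11211
# From the web server: getting a stats dump back means the remote memcached is reachable.
printf 'stats\r\nquit\r\n' | nc -w 2 192.168.0.101 11211 | head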
Here's a networking doc reference...RTFM...a second time!