I have a problem with my MySQL database connection.
My application runs a lot of queries against the database and loads the results into the page.
Previously I used mysql_connect for the connection. I put the connection code in a separate file and then just included that file in my subsequent PHP files.
So it seems like I open the database and run queries every time.
The problem came when the server's max connections per hour to the DB was set to around 50: after just a couple of tries on my page, it reached the hourly connection limit. It turned out I had not been closing my connections.
So I tried closing the connection every time my PHP scripts end. I am using AJAX (async calls) to PHP. The problem was still the same: just a couple of views and it reaches the max.
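For reference, each AJAX-called script follows roughly this pattern (just a sketch; connect.php is the shared include that calls mysql_connect, and the table and column names here are made up):

```php
<?php
// Sketch of one AJAX-called script; connect.php is the shared include
// that calls mysql_connect() (file, table, and column names are made up).
require 'connect.php';   // sets up $link via mysql_connect()

$result = mysql_query('SELECT id, name FROM items', $link);
while ($row = mysql_fetch_assoc($result)) {
    // ... build the JSON/HTML response ...
}

mysql_close($link);      // release the connection when the script finishes
```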
So I tried using mysql_pconnect(). In my WAMP setup I set the max connections to 2 and it worked fairly well, but when it was transferred to the Linux server, with the connection limit also set to 2, it hit the max connection problem again.
Apache is used as the web server. It's really frustrating.
Can somebody shed some light on this?
Thanks a lot :)
Related
Currently I use Supabase PostgreSQL hosting for my site and node-postgres to make queries. After some period of inactivity (about 5-7 minutes) the connection is terminated and it's impossible to get or send data to the DB. It is not a node-postgres problem, because the connection behaves the same way even if you make queries from the psql terminal.
I even tried changing the hosting provider, but the second provider (bit.io) had the same problem.
I came up with the idea of writing an interceptor function that catches connection_error and retries the request, but I don't think that is the most efficient approach.
How can I solve this problem?
So I ran into a network problem the other day and I'm trying to find a way to test for it in the future.
I had a lot of users online at once and hit my router's max IP connection limit (not DHCP; I mean TCP/UDP connections).
Once I figured out what the problem was, it was fairly simple to fix; however, I was wondering if there is any way to simulate this kind of activity. Everything worked fine when I tested it; it wasn't until I had 150+ users that I discovered I had a problem.
I have spent the last 3-4 hours looking for such a test/audit tool. Here is what I found:
http://bittwist.sourceforge.net/ - DDoS simulator (can't make it work; I barely get 300+ connections)
http://stevesouders.com/hpws/max-connections.php - Browser concurrent-connection tester (this hits the browser limit (6 in Chrome) without making a dent on my router, even open in 70+ tabs at the same time)
http://www.smallnetbuilder.com/lanwan/lanwan-howto/31103-how-we-test-hardware-routers-revision-3 - Some tool linked about halfway down the page (reads like it's exactly what I want; however, it barely has a noticeable effect on my router)
http://www.http-kit.org/600k-concurrent-connection-http-kit.html - Concurrent HTTP connection simulator (this one seems like it would do what I want, but my Linux-foo is limited and I can't get it working, tear)
So do you guys have a tool you use to test your routers? I would love something that does both TCP and UDP.
(BTW, for anyone misunderstanding: I'm not trying to test "speed", just the sheer number of connections.)
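To be clear about what I'm after: something that just opens and holds a pile of raw TCP connections, roughly like this sketch (the target address and connection count are placeholders):

```php
<?php
// Rough sketch: open N TCP connections to a host behind the router and
// hold them open, so the router has to track every one of them.
// The address and target count below are placeholders.
$address = 'tcp://192.168.1.10:80';
$target  = 500;
$sockets = [];

for ($i = 0; $i < $target; $i++) {
    $sock = @stream_socket_client($address, $errno, $errstr, 5);
    if ($sock === false) {
        echo "Failed at connection #$i: $errstr ($errno)\n";
        break;
    }
    $sockets[] = $sock;   // keep the handle so the connection stays open
}

echo count($sockets) . " connections held open; sleeping...\n";
sleep(60);                // hold them for a minute before the script exits
```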
Thanks!
Kz
Running PostgreSQL 9.x (9.1-9.3).
I have a custom web app built using PHP's PDO library. Every query in our app uses prepared statements (via our internal PDO wrapper library).
Our production system uses AWS EC2 small instances for the web server and RDS for the database server.
For local development, my local machine serves as the web server, and an office machine running Mac OS X (Mavericks) serves the DB.
When I'm at work, I can typically ping my local office DB server and get 1-5 ms ping times. Everything performs well: page load times are very speedy, and my internal timer shows that PHP runs the page from start to end in about 12 ms.
Now, the issue comes in when I take my work laptop home. From home, I get about 50-60 ms ping times to the office DB server, and if I run my development machine at home, pages now take 5-10 seconds to load, every time. Granted, there are 4 DB queries running per page load, but it's very, very little data. I've tried TCP_NOWAIT settings, and I've tried running pgbouncer on my local machine with persistent connections to the remote DB; nothing has helped so far.
When timing the queries, a simple query that returns 100 bytes of data goes from running in 0.0006 seconds locally to taking around 1 second remotely. Lastly, it appears to be related to latency only: no matter how much data a query returns, it takes roughly 1 second longer than it would if running locally, give or take.
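For reference, the kind of timing I'm doing looks roughly like this (a stripped-down sketch; the DSN, credentials, and query are placeholders):

```php
<?php
// Stripped-down timing sketch: separate the connection cost from the
// query cost. DSN, credentials, and the query are placeholders.
$t0  = microtime(true);
$pdo = new PDO('pgsql:host=office-db.example.local;dbname=app', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$t1  = microtime(true);

$stmt = $pdo->prepare('SELECT 1 AS ping');
$stmt->execute();
$stmt->fetchAll(PDO::FETCH_ASSOC);
$t2  = microtime(true);

printf("connect: %.4f s, prepare+execute+fetch: %.4f s\n", $t1 - $t0, $t2 - $t1);
```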
I was simply wondering if anyone could help me work out where this delay is coming from. It seems that every single query, no matter how much data it returns, incurs a delay of around a second, give or take. The odd thing is that when I run pgAdmin on my machine connecting to the remote DB, it takes nowhere near that much time to run simple queries.
Does anyone have any idea of other things to try? I'm not running the DB connection over SSL or using any compression. I'm willing to try those if necessary; however, that's one thing I haven't gotten to work before, and I doubt it would help with latency anyway.
I'm developing a project, and every time I need to send a query to MySQL I open a new connection.
Is this right or should I only connect once? How should I proceed?
Thank you
You probably should not open a new connection for every query.
There are exceptions to every general rule, of course, but typically you should connect once, sometime before the first query, and then re-use the same mysqli connection object for multiple queries during the given PHP request.
There is no limit to the number of queries you can run one after another on a given connection. The only limitation is that you can run only one query at a time.
Think of it this way: if you were writing a PHP script to simply read a file, and you knew you were going to read multiple lines from the file, you would keep the file handle open and make multiple read requests from it before you close the file. You would not re-open the file every time you wanted to read from it during a single PHP request.
The overhead of opening new connections to the database is reasonably low (at least for MySQL), but if you have an opportunity to easily reduce that overhead, it's likely worth it to do so.
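Here's a minimal sketch of the connect-once-and-reuse pattern (hostname, credentials, and the example queries are placeholders):

```php
<?php
// Connect once per request, then reuse the same connection for every query.
// Hostname, credentials, and the example queries are placeholders.
$mysqli = new mysqli('localhost', 'app_user', 'secret', 'app_db');
if ($mysqli->connect_errno) {
    die('Connect failed: ' . $mysqli->connect_error);
}

// First query
$users = $mysqli->query('SELECT id, name FROM users LIMIT 10');

// Second query on the same connection (no reconnect needed)
$userId = 42;
$stmt = $mysqli->prepare('SELECT title FROM posts WHERE user_id = ?');
$stmt->bind_param('i', $userId);
$stmt->execute();
$posts = $stmt->get_result();

$mysqli->close(); // or just let PHP clean it up at the end of the request
```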
Re your comment:
You're right, there's no way to keep your $mysqli object across pages. That's what is meant by the term request. Objects and resources are cleaned up at the end of a request.
When you said you were creating a new database connection for every query, I assume you meant that if you execute more than one SQL query during a single request (that is, page view), that you would create a new $mysqli object for each query. That would be unnecessary.
There's one other way you can reuse the database connections from one page view to the next. That is to use persistent connections. This doesn't preserve the $mysqli object -- you still have to run new mysqli on each page load. But internally it is reusing the database connection from a previous PHP request.
All you have to do to open a persistent connection in this way is to add the prefix p: to your hostname.
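For example (hostname, credentials, and database name are placeholders):

```php
<?php
// Persistent connection: note the "p:" prefix on the hostname.
// Hostname, credentials, and database name are placeholders.
$mysqli = new mysqli('p:localhost', 'app_user', 'secret', 'app_db');
if ($mysqli->connect_errno) {
    die('Connect failed: ' . $mysqli->connect_error);
}
// Use $mysqli exactly as you would a normal connection; PHP reuses an
// existing connection from a previous request if one is available.
```

Everything else stays the same; only the hostname prefix changes.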
Servers and databases have a finite number of connections available. If every one of your users keeps a connection open for no reason (like when they are reading a blog post on a page that has already loaded), it will cap the number of people who can connect to your project in production. Unless there is a very specific need to keep a connection open, I recommend not doing so.
Again though, it really depends on the scope of your project. If you are just talking about a single page of a website, typically it's fine to leave the connection open until you are done loading the page.
I have an application built on Zend Framework that I am trying to optimize.
I did some Xdebug profiling, and although I can't say I understand every nitty-gritty detail of the results, some things were quite obvious.
For instance, the file Bootstrap.php seems to be the one consuming most of the time, taking 4,553 ms, which accounts for 92.49% of the total time.
If I dig further, I can see that Zend_Application_Bootstrap_Bootstrap->run takes the bulk of the time. Checking further, I found that Zend_Controller_Front->dispatch might actually be the call inside Bootstrap.php that takes the time to execute.
The question is: given these indicators, how best can I go about optimizing the application? If the answer is caching, how do I go about applying caching to this situation?
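For concreteness, the kind of caching I'm asking about would be something like this generic Zend_Cache sketch (not actual code from my app; the options, cache directory, and the "expensive" function are placeholders):

```php
<?php
// Rough Zend_Cache sketch (ZF1). Cache options, the cache dir, and the
// "expensive" function below are all placeholders, not the real app code.
require_once 'Zend/Cache.php';

function buildExpensiveReport()          // placeholder for the slow work
{
    return array('built_at' => time());
}

$cache = Zend_Cache::factory(
    'Core',                              // frontend
    'File',                              // backend
    array('lifetime' => 3600, 'automatic_serialization' => true),
    array('cache_dir' => '/tmp/zf_cache')
);

if (($report = $cache->load('expensive_report')) === false) {
    $report = buildExpensiveReport();    // cache miss: do the work once
    $cache->save($report, 'expensive_report');
}
```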
Thanks
From the look of the callgrinds, on the login page the app is spending most of its time in curl_exec, which is to be expected if you're doing a remote login. But it is doing 10 separate curl_exec calls, which seems excessive. I'm not familiar with the LinkedIn login auth, but is it possible your app is running the remote login code multiple times?
On the standard page request the app is spending most of its time connecting to MySQL, and it seems to be doing this twice. Are you using a remote DB server, and do you need two separate DB connections?
Assuming you are using a remote DB server and it is on the same network as your web server, there seems to be some networking issue there. I'd check the latency to that server if you can, and try connecting to the IP address instead of a hostname to see if that makes any difference (if connecting by IP is much faster, that would suggest an issue with the DNS setup on your web server).
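For example, if the connection is made with mysqli, a quick comparison could look like this sketch (hostname, IP, and credentials are placeholders):

```php
<?php
// Quick check: time the connection by hostname vs. by IP address.
// If the IP is much faster, DNS resolution on the web server is suspect.
// Hostname, IP, and credentials are placeholders.
$t0 = microtime(true);
$byName = new mysqli('db.internal.example.com', 'app_user', 'secret', 'app_db');
$t1 = microtime(true);
$byIp = new mysqli('10.0.0.25', 'app_user', 'secret', 'app_db');
$t2 = microtime(true);

printf("by hostname: %.3f s, by IP: %.3f s\n", $t1 - $t0, $t2 - $t1);
```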