Running PHP websockets for over 2000 users: getting "*** buffer overflow detected ***: php terminated" error

I'm running a PHP project with Nginx and PHP-FPM on an Ubuntu server.
The site is Symfony2-based and I'm using the Clank bundle (Ratchet) for the websockets.
I've already recompiled PHP to allow more than 1024 open files, but now, when too many users connect, I get a buffer overflow error; I guess it's because of the multiple connections to a single PHP process.
My question is: what should I change in order to split the connections across multiple processes and avoid this crash?
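I suspect the crash itself comes from PHP's stream_select(), whose fd_set is capped at FD_SETSIZE (1024) descriptors per process; an event-loop extension such as libevent has no such cap. Below is a rough sketch of what I believe the equivalent plain-Ratchet bootstrap would look like with ext-libevent installed (ClankBundle wires this up through its own command, and MyChatApp is a placeholder, so treat this only as a sketch of the moving parts):

<?php
// Hypothetical standalone bootstrap (server.php).
require __DIR__ . '/vendor/autoload.php';

use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

// With ext-libevent installed, the factory picks a libevent-backed loop
// instead of the default StreamSelectLoop with its 1024-descriptor limit.
$loop = React\EventLoop\Factory::create();

$socket = new React\Socket\Server($loop);
$socket->listen(8080, '0.0.0.0');

$server = new IoServer(
    new HttpServer(new WsServer(new MyChatApp())), // MyChatApp: placeholder component
    $socket,
    $loop
);
$loop->run();

Alternatively, to actually split the load, several such processes could listen on different ports, with Nginx balancing the websocket upgrade requests across them.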

Related

Cannot open files in FileMaker Server 19 (message: The host's capacity was exceeded)

I am just starting out with FileMaker and have run into a problem with FileMaker Server 19. I have a client file that I have shared with the server, which is showing up as expected in the FMS Admin Console under the Databases tab. However, when I try to open it to make it active in FM Server, it won't open.
The only message I receive is that the host's capacity has been exceeded, which doesn't make sense since it is a fresh install and no clients have been hosted yet.
I have looked around online to try to find a solution but haven't found anything that works. Most solutions refer to the number of simultaneous clients permitted by FM Server.
Any help is much appreciated.

HTTP error 508 from Perl script during redirection of requests from secure to non-secure server

I am getting a strange error in my file upload application.
1) Server 1: a secure web server on port 443, accessible to the public, hosting a Perl script.
2) When this server gets a request for the cgi-bin directory, it simply redirects the request to another web server (Server 2) running on port 80 (see the sketch after this list).
3) The Perl script on Server 2 saves the uploaded file to disk.
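For illustration, if Server 1 were Apache using mod_proxy (the question does not say which server software is involved), the redirection in step 2 would look roughly like this, with "server2.internal" standing in for the real host:

# Hypothetical Apache config on Server 1 (port 443); assumes mod_proxy and
# mod_proxy_http are loaded, and that "redirect" here means a reverse proxy.
<VirtualHost *:443>
    SSLEngine on
    # SSL certificate directives omitted
    ProxyPass        /cgi-bin/ http://server2.internal:80/cgi-bin/
    ProxyPassReverse /cgi-bin/ http://server2.internal:80/cgi-bin/
</VirtualHost>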
Issue:
The above mechanism worked for a couple of hours; then it started throwing HTTP error code 508.
Observations:
If I hit Server 2 directly on port 80, the Perl script saves the files to disk successfully. But if I hit Server 1 on port 443, I get the 508 error.
When I first got the issue, I restarted both web servers and it worked. When the same issue occurred a second time, restarting the servers did not help. Calls to Server 1 throw the 508 error and requests time out.
ulimits and open-file counts are within bounds.
If you experience this type of issue, please share your thoughts.
This strange issue was resolved after moving the dynamic service to another virtual machine; it is no longer reproducible since changing the node.

Enabling debug prints for LIGHTTPD server

I was setting up a lighttpd server on my ARM device. The server was set up successfully. I then enabled all the debug prints in the lighttpd config file to track the server's activity. All these debug prints can be seen in an error.log file. Is there some way I can print these logs directly to my terminal as they happen?
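For reference, the debug switches in question are lighttpd's debug.log-* options in lighttpd.conf (the exact set varies by version):

# lighttpd.conf -- debug switches like the ones referred to above
debug.log-request-handling   = "enable"
debug.log-request-header     = "enable"
debug.log-response-header    = "enable"
debug.log-condition-handling = "enable"
server.errorlog              = "/var/log/lighttpd/error.log"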
Since lighttpd is designed to run as a service, I suspect the best way to achieve this is to run tail -f on error.log. This may not be ideal: if you have multiple virtual hosts running on one lighttpd install, every site's debug output will be mixed in with the messages you want. Sadly, there is currently no way to have a separate error log for each vHost, although this has been requested as a feature.
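For example (log path assumed from a stock install; the grep is a crude per-vHost filter when several sites share one log):

# Follow the error log live
tail -f /var/log/lighttpd/error.log

# Crudely restrict the stream to one site's lines
tail -f /var/log/lighttpd/error.log | grep --line-buffered 'example.com'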

Having issues with MeteorJS app when running with MongoDB on localhost

I am having some issues with the MeteorJS app I am working on. I am working for a client and we are using his dedicated server for the app's deployment. The server has PHP installed and is already running Apache (a PHP app is live on the server). The server itself runs a version of CentOS.
I bundled my Meteor app and uploaded it to the server using my cPanel access (it is not root-level access). I also created an SSH key and logged into the server over SSH.
I used the export command to set MONGO_URL to mongodb://localhost:27017/<db-name> (MongoDB 2.6.3 is installed on the server) and PORT to 3000. From there I ran the app using the node package "pm2".
Now, the issue appears when the running app accesses the database:
The request is made from the client side.
The server receives the request (visible in the live log).
The server fetches the data from the db and logs it in the terminal.
But then it takes somewhere around 10-15 seconds to send that data back to the client.
There are no extra commands or computation between logging the data fetched from the db and returning it to the client.
But if I change the Mongo URI to my MongoLab instance, everything works fine and there are no delays. My client prefers that Mongo runs on his dedicated server.
As a programmer I know it is difficult to answer this question with limited information and no hands-on debugging, but I was hoping someone else has experienced this issue and was able to resolve it. I just installed MongoDB on the server without any further configuration. Do I need to install any further packages or do any additional configuration?
You need to set MONGO_OPLOG_URL to enable the oplog tailing feature. When oplog tailing is disabled, it takes around 10-15 seconds to send the data to the client.
Export MONGO_OPLOG_URL like this:
export MONGO_OPLOG_URL=mongodb://localhost/local
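Note that the oplog only exists when mongod runs as a replica-set member, so a standalone localhost install has to be converted first. Roughly, assuming a stock 2.6 install with the old-style config file:

# 1) /etc/mongod.conf -- name a (single-member) replica set:
replSet=rs0

# 2) restart mongod, then initiate the set once:
mongo --eval 'rs.initiate()'

# 3) verify the oplog exists before pointing MONGO_OPLOG_URL at it:
mongo local --eval 'db.oplog.rs.findOne()'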

PeopleSoft Webserver crashing, losing connection to AppServer

On our Webserver, we're seeing a ton of these errors:
Application Server last connected //psoftapp.company.net_8850
bea.jolt.ServiceException: bea.jolt.JoltRemoteService(GetCertificate)call(): Timeout
bea.jolt.SessionException: Connection recv error
bea.jolt.JoltException: [3] NwHdlr.recv(): Timeout Error
and on our Appserver:
PSPUBDSP_dflt.27505 (0) 07/20/11 08:13:33 (JNIUTIL): Java exception thrown: java.net.SocketException: Connection reset
I'm reading some tuning documents from PeopleSoft and I found a suggestion that I've seen in a couple of places: reducing the tcp_wait_time_interval to 60 seconds. I think I sort of understand what this does: it seems that network (or socket?) connections that are no longer being used are "recycled", i.e. made available again. Can someone confirm this? Also, why are these connections unused/stale? Is it caused by people not properly logging out of the app (just closing the browser)?
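If I understand those docs correctly, the change is applied at the OS level; on Solaris (where the parameter is spelled tcp_time_wait_interval and is given in milliseconds) it would be something like:

# 60000 ms = the suggested 60 seconds; AIX and Linux use different mechanisms
ndd -set /dev/tcp tcp_time_wait_interval 60000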
Thanks!
PSPUBDSP is part of the Integration Broker application messaging framework. You could look at the Tuxedo logs or the Integration Broker Monitor to see what is going on. You may be running a high number of messages and overloading the server, or possibly you have a message with errors that is somehow causing the crashes.