ConnectionPool::PoolShuttingDownError thrown once in a while by ApplicationController in a Rails app using a MongoDB replica set

I have a RoR application running on two different servers. They run the same version of the app and have similar configurations. I have a MongoDB replica set spanning both servers, with a third server acting as an arbiter.
Everything runs fine and the data syncs perfectly. But after two weeks of running, one of the servers started raising ConnectionPool::PoolShuttingDownError. I checked the log and can see the error is raised in the application controller. I didn't change any code on either server.
The server raising the error is fine until it gets 6-7 simultaneous requests, or when you refresh the page 6-7 times in quick succession. It throws this error once; refresh the page again and everything is back to normal. This is weird, and I can't understand why one server has this problem, and only sometimes, while the other doesn't.
I am using Mongoid with Moped, Rails 4.1.0, and Ruby 2.1.5. I also checked the available connections using db.serverStatus().connections, which is around 51158, and the ulimit for max processes is 257185.
I have searched a lot but am still unsure of the cause of this problem. It would be great if someone could shed some light on it. Any help is appreciated. Thanks in advance.
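For now, I am working around it with a small retry wrapper in the controller. This is only a sketch (with_pool_retry is my own helper, not part of Mongoid or Moped, and it assumes a fresh pool is available on the next checkout):

require 'connection_pool'

# Retry the block a couple of times when the underlying pool reports
# it is shutting down, on the assumption that a new pool will serve
# the next checkout.
def with_pool_retry(attempts = 3)
  yield
rescue ConnectionPool::PoolShuttingDownError => e
  attempts -= 1
  raise e if attempts <= 0
  sleep 0.5
  retry
end

# Usage (User is a placeholder model):
# with_pool_retry { User.where(active: true).to_a }

This hides the symptom, of course, not the cause.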

Related

Postgres, Prisma Working Fine One Day, 'P1001 Error: Can't Reach Database' the next

For this project, I am using a Prisma/Postgres database. I have made no changes to my code, and I have pulled a coworker's working version of the code to no avail. I am unable to do anything with the database: I cannot migrate, I cannot run mutations, and I cannot even open the psql console, as every command is met with
P1001: Can't reach database server at localhost:5432
Please make sure your database server is running at localhost:5432
I am not sure what I could have possibly done; I don't know enough about ports, or even the contents of app.json, to have messed anything up. Now no mutations can go through.
Interestingly enough, this all happened after I ran npx prisma migrate deploy on the deployed database, which is on an AWS EC2 VM. Since then, the native app associated with the database refuses to work, though it is worth noting that the web app connects to the deployed database just fine. That said, nothing works locally, as the database/port/server no longer exists according to my machine, which makes no sense. I have no idea how to respin it, or why every single query/mutation from my native app now returns only Response not successful: Received status code 400, despite it having the exact same syntax it did when it worked, and the web app having the same syntax and server (ExpressJS). Does anyone have any ideas what could be causing this?
The error code 400 refers to a bad request coming from the client: a request that is too large, malformed syntax, invalid request message framing, etc.
First step: make sure that your database server is indeed running. Try connecting to it with other SQL clients or libraries; sometimes Prisma is just being difficult.
Second thing: are you hosting the database on the local machine? I assume you are because of the localhost address. Make sure no other program is using this port or holding it open.
Sorry if this doesn't help. Good luck!
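To rule Prisma out entirely, it helps to probe the server with any other client. Here is a minimal sketch using Ruby's pg gem (user and dbname are placeholders; substitute your real credentials):

require 'pg' # gem install pg

begin
  # Probe the exact host/port from the error message.
  conn = PG.connect(host: 'localhost', port: 5432,
                    user: 'postgres', dbname: 'postgres',
                    connect_timeout: 5)
  puts conn.exec('SELECT version()').getvalue(0, 0)
rescue PG::ConnectionBad => e
  puts "Server unreachable: #{e.message}"
ensure
  conn&.close
end

If this fails as well, the problem is the server or the port, not Prisma.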

Connected To XEPDB1 From SQL Developer [duplicate]

I am using an Oracle database in a Windows environment and running a JSP/servlet web application in Tomcat. After I perform some operations with the application, it gives me the following error.
ORA-12518, TNS: listener could not hand off client connection
Can anyone help me identify the reason for this problem and propose a solution?
The solution to this question is to increase the number of processes:
1. Open a command prompt
2. sqlplus / as sysdba;    -- log in as the sysdba user
3. startup force;
4. show parameter processes;    -- shows the current allocation (a default such as 150); increase the count to 800
5. alter system set processes=800 scope=spfile;
Tried and tested.
In my case, I found it was because I hadn't closed the database connections properly in my application. Too many connections were open, and Oracle could not create more; it is a resource limitation. Later, when I checked the Oracle forum, I could see some reasons mentioned there for this problem. Some of them are:
In most cases this happens due to a network problem.
Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.
If it is the second one, verify that large_pool_size is sufficient, or check whether the dispatchers were enough for all connections.
You can refer to the link below for further details.
https://community.oracle.com/message/1874842#1874842
I ran across the same problem. In my case, it was a new install of the Oracle client on a new desktop that was giving the error; other clients were working, so I knew it wouldn't be fixed by changing the database configuration. tnsping worked properly, but sqlplus failed with the ORA-12518 listener error.
My tnsnames.ora entry had a SID instead of a service_name. Once I fixed that, I still got the same error and found I had the wrong service_name as well. Once I fixed that too, the error went away.
If the issue appears from one day to the next for no apparent reason, add the following lines at the bottom of the listener.ora file. If your ORACLE_HOME environment variable is set like this:
(ORACLE_HOME = C:\oracle11\app\oracle\product\11.2.0\server)
The lines to add are:
ADR_BASE_LISTENER = C:\oracle11\app\oracle\
DIRECT_HANDOFF_TTC_LISTENER=OFF
I had the same problem when executing queries from my application. I'm using the Oracle client with Ruby on Rails.
The problem started when I accidentally opened several connections to the DB and didn't close them.
When I fixed this, everything started to work fine again.
Hope this helps someone else with the same problem.
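For reference, the shape of the fix was simply to release every connection, even when a query raises. A rough sketch with the ruby-oci8 gem (the connection details are placeholders):

require 'oci8' # gem install ruby-oci8

# Always log off, even if the block raises; each leaked session
# otherwise keeps an Oracle server process alive.
def with_oracle(user, password, db)
  conn = OCI8.new(user, password, db)
  yield conn
ensure
  conn&.logoff
end

# with_oracle('scott', 'tiger', '//dbhost/XEPDB1') do |conn|
#   conn.exec('SELECT 1 FROM dual') { |row| p row }
# end

In Rails itself, letting the framework's pool manage connections instead of opening ad-hoc ones avoids the leak in the first place.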
I experienced the same error after upgrading to Windows 10. I solved it by starting all of the Oracle services that had been stopped.
I had the same issue; after restarting all Oracle services, it worked again.
I encountered the same problem.
The Oracle server's listener log showed more information:
I found that the SERVICE_NAME did not match the service name configured in tnsnames.ora, so I changed the application's data source configuration from the SID value to the SERVICE_NAME value, and that fixed it.
23-MAY-2019 02:44:21 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=XXXXXX$))(SERVICE_NAME=orclaic)) * (ADDRESS=(PROTOCOL=tcp)(HOST=::1)(PORT=50818)) * establish * orclaic * 12518
TNS-12518: TNS:listener could not hand off client connection
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error
64-bit Windows Error: 203: Unknown error
I had the same issue in a real-time application, and it went away by itself the next day. Upon checking, it was found that the server had run out of memory because of additional processes running.
So in my case, the reason was that the server ran out of memory.
First of all:
1. Check the listener log.
2. Compare show parameter processes with select count(*) from v$process;
3. Increase the processes parameter; this may also require an SGA increase.

How to keep zopectl and Plone running?

Last week, a power outage occurred in my lab and the web server went down.
Since then, my web page doesn't work anymore. My site runs on Plone and Zope.
So I first went to the directory /Plone/zinstance/bin, ran ./instance, then zopectl start, and then ./plonectl start.
But the problem is this: every time I start zopectl and plonectl, the daemon process soon dies.
I don't know what the problem is or what I should do. Anyone who knows Plone and zopectl well, please help me.
Try ./instance fg. If there is an error, it will be displayed in the console.
(fg means running the instance in the foreground.)

Cloud Foundry on Bluemix: No network connection when starting new app with binary-buildpack

First, I eventually found out what the problem was, but I still decided to write this question and answer for others (because I spent six hours on this issue).
So, what's the problem...
I have a Cloud Foundry app (on public Bluemix) based on the binary buildpack. Two days ago everything was OK, but not since yesterday: my app crashed (probably during restaging or something similar) and never started again. I tried to push the app again, with the same result. Really frustrating...
Something about the backend: a shell script in my instance runs one binary application. Generally, the application should connect to a database server (also on public Bluemix).
The problem: every time I tried to start the app, it crashed immediately. This is what I found in the logs: dial tcp: lookup databaseserverdomain.com on 0.0.0.0:53: server misbehaving.
There are a couple of similar problems on Stack Overflow, but no answer that was helpful for me.
So, the error means that something went wrong with the TCP connection. OK, but what exactly? That's the question I'm going to answer myself...
Sounds like your binary isn't capable of properly handling connection problems. I would rather fix that part, since I guess it will crash anyway whenever there is a connection issue.
The solution was actually simple...
I edited my shell script and added ping google.com -count 3 before launching the application, to test whether there was a stable network connection. This worked.
The ping gave the application two extra seconds, and that was enough for the network/router/whatever to establish the connection.
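A slightly more direct variant of the same trick is to wait for the database endpoint itself instead of pinging Google. A sketch in Ruby (the host and port are placeholders for your database's):

require 'socket'
require 'timeout'

# Block until a TCP connection to the database succeeds,
# trying up to `tries` times, one second apart.
def wait_for(host, port, tries: 10)
  tries.times do
    begin
      Timeout.timeout(2) { TCPSocket.new(host, port).close }
      return true
    rescue StandardError
      sleep 1
    end
  end
  false
end

abort 'database unreachable' unless wait_for('databaseserverdomain.com', 5432)

Either way, the point is just to delay the binary until the container's network is actually up.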
Hmm... it seems that something has been wrong with network routing on Cloud Foundry/Bluemix since yesterday.

MongoDB server keeps timing out

I have recently moved to a Mongo server running on a CentOS Google Cloud machine I set up myself (the Mongo service is started with systemctl). Previously I ran my Mongo DB either locally or via a server hosted by mlab.
Everything works fine, except that my client keeps getting StopIteration exceptions on any non-trivial query. I never encountered these previously, either running locally or with the mlab server. Is there a timeout setting on the server I should be changing? (The client timeout settings don't seem to affect the issue.)
So I have (sort of) answered my own question. The reason my client app was dying was that I was running it from the Visual Studio debugger, which was catching the StopIteration and asserting, even though (I think) the exception was being handled by the pymongo library, which retries and continues successfully. If I disable the StopIteration exception in the "Python Exceptions" section of Visual Studio's "Exception Settings" panel, my client code continues and completes successfully.
That said, I am pretty sure this was not happening before I set up my own Mongo server (besides the assert in VS, there is a noticeable hitch when the exception occurs, both in my Python code and in the mongo command-line client). So I still believe there is something wrong with how I set up my Mongo server, and any suggestions on that would be welcome!