MongoDB connection timeout issues with only 3 users

I'm writing a new version of a website using FuelPHP 1.5 and MongoDB 2.4.3. Right now we're just getting the initial batch of pages set up to verify that the technology works. The issue I'm running into, and it's more of an issue for my coworkers than for me, is that periodically it just won't connect to the server. It returns a generic error:
"Fuel\Core\Mongo_DbException [ Error ]: Unable to connect to MongoDB: Failed to connect to: 166.78.248.139:27017: Timed out after 0 ms"
but... if you refresh the page, this often goes away. I should mention that the overall DB size right now is tiny (we're using 'newsite'): [
otherhook 0.203125GB
local 0.078125GB
newsite 0.203125GB
test 0.203125GB
]
and the server has 2GB of RAM. There are a grand total of 3 of us trying to connect to and use the box. I might also add that I've only seen this error since the 3rd person started working on it, not before. ...alright, that's as much information as I've got.
Anyone have any ideas as to what's really causing this? Any idea on how to fix it so that we don't have these intermittent connection errors?

Take a look in the MongoDB logs, and in particular look for issues with it running out of resources when trying to open connections (there will usually be a warning printed at startup too, relating to ulimits being too low or similar). You haven't mentioned what OS you are running on, but if it is Linux, then the settings you are looking for are documented here:
http://docs.mongodb.org/manual/reference/ulimit/
For OS X, take a look here:
https://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
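As a quick sanity check, you can compare the limits mongod is actually running under against the recommended values. A rough sketch, assuming a Linux box where mongod runs as the mongod user (the 64000 target is just the commonly recommended example):
$ ulimit -n
$ cat /proc/$(pgrep mongod)/limits | grep -i 'open files'
To raise the limit persistently, add lines like these to /etc/security/limits.conf and restart mongod:
mongod soft nofile 64000
mongod hard nofile 64000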

Related

Heroku Postgres Connection Limit?

I'm building a website attached to a Heroku Postgres database and am using the free hobby dev plan. Per Heroku, this means there's a "Maximum of 20 connections." Does this mean that a maximum of 20 people can be using the website with data being collected by the database on the back end? Any idea what happens if connections go above that level? The paid plans go up to a maximum connection limit of 500, but even that seems low to me if people are using this at the enterprise level. Any color on this would be greatly appreciated. There was a prior question on this but the answer wasn't quite clear to me.
Thanks!
What does database connection limit mean?
PostgreSQL can be configured to limit the number of simultaneous connections to the database, and Heroku plans come with fixed connection limits: the 'Hobby' plans come with 20 connections, whereas standard plans start at 120 connections. When we start developing and testing, especially with automated testing, the hobby plans raise the error PG::Error (FATAL: too many connections for role "xxxxxxx"). We can check the current connections with the Heroku CLI.
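A minimal sketch of that check (the app name is a placeholder):
$ heroku pg:info --app <app name>
This prints a Connections: line against the plan's maximum. You can also count sessions directly:
$ heroku pg:psql --app <app name> -c "select count(*) from pg_stat_activity;"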
The immediate solution is to kill all connections with the command :
$ heroku pg:killall --app <app name>
This is not a permanent solution. We had the same issue with this website as well, and we tried many of the solutions available on the internet, especially on Stack Overflow.
It is very important to know how to calculate the number of connections required. The Heroku documentation says:
Assuming that you are not manually creating threads in your application code, you can use your web server settings to guide the number of connections that you need. The Unicorn web server scales out using multiple processes; if you aren't opening any new threads in your application, each process will take up one connection. So in your unicorn config file, if you have worker_processes set to 3 like this:
worker_processes 3
Then your app will use 3 connections for workers. This means each dyno will require 3 connections. If you’re on a “Dev” plan, you can scale out to 6 dynos which will mean 18 active database connections, out of a maximum of 20. However, it is possible for a connection to get into a bad or unknown state.
Solution - Limit connections with PgBouncer
The easiest fix is to limit the connections with PgBouncer. For many frameworks, you must disable prepared statements in order to use PgBouncer. Then add the PgBouncer buildpack to your app.
$ heroku buildpacks:add https://github.com/heroku/heroku-buildpack-pgbouncer
The output will be something like
Buildpack added. Next release on will use:
heroku/python
https://github.com/heroku/heroku-buildpack-pgbouncer
Run git push heroku master to create a new release using these buildpacks.
Now you must modify your Procfile to start PgBouncer. In your Procfile add the command bin/start-pgbouncer-stunnel to the beginning of your web entry. So if your Procfile was
web: gunicorn .wsgi:application --worker-class gevent
Change it to:
web: bin/start-pgbouncer-stunnel gunicorn .wsgi:application --worker-class gevent
Commit the results to git, test on a staging app, and then deploy to production.
On deployment, you should see PgBouncer starting up in the logs.
Depending on the web framework you are using this can differ, but:
Typically you will have a maximum of one database connection per server process. This could be one per running web or worker dyno, or more if your framework runs multiple threads / worker processes per dyno (most do).
These connections are then only used when there is an actual request to your application, not while the user is just viewing a page.
When you're running an async framework (node.js for example, or greenlets in Python) this gets a little more complicated.
The easy way: just test it. You'll see the current connection count in the Heroku interface. There are frameworks and services in the wild that let you test with concurrent users.
The even easier way (since this runs on hobby plans, it seems to be a hobby application): just see when it breaks :) .

Connected To XEPDB1 From SQL Developer [duplicate]

I am using an Oracle database in a Windows environment and running a JSP/servlet web application in Tomcat. After I do some operations with the application it gives me the following error:
ORA-12518, TNS: listener could not hand off client connection
Can anyone help me identify the reason for this problem and propose a solution?
The solution to this question is to increase the number of processes:
1. Open a command prompt
2. sqlplus / as sysdba (log in as the sysdba user)
3. startup force;
4. show parameter processes; (this shows the allocated count, 150 by default)
5. alter system set processes=800 scope=spfile; (increase the count, here to 800)
6. Restart the instance (startup force again) so that the spfile change takes effect.
Tried and tested.
In my case I found that it was because I hadn't closed the database connections properly in my application. Too many connections were open and Oracle could not accept any more; it is a resource limitation. Later, when I checked the Oracle forum, I found some reasons mentioned there for this problem. Among them:
In most cases this happens due to a network problem.
Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.
If it is the second one, verify that large_pool_size and the dispatcher configuration are enough for all connections.
You can refer to the link below for further details.
https://community.oracle.com/message/1874842#1874842
I ran across the same problem; in my case it was a new install of the Oracle client on a new desktop that was giving the error. Other clients were working, so I knew it wouldn't be fixed by changing the database configuration. tnsping worked properly but sqlplus failed with the ORA-12518 listener error.
I had the tnsnames.ora entry with a SID instead of a SERVICE_NAME. Once I fixed that, I still got the same error and found I had the wrong service name as well. Once I fixed that too, the error went away.
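For anyone comparing their own files, the difference looks roughly like this (a sketch; the host, port, and names are placeholders):
# entry using a SID (what I had first)
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SID = mydb))
  )
# entry using SERVICE_NAME (what finally worked)
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )
You can list the service names the listener actually knows about with lsnrctl services.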
If from one day to another the issue shows up for no apparent reason, add the following lines at the bottom of the listener.ora file. If your ORACLE_HOME environment variable is set like this:
(ORACLE_HOME = C:\oracle11\app\oracle\product\11.2.0\server)
The lines to add are:
ADR_BASE_LISTENER = C:\oracle11\app\oracle\
DIRECT_HANDOFF_TTC_LISTENER=OFF
I had the same problem when executing queries in my application. I'm using the Oracle client with Ruby on Rails.
The problem started when I accidentally opened several connections to the DB and didn't close them.
When I fixed this, everything started to work fine again.
Hope this helps anyone else with the same problem.
I experienced the same error after upgrading to Windows 10. I solved it by starting the Oracle services that had stopped; they can all be started from the Windows Services panel.
I had the same issue. After restarting all Oracle services it worked again.
I encountered the same problem.
The Oracle server listener log gave more information, and from it I found that the SERVICE_NAME did not match the service name configured in tnsnames.ora. I changed the application's data source configuration from the SID value to the SERVICE_NAME value and that fixed it.
23-MAY-2019 02:44:21 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=XXXXXX$))(SERVICE_NAME=orclaic)) * (ADDRESS=(PROTOCOL=tcp)(HOST=::1)(PORT=50818)) * establish * orclaic * 12518
TNS-12518: TNS:listener could not hand off client connection
TNS-12560: TNS:protocol adapter error
TNS-00530: Protocol adapter error
64-bit Windows Error: 203: Unknown error
I had the same issue in a real-time application, and the issue went away by itself the next day. Upon checking, it was found that the server had run out of memory due to additional processes running.
So in my case, the reason was that the server ran out of memory.
First of all:
check the listener log
compare show parameter processes with select count(*) from v$process; (see the sketch below)
increase the processes parameter; this may require an SGA increase as well
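Concretely, that comparison would look something like this in SQL*Plus (800 is just an example target):
SQL> show parameter processes;
SQL> select count(*) from v$process;
SQL> alter system set processes=800 scope=spfile;
If the count from v$process sits close to the processes limit, raise the parameter and restart the instance so the spfile change takes effect.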

MongoDB sometimes stops

I have a Linux Server (CentOS) with 32GB RAM.
I installed MongoDB alongside a Java application, but sometimes MongoDB stops working, so I need to restart it.
I already used the ulimit Linux command to change the open files limit to 64000, but the problem still happens.
I'd like to know if somebody has experience with MongoDB and can give me a tip about this problem.
If you're getting "too many connections," it's quite possible you're opening too many MongoClients. In general, you need only open the one client and its internal connection pool will manage everything for you. This isn't always possible, though, so you'll want to make sure you properly manage the scope of each MongoClient and call close() on it when you're done.
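A minimal sketch of that single-client pattern with the legacy Java driver (the class name and connection details here are just examples):
import com.mongodb.MongoClient;

public final class Mongo {
    // One client per JVM: MongoClient keeps an internal connection pool,
    // so sharing this instance avoids exhausting server-side connections.
    private static final MongoClient CLIENT = new MongoClient("localhost", 27017);

    private Mongo() {}

    public static MongoClient get() {
        return CLIENT;
    }
}
Everywhere else in the application you would call Mongo.get() rather than constructing new MongoClient instances, and only call close() when the whole application shuts down.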

ConnectionPool::PoolShuttingDownError thrown once in a while by application_controller rails using Mongodb replicaSet

I have a RoR application running on two different servers. They run the same version of the app and are similar in configuration. I have a MongoDB replica set running on both servers, with a third server as an arbiter.
Everything runs fine and the data syncs perfectly, but after 2 weeks of running, one of the servers started giving ConnectionPool::PoolShuttingDownError. I checked the log and I can see the error was raised in the application controller. I didn't change any code on either server.
The server raising the error is fine until it gets 6-7 simultaneous requests, or when you refresh the page 6-7 times in a row. It gives this error once, and then if you refresh the page again it's back to normal. This is weird and I can't understand why one server has this problem while the other one doesn't, and only sometimes at that.
I am using Mongoid with Moped, Rails 4.1.0 and Ruby 2.1.5. I also checked the available connections using db.serverStatus().connections, which is around 51158, and the ulimit for max processes, which is 257185.
I searched a lot but I am still unsure of the cause of this problem. It would be great if someone could shed some light on this issue. Any help will be appreciated. Thanks in advance.

DB2 Transaction log is full. How to flush / clear it?

I'm working on an experiment for a course I'm taking about tuning DB2. I'm using EC2 from Amazon (AWS) to conduct the experiment.
My problem is that I have to test no compression against row compression in DB2, and to do that I've created a bash script that runs those experiments. But when I reach the compression part I get the error "Transaction log is full", and no matter how low I set the inserts it keeps complaining about my transaction log.
I've scoured Google for a day now trying to find some way to flush / clear the log or just get rid of it; I don't need it. I've tried to increase the size but nothing has helped.
Please, I hope someone has an answer to solve this frustrating problem
Thanks
- Mestika
There is no need to "clear the log" in DB2. When a transaction is rolled back, DB2 releases the log space used by the transaction.
If you've increased the log size and it has not helped, please post more information about what you're trying to do.
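If you want to see how much log space is actually in use while the script runs, a database snapshot shows it (the database name is an example; on Windows use findstr instead of grep):
$ db2 get snapshot for database on sample | grep -i 'log space'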
No need to restart. Just force off the applications using db2 force applications all.
Increase the active log file size, force the application connections and terminate them, then try to run the job again:
db2 force applications all
db2 update db cfg for sample using logfilsiz 5125
db2 force applications all
db2 terminate
db2 connect to sample
Run your job and monitor.
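To confirm the new value actually took effect, you can read it back from the database configuration:
$ db2 get db cfg for sample | grep -i logfilsiz
If the log still fills up, the number of primary and secondary log files (LOGPRIMARY and LOGSECOND) may need raising as well, not just the size of each file.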
Just restart the instance; it will release the pending logs and you should be fine.