Exceeded maximum idle time in JasperServer when using Oracle (10g) for generating reports

Our customer's security policy requires that the users used to query the database have a maximum idle time of no more than 45 minutes. The problem we are seeing with our JasperServer is that after the server has been up for longer than that, queries start failing with an Oracle exception:
ORA-02396: Exceeded maximum idle time
I am forced to restart the server more than 4 times a day. As it is a live server, this is causing significant problems.
How can I solve this problem without restarting?
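One common workaround (a sketch only, assuming JasperServer is deployed on Tomcat with a DBCP-style JNDI data source; the resource name, URL, and credentials below are hypothetical placeholders, and the attribute names are standard Tomcat pool settings) is to have the pool validate connections on borrow and evict connections before Oracle's 45-minute idle limit kills them:

```xml
<!-- META-INF/context.xml — hypothetical data source; adjust names to your deployment -->
<Resource name="jdbc/jasperserver" auth="Container" type="javax.sql.DataSource"
          driverClassName="oracle.jdbc.OracleDriver"
          url="jdbc:oracle:thin:@dbhost:1521:ORCL"
          username="jasperdb" password="..."
          validationQuery="SELECT 1 FROM DUAL"
          testOnBorrow="true"
          testWhileIdle="true"
          timeBetweenEvictionRunsMillis="300000"
          minEvictableIdleTimeMillis="1800000"/>
```

With this sketch, `testOnBorrow` discards connections Oracle has already killed before handing them to a report, and the eviction settings proactively close connections that have sat idle for 30 minutes, safely under the 45-minute limit, so the server never needs a restart just to clear dead connections.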

Related

Increasing the recovery time during PostgreSQL startup

How can I increase the recovery time of PostgreSQL while it starts up after an immediate stop?
I started PostgreSQL normally and tried to insert a huge amount of data. While the insert was running, I stopped the server with an immediate stop. When starting the server again, it moved into recovery mode and took a few seconds to start up.
Is there any way to keep the server in recovery mode for a longer time (say 10-15 minutes)? If yes, how can I achieve it?
You can raise checkpoint_timeout and max_wal_size and generate a lot of write activity for a long time.
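For example (a sketch; the values are arbitrary, and PostgreSQL 9.5+ is assumed since `max_wal_size` replaced `checkpoint_segments` in 9.5), spacing checkpoints further apart means more WAL has to be replayed after a crash-style stop:

```sql
-- Spread checkpoints out so an immediate stop leaves more WAL to replay
ALTER SYSTEM SET checkpoint_timeout = '30min';  -- default is 5min
ALTER SYSTEM SET max_wal_size = '10GB';         -- default is 1GB
SELECT pg_reload_conf();                        -- both settings apply on reload
```

Then generate sustained write activity (a large COPY or repeated bulk inserts) and issue the immediate stop well after the last checkpoint; recovery must replay everything written since that checkpoint, which can stretch startup into the minutes you are looking for.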

SSIS Transfer Objects task fails when run from Agent

I am using the SSIS Transfer Objects task to transfer a database from one server to another. This is a nightly task as the final part of ETL.
If I run the task manually during the day, there is no problem. It completes in around 60 to 90 minutes.
When I run the task from Agent, it always starts but often fails. I have the agent steps set up to retry on failure, but most nights it takes 3 attempts, and on some nights 5 or 6 attempts.
The error message returned is twofold (both error messages show in the log for the same row):
1) An error occurred while transferring data. See the inner exception for details.
2) Timeout expired: The timeout period elapsed prior to completion of the operation or the server is not responding
I can't find any timeout limit to adjust that I haven't already adjusted.
Anyone have any ideas?

Too many threads are already waiting for connection. Max number of threads (maxWaitQueueSize) of 500 has been exceeded

I am using the MongoDB database. After starting my project, the initial number of connections to MongoDB is around 21-25; after 5 to 6 hours the connection count reaches 230-240 and my application is unable to connect to MongoDB. It throws an error saying "Too many threads are already waiting for connection. Max number of threads (maxWaitQueueSize) of 500 has been exceeded." My application is then not able to access any collection in MongoDB. To connect to the database we use two techniques: one is Spring Data JPA and the other is MongoClient.
I am puzzled about when this occurs and what is going on. How are this many connections getting created?
Please see the attached image for the error.
Thanks in advance.
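For context (a sketch based on the documented defaults of the older 2.x MongoDB Java driver, which are assumptions here, not values read from this project): the 500 in the error is not a number you set directly; it is the connection pool size multiplied by the wait-queue multiplier:

```java
public class WaitQueueMath {
    public static void main(String[] args) {
        // Assumed 2.x MongoDB Java driver defaults:
        int connectionsPerHost = 100;  // max pooled connections per host
        int waitQueueMultiplier = 5;   // threadsAllowedToBlockForConnectionMultiplier
        // maxWaitQueueSize = how many threads may block waiting for a free connection
        int maxWaitQueueSize = connectionsPerHost * waitQueueMultiplier;
        System.out.println(maxWaitQueueSize); // 500, matching the error message
    }
}
```

A steadily climbing connection count like 25 → 240 usually means new `MongoClient` instances are being created per request instead of one shared instance; checking that the Spring Data path and the raw `MongoClient` path share a single client is a good first step.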

Sidekiq / Rails : PG::ConnectionBad: PQconsumeInput() SSL error: system lib

I'm getting this error from sidekiq / rails / postgresql combination after it processes about 2000 jobs.
PG::ConnectionBad: PQconsumeInput() SSL error: system lib
It's on simple/random SQL queries, sometimes the ActiveRecord table schema queries. Things that work fine for 2000 or so queries suddenly start to fail for some unknown reason. I get about 50 failures per 10,000 requests, and then at about 50,000 requests Sidekiq falls over and I need to restart it.
I often get before the crash something like..
Celluloid::TimeoutError: linking timeout of 5 seconds exceeded
Has anyone run into this? Hosting is Amazon AWS with RDS for PostgreSQL. It's a recent issue, I didn't use to have it, and I'm out of ideas, so any suggestions would be appreciated.
I worked with Brett to determine the cause. He'd set his Sidekiq concurrency to 100, which is way too high for MRI. He turned it back down to the default of 25 and stability returned. Going forward we're going to run 4 processes of 25 threads each rather than a single process with 100 threads.
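Concretely, the fix above amounts to something like this (a sketch; the file path is the conventional one and the process-supervision details depend on your deployment):

```yaml
# config/sidekiq.yml — keep per-process concurrency at the MRI-friendly default
:concurrency: 25
```

Then start four Sidekiq processes (for example, four `bundle exec sidekiq -C config/sidekiq.yml` invocations under your process supervisor) to keep 100 total worker threads without the GIL contention and database-connection pressure of a single 100-thread process.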

Google Cloud SQL: Periodic Read Spikes Associated With Loss of Connectivity

I have noticed that my Google Cloud SQL instance is losing connectivity periodically and it seems to be associated with some read spikes on the Cloud SQL instance. See the attached screenshot for examples.
The instance is mostly idle, but my application recycles connections in the connection pool every 60 seconds, so this is not a wait_timeout issue. I have verified that the connections are recycled. Also, it occurred twice in 30 minutes, and the wait_timeout is 8 hours.
I would suspect a backup process but you can see from the screenshot that no backups have run.
The first incident lasted 17 seconds from the time the connection loss was detected until it was reestablished. The second was only 5 seconds, but given that my connections are idle for 60 seconds, the actual downtime could be up to 1:17 and 1:05 respectively. They occurred at 2014-06-05 15:29:08 UTC and 2014-06-05 16:05:32 UTC respectively. The read spikes are not initiated by me. My app continued to be idle during the issue, so this is some sort of internal Cloud SQL process.
This is not a big deal for my idle app, but it will become a big deal when the app is not idle.
Has anyone else run into this issue? Is this a known issue with Google Cloud SQL? Is there a known fix?
Any help would be appreciated.
Update:
The root cause of the symptoms above has been identified as a restart of the MySQL instance. I did not restart the instance, and the operations section of the web console does not list any events at that time, so now the question becomes: what would cause the instance to restart twice in 30 minutes? Why would a production database instance restart at all?
That was caused by one of our regular releases. Because of the way the update takes place, an instance might be restarted more than once during the push to production.
Was your instance restarted? During a restart, the spinning down/up of an instance will trigger reads and writes.
That may be one reason why you are seeing the read/write activity.