ObjectDB with JBoss 7.1.1 performance issue

We have a strange performance problem with ObjectDB 2.5.3_01 and JBoss 7.1.1.
We have two ObjectDB databases on one ObjectDB server. The production database runs 5-10 times slower than the test database, although their size and number of records are almost identical. When we run the production database in a separate test environment, its speed is very good.
We did a performance analysis on our two Linux servers, one running JBoss and the other running ObjectDB:
- CPU utilization of the JBoss server is at most 5-10% (per core)
- CPU utilization of the ObjectDB server is 80-150% (per core)
Now the interesting part: when running a query from the ObjectDB Explorer, CPU utilization is minimal, i.e. around 1%.
Running a query from the ObjectDB Explorer over the complete database, which has 12,000 records, takes 30 ms, which we think is very good.
In our web application this behaviour means refreshing a data table takes approximately 10-13 s, versus 2 s with the test database.
Does anybody have any idea what could be wrong?

The problem was found to be a circular eager relationship, defined in that specific application, which required loading of many objects recursively with the query results.
The solution was to change the relationship setting from eager to lazy.
More details can be found in this forum thread.
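For illustration, the change amounts to switching the fetch type on the offending relationship in the JPA entity; a minimal sketch, with a hypothetical self-referencing entity standing in for the circular relationship:

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class Category {
    @Id
    private long id;

    // Was: @ManyToOne(fetch = FetchType.EAGER) - an eager circular reference like this
    // forces many related objects to be loaded recursively with every query result.
    @ManyToOne(fetch = FetchType.LAZY)
    private Category parent;
}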


PgPool-II Sorry, too many clients already

Background
I have a PgPool-II cluster (ver 4.1.4) running on 3 CentOS 8 machines (virtual): SQL1, SQL2 and SQL3 (each on different hardware). PostgreSQL 12 runs on SQL1 and SQL2 (currently SQL1 is the master and SQL2 is a streaming replication standby).
The DB cluster currently contains 4 databases for 2 different software environments: one customer (cust) environment with fairly heavy traffic and one educational (edu) environment with basically no user activity at all.
Both environments have one database each and also share two databases (used only for reading).
The application is currently written in .NET Core 3.1 and uses Npgsql and Entity Framework Core to connect to the pgpool cluster.
Besides the "normal" application SQL requests to the databases via Entity Framework, there are also periodic calls with psql in order to run "show pool_nodes;" in pgpool. This could not be done with Entity Framework, hence psql.
Each environment also has one "main api", which handles internet traffic, and one "service api", which runs background tasks. Both use Entity Framework to call the database, and psql is also sometimes invoked from the "service api" as described above.
On top of this, all applications also have an A and a B system.
So to sum up:
2 environments (cust, edu), each with an A and a B system, each of which has its own "main api" and "service api" => 8 applications (12 if you count that every "service api" also invokes psql every 5 minutes).
The applications run on 2 different machines (A and B); the "main api" and "service api" for a given environment and system run on the same machine.
Each Entity Framework application can also make parallel/multiple simultaneous requests, depending on the user activity against the API.
My question
Every now and then the pgpool cluster returns an error: Sorry, too many clients already.
Usually this happens when connecting with psql, but sometimes it also happens from Entity Framework.
My initial thought was that the databases had too many clients connected, but querying pg_stat_activity just seconds after the error shows far fewer connections (around 50) than the 150 max_connections in the PostgreSQL config. I could not find any errors at all in the PostgreSQL logs in the "log" folder of the data directory.
In the pgpool.log file, however, there is an error entry:
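That check was roughly the following query against the primary (a sketch of what I ran, not the exact statement):

-- Count client connections per database a few seconds after the error;
-- the total stayed around 50, well below max_connections = 150.
SELECT datname, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY datname;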
Oct 30 16:34:19 sql1 pgpool[4062984]: [109497-1] 2020-10-30 16:34:19: pid 4062984: ERROR: Sorry, too many clients already
pgpool has num_init_children = 32 and max_pool = 4, which allows at most 32 * 4 = 128 backend connections, below max_connections = 150, so I do not really see where the problem might be coming from.
Some files that might be needed for more info:
pg_stat_activity (Taken 11s after the error)
pgpool.log
pgpool.conf
postgresql.conf
This problem might stem from a bug that, according to one of the pgpool developers, will have its fix included in the 4.1.5 update on November 19, 2020.
The bug prevents the counter of existing connections from being decremented when a query is cancelled.
Simply upgrade pgpool from 4.1.2 to 4.1.5 (newer versions of pgpool are available as well).
I had the same problem with pgpool 4.1.2; "Sorry, too many clients already" was occurring almost every hour.
The problem was fixed after I upgraded pgpool from 4.1.2 to 4.1.5, and we never experienced the error afterwards.

Too many MethodWrapperImpl created in Jersey application

Recently my Jersey application ran into serious GC problems.
Inspecting the heap dump, I found that a lot of MethodWrapperImpl as well as LRUHybridCache$OriginThreadAwareFuture instances had become unreachable objects
(about 19,700 MethodWrapperImpl alive and around 40k+ MethodWrapperImpl among the unreachable objects).
My question is:
Is this normal behavior?
After all, I have only 1 resource and 1 resource method.
The heap dump shows that:
there are 32 instances of ClassReflectionHelperImpl and 128 LRUHybridCache instances in total.
Btw, I'm using Spring Boot 2.0.5 and Jersey 2.26 (HK2 is 2.5.0-b42).
It's hard to reproduce locally; it only happens on production boxes with real traffic.
Leon

High CPU Utilisation on AWS RDS - Postgres

I attempted to migrate my production environment from a native Postgres setup (hosted on AWS EC2) to RDS Postgres (9.4.4), but it failed miserably. The CPU utilisation of the RDS Postgres instances shot up drastically compared to that of the native Postgres instances.
My environment details:
Master: db.m3.2xlarge instance
Slave1: db.m3.2xlarge instance
Slave2: db.m3.2xlarge instance
Slave3: db.m3.xlarge instance
Slave4: db.m3.xlarge instance
[Note: All the slaves were at Level 1 replication]
I configured the master to receive only write requests, and this instance was fine. The write count was 50 to 80 per second and the CPU utilisation was around 20 to 30%.
But apart from this instance, all my slaves performed very badly. The slaves were configured to receive only read requests, and I assume any writes that were happening were due to replication.
Provisioned IOPS on these boxes was 1000.
On average there were 5 to 7 read requests hitting each slave, and the CPU utilisation was 60%.
Whereas on native Postgres, we stayed well within 30% for this traffic.
I couldn't figure out what's going wrong with the RDS setup, and AWS support has not been able to provide good leads.
Did anyone face similar issues with RDS Postgres?
There are lots of factors that can drive up CPU utilization on PostgreSQL, such as:
Free disk space
CPU Usage
I/O usage etc.
I came across the same issue a few days ago. For me the reason was that some transactions were getting stuck and had been running for a long time, which increased CPU utilization. I found this out by running a PostgreSQL monitoring query:
SELECT max(now() - xact_start) FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active');
This query shows how long the oldest transaction has been running; that time should not be greater than one hour. Killing the transactions that had been running for a long time, or that were stuck at some point, worked for me. I followed this post for monitoring and solving my issue; it includes lots of useful commands for monitoring this situation.
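Once identified, a stuck transaction can be terminated via its backend PID, for example (a sketch; adjust the one-hour threshold to your own limit):

-- Terminate backends whose transaction has been open for more than an hour,
-- excluding the current session.
SELECT pid, pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active')
  AND xact_start < now() - interval '1 hour'
  AND pid <> pg_backend_pid();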
I would suggest increasing your work_mem value, as it might be too low, and doing normal query-optimization research to see whether you're running queries without proper indexes.
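A quick way to experiment (a sketch; the value, table and column are only illustrations, and on RDS the permanent work_mem setting lives in the DB parameter group):

-- Raise work_mem for the current session only, then re-run the slow query.
SET work_mem = '64MB';

-- Check whether a suspect query uses an index or falls back to a sequential scan.
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;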

CPU usage of JBoss JVM goes up to 99% and stays there

I am doing load testing on my application using JMeter, and I have a situation where the CPU usage of the application's JVM goes to 99% and stays there. The application still works; I am able to log in and do some activity, but it is understandably slower.
Details of environment:
Server: AMD Opteron, 2.20 GHz, 8 cores, 64-bit, 24 GB RAM, Windows Server 2008 R2 Standard
Application server: jboss-4.0.4.GA
JAVA: jdk1.6.0_25, Java HotSpot(TM) 64-Bit Server VM
JVM settings:
-Xms1G -Xmx10G -XX:MaxNewSize=3G -XX:MaxPermSize=12G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseCompressedOops -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000
Database: MySQL 5.6 (on a different machine)
JMeter: 2.13
My scenario is that I have 20 users log into the application and perform normal activity that should not create a heavy load. A few minutes into the test, the CPU usage of the JBoss JVM goes up and never comes back down; it remains like that until the JVM is killed.
To help understand better, here are a few screenshots.
I found a few posts about CPU at 100%, but none of them matched my situation and I could not find a solution.
Any suggestion on what should be done would be great.
Regards,
Sreekanth.
To understand the root cause of the high CPU utilization, we need to look at the CPU data and thread dumps at the same time.
Capture 5-6 thread dumps at the time of the issue, and capture CPU consumption on a thread-by-thread basis over the same period.
Generally, the root cause of such CPU issues lies with threads: BLOCKED threads, long-running threads, deadlocks, long-running loops, etc. These can be identified by going through the thread stacks.
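Per-thread CPU time can also be sampled from inside the JVM via JMX, which makes it easy to see which threads are burning CPU between two snapshots; a minimal sketch (standard java.lang.management API, not tied to JBoss):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCpuSampler {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            if (info == null) {
                continue; // thread has already terminated
            }
            long cpuNanos = threads.getThreadCpuTime(id); // -1 if unsupported or disabled
            System.out.printf("%-40s state=%-13s cpu=%d ms%n",
                    info.getThreadName(), info.getThreadState(), cpuNanos / 1000000);
        }
    }
}

Comparing two such listings taken a few seconds apart, together with the thread dumps, usually points straight at the offending threads.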

Put memcached on db or web server instance?

For my Drupal-based site, I have an architecture with 3 instances running nginx, PostgreSQL, and Solr, respectively. I'd like to install Memcached. Should I put it on the nginx or the PostgreSQL server? What are the performance implications?
Memcached is very light on CPU usage, so it is a great candidate to gobble up spare web-server RAM. Also, you will scale out your web tier much more than your other tiers, and Memcached clustering can pool that RAM together into one logical cache.
If you have any spare RAM on the DB server, it is almost always best for performance to let the DB gobble it up.
TL;DR: Let the DB have all of its RAM; colocate Memcached on the web tier.
Source: http://code.google.com/p/memcached/wiki/NewHardware
The best option is to have a separate server (if you can do that).
Otherwise, it depends on your servers' CPU and memory utilization and your availability requirements. In general I would avoid running anything extra on the DB server machine, since the DB is the foundation of the system and has to be available and performing well.
If your Solr server does not have high traffic and doesn't use much memory, I'd put it there. Memcached servers are known to be light on CPU. You should also estimate how much memory the memcached instance will need, to make sure there is enough on the server.