Getting 100% + CPU usage on Linux server - postgresql

Getting 100%+ CPU usage after restoring a dbdump file into a postgres Docker container.
According to htop on the server, a process labelled "autovacuum reader" seems to be consuming most of the CPU.
Can anyone suggest what I can do to reduce the server's CPU load?

Restarting my postgres Docker container resolved the problem, but only for a while.
Here is how we eventually resolved this issue: https://dev.to/jaytailor45/the-anatomy-of-a-postgresql-hack-how-it-happened-and-what-we-did-about-it-b9k
Happy coding!!!
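For diagnosis, here is a minimal sketch of what to check first; the container name postgres-db and the postgres superuser are placeholders, not taken from the question. Note that "autovacuum reader" is not a process name PostgreSQL itself uses (the real ones are "autovacuum launcher" and "autovacuum worker"), so it is worth confirming whether the busy process is a genuine backend at all.

# From the host: which processes are actually burning CPU?
ps -eo pid,user,%cpu,command --sort=-%cpu | head -n 10

# Inside the container: is the busy PID a real PostgreSQL backend?
# (backend_type requires PostgreSQL 10+)
docker exec postgres-db psql -U postgres -c "SELECT pid, backend_type, state, left(query, 60) AS query FROM pg_stat_activity ORDER BY backend_start;"

# If a vacuum really is running, its progress shows up here (PostgreSQL 9.6+):
docker exec postgres-db psql -U postgres -c "SELECT pid, relid::regclass, phase FROM pg_stat_progress_vacuum;"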

Related

mongodb high resource usage and configuration hints

I am running a MongoDB instance as an Ubuntu service on a VM with enough resources to handle it with ease; the system is on-premises, not in the cloud.
I am a newbie with MongoDB and this is a test/dev environment.
The problem I am seeing right now is abnormal CPU and RAM usage, caused by a huge number of MongoDB threads running and hanging around.
Here is an htop summary and an strace of the worst of those bad guys.
sudo strace -p 973
strace: Process 973 attached
futex(0x5644b5e929e8, FUTEX_WAIT_PRIVATE, 0, NULL
strace: Process 973 detached
<detached ...>
Besides a possible solution, can you point me to any good articles about setting up and configuring MongoDB for production?
Other info:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
> db.version()
4.2.7
Solved.
Apparently one of my collections had grown too big and I was running various aggregation pipelines on it.
It ended up clogging the machine; solved with some tweaks to the structure :)
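For reference, a rough way to check the same symptoms on a similar setup; these commands assume the legacy mongo shell that ships with MongoDB 4.2 and a locally running mongod, and nothing here is taken from the original post.

# How many threads is mongod running right now?
ps -o nlwp= -p "$(pgrep -x mongod)"

# How many client connections are open?
mongo --quiet --eval 'printjson(db.serverStatus().connections)'

# Which operations (e.g. heavy aggregations) have been running for more than 5 seconds?
mongo --quiet --eval 'printjson(db.currentOp({active: true, secs_running: {$gte: 5}}))'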

User postgres launches a process that takes all CPUs to 100% usage

User postgres is running a process that takes all CPUs to 100% usage on a CentOS machine; the postgresql service is not running, so it cannot be a query.
When I try to stop the process, it restarts itself. The name of the process is somewhat strange.
Congratulations!
By exposing a database with a weak superuser password to the internet you invited somebody to break in and use your CPU for their own purposes, probably mining crypto-currencies.
Take the machine off the internet, wipe it clean and re-install the operating system.
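If you want to confirm what is running before wiping the box, a quick look at the postgres user's processes and crontab usually reveals the miner and its re-spawn hook. This is only a generic sketch using standard Linux tooling, not part of the original answer.

# What is the postgres user actually running, and where do the binaries live?
ps -fu postgres
for pid in $(pgrep -u postgres); do sudo ls -l "/proc/$pid/exe"; done

# Miners often re-install themselves from cron:
sudo crontab -l -u postgres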
I had the same issue on my VPS. I considered reinstalling the OS or cloning the VPS, but both options came with a lot of problems, so I chose another way (a rough sketch of these steps follows below).
I did:
back up all data with "pg_dumpall"
back up the pgsql configuration (pg_hba.conf, postgresql.conf, ...)
uninstall "everything" of pgsql
reinstall pgsql
restore the pgsql data
Done
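A rough sketch of those steps, assuming a Debian/Ubuntu-style install with the configuration under /etc/postgresql/12/main and a /backup directory; all paths and package names are placeholders, and on CentOS the package manager and paths differ.

# 1. Dump all databases and roles
#    (the redirection runs as the invoking user, so /backup must be writable by you)
sudo -u postgres pg_dumpall > /backup/all_databases.sql

# 2. Back up the configuration files
sudo cp /etc/postgresql/12/main/pg_hba.conf /etc/postgresql/12/main/postgresql.conf /backup/

# 3. Remove every PostgreSQL package and the old data directory
sudo apt-get purge postgresql\*
sudo rm -rf /var/lib/postgresql

# 4. Reinstall
sudo apt-get install postgresql

# 5. Restore roles and data
sudo -u postgres psql -f /backup/all_databases.sql postgres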

MySQL server huge memory consumption

On Ubuntu, running the "top" command shows that mysqld constantly uses 61.9% of memory (when idle).
I ran "show processlist" on the MySQL server and it is idle.
Can anyone explain what might be happening?
Read this: http://www.chriscalender.com/?p=1278
Open my.cnf and add this at the end: performance_schema=0
Restart your services. Memory usage went from 620 MB down to 38 MB.
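For reference, a minimal sketch of that change on a Debian/Ubuntu layout; the config path is an assumption, and MySQL merges repeated [mysqld] groups, so appending a new group at the end works.

# Append the setting under a [mysqld] group
sudo tee -a /etc/mysql/my.cnf >/dev/null <<'EOF'

[mysqld]
performance_schema = 0
EOF

# Restart and compare memory usage
sudo service mysql restart
top -b -n 1 | grep mysqld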

MongoDB replica server getting killed due to low memory?

I need urgent help here, since this is affecting our production instance.
One of the replica servers is failing due to lack of memory (see the excerpt from kern.log below):
kernel: [80110.848341] Out of memory: kill process 4643 (mongod) score 214181 or a child
kernel: [80110.848349] Killed process 4643 (mongod)
UPDATE
kernel: mongod invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
kernel: [85544.157191] mongod cpuset=/ mems_allowed=0
kernel: [85544.157195] Pid: 7545, comm: mongod Not tainted 2.6.32-318-ec2
Insight:
The primary server's DB size is 50 GB, of which 30 GB is index.
The primary server has 7 GB of RAM whereas the secondary server has 3.1 GB.
Both servers are 64-bit machines, running Debian and Ubuntu respectively.
Running Mongo 2.0.2 on both servers.
Note:
I see that a similar issue was recently filed on the MongoDB Jira site; there is no answer to it yet.
Have you got swap enabled on these instances? While generally not needed for MongoDB operation, it can prevent the process from being killed by the kernel when you hit an OOM situation. That is mentioned here:
http://www.mongodb.org/display/DOCS/Production+Notes#ProductionNotes-Swap
The issue you referenced is happening during a full re-sync rather than ongoing production replication - is that what you are doing as well?
Once you get things stable, take a look at your resident (res) memory in mongostat or MMS; if it is close to or exceeding 3 GB, you should consider upgrading your secondary.
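A quick way to check both points; the hostname and swap size below are placeholders, not from the question.

# Is any swap configured on the secondary?
swapon -s
free -m

# If not, a small swap file keeps the OOM killer at bay until you can add RAM:
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile && sudo mkswap /swapfile && sudo swapon /swapfile

# Watch resident memory on the secondary; "res" at or near 3 GB means the
# working set no longer fits in RAM on that box:
mongostat --host secondary-host:27017 5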
I had a similar issue. One of the things to check is how many open connections you have. Run the lsof command to see the open files associated with the mongod process. Try disabling journaling and see whether you get a smaller number of open files. If so, let the replica catch up and then re-enable journaling. That might help. Adding swap should help too, or, if possible, temporarily increase the RAM.
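A quick way to do that lsof check; the pgrep lookup is my addition, not from the answer.

# Count every open file descriptor held by mongod (data files, journal, sockets)
sudo lsof -p "$(pgrep -x mongod)" | wc -l

# Count only the network connections
sudo lsof -p "$(pgrep -x mongod)" -a -i | wc -l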