Vesta graphs and DigitalOcean graphs on RAM usage are not the same

My website's server is hosted on DigitalOcean.
Vesta's graphs show that all of the RAM is used, while DigitalOcean's graphs show only 33% of RAM in use. Which one is correct?
Why is there this difference?

Vesta likes to display cached RAM as used RAM, whereas DigitalOcean usually counts cached RAM as free.
I think you should go with DigitalOcean's statistics, since cached RAM can be reclaimed on demand and is generally counted as free.
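If you want to verify this yourself on the droplet, here is a minimal Python sketch that computes "used" memory both ways. It assumes a Linux kernel recent enough to expose MemAvailable in /proc/meminfo, and that Vesta uses the naive total-minus-free figure while DigitalOcean uses the reclaimable-aware one (which matches the behavior described above, but is an assumption):

```python
#!/usr/bin/env python3
"""Compare the two common ways of reporting 'used' RAM on Linux."""

def read_meminfo():
    # /proc/meminfo lines look like: "MemTotal:       16384256 kB"
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # value in kB
    return info

m = read_meminfo()
total = m["MemTotal"]

# Vesta-style: anything not literally free counts as used,
# so page cache and buffers inflate the number.
used_naive = total - m["MemFree"]

# DigitalOcean-style: memory the kernel can reclaim (cache, buffers)
# counts as available. MemAvailable is the kernel's own estimate.
used_effective = total - m["MemAvailable"]

print(f"naive 'used'    : {used_naive / total:6.1%}")
print(f"effective 'used': {used_effective / total:6.1%}")
```

On a healthy server with a warm page cache, the first number can sit near 100% while the second stays low, which is exactly the discrepancy between the two graphs.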

Related

Effects of upgrading MongoDB server tiers

I have an AWS EC2 server running an application that is connected to a MongoDB Atlas sharded cluster. Periodically, the application slows down and I receive alerts from MongoDB about high CPU steal %. I am looking to upgrade my MongoDB server tier, and the only difference I see between the options is more storage space and more RAM; the number of vCPUs is the same. Does anyone have insight into whether the increased RAM will help with the CPU steal % alerts I am receiving, and whether it will speed up the app? Or am I better off upgrading my AWS server tier to get more CPU that way?
Any help is appreciated! Thanks :)
I don't think more RAM will necessarily help if you're mostly CPU-bound. However, if you're using MongoDB Atlas, the higher tiers definitely do provide more vCPUs as you go up the scaling options.
You can also enable auto-scaling and set your minimum and maximum tiers to allow the database to scale as necessary: https://docs.atlas.mongodb.com/cluster-autoscaling/
However, be warned that Atlas has a pretty aggressive scale-out and a pretty crappy scale-in. I think the scale-in only happens after 24 hours, so it can get costly.
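For reference, here is a hedged Python sketch of enabling compute auto-scaling through the Atlas Admin API v1.0 using requests. The group ID, cluster name, API keys, and instance-size bounds are all placeholders, and the payload shape is based on the v1.0 clusters endpoint, so verify field names against the docs linked above before relying on it:

```python
import requests
from requests.auth import HTTPDigestAuth

# All identifiers below are placeholders -- substitute your own Atlas
# project (group) ID, cluster name, and programmatic API key pair.
GROUP_ID = "YOUR_GROUP_ID"
CLUSTER = "Cluster0"
AUTH = HTTPDigestAuth("PUBLIC_KEY", "PRIVATE_KEY")

url = (f"https://cloud.mongodb.com/api/atlas/v1.0"
       f"/groups/{GROUP_ID}/clusters/{CLUSTER}")

# Enable compute auto-scaling between the M10 and M40 tiers.
# Field names follow the v1.0 clusters API; newer API versions
# structure this differently, so check the current reference.
payload = {
    "autoScaling": {
        "compute": {"enabled": True, "scaleDownEnabled": True}
    },
    "providerSettings": {
        "providerName": "AWS",
        "autoScaling": {
            "compute": {"minInstanceSize": "M10",
                        "maxInstanceSize": "M40"}
        },
    },
}

resp = requests.patch(url, json=payload, auth=AUTH)
resp.raise_for_status()
print(resp.json().get("autoScaling"))
```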

Compose.io MongoDB memory usage uneven between data servers

I have a FeathersJS and Apollo GraphQL app that uses Compose.io as the database. The portal servers have similar memory consumption, but the data servers are very different (see screenshot below). A few weeks ago I increased the memory allocation because I was experiencing memory errors. The errors have stopped, but one data server is still at the memory limit while the other is at less than half.
Is this an indication of an underlying problem?
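If you can reach the data members directly, one way to check whether the imbalance simply reflects the primary vs. secondary roles of a replica set is to compare resident memory and role per member. A minimal pymongo sketch, with hypothetical hostnames and ports, and with the auth/TLS options a Compose deployment would actually require omitted for brevity:

```python
from pymongo import MongoClient

# Hypothetical Compose data-member addresses; use your own hosts/ports
# and add the username/password and TLS settings your deployment needs.
members = ["data-1.compose.example:10032", "data-2.compose.example:10032"]

for host in members:
    # directConnection=True talks to that member specifically instead
    # of discovering the topology and routing to the primary.
    client = MongoClient(host, directConnection=True)
    status = client.admin.command("serverStatus")
    # "ismaster" works across old and new server versions.
    role = "primary" if client.admin.command("ismaster").get("ismaster") else "secondary"
    print(f"{host}: {status['mem']['resident']} MB resident ({role})")
```

If the member at the limit is the primary, the gap may just be its working set and cache filling under read/write load rather than an underlying problem.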

FileMaker Pro 15 Server performance

I am creating a database in FileMaker; the database is about 1 GB and includes around 500 photos.
FileMaker Server is having performance issues: it crashes and takes its time when searching through the database. My IT department recommended raising the cache memory.
I raised the cache to 252 MB, but the server is still struggling to deliver consistent performance, and it now shows peaks in CPU usage.
What can cause this problem?
Verify at FileMaker.com that your server meets the minimum requirements for your version.
For starters:
Increase the cache to 50% of the total memory available to FileMaker Server (see the quick calculation after this list).
Verify that the hard disk is unfragmented and has plenty of free space.
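As a quick worked example of that 50% guideline (the machine size and OS reservation here are assumed numbers, not from the question), the configured 252 MB cache is an order of magnitude too small:

```python
# Back-of-envelope FileMaker Server cache sizing.
total_ram_mb = 8 * 1024      # assumed: an 8 GB server
os_and_other_mb = 2 * 1024   # assumed: reserved for the OS and other services
available_to_fms = total_ram_mb - os_and_other_mb

recommended_cache = available_to_fms * 0.50   # the 50% guideline above
print(f"recommended cache: {recommended_cache:.0f} MB "
      f"(vs. the 252 MB currently configured)")
```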
FM Server should be extremely stable.
FMS only does two things:
reads data from the disk and sends it to the network
takes data from the network and writes it to the disk
Performance bottlenecks are always disk and network. FMS is relatively easy on CPU and RAM unless WebDirect is being used.
Things to check:
Are users connecting through Ethernet or Wi-Fi? (Wi-Fi is slow and unreliable.)
Is FMS running in a virtual machine?
Is the machine running a supported operating system?
Is the database using WebDirect? (Use a two-machine deployment for WebDirect.)
Is there anything else running on the machine? (Disable antivirus scanning and file indexing.)
Make sure users are accessing the live databases through the FMP client and not through file sharing.
How are the databases being backed up? NEVER let anything other than FMS see the live files. Only let OS-level backup processes see backup copies, never the live files.
Make sure all the energy saving options on the server are DISABLED. You do NOT want the CPU or disks sleeping or powering down.
Put the server onto an uninterruptible power supply (UPS). Bad power could be causing problems.

Is Swap Space Needed for a Kafka Node?

We are in the midst of doing a Kafka POC between our enterprise and Google Cloud, and we were told that Google Cloud VMs don't provision swap space by default. Does anyone in the Kafka community who has implemented Kafka know whether it needs swap space?
The brokers themselves should not require a substantial amount of memory and as such do not require swap space. Ideally you will run your brokers on dedicated VMs, allowing the broker to take full advantage of the OS's buffer cache. To hit the expected latency levels, the OS should have an abundant amount of 'free' memory. If you reach the point where pages need to be swapped to disk, you have already ventured into bad territory.
You only need swap space if Kafka is running out of memory, and in practice I haven't seen Kafka be a huge memory hog. So just be sure your VM is provisioned with enough memory, and the swap space should not matter.
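If you want to confirm how a VM is actually configured, here is a small Python sketch (Linux paths only) that reports swap provisioning and the kernel's swappiness setting; a common Kafka recommendation is to keep swappiness very low (e.g. 1) if swap exists at all:

```python
# Report swap provisioning and swappiness on a broker host (Linux).
def meminfo_kb(key):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])  # value in kB
    return 0

swap_total = meminfo_kb("SwapTotal")
with open("/proc/sys/vm/swappiness") as f:
    swappiness = int(f.read())

print(f"SwapTotal : {swap_total} kB")
print(f"swappiness: {swappiness}")
if swap_total == 0:
    print("No swap provisioned (the GCP default) -- fine for Kafka "
          "as long as the VM has enough RAM headroom for page cache.")
```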

Sizing CPU and RAM for a website

How do I work out the RAM and CPU needed for my site?
According to the number of unique visitors per day and sustained concurrent visitors.
Memory and CPU are not measured in visitors. To see how much memory your site uses, check its memory consumption in the server's monitoring systems.
To see how much CPU it uses, check the server's logs and metrics.
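If you need a starting estimate before you have real monitoring data, a back-of-envelope sketch follows; every input number here is an assumption you must replace with measurements from your own application:

```python
# Rough capacity estimate from visitor counts -- all inputs assumed.
unique_visitors_per_day = 50_000
pages_per_visit = 5
peak_factor = 3            # peak traffic vs. the daily average

requests_per_sec = unique_visitors_per_day * pages_per_visit / 86_400
peak_rps = requests_per_sec * peak_factor

ram_per_worker_mb = 80     # measure your app server's real footprint
avg_request_time_s = 0.2   # measure from your logs
# Little's law: concurrent requests = arrival rate * service time
workers_needed = peak_rps * avg_request_time_s

print(f"peak req/s      : {peak_rps:.1f}")
print(f"workers needed  : {workers_needed:.0f}")
print(f"RAM for workers : {workers_needed * ram_per_worker_mb:.0f} MB")
```

This only sizes the application workers; the database, OS, and page cache need their own headroom, which is why measuring under real load beats any formula.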