Sizing CPU and RAM for website - webserver

How do I estimate the RAM and CPU my site needs?
According to the number of unique visitors per day and the number of sustained concurrent visitors.

Memory and CPU aren't measured in visitors. To see how much memory your site uses, check its actual usage in the server's monitoring systems.
To see how much CPU it uses, check the server's resource monitoring or logs.
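For example, on a Linux host you could sum the resident memory of your web-server worker processes and sample their CPU share; a minimal sketch using the psutil library (the process name "apache2" is only an assumption, substitute your own server's):

```python
import psutil

# Hypothetical process name; use "nginx", "php-fpm", "gunicorn", etc. as appropriate.
SERVER_NAME = "apache2"

rss_total = 0
cpu_total = 0.0
for proc in psutil.process_iter(["name", "memory_info"]):
    if proc.info["name"] == SERVER_NAME and proc.info["memory_info"]:
        rss_total += proc.info["memory_info"].rss    # resident memory, in bytes
        cpu_total += proc.cpu_percent(interval=0.1)  # rough per-process CPU share

print(f"{SERVER_NAME}: ~{rss_total / 2**20:.0f} MiB resident, ~{cpu_total:.0f}% CPU")
```

Run it while the site is under typical load; peak numbers, not averages, are what you size the server for.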

Related

Effects of upgrading MongoDB server tiers

I have an AWS ec2 server that is running an application that is connected to a MongoDB atlas sharded cluster. Periodically, the application will slow down and I will receive alerts from MongoDB about high CPU steal %. I am looking to upgrade my MongoDB server tier and see the only difference in the options is more storage space and more RAM, but the number of vCPUs is the same. I'm wondering if anyone has any insight on whether this increased RAM will help with the CPU steal % alerts I am receiving and whether it will help speed up the app? Otherwise, am I better off upgrading my AWS server tier for more CPU that way?
Any help is appreciated! Thanks :)
I don't think more RAM will necessarily help if you're mostly CPU-bound. However, if you're using MongoDB Atlas, the higher tiers definitely do provide more vCPUs as you go up the scaling options.
You can also enable auto-scaling and set your minimum and maximum tiers to let the database scale as necessary: https://docs.atlas.mongodb.com/cluster-autoscaling/
However, be warned that Atlas scales out aggressively but scales in slowly; I think the scale-in only happens after 24 hours, so it can get costly.
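As a sanity check, it can also help to confirm whether your EC2 application host is itself suffering from steal before deciding what to upgrade; a quick sketch with psutil on Linux (the 5% threshold is just an illustrative assumption):

```python
import psutil

# Sample CPU time shares over one second; on Linux the result includes "steal",
# the time the hypervisor gave this VM's CPU to other tenants.
times = psutil.cpu_times_percent(interval=1)

steal = getattr(times, "steal", 0.0)  # field is not present on non-Linux platforms
print(f"steal: {steal:.1f}%  user: {times.user:.1f}%  system: {times.system:.1f}%")

if steal > 5.0:  # arbitrary illustrative threshold
    print("Noticeable steal on the app host too; a larger or dedicated EC2 type may help.")
```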

Can we ever have a 0 page-fault rate with an infinite, or an absurd, amount of RAM?

I have an assignment for my Operating Systems course. One of the questions asks me to explain why it is or is not possible to have a 0 page-fault rate. Can a real system have enough RAM so that it has no page faults at all?
I was thinking that with an infinite amount of RAM there would be no need for virtual memory, and thus no page faults. I came to this conclusion because page faults happen when a process requests a memory page that is in virtual memory but not in physical memory. With an infinite amount of RAM, all the memory a process needs would already be in physical memory, so there would be no need for paging.
Yes, you can. There are times when we do not tolerate page faults, when any page fault spells failure. For starters, interrupt handlers may not page fault because they cannot wait.
Besides that, sometimes the specification reads "must respond in 1/60th of a second" where the consequence of not responding is bad things happen. Depending on the severity of the consequences, we may go way out of our way to ensure page faults do not happen once initialized.
Yes, this means having enough RAM, but that alone will not suffice. There are system calls for locking pages into RAM so that they cannot ever be evicted because otherwise the OS would reclaim idle RAM in favor of disk cache. When we can't tolerate that behavior ...
Some embedded operating systems can't even page.
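On Linux, for example, the locking calls are mlock()/mlockall(); a minimal sketch invoking them through ctypes (flag values taken from <sys/mman.h>; the call needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK):

```python
import ctypes
import ctypes.util

# Flag values from <sys/mman.h> on Linux; other platforms may differ.
MCL_CURRENT = 1  # lock all pages currently mapped into the process
MCL_FUTURE = 2   # lock all pages mapped in the future

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    err = ctypes.get_errno()
    raise OSError(err, "mlockall failed (insufficient privilege or RLIMIT_MEMLOCK)")

# From here on, this process's mapped pages cannot be paged out to disk.
```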

Vesta and DigitalOcean graphs of RAM usage are not the same

My website's server is on DigitalOcean.
The Vesta graphs show that all of the RAM is used, while the DigitalOcean graphs show only 33% of RAM in use. Which one is correct?
Why is there this difference?
Vesta tends to display cached RAM as used RAM, whereas DigitalOcean usually shows cached RAM as free.
I think you should go with DigitalOcean's statistics, since cached RAM is reclaimable and is generally counted as free.
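You can see the same split on the droplet itself; a small sketch with psutil on Linux showing why "used + cached" can look full while "available" is still large:

```python
import psutil

vm = psutil.virtual_memory()
to_mib = lambda n: n / 2**20

print(f"total:     {to_mib(vm.total):8.0f} MiB")
print(f"used:      {to_mib(vm.used):8.0f} MiB  (panels that count cache as used look full)")
print(f"cached:    {to_mib(getattr(vm, 'cached', 0)):8.0f} MiB  (page cache, reclaimable on demand)")
print(f"available: {to_mib(vm.available):8.0f} MiB  (what is really left for applications)")
```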

Is Virtual memory really useful all the time?

Virtual memory is a good concept currently used by modern operating systems. But I was stuck answering a question and was not sure enough about it. Here is the question:
Suppose there are only a few applications running on a machine, such that the physical memory of the system is more than the memory required by all the applications. To support virtual memory, the OS needs to do a lot of work. So if the running applications all fit in the physical memory, is virtual memory really needed?
(Furthermore, the applications running together will always fit in RAM.)
Even when the memory usage of all applications fits in physical memory, virtual memory is still useful. VM can provide these features:
Privileged memory isolation (no app can touch the kernel or memory-mapped hardware devices)
Interprocess memory isolation (one app can't see another app's memory)
Static memory addresses (e.g. every app has main() at address 0x0800 0000)
Lazy memory (e.g. pages in the stack are allocated and set to zero when first accessed)
Redirected memory (e.g. memory-mapped files; see the sketch after this list)
Shared program code (if more than one instance of a program or library is running, its code only needs to be stored in memory once)
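As a concrete illustration of the redirected-memory point, here is a small mmap sketch (Linux/Unix assumed; the file name is just a placeholder). The file's contents appear directly in the process's address space, are paged in lazily on first access, and the physical pages can be shared between processes mapping the same file:

```python
import mmap
import os

PATH = "example.dat"  # placeholder file name

# Create a small file to map.
with open(PATH, "wb") as f:
    f.write(b"hello, virtual memory\n" * 1000)

fd = os.open(PATH, os.O_RDONLY)
try:
    view = mmap.mmap(fd, 0, prot=mmap.PROT_READ)  # map the whole file read-only
    print(bytes(view[:22]))  # the first access to this range triggers the page-in
    view.close()
finally:
    os.close(fd)
    os.remove(PATH)
```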
While not strictly needed in this scenario, virtual memory is about more than just providing "more" memory than is physically available (swapping). For example, it helps avoid memory fragmentation (from an application's point of view) and, depending on how dynamic/shared libraries are implemented, it can help avoid relocation (relocation is when the dynamic linker needs to adapt pointers in a library or executable that has just been loaded).
A few more points to consider:
Buggy apps that don't handle failures in the memory allocation code
Buggy apps that leak allocated memory
Virtual memory reduces the severity of these bugs.
The other replies list valid reasons why virtual memory is useful, but I would like to answer the question more directly: no, virtual memory is not needed in the situation you describe, and not using virtual memory can be the right trade-off in such situations.
Seymour Cray took the position that "Virtual memory leads to virtual performance," and most (all?) Cray vector machines lacked virtual memory. This usually leads to higher performance at the process level (no translations needed, processes are contiguous in RAM) but can lead to poorer resource usage at the system level (the OS cannot utilize RAM fully, since it gets fragmented at the process level).
So if a system is targeting maximum performance (as opposed to maximum resource utilization), skipping virtual memory can make sense.
When you experience the severe performance (and stability) problems often seen on modern Unix-based HPC cluster nodes when users oversubscribe RAM and the system starts to page to disk, there is a certain sympathy for the Cray model, where a process either starts and runs at maximum performance or it doesn't start at all.

Huge Mongo datasets. How much RAM do I need and how to not get ruined by paying for hosting?

So, I have what I call a huge Mongo database, about 30 GB (around 30 million documents). I tried to run mongod on a server shared with another application and it slowed everything down completely. So I have to look for a dedicated server, but I have no idea how much RAM I need.
I understand that I probably need enough RAM to hold all the indexes. But, if I'm correct, that would be about 13 GB of RAM, which makes the server very, very expensive (my app isn't making any money yet).
I tried looking into MongoHQ, but their cheapest dedicated plan is $600/month.
Any ideas? Is it really that expensive to host heavy mongo databases like that?
Build your own server and colocate it instead of renting someone else's. You get full control over the hardware and higher startup costs, but lower long-term costs. You are also liable for hardware malfunctions, so watch out for that.
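Whichever route you take, it is worth measuring the actual index sizes before committing to a RAM figure; a quick sketch with pymongo (the connection string and database name are placeholders):

```python
from pymongo import MongoClient

# Placeholder connection string and database name; adjust for your deployment.
client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

total_index_bytes = 0
for name in db.list_collection_names():
    stats = db.command("collStats", name)  # per-collection storage statistics
    size = stats.get("totalIndexSize", 0)
    total_index_bytes += size
    print(f"{name}: indexes ~ {size / 2**20:.1f} MiB")

print(f"All indexes together ~ {total_index_bytes / 2**30:.2f} GiB")
```

If the real total is well below your 13 GB estimate, a much cheaper instance may already keep the working set in memory.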