Why is the OS page size 4KB, and why does it stick to 4KB now that memory can be so large? - page-size

As far as I know, the page size in OS memory management is 4KB. That can be an advantage when memory is small.
But the memory in our machines is big enough now, so why is the page size still 4KB? What limits changing it to something like 1MB or bigger?

4KB is just the default page size supported by many architectures.
However, some architectures support switching to bigger page sizes.
For example, i386 supports switching to huge pages with a 2MB or 4MB page size, x86_64 supports 2MB huge pages, and some newer CPUs even support a 1GB page size (large pages).
Many filesystems use a block size that is exactly the page size or a small multiple of it (4KB-8KB or so). Also, many operating systems allocate memory only in whole pages, so allocating a 2MB page for every memory allocation request would waste a lot of memory.
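If you want to see both sizes on your own machine, here is a minimal Python sketch (assuming Linux, where the huge page size is reported in /proc/meminfo):

    import os

    # Base page size reported by the kernel -- typically 4096 bytes.
    print("Base page size:", os.sysconf("SC_PAGESIZE"), "bytes")

    # On Linux, the configured huge page size (e.g. "Hugepagesize: 2048 kB")
    # is listed in /proc/meminfo.
    with open("/proc/meminfo") as meminfo:
        for line in meminfo:
            if line.startswith("Hugepagesize:"):
                print(line.strip())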


Manipulation of processor speed without changing the processor

Is it possible to replace the RAM with higher-capacity RAM on a machine with a low-speed processor? Does the processing speed increase or decrease?
I want to replace the RAM in my machine with higher-capacity RAM so that I can try to manipulate the processor speed without changing the processor. Will it work?
Yes. You can use faster RAM, currently up to DDR4-4800. Using higher-capacity RAM would also be useful if running out of RAM is what's slowing you down. Using a high-speed solid-state drive (SSD) would also speed up tasks that require reading from or writing to storage, including booting the computer and running programs that use a lot of assets. You can also overclock both your CPU and RAM to make them faster, but this may void your warranty.
Additionally, you can use software. For example, you can use CCleaner to remove useless files which clutter your computer. You can also use it to disable unneeded scheduled tasks that your computer runs. If your computer has a spinning-disc hard drive, then you can try defragmenting it by using the built-in Windows defragger, or you can use the Defraggler program from the same people who make CCleaner. Of course, you can also try deleting programs or files you no longer need or use if it’s limited hard drive capacity that’s slowing you down.
If it’s web browsing that’s slow, you may want to consider using a faster browser, like Google Chrome or Firefox. You can also install browser extensions/add-ons like AdBlock and Ghostery to prevent unneeded things from being loaded on pages, making pages load faster.

Magento 2 website goes down every day and I need to restart the server

I have an e-commerce website on Magento 2.2.2 and it keeps going down almost every day. Whenever it goes down, users get a "site took too long to respond" error and it never loads. To get the website working again I have to restart the server, and then it works.
Total space on the server is 50GB, of which the whole website is around 18GB (11GB of media files plus vendor files, etc.). Here are the things I cannot figure out:
a.) The server shows that 33GB has been used, although it should show only 18GB. I have checked everywhere and I can't find what is consuming the additional 15GB of space. The complete HTML folder is only 18GB.
b.) When I checked the log files, they show the following:
WARNING: Memory size allocated for the temporary table is more than 20% of innodb_buffer_pool_size. Please update innodb_buffer_pool_size or decrease batch size value (which decreases memory usages for the temporary table). Current batch size: 100000; Allocated memory size: 280000000 bytes; InnoDB buffer pool size: 1073741824 bytes.
I have already set innodb_buffer_pool_size to 2GB, but this problem keeps coming back.
The server is an Amazon EC2 instance and Magento is in production mode. Will allocating 100GB instead of 50GB solve the problem?
I have since increased innodb_buffer_pool_size to 10GB and the logs no longer show the error, but the server still goes down every day. Since the server has only 4GB of RAM, can that be the main cause? Everyone is suggesting at least 8GB of RAM.
Try the things below.
Magento 2 has big log files and a caching system, so the files in your var folder may be what is growing.
Also check whether your site has more than 3000 products with large product images and whether you are storing all of them on the server itself.
If your site has that many products, it is better to use a CDN so that all images are served from a third party.
Next, set up Cloudflare to reduce downtime errors on the customer side. You can serve a static index page while the server is down, and you should write a script to restart the site automatically when it goes down.
On the server side, check the PHP memory limit; it is better to set it to 2G.
On the MySQL side, check whether there are sleeping queries. If they come from a custom extension, ask your developer to optimize the code.
For example, the code may be loading a whole collection just to fetch a single item.
You can use a tool like New Relic.
If everything looks fine on the developer side, try to optimize things on the server side: memory limits, killing long-running MySQL queries, and so on.
Magento is a big platform for the e-commerce sector, so it covers a lot of ground by default. It is better to remove unwanted modules from your live site, for example by disabling core modules you are not using.
For an average site, use 16GB of RAM.
Restart MySQL to make the change take effect.
Also, you need to set that buffer up to 20971520000, which is around 20GB.
Magento uses a lot of sessions and cache.
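As a side note on the warning quoted in the question, the 20% rule it mentions is plain arithmetic, so you can work out the smallest buffer pool that silences it at a given batch size. A rough Python sketch using the numbers from the log (the alternative, as the warning itself says, is to lower the batch size):

    # Numbers taken from the warning in the question's log.
    allocated = 280_000_000        # bytes reserved for the temporary table
    buffer_pool = 1_073_741_824    # innodb_buffer_pool_size in effect (1 GiB)

    threshold = 0.20 * buffer_pool         # the 20% limit the warning checks
    print(round(threshold / 1e6), "MB limit vs", round(allocated / 1e6), "MB allocated")
    # -> 215 MB limit vs 280 MB allocated, hence the warning.

    # Smallest innodb_buffer_pool_size that would satisfy the check:
    print(round(allocated / 0.20 / 2**30, 2), "GiB")   # ~1.3 GiB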

Implementing larger page tables

I have studied in my operating systems class that larger page tables can be implemented in memory management by using paging. When virtual memory is implemented this way, does memory access really become slow when the page tables are kept on a secondary storage device? I want to know why: how does keeping large page tables on a secondary storage device slow down memory access?
Even smaller page tables can be (and have been) implemented using paging. The architectural issue is how to get around the chicken-and-egg problem of page tables being in virtual memory while referring to physical memory. A number of techniques have been developed to deal with that.
Paged page tables only slow down memory access when a page table access causes a page fault. Once the page fault is serviced, subsequent references do not trigger a page fault (unless the table is paged out again). Paging the page tables does not slow the system down constantly.
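To make the "only the first reference pays" point concrete, here is a toy Python sketch (hypothetical structures, nothing like a real MMU) that counts the extra fault taken when the page-table page holding an entry is not resident:

    PAGE_SIZE = 4096
    ENTRIES_PER_TABLE_PAGE = 1024       # page-table entries that fit in one page

    resident_table_pages = set()        # page-table pages currently in RAM
    page_table = {}                     # virtual page number -> physical frame

    def translate(vaddr):
        """Return (physical address, extra faults caused by the page table itself)."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        table_page = vpn // ENTRIES_PER_TABLE_PAGE
        faults = 0
        if table_page not in resident_table_pages:
            faults += 1                          # page-table page must be read from disk
            resident_table_pages.add(table_page)
        frame = page_table.setdefault(vpn, len(page_table))
        return frame * PAGE_SIZE + offset, faults

    print(translate(0x12345)[1])   # 1: the page-table page had to be brought in
    print(translate(0x12FFF)[1])   # 0: same page-table page, already resident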

In MongoDB, how does the cache get warmed up?

I'm wondering how the cache (memory) gets warmed up. I understand that MongoDB uses memory-mapped files and the OS's virtual memory to swap pages in and out as needed. What I don't understand is how it gets warmed up on startup.
Upon startup, does mongod map all of the pages in the database to virtual memory, or is there some other mechanism whereby pages that are not yet mapped get mapped as queries are run against the database?
Similarly, is the size of the database limited by the amount of virtual memory available to the system? I understand that on a 64-bit system this is a lot. Is there another mechanism other than memory mapping for pages to be moved to and from disk?
Memory mapping means that there is a representation of all the on-disk files available, but only a portion of those files may be present in RAM. When a given page is needed (and it is not in RAM), it can be read from disk into RAM to be accessed.
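You can see the same on-demand behavior with an ordinary memory-mapped file. A minimal Python sketch (data.bin is a placeholder file, assumed to be larger than a couple of pages):

    import mmap

    # Map the file without reading it up front; pages are loaded only when touched.
    with open("data.bin", "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            header = mm[:4096]                  # faults in only the first page
            tail = mm[len(mm) - 4096:]          # faults in only the last page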
Regarding limitations, you can see them on the MongoDB limits page.
MongoDB does not do any specific "warming" of pages on startup, as it does not have any concept of which pages would be useful and which not.
If you wish to "warm" certain collections manually before using them, you should look at the touch command.
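A minimal sketch of that manual warm-up with pymongo might look like the following (the connection string, database, and collection names are placeholders; note that touch belongs to the MMAPv1 era this question is about and has been removed from newer MongoDB releases):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["mydb"]

    # Load both the documents and the indexes of the "orders" collection into RAM.
    db.command({"touch": "orders", "data": True, "index": True})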

Ideal Chunk Size for Writing Streamed Content to Disk on iPhone

I am writing an app that caches streaming content from the web on the iPhone. Right now, I'm saving data to disk as it arrives (in chunk sizes ranging from 1KB to about 60KB), but application response is somewhat sluggish (better than I was expecting, but still pretty bad).
My question is: does anyone have a rule of thumb for how frequent and large writes to the device memory should be to maximize performance?
I realize this seems application-specific, and I intend to do performance tuning for my scenario, but this applies generally to any app on the iPhone downloading a lot of data because there is probably a sweet spot (given sufficient incoming data availability) for write frequency/size.
These are the resources I've already read related to the issue, but no one addresses the specific issue of how much data to accumulate before dumping:
Best way to download large files from web to iPhone for writing to disk
The Joy in Discovering You are an Idiot
One year later, I finally got around to writing a test harness to test chunking performance of streaming downloads.
Here's the set-up: use an iPhone 4 to download a large file over a Wi-Fi connection* with an asynchronous NSURLConnection, and periodically flush downloaded data to disk (atomically) whenever the amount of data downloaded exceeds a threshold.
And the results: it doesn't make a difference. The performance difference between using 32kB and 512kB chunks (and several sizes in between) is smaller than the variance between runs using the same chunk size. The file download time, as expected, consists almost entirely of time spent waiting on the network.
*Average throughput was approximately 8Mbps.
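The policy being measured is easy to state in any language: buffer incoming chunks in memory and only touch the disk once a threshold is crossed. Here is a rough, platform-neutral Python sketch of that policy (the original harness was Objective-C with NSURLConnection; the function name and threshold below are illustrative):

    FLUSH_THRESHOLD = 512 * 1024            # bytes to accumulate before each write

    def save_stream(chunks, path):
        """Append incoming chunks to `path`, flushing in FLUSH_THRESHOLD-sized writes."""
        buffer = bytearray()
        with open(path, "ab") as f:
            for chunk in chunks:            # chunks arrive as bytes, e.g. from the network
                buffer.extend(chunk)
                if len(buffer) >= FLUSH_THRESHOLD:
                    f.write(buffer)
                    buffer.clear()
            if buffer:                      # flush whatever is left at the end
                f.write(buffer)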