I am seeing evictions when the memcached is only 40% full. How is that possible?
Check the slab sizes by running the memcached stats command. It looks like your slabs are not evenly populated, and that is causing evictions even when the cache as a whole is not full.
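For example, you can pull the per-slab counters over the text protocol; a minimal sketch in Python, assuming memcached is listening on localhost:11211:

```python
# Minimal sketch: query memcached's per-slab statistics over the text
# protocol and print item counts and eviction counters per slab class.
# Assumes memcached is listening on localhost:11211.
import socket

def memcached_stats(command, host="localhost", port=11211):
    with socket.create_connection((host, port)) as sock:
        sock.sendall((command + "\r\n").encode())
        data = b""
        while not data.endswith(b"END\r\n"):
            data += sock.recv(4096)
    return data.decode().splitlines()

# "stats items" reports per-slab counters such as items:<slab>:number
# and items:<slab>:evicted.
for line in memcached_stats("stats items"):
    if ":number " in line or ":evicted " in line:
        print(line)
```

Slab classes whose evicted counter keeps growing while other classes sit mostly empty are the ones to look at.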
I wrote up a long explanation to a similar question which applies equally here.
Memcached stores data in slabs made up of fixed-size chunks, with a different chunk size per slab class. If the chunks for a given slab class are already allocated, the least-recently-used algorithm runs on that slab and evicts data, even if there is free space in other slab classes.
Therefore a wide distribution of item sizes can be responsible for this problem.
Running multiple memcached instances and distributing data across them can reduce the issue.
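To illustrate the mechanism, here is a toy model (not memcached's real allocator; the 1.25 growth factor and 96-byte base chunk are assumptions) showing how items of different sizes land in different slab classes, so one class can be full and evicting while the rest of the cache is nearly empty:

```python
# Toy model of memcached-style slab classes (illustration only).
GROWTH = 1.25   # assumed chunk growth factor
BASE = 96       # assumed smallest chunk size in bytes

# Build chunk sizes for each slab class up to 8 KB.
classes = []
size = BASE
while size <= 8192:
    classes.append(int(size))
    size *= GROWTH

def slab_class(item_size):
    """Return the chunk size of the slab class an item of this size lands in."""
    for chunk in classes:
        if item_size <= chunk:
            return chunk
    raise ValueError("item too large for this toy model")

# A workload dominated by ~500-byte items keeps hammering one slab class;
# once the memory assigned to that class is used up it starts evicting,
# even though classes holding other sizes still have free chunks.
for item in (100, 500, 4000):
    print(item, "bytes ->", slab_class(item), "byte chunks")
```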
Was playing around with some larger data sets and noticed that VSCode only uses around 30% CPU and RAM.
Is there some way to increase it? Perhaps through some configuration? Thanks
You can increase/decrease the available RAM for VS Code in its Settings. Go to File -> Preferences -> Settings, search for files.maxMemoryForLargeFilesMB, and change the value to your desired maximum RAM.
Not sure which programming language you are using, but let's break your question into two parts:
How to use more CPU? (Can Increase Performance)
By using multiprocessing APIs, you can divide a large data set into smaller units to be processed by different CPU cores. It is like a master-slave architecture, where each subprocess executes on a separate core, so parallelism is bounded by the total number of CPU cores.
If the number of data units is greater than the number of CPU cores, the operating system will context-switch between them.
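For example, in Python this could look like the following sketch (assuming the work is CPU-bound and the data can be split into independent chunks):

```python
# Sketch of fanning CPU-bound work out across cores with multiprocessing.
from multiprocessing import Pool, cpu_count

def process_chunk(chunk):
    # Placeholder for the real per-chunk computation.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n = cpu_count()
    # Split the data into one chunk per core; the pool schedules each
    # chunk on a separate worker process.
    chunks = [data[i::n] for i in range(n)]
    with Pool(processes=n) as pool:
        results = pool.map(process_chunk, chunks)
    print(sum(results))
```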
How to use more RAM? (Can Degrade Performance)
Why do you need to increase RAM usage? That will in any case depend on the amount of data the program allocates.
You may plan to create multiple copies of the data so that each thread has its own snapshot and needs no mutex or lock, but that is generally not a good practice.
Finally:
CPU and RAM are used by the process that is executing, which depends on the program and its language/runtime, not on VS Code, which is just an editor.
I am trying to understand both paradigms of memory management; however, I fail to see the big picture and the difference between the two. Paging consists of bringing fixed-size pages from secondary storage into primary storage in order to do some task requested by a process. Segmentation consists of assigning each unit in a process its own address space, so they are allowed to grow. I don't quite see how they are related, and that's because there are still a lot of holes in my understanding. Can someone fill them in?
I think you have something confused. One problem is that the term "segment" has had multiple meanings.
Segmentation is a method of memory management. Memory is managed in segments that are of variable or fixed length, depending upon the processor. Segments originated on 16-bit processors as a means to access more than 64K of memory.
On the PDP-11, programmers used segments to map different memory into the 64K address space. At any given time a process could only access 64K of memory but the memory that made up that 64K could change.
The 8086 and its successors used segments with base registers. Each segment could address 64K (a limit that grew in later processors), but a process had 4 segment registers (more in later processors).
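In real mode the 8086 formed a 20-bit physical address by shifting the 16-bit segment value left by 4 bits and adding the 16-bit offset; a small illustration:

```python
# Real-mode 8086 address translation: segment * 16 + offset, wrapping at 1 MB.
def real_mode_address(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF

# Two different segment:offset pairs can name the same physical byte.
print(hex(real_mode_address(0x1000, 0x0010)))  # 0x10010
print(hex(real_mode_address(0x1001, 0x0000)))  # 0x10010
```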
Paging allows a process to have a larger address space than there is physical memory available.
The 8086's successors used the kludge of paging on top of segments. However, that bit of ugliness has finally gone away in 64-bit mode.
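Paging, by contrast, splits a virtual address into a page number and an offset and maps the page to a physical frame through a page table; a toy illustration assuming 4 KB pages (the page table contents here are made up):

```python
# Toy page-table lookup: only resident pages need physical frames.
PAGE_SIZE = 4096

page_table = {0: 7, 1: 3, 5: 12}   # virtual page -> physical frame (toy data)

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise KeyError("page fault: page %d not resident" % page)
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))   # virtual page 1 -> frame 3 -> 0x3ABC
```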
You got your answer right there: paging deals with fixed-size pages in storage, while segmentation deals with the units that make up a process. 'Segments' are like objects of the class 'Page'.
On Digitalocean I came up with this message when I want to add swap:
Although swap is generally recommended for systems utilizing traditional spinning hard drives, using swap with SSDs can cause issues with hardware degradation over time. Due to this consideration, we do not recommend enabling swap on DigitalOcean or any other provider that utilizes SSD storage. Doing so can impact the reliability of the underlying hardware for you and your neighbors. This guide is provided as reference for users who may have spinning disk systems elsewhere.
If you need to improve the performance of your server on DigitalOcean, we recommend upgrading your Droplet. This will lead to better results in general and will decrease the likelihood of contributing to hardware issues that can affect your service.
Why is that? I thought it was necessary for creating a stable server (not running into memory issues).
I believe here's your answer:
Early SSDs had a reputation for failing after fewer writes than HDDs. If the swap was used often, then the SSD may fail sooner. This might be why you heard it could be bad to use an SSD for swap.
Modern SSDs don't have this issue, and they should not fail any faster than a comparable HDD. Placing swap on an SSD will result in better performance than placing it on an HDD due to its faster speeds.
I believe this is referring to the fact that SSDs have a relatively limited lifetime, measured in the number of times data can be written to each memory location. Although that number has gotten big enough that using SSDs as storage drives should not be a concern anymore, swap space, as a backup for RAM, can potentially be written to quite frequently, thus reducing the overall life of the SSD.
SSD endurance is measured in so-called DWPD units, which stands for Drive (full) Writes Per Day. DWPD requirements are very different for the mobile, client, and enterprise storage market segments. SSD vendors usually state the warranty as, for example, 0.8 DWPD / 3 years or 3.0 DWPD / 5 years. The first example means that writing 80% of the drive capacity every single day will result in a 3-year lifetime. Technically you can kill a 480GB drive (say, with a 1 DWPD / 3 years warranty) within about 12 days by performing non-stop writes at 500 MB/s.
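The arithmetic behind that 12-day figure, as a quick sketch:

```python
# Back-of-the-envelope endurance math for the 480 GB / 1 DWPD / 3 years example.
capacity_gb   = 480
dwpd          = 1.0
warranty_days = 3 * 365

total_rated_writes_gb  = capacity_gb * dwpd * warranty_days   # ~525,600 GB of rated writes
writes_per_day_gb      = 0.5 * 86400                          # 500 MB/s sustained ~ 43,200 GB/day

print(total_rated_writes_gb / writes_per_day_gb)              # ~12 days to exhaust the rating
```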
SSDs show much higher throughput than HDDs, but at the same time a much lower endurance level. This is partially due to the physical structure of the media and its mapping. For example, when writing 1GB of user data to an HDD, the physical media internally receives around 10% more data (metadata, error-protection data, etc.). The ratio between the amount of host data and the amount of data written internally is called the Write Amplification Factor (WAF). In comparison, an SSD may need to write 4 times more data than it received from the host. Pure random access is the worst scenario: writing 1GB of host data can result in writing 4GB of data to the internal flash media. With purely sequential write access, the WAF for SSDs is close to 1.0, as for HDDs.
Enabling system swap and using it intensively (probably due to a DRAM shortage) will generate more random access to the SSD, so endurance will degrade more quickly than with swap disabled. Unless you are running an enterprise system with non-stop IO traffic to the SSD, I would not expect enabling swap to affect SSD endurance much. You can always monitor the SMART health parameter called SSD Life Left; watching how it changes over time with and without swap enabled will help you make a decision.
I've just upgraded my Heroku postgres database from the Kappa plan (800MB RAM, postgres 9.1) to the Ronin plan (1.7GB RAM, postgres 9.2), but performance has degraded.
Following the guide here, I checked and the cache hit rate is even lower than it was with our Kappa database (now ~57%, previously ~69%). Our app design should be decently ok, as we've seen a cache hit rate of ~99% before.
The recommendation is that the data set should be able to fit into memory, which shouldn't be a problem now - our data size is 1.27GB (at least most of it should fit).
Is the low cache hit rate due to the data size, or is there something else I can look into? Or is it simply a case of the database cache not fully warmed up? (it's been almost 2 days)
If you have plenty of memory and are not running much else on the db server, one thing you may want to change is shared_buffers. The shared buffers cache frequently used data so that throughput is maximized when not all of the database fits in memory.
Unfortunately this cache does not perform as well as the OS cache. If your data will easily fit in memory, make sure that effective_cache_size is high enough, and then try reducing shared_buffers.
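A quick way to check the hit ratio and both settings, sketched here with psycopg2 (the connection string is a placeholder):

```python
# Sketch: check the buffer cache hit ratio and the two settings discussed above.
import psycopg2

conn = psycopg2.connect("postgres://user:pass@host:5432/dbname")  # placeholder DSN
cur = conn.cursor()

# Hit ratio across user tables; close to 1.0 means reads are served from cache.
cur.execute("""
    SELECT sum(heap_blks_hit)::float
           / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0)
    FROM pg_statio_user_tables
""")
print("cache hit ratio:", cur.fetchone()[0])

for setting in ("shared_buffers", "effective_cache_size"):
    cur.execute("SHOW %s" % setting)
    print(setting, "=", cur.fetchone()[0])

conn.close()
```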
Note that this is not a magic bullet. The appropriate size of shared_buffers depends on how much data you have, how much space it takes up, your types of queries, how much memory is going towards things like sorts and the like. You can expect to play around with this from time to time to find the sweet spot for your current setup and database.
Where are the boundaries of SSTable compaction (major and minor), and when does it become ineffective?
If a major compaction merges a couple of 500G SSTables and my final SSTable will be over 1TB, will it be effective for one node to "rewrite" this big dataset?
This can take about a day on an HDD and needs double the disk space, so are there best practices for this?
1 TB is a reasonable limit on how much data a single node can handle, but in reality, a node is not at all limited by the size of the data, only the rate of operations.
A node might have only 80 GB of data on it, but if you absolutely pound it with random reads and it doesn't have a lot of RAM, it might not even be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if you rarely read from it, or you have a small portion of your data that is hot (so that it can be effectively cached), it will do just fine.
Compaction certainly is an issue to be aware of when you have a large amount of data on one node, but there are a few things to keep in mind:
First, the "biggest" compactions, ones where the result is a single huge SSTable, happen rarely, even more so as the amount of data on your node increases. (The number of minor compactions that must occur before a top-level compaction occurs grows exponentially by the number of top-level compactions you've already performed.)
Second, your node will still be able to handle requests, reads will just be slower.
Third, if your replication factor is above 1 and you aren't reading at consistency level ALL, other replicas will be able to respond quickly to read requests, so you shouldn't see a large difference in latency from a client perspective.
Last, there are plans to improve the compaction strategy that may help with some larger data sets.
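As mentioned above, here is a toy model of size-tiered compaction (not Cassandra's actual implementation; the threshold of 4 SSTables per tier is an assumption based on the default size-tiered settings) showing why the biggest compactions become rare as data grows:

```python
# Toy model: every time 4 SSTables of the same tier exist, they are compacted
# into one SSTable of the next tier. Count compactions per tier.
from collections import Counter

THRESHOLD = 4          # assumed min_threshold for a minor compaction
tiers = Counter()      # tier -> number of SSTables currently at that tier
compactions = Counter()

for flush in range(4096):          # each memtable flush writes one tier-0 SSTable
    tiers[0] += 1
    tier = 0
    while tiers[tier] == THRESHOLD:
        tiers[tier] = 0
        tiers[tier + 1] += 1
        compactions[tier] += 1
        tier += 1

for tier, count in sorted(compactions.items()):
    print("tier", tier, "compactions:", count)
# Each successive tier is compacted 4x less often, so the compaction that
# produces one huge SSTable is correspondingly rare.
```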