I'm trying to understand how memcached's memory model works.
If all items in an assigned page have expired or been deleted, can that page be marked as unassigned (and later migrate to another slab class)? That is, say I fill up my memcached instance with lots of 1 kB objects with an expiry of 24 hours. If, 48 hours later, I write lots of 512 kB items (a different slab class), will the 1 kB slab class pages slowly get unassigned?
If this is the case, the best practice would be to always set an expiry time for all objects.
I ran some tests on 1.4.20 (starting memcached without any flags) and can confirm that within the ~5 minutes I ran the tests, pages did not change slab classes.
That said, there is a feature called the automover: a background thread that can automatically move pages to another slab class as they fall out of use.
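To see why the 1 kB and 512 kB items can never share pages without reassignment, here is a small Python sketch of slab-class sizing. The minimum chunk size and growth factor below are illustrative assumptions, not memcached's exact defaults (those depend on the version and the `-n`/`-f` startup options).

```python
# Illustrative sketch of memcached-style slab classes. The numbers
# (min_chunk, growth_factor) are assumptions for illustration only;
# memcached's real defaults depend on version and startup options.

def build_slab_classes(min_chunk=96, growth_factor=1.25, max_chunk=1024 * 1024):
    """Generate increasing chunk sizes, one per slab class."""
    sizes = []
    size = min_chunk
    while size < max_chunk:
        sizes.append(size)
        size = int(size * growth_factor)
    sizes.append(max_chunk)
    return sizes

def slab_class_for(item_size, classes):
    """An item is stored in the smallest class whose chunk size fits it."""
    for class_id, chunk in enumerate(classes):
        if item_size <= chunk:
            return class_id, chunk
    raise ValueError("item larger than max chunk size")

classes = build_slab_classes()
small_id, small_chunk = slab_class_for(1024, classes)        # ~1 kB item
large_id, large_chunk = slab_class_for(512 * 1024, classes)  # ~512 kB item
# The two item sizes land in different slab classes, so pages assigned
# to the 1 kB class cannot hold 512 kB items unless they are reassigned.
```

The point of the sketch: a page is carved into fixed-size chunks for exactly one class, which is why freeing all the items in it does not by itself make the page usable by another class.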
What happens if staleTime is longer than cacheTime, for example a staleTime of 10 minutes and a cacheTime of 5 minutes? I thought that even after cacheTime has elapsed at 6 minutes, staleTime is still valid, so no call is made to the server to get the data.
However, I found a post that says that if cacheTime has elapsed, the data (which is still fresh) will be deleted. If staleTime is longer than cacheTime, does staleTime not work the way I expected?
staleTime and cacheTime are two completely different times that are not related at all. They serve two different purposes and can be two different times - each one can be smaller or larger than the other, it doesn't matter.
I'll copy the relevant section from my blog post here:
StaleTime: The duration until a query transitions from fresh to stale. As long as the query is fresh, data will always be read from the cache only - no network request will happen! If the query is stale (which, by default, it is instantly), you will still get data from the cache, but a background refetch can happen under certain conditions.
The first thing to notice here is that even if staleTime elapses, there will not be a request immediately. staleTime just says: Hey, the data that we have in the cache is no longer fresh, so we might refresh it in the background. But for a background refresh to happen, it needs a "trigger": A component mount, a window focus, a network reconnect.
CacheTime: The duration until inactive queries will be removed from the cache. This defaults to 5 minutes. Queries transition to the inactive state as soon as there are no observers registered, so when all components which use that query have unmounted.
cacheTime does nothing as long as you use a query. You can open a webpage and work on that page forever - React Query will not remove that data, because that would mean the screen would transition to a loading state. That would be bad. cacheTime is about garbage collection - removing data that is no longer in use. This is why we'll rename this option to gcTime in the next major version.
From the comments:
It doesn't make sense to make the staleTime longer than the cacheTime.
Yes, it does, if you want. You can even have staleTime: Infinity, cacheTime: 0. This basically means: Never refetch data if you have cached data, but remove data from the cache as soon as I don't use it anymore.
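Because the two durations are independent, their interplay can be modeled as a tiny state machine. The sketch below is a Python toy model of the semantics described above - it is not React Query's real API, and the class and method names are invented for illustration; times are abstract ticks.

```python
import math

class ToyQuery:
    """Toy model of the staleTime / cacheTime (gcTime) semantics.
    An illustration of the concepts only, not React Query's real API."""

    def __init__(self, fetcher, stale_time, gc_time):
        self.fetcher = fetcher
        self.stale_time = stale_time   # fresh -> stale after this long
        self.gc_time = gc_time         # inactive data removed after this long
        self.data = None
        self.fetched_at = None
        self.observers = 0
        self.inactive_since = None

    def mount(self, now):
        """A component mounts: a 'trigger' that may cause a background refetch."""
        self.observers += 1
        self.inactive_since = None
        if self.data is None:                           # nothing cached: fetch
            self.data, self.fetched_at = self.fetcher(), now
        elif now - self.fetched_at >= self.stale_time:  # stale: refetch
            self.data, self.fetched_at = self.fetcher(), now
        return self.data                                # cached data either way

    def unmount(self, now):
        self.observers -= 1
        if self.observers == 0:
            self.inactive_since = now  # query is inactive: gc_time clock starts

    def gc(self, now):
        """Remove inactive data once gc_time has elapsed (garbage collection)."""
        if self.observers == 0 and self.inactive_since is not None \
                and now - self.inactive_since >= self.gc_time:
            self.data = self.fetched_at = None

# staleTime: Infinity, cacheTime: 0 - never refetch while cached,
# but drop the data as soon as nobody uses it anymore.
calls = []
q = ToyQuery(lambda: calls.append(1) or "payload", math.inf, 0)
q.mount(now=0)     # first mount fetches
q.mount(now=100)   # data is never stale: no refetch
q.unmount(now=101)
q.unmount(now=101)
q.gc(now=101)      # gc_time=0: data removed immediately once inactive
```

Note how `stale_time` only ever matters inside `mount` (a trigger), while `gc_time` only ever matters inside `gc` - the two never interact, which is the point being made above.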
I am studying for my final OS exam and am currently stuck on a question:
Assume a system uses demand paging as its fetch policy.
The resident size is 2 pages.
Replacement policy is Least Recently Used (LRU).
Initial free frame list: 10, 20, 30, 40, 50
Assume a program runs with the following sequence of page references:
3(read), 2(read), 3(write), 1(write), 1(write), 0(write), 3(read)
I am asked to show the final contents of the free frame list, modified list, and the page table.
Here is the model answer.
This is what I managed to do.
The final resident set is correct, but the free frame list and the modified list are not. I just cannot see how the modified list does not contain page 0 (i.e., it was written back to memory), while page 1 was not written back even though it was referenced before it.
Any help would be appreciated.
Why do you recycle 3(10) to the free list in step 4? It was the most recently used (and is dirty), so you would want to keep it and get rid of 2(20) instead. That appears to be what the model answer is based on.
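To check the eviction order mechanically, here is a small Python simulation of the exercise under one common set of assumptions: faults take frames from the head of the free list, clean victims return their frame to the free list, and dirty victims go to the modified list. Some courses instead allow pages to be reclaimed from the modified list, which would change the final frame assignments, so treat this as a sketch of one convention rather than the model answer.

```python
def simulate_lru(refs, free, resident_limit=2):
    """Demand paging with LRU replacement and a dirty/modified list."""
    resident = {}      # page -> frame
    dirty = set()
    lru = []           # least recently used page first
    modified = []      # (page, frame) pairs of evicted dirty pages
    for page, op in refs:
        if page in resident:
            lru.remove(page)               # hit: just update recency below
        else:
            if len(resident) == resident_limit:
                victim = lru.pop(0)        # evict the least recently used page
                frame = resident.pop(victim)
                if victim in dirty:
                    dirty.discard(victim)
                    modified.append((victim, frame))  # dirty: needs write-back
                else:
                    free.append(frame)     # clean: frame is reusable at once
            resident[page] = free.pop(0)   # fault: take head of free list
        lru.append(page)
        if op == "w":
            dirty.add(page)
    return resident, free, modified

refs = [(3, "r"), (2, "r"), (3, "w"), (1, "w"),
        (1, "w"), (0, "w"), (3, "w" if False else "r")]
refs[-1] = (3, "r")
resident, free, modified = simulate_lru(refs, [10, 20, 30, 40, 50])
```

Under these assumptions, the fourth reference (1 write) evicts page 2 - the least recently used page at that point - not page 3, which was touched one step earlier; page 3 is only evicted later, dirty, at the reference to page 0.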
I have looked through the Logbook and DataLogger APIs and there is no way of telling that the data logger is almost full. I found the API call with the path /Mem/Logbook/IsFull. If I have understood it correctly, this will notify me when the log is full and the DataLogger has stopped logging.
So my question is: is there a way to know how much of the memory is currently in use, so that I can clean up old data (I need to do some calculations on it before it is deleted) before the EEPROM is full and the DataLogger stops recording?
The data memory of Logbook/DataLogger is conceptually a ring buffer. That's why /Mem/DataLogger/IsFull always returns false on the Movesense sensor (Suunto uses the same API in its watches, where the situation is different). Therefore the sensor never stops recording; it just replaces the oldest data with new data.
Here are a couple of strategies that you could use:
Plan A:
Create a new log (POST /Mem/Logbook/Entries => returns the logId for it)
Start Logging (PUT /Mem/DataLogger/State: LOGGING)
Every once in a while create a new log (POST /Mem/Logbook/Entries). Note: This can be done while logging is ongoing!
When you want to know the status of the log, read /Mem/Logbook/Entries. When the oldest entry has been completely overwritten, it disappears from the list. Note: GET /Entries is a heavy operation, so you may not want to do it while the logger is running!
Plan B:
Every now and then start a new log and process the previous one. That way the log never overwrites something you have not processed.
Plan C:
(Note: This is low level and may break with some future Movesense sensor release)
GET the first 256 bytes of EEPROM chip #0 using the /Component/EEPROM API. This area contains a number of ExtflashChunkStorage::StorageHeader structs (see: ExtflashChunkStorage.h); the rest is filled with 0xFF. The last StorageHeader before the 0xFF is the current one. From that StorageHeader one can see where the ring buffer starts (firstChunk) and where the next data will be written (cursor). The difference of the two is the used memory. (Note: since it is a ring buffer, the difference can be negative. In that case, add "Size of Logbook area - 256" to it.)
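The wraparound arithmetic in Plan C can be sketched as follows. The field names firstChunk and cursor come from the StorageHeader described above; `logbook_area_size` and the example offsets are hypothetical numbers for illustration, not values from a real sensor.

```python
def used_bytes(first_chunk, cursor, logbook_area_size, header_area=256):
    """Used memory in the ring buffer: the distance from firstChunk to
    cursor, wrapping around the data area (Logbook area minus header area)."""
    used = cursor - first_chunk
    if used < 0:                       # cursor has wrapped past the end
        used += logbook_area_size - header_area
    return used

# Hypothetical offsets for illustration:
no_wrap = used_bytes(first_chunk=1024, cursor=5120, logbook_area_size=65536)
wrapped = used_bytes(first_chunk=60000, cursor=2048, logbook_area_size=65536)
```

In the non-wrapped case the answer is simply `cursor - firstChunk`; in the wrapped case the negative difference is corrected by the size of the data area, as the note above describes.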
Full disclosure: I work for Movesense team
I know how to use page references to determine page faults using FIFO. I am confused about how to work out the FIFO table.
A process consists of six pages, 0 1 2 3 4 5. Page 0 is automatically loaded when we start running the program. Other pages are loaded (as they are referenced) by the page fault mechanism. This process is allowed only 3 pages in memory at any one time.
How do I get a sequence of page references that this process could make (starting with 0, 2, 4)? Asterisks represent page faults.
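One way to check any candidate sequence is to simulate FIFO mechanically. The Python sketch below follows the stated rules: page 0 is preloaded, at most 3 frames are resident, and on a fault the page that was loaded earliest is evicted; faults are marked with an asterisk as in the exercise. The particular reference string at the end is my own example, not the model answer.

```python
from collections import deque

def simulate_fifo(refs, num_frames=3, preloaded=(0,)):
    """Annotate each reference with '*' on a page fault (FIFO replacement)."""
    frames = deque(preloaded)          # oldest loaded page at the left
    annotated = []
    for page in refs:
        if page in frames:
            annotated.append(str(page))            # hit: no fault
        else:
            if len(frames) == num_frames:
                frames.popleft()                   # evict the oldest page
            frames.append(page)
            annotated.append(f"{page}*")           # fault
    return annotated, list(frames)

# Example sequence starting with 0, 2, 4 as in the question:
annotated, final_frames = simulate_fifo([0, 2, 4, 2, 1, 0])
```

Here page 0 is already resident, so the first reference is a hit; 2 and 4 fault into the two free frames; the reference to 1 evicts 0 (the oldest), and the final reference to 0 then faults again and evicts 2. This "evict the oldest even if it was just used" behavior is exactly what distinguishes FIFO from LRU.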
I have a semi-slow memory leak in a Talend joblet. I am using a tHashOutput/tHashInput pair in the middle of a joblet because I need to find out how many rows are in the flow. Therefore, I push them into a tHashOutput and later on reference tHashOutput_1_NB_LINE from the globalMap.
I have what I think are the proper options:
allRows - "append" is FALSE
tHashinput_1 - "Clear after reading" is TRUE
Yet, when I run this for a period of time and analyze it with the Eclipse Memory Analyzer, I see objects building up over time. This is what I get after 12 hours:
This usage (64 MB/12 hours) increases steadily and is unrelated to what the job is doing (i.e. actively pumping data or just idling; this code is invoked while idling as well). If I look inside the memory references in MAT, I can see strings that point me to this place in the code, like
tHashFile_DAAgentProductAccountCDC_delete_BPpuaT_jsonToDataPump_1_tHashOutput_2
(jsonToDataPump being the name of the joblet). Am I doing something wrong in using these hash components?
I believe you should tune your garbage collector to run more frequently, so that it takes care of unused objects in the application sooner.