Windows Phone Mango - Memory constraint (6 MB) issue - scheduled-tasks

As per the documentation, a background task cannot use more than 6 MB of memory. I ran a background task without any code, as follows:
protected override void OnInvoke(ScheduledTask task)
{
    Debug.WriteLine("Available Memory: " + (DeviceStatus.ApplicationMemoryUsageLimit - DeviceStatus.ApplicationCurrentMemoryUsage).ToString());
    Debug.WriteLine("Peak Memory: " + DeviceStatus.ApplicationPeakMemoryUsage.ToString());
    NotifyComplete();
}
The code does not contain any logic; it just writes the memory figures to the output window.
Following is the output of the above:
Available Memory: 1863680
Peak Memory: 4435968
What I am wondering is: without writing any code or allocating any objects, how are 4435968 bytes of memory already in use? And if 4435968 bytes are consumed before I write a single line, what will I be able to do with the remaining 1863680 bytes?

You should consider that the background agent is a single piece of code that is 100% separate from the rest of your application.
So when you're not testing it in debug mode, where the application itself is actually running, it won't be using ~4 MB of memory.
You clearly worry too much. If you're even remotely near hitting the memory limit for a PeriodicAgent, then I strongly doubt you're doing something that's suitable for a background task.

Related

Can't get max RAM size - STM32 with RTOS

I'm using an STM32F103R8T6 and I'm currently setting the max heap size for the RTOS.
When I try setting 12000:
#define configTOTAL_HEAP_SIZE ((size_t)12000)
I get a compilation error:
region `RAM' overflowed by 780 bytes Project-STM32 C/C++ Problem
So what's the max I can use?
Look in the linker (.ld) file. You'll see a section defining RAM. That will tell you how much RAM you have, assuming the linker file was properly generated.
The error message you've pasted indicates that the linker went 780 bytes past the end of the available RAM area. In your case (STM32F103R8T6), it tried to place 21260 bytes (20KB + 780) into RAM, which is defined to only fit 20KB. If you decrease configTOTAL_HEAP_SIZE by the amount reported by the linker, it'll likely link successfully. There will, however, be 0 remaining space for the regular / non-RTOS heap, so no malloc or new will succeed, in case any part of your code wants to use them.
You can determine exactly what your linker puts into RAM by analyzing your *.map file (sidenote: the map file is created only if your program links successfully, so you need to at least get it to that state). When you open it, search for 20000000 (the start of your RAM region) and there you should see exactly what gets put there, including the size of each chunk.
Unless you did something out-of-ordinary to your project (which I think is safe to assume you didn't as you mention using generated project), your RAM area during linking will need to at least fit the following sections:
.data segment where things like global variables initialized by value live
.bss segment, which is similar to the one above except that its values are zero-initialized. This is where the byte array of size configTOTAL_HEAP_SIZE that the RTOS uses as its own heap will eventually be put
Stack (don't confuse with RTOS stack sizes, this one is totally separate) - stack used outside of RTOS tasks. This has a constant size - consult your sections.ld file to find the value.
Heap segment that has a size calculated dynamically by the linker and which is equal to total size of RAM minus size of all other sections. The bigger you make your other segments, the smaller your regular heap will be.
Having said that, apart from going through the *.map file to determine what else other than the RTOS heap occupies your RAM, I'd also think twice about why you'd need 12KB (out of 20KB total) assigned only to RTOS heap. Things like do you need so many tasks, do they need such large stacks, do you need so many/so large queues/mutexes/semaphores.

Release memory after del in Jupyter notebook

I have deleted some variables in a Jupyter notebook using del list_of_df, but I realize the contents still occupy memory. So I tried %reset list_of_df, but the previous variable names are already gone... Is there nothing I can do but restart the kernel to reclaim the memory? Thanks.
Further:
More generally, I might have lost track of what I have deleted in a huge Jupyter notebook. Is it possible to check what has been deleted but still occupies memory, and free it?
Python isn't like C (for example) in which you have to manually free memory. Instead, all memory allocation and deallocation tasks are handled automatically in the background by a garbage collection (GC) routine. GC uses lazy evaluation, which means that the memory probably won't be freed right away, but will instead be freed only when it "needs" to be (in the ideal case, anyway).
You shouldn't use this in production code, but if you really want to, after you del your list you can force GC to run using the gc module:
import gc
gc.collect()
It might not actually work/deallocate the memory, though, for many different reasons. In general, it's better to just let Python manage memory automatically and not interfere.
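One common case where forcing a collection does make a visible difference is reference cycles, which plain reference counting cannot reclaim on its own. A small sketch (the `Node` class is just for illustration):

```python
import gc

class Node:
    """Tiny object that can point at another Node."""
    def __init__(self):
        self.ref = None

a, b = Node(), Node()
a.ref, b.ref = b, a   # create a reference cycle
del a, b              # the names are gone, but the cycle keeps both objects alive

collected = gc.collect()   # the cycle detector finds and frees them
assert collected >= 2      # at least the two Node objects were reclaimed
```

Even then, freed objects often go back to Python's internal allocator pools rather than to the operating system, so the process's reported memory usage may not shrink.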

Memory usage growing seems to be linked to Metal objects

I am currently building an app that uses Metal to efficiently render triangles and evaluate a fitness function on textures, I noticed that the memory usage of my Metal app is growing and I can't really understand why.
First of all I am surprised to see that in debug mode, according to Xcode debug panel, memory usage grows really slowly (about 20 MB after 200 generated images), whereas it grows way faster in release (about 100 MB after 200 generated images).
I don't store the generated images (at least not intentionally... but maybe there is some leak I am unaware of).
I am trying to understand where the leak (if it is one) comes from, but I don't really know where to start. I took a GPU frame capture to see the objects used by Metal, and it seems suspicious to me:
Looks like there are thousands of objects (the list is way longer than what you can see on the left panel).
Each time I draw an image, there is a moment when I call this code:
trianglesVerticesCoordiantes = device.makeBuffer(bytes: &positions, length: bufferSize , options: MTLResourceOptions.storageModeManaged)
triangleVerticiesColors = device.makeBuffer(bytes: &colors, length: bufferSize, options: MTLResourceOptions.storageModeManaged)
I will definitely make this a one-time allocation and then simply copy data into the buffer when needed, but could it be causing the memory leak, or not at all?
EDIT with screenshot of Instruments:
EDIT #2: Tons of command encoder objects present when using the Inspector:
EDIT #3: Here is what seems to be the most suspect memory graph when analysed with the Xcode visual debugger:
And some detail :
I don't really know how to interpret this...
Thank you.

Stack size of secondary threads, significant differences between DEBUG and RELEASE versions

In my iPhone application (XCode 3.2.4, iOS3.1.3), if I run the application in RELEASE mode, all is fine, but in DEBUG mode, the application crashes with an EXC_BAD_ACCESS exception.
The application does some complex calculations. All the main code is contained in several C++ static libraries and the UIApplication only creates an object from one of these libraries and calls a method of this object.
If I put into a secondary thread the code which calls the complex calculations, I still have the same behavior: EXC_BAD_ACCESS exception in DEBUG mode and no problem in RELEASE mode.
Then I looked into the thread stack size. By default, iOS sets a stack size of 512 KB for secondary threads and 1024 KB for the main thread.
I looked for the minimal thread stack size required to run my application correctly.
I've found the following result:
40 Kbytes for the RELEASE version.
1168 Kbytes for the DEBUG version.
The value of 1168 KB in the DEBUG version explains why the application crashes in the main thread (the default stack size for the main thread is 1024 KB).
I really don't understand why the required thread stack size is so different between the RELEASE and DEBUG versions of my application (40 KB vs 1168 KB!). I would appreciate any help understanding this issue.
Thank you.
Marc
It is not unusual for debug versions of code and libraries to contain extra self-tests, extra local variables and verification. Perhaps these are increasing the needs of your code.
In particular, it is relatively easy to define some buffer as a local variable and take large amounts of stack. You might find something like this in one or more places taking up stack:
#ifdef _DEBUG
char testBuffer[bufferSize];
#endif
If bufferSize was defined as 10K, that alone would use up a quarter of your entire 40K stack.
Or perhaps a debug-only function uses a large amount of stack.
It may also be that your debug build settings enable some of Apple's diagnostic tools. Things like MallocStackLogging, GuardMalloc and NSZombieEnabled will require more memory.

The stack size used in kernel development

I'm developing an operating system, and rather than programming the kernel, I'm designing the kernel. This operating system is targeted at the x86 architecture, and my target is modern computers. The estimated amount of required RAM is 256 MB or more.
What is a good size to make the stack for each thread run on the system? Should I try to design the system in such a way that the stack can be extended automatically if the maximum length is reached?
I think, if I remember correctly, that a page in RAM is 4k or 4096 bytes, and that just doesn't seem like a lot to me. I can definitely see times, especially when using lots of recursion, that I would want to have more than 1000 integers in RAM at once. Now, the real solution would be for the program to use malloc and manage its own memory resources, but really I would like to know user opinions on this.
Is 4k big enough for a stack with modern computer programs? Should the stack be bigger than that? Should the stack be auto-expanding to accommodate any types of sizes? I'm interested in this both from a practical developer's standpoint and a security standpoint.
Is 4k too big for a stack? Considering normal program execution, especially from the point of view of classes in C++, I notice that good source code tends to malloc/new the data it needs when classes are created, to minimize the data being thrown around in a function call.
What I haven't even gotten into is the size of the processor's cache memory. Ideally, I think the stack would reside in the cache to speed things up and I'm not sure if I need to achieve this, or if the processor can handle it for me. I was just planning on using regular boring old RAM for testing purposes. I can't decide. What are the options?
Stack size depends on what your threads are doing. My advice:
make the stack size a parameter at thread creation time (different threads will do different things, and hence will need different stack sizes)
provide a reasonable default for those who don't want to be bothered with specifying a stack size (4K appeals to the control freak in me, as it will cause the stack-profligate to, er, get the signal pretty quickly)
consider how you will detect and deal with stack overflow. Detection can be tricky. You can put guard pages--empty--at the ends of your stack, and that will generally work. But you are relying on the behavior of the Bad Thread not to leap over that moat and start polluting what lays beyond. Generally that won't happen...but then, that's what makes the really tough bugs tough. An airtight mechanism involves hacking your compiler to generate stack checking code. As for dealing with a stack overflow, you will need a dedicated stack somewhere else on which the offending thread (or its guardian angel, whoever you decide that is--you're the OS designer, after all) will run.
I would strongly recommend marking the ends of your stack with a distinctive pattern, so that when your threads run over the ends (and they always do), you can at least go in post-mortem and see that something did in fact run off its stack. A page of 0xDEADBEEF or something like that is handy.
By the way, x86 page sizes are generally 4k, but they do not have to be. You can go with a 64k size or even larger. The usual reason for larger pages is to avoid TLB misses. Again, I would make it a kernel configuration or run-time parameter.
Search for KERNEL_STACK_SIZE in the Linux kernel source code and you will find that it is very much architecture dependent - PAGE_SIZE, or 2*PAGE_SIZE, etc. (below are just some results - many intermediate results are omitted).
./arch/cris/include/asm/processor.h:
#define KERNEL_STACK_SIZE PAGE_SIZE
./arch/ia64/include/asm/ptrace.h:
# define KERNEL_STACK_SIZE_ORDER 3
# define KERNEL_STACK_SIZE_ORDER 2
# define KERNEL_STACK_SIZE_ORDER 1
# define KERNEL_STACK_SIZE_ORDER 0
#define IA64_STK_OFFSET ((1 << KERNEL_STACK_SIZE_ORDER)*PAGE_SIZE)
#define KERNEL_STACK_SIZE IA64_STK_OFFSET
./arch/ia64/include/asm/mca.h:
u64 mca_stack[KERNEL_STACK_SIZE/8];
u64 init_stack[KERNEL_STACK_SIZE/8];
./arch/ia64/include/asm/thread_info.h:
#define THREAD_SIZE KERNEL_STACK_SIZE
./arch/ia64/include/asm/mca_asm.h:
#define MCA_PT_REGS_OFFSET ALIGN16(KERNEL_STACK_SIZE-IA64_PT_REGS_SIZE)
./arch/parisc/include/asm/processor.h:
#define KERNEL_STACK_SIZE (4*PAGE_SIZE)
./arch/xtensa/include/asm/ptrace.h:
#define KERNEL_STACK_SIZE (2 * PAGE_SIZE)
./arch/microblaze/include/asm/processor.h:
# define KERNEL_STACK_SIZE 0x2000
I'll throw my two cents in to get the ball rolling:
I'm not sure what a "typical" stack size would be. I would guess maybe 8 KB per thread, and if a thread exceeds this amount, just throw an exception. However, according to this, Windows has a default reserved stack size of 1MB per thread, but it isn't committed all at once (pages are committed as they are needed). Additionally, you can request a different stack size for a given EXE at compile-time with a compiler directive. Not sure what Linux does, but I've seen references to 4 KB stacks (although I think this can be changed when you compile the kernel and I'm not sure what the default stack size is...)
This ties in with the first point. You probably want a fixed limit on how much stack each thread can get. Thus, you probably don't want to automatically allocate more stack space every time a thread exceeds its current stack space, because a buggy program that gets stuck in an infinite recursion is going to eat up all available memory.
If you are using virtual memory, you do want to make the stack growable. Forcing static allocation of stack sizes, as is common in user-level threading like Qthreads and Windows Fibers, is a mess: hard to use, easy to crash. All modern OSes grow the stack dynamically, I think usually by having a write-protected guard page or two below the current stack pointer. Writes there tell the OS that the stack has stepped below its allocated space; you then allocate a new guard page below that and make the page that got hit writable. As long as no single function allocates more than a page of data, this works fine. Or you can use two or four guard pages to allow larger stack frames.
If you want a way to control stack size, and your goal is a really controlled and efficient environment, but you do not care about programming in the same style as Linux etc., go for a single-shot execution model, where a task is started each time a relevant event is detected, runs to completion, and then stores any persistent data in its task data structure. In this way, all threads can share a single stack. This is used in many slim real-time operating systems for automotive control and similar.
Why not make the stack size a configurable item, either stored with the program or specified when a process creates another process?
There are any number of ways you can make this configurable.
There's a guideline that states "0, 1 or n", meaning you should allow zero, one or any number (limited by other constraints such as memory) of an object - this applies to sizes of objects as well.