Is there any way to suppress console output in the iPhone player when a new scene is loaded using Application.LoadLevelAdditiveAsync or similar methods?
Unloading 7 Unused Serialized files (Serialized files now loaded: 0 / Dirty serialized files: 0)
Unloading 185 unused Assets to reduce memory usage. Loaded Objects now: 3468. Operation took 377.272217 ms.
System memory in use: 6.7 MB.
Yes, it might not be the most important thing on earth, but it's somewhat annoying when looking for relevant error messages in noisy output.
Monitoring the Local MDrivenServer (cmd: AppCompleteGenericCore.exe -port=5050 -nohttps) shows that the process eats memory up to 1.2 GB regardless of the model's size.
If I upload the simplest sample model with Class1 and Class2, no viewmodels, no server-side jobs, the AppCompleteGenericCore process memory starts at 60 MB (no model loaded) and stabilizes at ~1.2 GB (sample model uploaded).
Could you please advise whether this is normal behavior?
FYI, I've tried "System.GC.Server": false and "System.GC.Concurrent": false in AppCompleteGenericCore.runtimeconfig.json - no results.
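For reference, a minimal sketch of where those switches sit, assuming the standard .NET runtimeconfig.json layout (the surrounding runtimeOptions/configProperties structure is the usual .NET convention, not copied from the MDriven distribution):
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": false,
      "System.GC.Concurrent": false
    }
  }
}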
Thank you!
I deleted my LocalServers folder and downloaded the latest version of MDrivenServer, and the problem disappeared. The old server had a CodeDress assembly, so I guess that was the root cause of the memory leak I had.
I am trying to debug a memory leak in a 64-bit C++ native application. The app leaks 1300 bytes 7-10 times a second - via plain malloc().
If I attach to the process with WinDBG and break into it every 60 seconds, !heap does not show any increase in memory allocated.
I did enable User Mode Stack trace database on the process:
gflags /i <process>.exe +ust
In WinDBG (with all the symbols successfully loaded), I'm using:
!heap -stat -h
But the output of the command never changes when I break in even though I can see the Private Bytes increase in Task Manager and a PerfMon trace.
I understand that when allocations are small they go to HeapAlloc(), when they're bigger they go to VirtualAlloc. Does !heap not work for HeapAlloc?
This post seems to imply that using DebugDiag might work, but it still boils down to using WinDBG commands to process the dump. I tried that, to no avail.
This post also says that the !heap command is broken for 64-bit apps. Could that be the case?
Is there an alternate procedure for diagnosing leaks in 64-bit apps?
!heap does not show any increase in memory allocated.
That may depend on which column you're looking at and how much memory the heap manager has allocated before.
E.g. it's possible that your application has a heap of 100 MB, of which just some blocks of 64kB are moving from the "reserved" column to the "committed" column. If the memory is committed right from the start, you won't see anything at all with a plain !heap command.
I did enable User Mode Stack trace database on the process
That will help you get the allocation stack traces, but it will not affect the leak in general.
I understand that when allocations are small they go to HeapAlloc(), when they're bigger they go to VirtualAlloc.
Yes, for allocations > 512k.
Does !heap not work for HeapAlloc?
It should. And since C++ malloc() and new both use the Windows Heap manager, they should result in HeapAlloc() sooner or later.
The following code
#include <cstdlib>   // malloc()
#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    // https://stackoverflow.com/questions/53157722/windbg-diagnosing-leaks-in-64-bit-dumps-heap-not-showing-memory-growth
    //
    // I am trying to debug a memory leak in a 64-bit C++ native application.
    // The app leaks 1300 bytes 7-10 times a second - via plain malloc().
    for (int seconds = 0; seconds < 60; seconds++)
    {
        // leak 1300 bytes 8 times per second, deliberately never freed
        for (int leakspersecond = 0; leakspersecond < 8; leakspersecond++)
        {
            if (malloc(1300) == nullptr)
            {
                std::cout << "Out of memory. That was unexpected in this simple demo." << std::endl;
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(125));
        }
    }
}
compiled as a 64-bit release build and run in WinDbg 10.0.15063.400 x64 shows
0:001> !heap -stat -h
Allocations statistics for
heap # 00000000000d0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
514 1a - 8408 (32.24)
521 c - 3d8c (15.03)
[...]
and later
0:001> !heap -stat -h
Allocations statistics for
heap # 00000000000d0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
514 30 - f3c0 (41.83)
521 18 - 7b18 (21.12)
even without +ust set.
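To read those snapshots: the columns are hex, so size 514 is 1300 bytes, exactly the leaked allocation size. Between the two snapshots the block count for that size grows from 1a (26) to 30 (48), and the totals match: 26 * 1300 = 33800 = 0x8408 and 48 * 1300 = 62400 = 0xf3c0. So !heap -stat -h does pick up plain malloc() leaks.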
It's 4.5M lines of code.
How do you then know that it leaks 1300 bytes via plain malloc()?
I am trying to get familiar with the gem5 simulator.
To start, I wrote a simple program:
#include <gem5/m5ops.h> // declares m5_reset_stats()/m5_dump_stats(); the header's name and location vary across gem5 versions

int main()
{
    m5_reset_stats(0, 0);
    m5_dump_stats(0, 0);
    return 0;
}
I compiled it together with util/m5/m5op_x86.S and ran it using:
./build/X86/gem5.opt configs/example/se.py --caches -c ~/tmp/hello
The m5out/stats.txt shows (among other things):
system.cpu.dcache.ReadReq_hits::total 881
system.cpu.dcache.WriteReq_hits::total 917
system.cpu.dcache.ReadReq_misses::total 54
system.cpu.dcache.WriteReq_misses::total 42
Why does an essentially empty main show so many hits and misses? Are the hits and misses caused by libc? If so, what is the purpose of m5_reset_stats() and m5_dump_stats()?
I would check in the stats.txt file whether there are two chunks delimited by
---Begin---
---End-----
because, as you explained, the simulator is supposed to dump the stats at dump_stats(0,0) and again at the end of the run. So it seems that either you are looking at one of those intervals (and I would expect the other interval to have 0 for all stats), or there was a bug in the simulation and the dump_stats() (or reset_stats()) didn't actually do anything. That has actually happened to me plenty of times, but I am not really sure as to the source of this bug.
If you want to troubleshoot further, you could do the following:
Look at the disassembly of your code and find the reset_stats.w and dump_stats.w instructions.
Dump a trace from gem5 and see whether it actually executes the dump and reset instructions, and also what instructions (and how many) are executed before/after; see the command sketch after this list.
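A sketch of how to get such a trace, assuming a gem5 build where the Exec debug flag and the --debug-file option are available (both are standard gem5 debugging switches, though the exact output format varies by version):
./build/X86/gem5.opt --debug-flags=Exec --debug-file=trace.out configs/example/se.py --caches -c ~/tmp/hello
Every executed instruction is then written to m5out/trace.out, so you can search it for the m5 pseudo-ops and count the instructions around them.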
Hope this helps!
I have a C program running on an AVR32 microcontroller (UC3C0512C).
Issuing the avr32-size -A PROGRAM.elf command generates the following output:
PROGRAM.elf :
section size addr
.reset 8200 2147483648
.rela.got 0 2147491848
.text 99512 2147491848
.exception 512 2147591680
.rodata 5072 2147592192
.dalign 4 4
.data 7036 8
.balign 4 7044
.bss 5856 7048
.heap 48536 12904
.comment 48 0
.debug_aranges 8672 0
.debug_pubnames 14476 0
.debug_info 311236 0
.debug_abbrev 49205 0
.debug_line 208324 0
.debug_frame 23380 0
.debug_str 43961 0
.debug_loc 63619 0
.debug_macinfo 94469328 0
.stack 4096 61440
.data_hram0 512 2684354560
.debug_ranges 8368 0
Total 95379957
Can someone explain how to interpret these values?
How can I calculate the flash and ram usage based on this list?
Update 1:
Without the -A flag, I am getting the following:
text data bss dec hex filename
113296 7548 58496 179340 2bc8c PROGRAM.elf
Update 2:
I'm not using dynamic memory allocation, so according to the avr-libc user manual, the free RAM space should simply be: stack pointer minus __heap_start.
In this case: 61440 - 12904 = 48536 bytes of free RAM.
Can someone confirm that?
(There is a mismatch in the two outputs in your question. The bss number is wildly different.)
If you don't use malloc, and don't count the stack, then yes, the RAM usage is the data plus the bss (plus some alignment spacing). The data are the variables that are set in a declaration, and the bss are the variables that are not. The C runtime will probably initialize them to 0, but it doesn't have to.
The flash usage will be the text and the data. That is, the flash will include the program instructions and C runtime, but also the values that need to get copied into RAM on startup to initialize those variables. This data is generally tacked onto the end of the program instructions.
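Working that out with the numbers from the default size output in the question: flash usage ≈ text + data = 113296 + 7548 = 120844 bytes, and RAM usage ≈ data + bss = 7548 + 58496 = 66044 bytes, before accounting for the stack.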
Re: update 2
RAM holds global variables, the heap, and then the stack in that order.
The global variables can be initialized in the program, or not. The .data section is stored in flash, and the C runtime copies these values into the beginning of RAM, where the corresponding variables live, before your code runs. The .bss section of global variables needs space in RAM to hold the values, but they aren't necessarily initialized. The C runtime that comes with avr-gcc does actually initialize them to 0. The point is that you don't need to store an array of 0s in flash to copy over, as you do with the .data section; see the sketch below.
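A minimal C illustration of the difference (the variable names are mine, purely for illustration):
int calibration = 42;   /* initialized: placed in .data; the value is copied from flash to RAM at startup */
int sample_count;       /* uninitialized: placed in .bss; zeroed in RAM by the C runtime */

int main(void)
{
    return calibration + sample_count;
}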
You are not using the heap, but dynamically allocated memory would be obtained from the addresses between __heap_start and __heap_end.
But the stack is not limited. Yes, the stack pointer is initialized at startup, but it changes as your program runs, and it can move well into the heap or even into the global variables (a stack overflow). The stack pointer moves whenever a function is called or local variables within a function are used. For example, a large array declared inside a function will go on the stack, as in the sketch below.
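For instance, a hypothetical function like this takes 4 kB from the stack for the duration of the call:
#include <string.h>

void process_block(void)
{
    char buffer[4096];              /* lives on the stack, not in .data, .bss, or the heap */
    memset(buffer, 0, sizeof buffer);
    /* ... work on buffer ... */
}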
So in answer to your question, there is no RAM that is guaranteed to remain free.
I think you should remove the -A (all) flag, since that gives you the lower-level list you're showing.
The default output is easier to parse, and seems to directly state the values you're after.
Note: I didn't try this; I'm not at a system with an AVR toolchain installed.
I guess that in your linker script you have RAM at 0 and flash at 0x80000000, so everything that needs to go to RAM is at addresses from 0 up (.stack is the last section, at 61440, spanning the next 4k). So you would need a bit more than 64k of RAM. Everything else you have is flash.
That is, provided that your linker script is correct.
Also see unwind's comment.
These values are the linker sections of the compiled C code and their sizes. See the docs for the details. This article is also helpful.
The .text section holds the program instructions, i.e. the assembly code. The .data section holds the initialized variables (ints, arrays, etc.). The size column has the significant info: it gives the size of each section in bytes. The .stack and .heap sections represent the memory set aside for the stack and the heap before the program executes.
You can try
avr-nm --print-size --radix d --demangle x.elf
to get the sizes in decimal notation.
Then you can copy & paste the output into a spreadsheet, filter, sort by section, and sum it up.
I'm not sure if memory is the culprit here. I am trying to instantiate a GD image from data in memory (it previously came from a database). I try a call like this:
my $image = GD::Image->new($image_data);
$image comes back as undef. The POD for GD says that the constructor will return undef for cases of insufficient memory, so that's why I suspect memory.
The image data is in PNG format. The same thing happens if I call newFromPngData.
This works for very small images, like under 30K. However, slightly larger images, like ~70K, will cause the problem. I wouldn't think that a 70K image should cause these problems, even after it is decompressed.
This script is running under CGI through Apache 2.0, on OS 10.4, if that matters at all.
Are there any memory limitations imposed by Apache by default? Can they be increased?
Thanks for any insight!
EDIT: For clarification, the GD::Image object never gets created, so clearing out the $image_data from memory isn't really an option.
The GD library eats many bytes of memory per byte of image size; it's well over a 10:1 ratio!
When a user uploads an image to our system, we start by checking the file size before loading it into a GD image. If it's over a threshold (1 Megabyte) we don't use it but instead report an error to the user.
If we really cared, we could dump it to disk, use the command-line convert tool to rescale it to a sane size, then load the output into the GD library and remove the temporary file.
convert -define jpeg:size=800x800 tmpfile.jpg -thumbnail '800x800' -
This will scale the image so it fits within an 800 x 800 square; its longest edge is now 800px, which should load safely. The above command sends the shrunk JPEG to STDOUT. The size= option should tell convert not to bother holding the huge image in memory, but only enough of it to scale to 800x800.
I've run into the same problem a few times.
One of my solutions was simply to increase the amount of memory available to my scripts. The other was to clear the buffer:
Original Script:
$src_img = imagecreatefromstring($userfile2);
imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_width, $thumb_height, $origw, $origh);
Edited Script:
$src_img = imagecreatefromstring($userfile2);
imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_width, $thumb_height, $origw, $origh);
imagedestroy($src_img); // free the source image as soon as it is no longer needed
By freeing the memory of the first source image, enough was freed up to handle more processing.