EFI_FILE_HANDLE->Write crashes when writing more than about 3.4 GiB - x86-64

I'm writing a UEFI application that should be able to write a lot of data to disk.
I'm aware of the FAT32 limits on file size and on the number of files per directory, so that should not be the problem. The memory region I'm trying to write is marked as usable in the memory map, and I can read from and write to it without problems, but after a certain amount of data my VM just reboots without any error message.
The following line causes the problem:
uefi_call_wrapper(handle->Write, 3, handle, size, content);
handle is initialized a few lines earlier, size is always at most 128 MiB, and content is a valid memory region with read/write access.
I've already rewritten the whole thing on Windows with EDK2 and got the same problem.
Can anyone help me with this?
Thank you in advance and have a nice evening

Assuming that handle is a pointer to an EFI_FILE_PROTOCOL, the BufferSize parameter of Write is passed by reference; when the function returns, BufferSize contains the number of bytes actually written. You didn't give enough context in your question, but it looks like you are passing it by value.
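A minimal sketch of the correct call (assuming size here is a plain UINTN holding the byte count; per the spec, EFI_FILE_PROTOCOL.Write takes the buffer size by pointer and updates it with the bytes actually written):
UINTN written = size; // IN: bytes to write, OUT: bytes actually written
EFI_STATUS status = uefi_call_wrapper(handle->Write, 3, handle, &written, content);
if (EFI_ERROR(status) || written != size) { /* handle a failed or short write */ }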

Hi guys, and thank you for your answers. The size argument is a pointer. I've just found the solution to the problem: I didn't know I had to reset the watchdog timer.
After calling uefi_call_wrapper(ST->BootServices->SetWatchdogTimer, 4, 0, 0, 0, NULL); everything works as expected.
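For reference, that call with a comment on why it matters (gnu-efi style):
// A Timeout of 0 disables the watchdog. The firmware arms it for 5 minutes
// before handing control to a boot option; when it expires, the platform
// resets, which looks exactly like a spontaneous reboot mid-write.
uefi_call_wrapper(ST->BootServices->SetWatchdogTimer, 4, 0, 0, 0, NULL);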
Cheers!

Related

Saving Matlab object instance results in an infinite loop

The setup
I have created a Matlab handle class called "Participant" for reading and operating on certain research data. I have created multiple instances of this class and saved them to the hard disk with no problem. I have also checked my problematic instance to ensure that it is functional in Matlab; there do not seem to be any bugs in the instance itself.
The problem
However, on certain instances, for no reason that is clear to me, Matlab gets stuck in an infinite loop writing to disk. This is evident from the output .mat file's modification date, which keeps changing every minute, and from the fact that my Matlab instance slows down tremendously.
The code to create the participant is:
myparticipant = participant([basedir ,p_folder{p_num}]);
Methods tried
I have saved to disk by right-clicking in the workspace, which results in the problem above.
Using the save function, I get:
save('test.mat', 'myparticipant')
Error using save
Error closing file test.mat.
The file may be corrupt.
and of course the file won't load afterwards.
Any insight would be appreciated as I'm not sure how to start approaching this issue.
Thanks to excaza's comment I was able to spot the issue. As I explained in my comment response, the problem was that, because I was using a handle class, the size of my data shown in working memory was much smaller than it really was; the actual data size was bigger than 2 GB. In that case you have to use Matlab's '-v7.3' flag when saving to file! Adding that flag did it for me.
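For completeness, the working call (same file and variable names as above):
save('test.mat', 'myparticipant', '-v7.3')
The -v7.3 MAT-file format is HDF5-based and is the only version that supports variables larger than 2 GB.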

$readmemh to load subblocks of memory

Sorry if this is a newbie question. I have been looking for information on $readmemh and I haven't been able to find a suitable solution to my problem.
I have a large hex file with the contents of a memory that I want to read from. But since the file is just too large, I want to bring only a smaller block of it into my program at a time. For example, if my memory has 32-bit addresses, I'd like to load chunks of 2048 addresses at a time and store the upper part of the address in a variable so I know whether I'm in the right chunk or not.
In other words, if my hex file looks like this:
#00000000 1212
#00000001 3434
#00000002 5656
...
#ffffffff 9a9a
I want to be able to save it in a structure with the following fields:
logic [15:0] chunk [0:2047]; // Holds 2048 entries (indexed by the low 11 address bits)
logic [20:0] address_top;    // Holds the upper 21 bits of the 32-bit address
Looking at the documentation, I see that $readmemh allows me to specify a start and an end address, but as far as I can tell these refer to positions in the destination array, not in the source file.
How can I do it?
Thanks in advance!
If you are using SystemVerilog, then you can use an associative array to model your memory and let the simulator take care of allocating and mapping the physical memory as needed, instead of doing it yourself.
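A minimal sketch of that approach (module and file names are made up; support for loading an associative array directly with $readmemh varies by tool, so check your simulator):
module sparse_mem;
  // 16-bit words indexed by a 32-bit address; entries are allocated on first
  // use, so only the addresses actually present consume simulator memory
  logic [15:0] mem [bit [31:0]];
  initial begin
    $readmemh("memory.hex", mem);
    $display("%h", mem[32'h00000001]); // prints 3434 for the example file above
  end
endmodule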

Segmentation fault from outside of my code

I have a wxWidgets/GTK based application that works well except on one installation on a Debian Squeeze ARM system, where it crashes as soon as the user activates its main window. To find the reason, I added a signal handler to the application and use libunwind from inside that handler to locate the source of the crash. In a test this worked fine: when the software wrote to, e.g., address 0x0, libunwind correctly pointed me to the function where that happened.
But the results from the system where the crash appears unexpectedly are a bit strange; they seem to point outside of my application. One crash comes from a function with no name (libunwind returns an empty string there), and one is caused by malloc_usable_size, a system function that should never die this way.
So... what should I do next? All ideas, suggestions or other hints are welcome, since I'm not sure how to continue with this problem...
Check for buffer overruns, or for unexpectedly overwriting memory, in any structures, pointers and memory locations returned by library functions.
Check your code for invalid frees of pointers allocated by libraries.
Maybe using valgrind would also help.
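For example (assuming valgrind is available for your ARM target; the binary name is a placeholder):
valgrind --leak-check=full --track-origins=yes ./your-app
memcheck flags invalid reads, writes and frees as they happen, long before the corruption takes the process down.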

What can I do to find out what's causing my program to consume lots of memory over time?

I have an application using POE which has about 10 sessions doing various tasks. Over time, the app starts consuming more and more RAM and this usage doesn't go down even though the app is idle 80% of the time. My only solution at present is to restart the process often.
I'm not allowed to post my code here, so I realize it is difficult to get help, but maybe someone can tell me what I can do to find this out myself?
Don't expect the process size to decrease. Memory isn't released back to the OS until the process terminates.
That said, might you have reference loops in data structures somewhere? AFAIK, the Perl garbage collector can't sort out reference loops.
Are you using any XS modules anywhere? There could be leaks hidden inside those.
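To illustrate the reference-loop point, here is a classic cycle and the usual fix (a made-up example, not from the poster's code):
use Scalar::Util qw(weaken);
my $parent = { name => 'parent' };
my $child  = { name => 'child', parent => $parent };
$parent->{child} = $child;   # cycle: neither refcount can ever reach zero
weaken($child->{parent});    # the back-reference no longer keeps $parent alive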
A guess: your program runs a loop for as long as it is alive, and in this loop you may be allocating memory for a buffer (or more) each time some condition occurs. Since the scope is never exited, the memory remains and is never cleaned up. I suggest you check for something like this. If that is the case, move the allocating code into a sub that you call from the loop; the memory will go out of scope, and be cleaned up, when the sub returns.
Test::Valgrind looks like a tool for hunting memory leaks. I've never used it myself, though I have used plain valgrind with C source.
One technique is to periodically dump the contents of $POE::Kernel::poe_kernel to a time- or sequence-named file. $poe_kernel is the root of a tree spanning all known sessions and the contents of their heaps. The snapshots should grow monotonically if the leaked memory is referenced. You'll be able to find out what's leaking by diffing an early snapshot against a later one.
You can export POE_ASSERT_DATA=1 to enable POE's internal data consistency checks. I don't expect it to surface problems, but if it does I'd be very happy to receive a bug report.
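A rough sketch of the snapshot idea (untested; Data::Dumper is just one way to serialize the tree, and the file naming is arbitrary):
use Data::Dumper;
use POE;
my $seq = 0;
sub dump_kernel {
    my $file = sprintf 'kernel-%04d.txt', $seq++;
    open my $fh, '>', $file or die "can't write $file: $!";
    print {$fh} Dumper($POE::Kernel::poe_kernel);
    close $fh;
}
# call dump_kernel() from a periodic event, then diff early vs. late files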
Perl cannot resolve reference cycles. Either you have zombie processes (which you can detect via ps axl) or you have a memory leak (reference cycles).
There are plenty of tools for detecting memory leaks, for example:
strace, mtrace, Devel::LeakTrace::Fast, Devel::Cycle
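Of those, Devel::Cycle is the most direct for the reference-cycle case (sketch; $suspect stands for whatever structure you want to check):
use Devel::Cycle;
find_cycle($suspect);   # prints a description of every cycle it finds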

Post-mortem minidump debugging in windbg -- what causes <memory access error> for heap memory?

I'm looking at a crash dump. Some variables are perfectly viewable in windbg, while others just say "memory access error". What causes this? Why do some variables have sensible values while others simply show <memory access error>?
It appears that all the problems are associated with following pointers. While many of these pointers are uninitialized, I'm certain the vast majority of them should point somewhere valid. Based on the nature of this crash (a simple null-pointer dereference), I'm fairly certain the whole process hasn't gone out to lunch.
Mini-dumps are fairly useless here: they don't contain a snapshot of all the memory in use. All they contain are some critical structures and lists (e.g. the loaded-module list) and the contents of the crashing stack.
So any pointer that you try to follow in the dump will just give you question marks. Grab a full memory dump instead and you'll be able to see what these buffers point to.
-scott
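If you can reproduce the crash under the debugger, a full user-mode dump can be written from windbg with (the path is just an example):
.dump /ma C:\dumps\full.dmp
The /ma option includes the full process memory, so heap pointers will resolve when you open the dump later.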
If they are local pointer variables, what is most likely happening is that the pointers are not initialized, or that the stack location has been reused to hold another variable that may not be a pointer. In either case, the pointer value may point to a random, unreadable portion of memory.