My goal is to instrument the region of interest of a program in syscall emulation mode. I have already implemented pseudo instructions for full system mode based on this tutorial. Testing everything in FS mode after even a small change is time consuming, though. Is there any way to implement the same functionality for syscall mode?
So I found what the problem was: you need to remove all mentions of mmap. In my case that meant removing the include of m5_mmap.h from the microbenchmark and not calling map_m5_mem() first thing in main(). Just call m5_roi_begin() and m5_roi_end() (or however you name your instrumentation functions).
Also in the gem5 x86 makefile (gem5/util/m5/Makefile.x86) remove the
-DM5OP_ADDR=0xFFFF0000
flag and compile again (make -f Makefile.x86). Now when I run the microbenchmark with gem5, I can see when the ROI starts and ends. Everything else remains the same, as I've posted in a comment on the question above.
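For reference, a minimal sketch of what the microbenchmark ends up looking like in syscall emulation mode. The wrapper bodies are an assumption (here they use m5_reset_stats()/m5_dump_stats() from gem5's util/m5); name yours however you like:

/* Link against gem5/util/m5 (m5op.h and the m5op_x86.S stubs). */
#include "m5op.h"

static void m5_roi_begin(void) { m5_reset_stats(0, 0); }  /* assumed wrapper */
static void m5_roi_end(void)   { m5_dump_stats(0, 0); }   /* assumed wrapper */

int main(void)
{
    /* Note: no m5_mmap.h include and no map_m5_mem() call -- in
       syscall emulation mode the pseudo instructions run directly. */
    m5_roi_begin();
    /* ... region of interest ... */
    m5_roi_end();
    return 0;
}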
Thanks Ciro.
I had not thought much about this before I started learning C++, and I have not needed to terminate MATLAB from a program so many times before. The question just occurred to me:
Are there any perils in using the functions quit and exit in MATLAB? I know that these functions should never be used in C++ except in emergencies. However, in my little dream world MATLAB's functions are mostly stable, which suggests that MATLAB does clean up resources successfully even when exit or quit is called. As always, this kind of documentation is a bit hard to find for MATLAB. I am also curious whether the same principles apply on both Windows and Linux.
In case no cleanup is done here, is there any way to fix it? Like creating a finish.m file that contains only a clear all call or something similar?
BR/Patrik
Unlike C++, MATLAB has its own garbage collector that takes care of all the "cleaning" for you. So when you exit or quit MATLAB, resources are cleaned up for you (!)
If you are using custom data types or you have data you want to save prior to exit, you'll need to take care of them yourself. You can define a custom script containing commands to be executed when MATLAB terminates; see finish for more details.
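A minimal finish.m sketch; MATLAB runs it automatically on quit/exit when the file is on the search path (the message and file name are illustrative):

% finish.m -- executed automatically when MATLAB terminates
disp('Saving workspace before exit...');
save('exit_workspace.mat');   % persist the workspace before exiting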
I have compiled a MATLAB routine using the MCR and deployed it to other computers without having MATLAB installed on them. So far, so good. But of course, the routine is not completely error-free, particularly the GUI part. The problem is that when the MCR tries to write the error message to the terminal, it seems to corrupt the terminal so that everything is no longer legible - not even the prompt. Sometimes I also get an extra window, vaguely resembling the MATLAB editor window, full of illegible ASCII characters. Does anyone know what is causing this, or how to avoid it?
My first attempt was a big try-catch block around everything, but whatever it is still seems to get through. The catch block just tries to divert the error to an errordlg rather than the command prompt:
catch e
errordlg({e.message;['in: ',e.stack.name]})
end
MATLAB Compiler does not support command window functions.
Peter Webb explains on Loren's blog:
Certain MATLAB functions cannot be deployed because they act on objects that are not present in a deployed application. For example, since deployed applications have no command window, functions that modify the command window can't be deployed.
So, you probably need to get rid of any function that prints to the command window.
Also, you can check out the mccExcludedFiles.log file.
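One way to do that is to guard console output with isdeployed, for example (progress_pct is an illustrative variable):

% isdeployed is true in the compiled binary, so nothing is written to
% the (nonexistent) command window; inside MATLAB the message appears.
if ~isdeployed
    fprintf('progress: %d%%\n', progress_pct);
end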
Whether or not I compile a Racket program seems to make no difference to the runtime performance.
Is it just the loading of the file initially that is improved by compilation? In other words, does running racket src.rkt do JIT compilation on the fly, which is why I see no difference between compiled and interactive?
Even for tight loops of integer arithmetic, where I thought some difference would occur, the profile times are equivalent whether or not I previously did a raco make.
Am I missing something simple?
PS, I notice that I can run racket against the source file (.rkt) or .zo file. Does racket automatically use the .zo if one is found that corresponds to the .rkt file, or does the .zo file need to be used explicitly? Either way, it makes no difference to the performance numbers I'm seeing.
Yes, you're right.
Racket compiles code in two stages: first, the code is compiled into bytecode form, and then when it runs it gets jitted into machine code. When you compile a file, you're basically creating the bytecode, which saves on re-compiling it later. Since that's usually not something that takes a lot of time for small pieces of code, you won't see any noticeable difference in runtimes.
For an extreme example, you can delete all *.zo files in the collection tree and start DrRacket -- it will take a lot of time to start since there's a ton of code, but once it does start, it would run almost as usual. (It would also be slow to click "run" since that will reload and recompile some files.) Another concern for bigger pieces of code is that the compilation process can make memory consumption higher, but that's also not an issue with smaller pieces of code.
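For example, a tight integer loop like the sketch below should show about the same (time ...) result whether or not the file was first compiled with raco make; only load/startup time differs:

#lang racket
;; Sum the integers below n -- jitted identically in both cases.
(define (sum-to n)
  (for/fold ([acc 0]) ([i (in-range n)])
    (+ acc i)))
(time (sum-to 10000000))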
See also the Performance chapter in the guide for hints on how to improve performance.
Racket will always compile your code, regardless of whether it is run interactively at the REPL or run from the command line. Here is the section in the guide that explains it. In interactive mode, the compiler turns every expression/definition into bytecode in memory and executes that. Otherwise, the compiler outputs the bytecode to .zo files.
Note: Eli replied at the same time I did. See his response for more details.
I've read a lot of similar questions, but I can't seem to find an answer to exactly what my problem is.
I've got a set of minidumps from a 32-bit application that was running on 64-bit Windows 2008. The 32-bit Visual Studio on my 32-bit Vista Business machine wouldn't touch them at all, so I've been trying to open them in WinDbg.
I don't have the EXACT corresponding .pdb files (we only started saving them AFTER this particular release), but I have .pdbs built by the same machine with the same code. I also have access to the exact executable that created the minidumps.
I found a nifty little application called ChkMatch that can make .pdbs match an executable... the only difference (according to ChkMatch) was age, so I matched my newer .pdbs to the original executable.
However, when I load it in WinDbg, it still says that it is a "mismatched pdb"; then, since I had set .symopt+0x40, it tries to load them anyway. I then get the warning:
*** WARNING: Unable to verify checksum for myexe.exe
I ran !lmi myexe and saw that the checksum of the executable was indeed zero. From poking around a bit, I've found that the executable should have been built with the /release flag to have a checksum. That's all well and good, but I can't exactly go back in time and rebuild (if I did, though, I'd definitely save the original .pdbs :-P).
Is there anything I can do here? It seems a little ridiculous that I can't make things match at least well enough to get a call stack.
You don't need the checksum to get a call stack - this warning can be safely ignored.
To get the stack you need to issue the stack command (any variant of k).
If the minidumps are any good (i.e. they describe an actual fault), you should first try the automatic analysis, !analyze -v, which will get you started.
Come back when you have exhausted your expertise :o)
If you're working with minidumps then you have to set your image path (Ctrl+I) to point to a location containing the images referenced in the dump. The trouble with minidumps is that they don't contain any code or data from the executables on the target, so you have to supply them yourself.
-scott
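Putting the two answers together, a rough WinDbg session sketch (the paths are illustrative, not from the thread):

$$ Minidumps carry no code, so point the image path at the binaries
$$ (Ctrl+I in the GUI):
.exepath C:\dumps\bin
$$ Symbol path: the ChkMatch-patched .pdb files.
.sympath C:\dumps\pdb
$$ SYMOPT_LOAD_ANYTHING: load "mismatched" PDBs anyway.
.symopt+0x40
.reload /f
$$ Automatic fault analysis, then a call stack (any k variant).
!analyze -v
kb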
I am still new to Perl. Since BEGIN blocks are run during compilation, can't a virus spread or data loss occur from simply compiling? Does Perl do anything to stop it? If so, does that mean code in BEGIN blocks may act differently than it would outside of them?
Yes to all these questions. The Eclipse IDE was vulnerable to this; it is discussed in more detail here.
As with all software, you should avoid downloading and running anything from a source you do not trust. CPAN is generally trustworthy; although I am not aware of anyone intentionally releasing rogue code to CPAN, it's possible it has happened.
You can avoid running code during compile-only checks (perl -c) by testing the $^C variable, e.g.:
BEGIN { load_data_from_db() unless $^C; }
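To see the effect end to end, a minimal sketch (load_data_from_db is a hypothetical stub):

#!/usr/bin/perl
use strict;
use warnings;

sub load_data_from_db { print "talking to the database...\n" }   # hypothetical stub

# BEGIN runs as soon as it is compiled, so even `perl -c demo.pl`
# executes it; $^C is true under -c, so the guard skips the work there.
BEGIN { load_data_from_db() unless $^C; }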
chromatic explains how a Perl program works.
Note that sometimes this is a feature. BEGIN blocks inside mod_perl modules are executed only once, when they are first loaded. So you have a simple syntax to do page-level initialization in the same script, and place it "near" the code that it assists.
Occasionally it's similarly useful for writing complicated initialization code that you don't want to put at the top of a script.
But mostly it's just there for thematic compatibility with awk.
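For reference, a rough sketch of the mod_perl pattern mentioned above (the $config contents are illustrative):

# Under mod_perl (e.g. ModPerl::Registry) this script is compiled once
# per interpreter, so the BEGIN block runs once, not on every request.
our $config;
BEGIN {
    $config = { greeting => 'hello' };   # one-time, page-level setup
}
print "Content-Type: text/plain\n\n";
print "$config->{greeting}\n";           # per-request code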