gem5 cache statistics - reset and dump

I am trying to get familiar with gem5 simulator.
To start, I wrote a simple program with
int main()
{
    m5_reset_stats(0, 0);
    m5_dump_stats(0, 0);
    return 0;
}
I compiled it with util/m5/m5op_x86.S and ran it using...
./build/X86/gem5.opt configs/example/se.py --caches -c ~/tmp/hello
The m5out/stats.txt shows (among other things)...
system.cpu.dcache.ReadReq_hits::total 881
system.cpu.dcache.WriteReq_hits::total 917
system.cpu.dcache.ReadReq_misses::total 54
system.cpu.dcache.WriteReq_misses::total 42
Why is an empty function showing so many hits and misses? Are the hits and misses caused by libc? If so, then what is the purpose of m5_reset_stats() and m5_dump_stats()?

I would check in the stats.txt file whether there are two chunks of
---Begin---
---End-----
because, as you explained it, the simulator is supposed to dump the stats at dump_stats(0,0) and at the end of the run. So it seems like either you are looking at one of those intervals (and I would expect the other interval to have 0 for all stats), or there was a bug in the simulation and the dump_stats() (or reset_stats()) didn't actually do anything. That has actually happened to me plenty of times, but I am not really sure as to the source of this bug.
If you want to troubleshoot further, you could do the following:
Look at the disassembly of your code and find the reset_stats.w and dump_stats.w
Dump a trace from gem5 and see if it ends up executing the dump and reset instructions and also what instructions (and how many) are executed before/after.
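For the trace, something along these lines should work (a sketch; the exact flag names have varied across gem5 versions):
./build/X86/gem5.opt --debug-flags=Exec --debug-file=trace.out configs/example/se.py --caches -c ~/tmp/hello
Then search trace.out for the m5 pseudo-instructions to confirm the reset and dump actually executed, and see what is executed around them.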
Hope this helps!

Related

How do I make `mmm` omit the time-consuming doc build steps?

Working with the source code from AOSP, after I make a trivial change to a
source file under frameworks/base/core/java/android/,
mmm frameworks/base -j9 takes about 4 minutes.
A large portion of that time seems to be waiting for steps with names containing "Droiddoc" or "Docs droiddoc" to complete:
...
[ 14% 4/28] Docs droiddoc: out/target/common/docs/api-stubs
[ 21% 6/28] //frameworks/base:test-api-stubs-docs Droiddoc [common]
DroidDoc took 102 sec. to write docs to out/soong/.intermediates/frameworks/base/test-api-stubs-docs/android_common/docs/out
[ 28% 7/25] Docs droiddoc: out/target/common/docs/api-stubs
DroidDoc took 113 sec. to write docs to out/target/common/docs/api-stubs
[ 32% 8/25] //frameworks/base:api-stubs-docs Droiddoc [common]
DroidDoc took 115 sec. to write docs to out/soong/.intermediates/frameworks/base/api-stubs-docs/android_common/docs/out
[ 40% 9/22] //frameworks/base:system-api-stubs-docs Droiddoc [common]
DroidDoc took 117 sec. to write docs to out/soong/.intermediates/frameworks/base/system-api-stubs-docs/android_common/docs/out
...
I really don't need or want any documentation to be built on every little incremental recompile.
Is there a way to omit all these doc-related steps?
I'd be interested in either a command line flag if there is one,
or a hopefully simple hack to one or more Makefiles and/or .mk files.
I've looked through the .mk files; in particular build/make/core/droiddoc.mk
seems relevant. I tried cutting some wires in it without really understanding what I was doing, without success.
I'm hoping someone who understands how these .mk files are put together
will know how to do this easily.
I expect this will be of interest to anyone who runs mmm frequently.
During make or mmm invocations, there are apparently two different kinds of build steps that build docs.
Each must be dealt with in its own way.
The build steps that have "Docs droiddoc" in their progress messages. That string comes from build/make/core/droiddoc.mk.
I was able to suppress these build steps as follows: delete all lines from build/make/core/droiddoc.mk, so it becomes an empty file.
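For example, a one-liner that empties it (any equivalent edit works; keep a backup so you can restore the file afterwards):
cp build/make/core/droiddoc.mk build/make/core/droiddoc.mk.bak && : > build/make/core/droiddoc.mk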
The build steps that have "Droiddoc" in their progress messages. That string comes from build/soong/java/droiddoc.go.
I was able to suppress these build steps as follows: delete or comment out the last two blocks in the calling file build/soong/java/androidmk.go:
func (jd *Javadoc) AndroidMk() android.AndroidMkData {
...
}
func (ddoc *Droiddoc) AndroidMk() android.AndroidMkData {
...
}
I confirmed that it's no longer spending time building docs, on Darwin, by keeping an eye on the Activity Monitor during the build,
and verifying that javadoc processes no longer appear.
With docs omitted, mmm frameworks/base -j9 after a small code change now takes 45 to 55 seconds, instead of 4 minutes.

EF6/Code First: Super slow during the 1st query, but only in Debug

I'm using EF6 rc1 with Code First strategy, without precompiled views and the problem is:
If I compile and run the exe application it takes like 15 seconds to run the first query (that's okay, since I'm still working on the pre-generated views). But if I use Visual Studio 2013 Preview to Debug the exact same application it takes almost 2 minutes BEFORE running the first query:
Dim Context = New MyEntities()
Dim Query = From I in Context.Itens '' <--- The debug takes 2 minutes in here
Dim Item = Query.FirstOrDefault()
Is there a way to remove this extra time? Am I doing something wrong here?
P.S.: The context itself is not complicated; it's just large, with 200+ tables.
Edit: Found out that the problem is that at debug time EF appears to be generating the views, ignoring the pre-generated ones.
Using the source code from EF I discovered that the property:
IQueryProvider IQueryable.Provider
{
    get
    {
        return _provider ?? (_provider = new DbQueryProvider(
            GetInternalQueryWithCheck("IQueryable.Provider").InternalContext,
            GetInternalQueryWithCheck("IQueryable.Provider").ObjectQueryProvider));
    }
}
is where the time is being consumed. But this is strange since it only takes time in debug. Am I missing something here?
Edit: Found more info related to the question:
Using Process Monitor (by Sysinternals) I found out that it is the 'devenv.exe' process that is consuming tons of time. To be more specific, it's consuming time with a 'Thread Exit'. It repeats the Thread Exit stack 36 times. I don't know if this info is very useful, but I saved a '.csv' with the stack. (Edit: removed the '.csv' body; I can post it again in the comments if someone really thinks it's going to be useful, but it was confusing and too big.)
Edit: Installed VS2013 Ultimate and Entity Framework 6 RTM. Installed the Entity Framework Power Tools Beta 4 and used it to generate the Views. Nothing changed... If I run the exe it takes 20 seconds, if I 'Start' debugging it takes 120 seconds.
Edit: Created a small project to simulate the error: http://sdrv.ms/16pH9Vm
Just run the project inside the environment and directly through the .exe, click the button and compare the loading time.
This is a known performance issue in Lazy (which EF is using) when the debugger is attached. We are currently working on a fix (the current approach we are looking at is removing the use of Lazy). We hope to ship this fix in a patch release soon. You can track progress of this issue on our CodePlex site - http://entityframework.codeplex.com/workitem/1778.
More details on the coming 6.0.2 patch release that will include a fix are here - http://blogs.msdn.com/b/adonet/archive/2013/10/31/ef6-performance-issues.aspx
I don't know if you have found the solution, but in my case I had a similar issue which wasted close to a week of trying different suggestions. Finally, I found a solution by setting optimizeCompilations="true" in my web.config, and performance improved dramatically, from 15-30 seconds to about 2 seconds.
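For reference, that attribute goes on the <compilation> element in web.config; a minimal sketch (your element will already have other attributes, such as debug or targetFramework):
<system.web>
  <compilation debug="true" optimizeCompilations="true" />
</system.web>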

Calling a method from ABL code not working

When I create a new quote from Epicor I would like to add an item from the parts form automatically.
I am trying to do this using the following ABL code which runs when 'GetNewQuoteHed' is called:
run Update.
run GetNewQuoteDtl.
run ChangePartNumMaster("Rod Tube").
ttQuoteDtl.OrderQty = 5.
run Update.
I am getting the error:
Index -1 is either negative or above rows count.
This error occurs for each line in my ABL code.
What am I doing wrong?
That's not the proper format for a 4GL error message (nor is it at all familiar), so I'd say it is an Epicor application message. Epicor support is probably your best bet. However... just guessing, but it sounds like you might need to somehow initialize the thing that you're updating.
Agree with @Tom, but I would also say try to isolate the error and see where it is raised; as soon as you find the point where the error is actually raised, it is normally much easier to figure out exactly what is going wrong and how to solve it.
Working between a 0-based and a 1-based system, there can be issues with the first or last entry depending on which way you're moving, since 0-based systems start at 0 and end at n-1 where 1-based systems start at 1 and end at n. An unadjusted index is off by one at the boundaries; an index of -1, as in your error, is what you get from applying a -1 adjustment to a value that was already 0-based.

Perl/Swig/Python/Postgresql/C++ Script just stops executing, only getting "Premature end of script headers"

This is hard to explain in a few sentences. I have spent the last 5 days trying to figure this out, so now I'm asking here as a last resort. I am trying to run a pool physics library with tournament server, built by the Stanford University Computational Billiards Group, available at http://www.stanford.edu/group/billiards/
It provides a tournament server using apache2, postgresql and perl. I have been able to enable functions like logging in or creating simple matches, but the communication with AI clients does not work. The client is a C++ application I'm running in a terminal; it's fetching commands from the server, and it's printing this error:
XMLRPC error: Unable to transport XML to server and get XML response back. HTTP response code is 500, not 200
And the apache2 error.log is printing this:
[Thu May 17 21:30:17 2012] [error] [client 127.0.0.1] Premature end of script headers: api.pl
I tried running it with CGI::Debug but didn't get any more output. I have been able to identify the line it fails at with some debug outputs (one after every line :) ), and it's in this function, on the line between the print STDERRs:
package Pool::Rules::GameState;
sub addToDb {
my $self=shift;
my $timeleft=shift || $self->timeLeft();
my $timeleft_opp=shift || $self->timeLeftOpp();
#print STDERR "uh uh uh".$self." ".$self->timeLeft()." ".$self->timeLeftOpp()." ".$timeleft." ".$timeleft_opp."\n";
my $playingSolids = ($self->isOpenTable() ? undef : ($self->playingSolids()?1:0));
# print STDERR "addToDb: X".$self->isOpenTable()."X Y".$self->playingSolids()."Y Z".$playingSolids."Z\n";
$Pool::dB::dbh->do('INSERT INTO states (turntype,cur_player_started,playing_solids,timeleft,timeleft_opp,gametype) VALUES (?,?,?,?,?,?)',{},
$self->getTurnType(),$self->curPlayerStarted()?1:0,$playingSolids,$timeleft,$timeleft_opp,$self->gameType()) or STDERR $DBI::errstr;
my $stateid=$Pool::dB::dbh->selectrow_array('SELECT lastval()');
$self->tableState()->addToDb($stateid);
return $stateid;
}
It's only simple getter-methods, and still it just stops executing. The code of the getter-methods has been generated by SWIG. The first print-line prints this:
uh uh uhPool::Rules::GameState=HASH(0x99fa99c) 0 0 00:10:00 00:10:00
Which seems to be okay; it's just curious that the $self-> getters only print 0. Since the library is from Stanford University and actual competitions have been held with it, I find it hard to believe that this is an error in the code. It just seems to be a bit too old to work (I had to make some adjustments in other places to make it work on more recent versions of libraries; for example, I added a <cstddef> include in another file unrelated to this Perl issue). I first tried making it run on current versions, and failed. Then I tried to install it on the versions the readme.txt says it has been tested for (Ubuntu 8.10, Perl 5.10 and so on), but at some point my Ubuntu 8.10 installation always died and I had to reinstall (I tried this ~4 times; gnome-terminal always segfaulted). So now I'm back on Ubuntu 12 trying to make it work. I don't know much about Perl, only the little I have been able to pick up over the last few days trying to get this to run.
Does anyone have an idea what could be triggering this kind of behavior? Is anyone aware of any compatibility issues that may be related to this? If you need any additional information just ask and I'll provide it.
Thank you for your help!

How can I validate an image file in Perl?

How would I validate that a jpg file is a valid image file? We are having files written to a directory using FTP, but we seem to be picking up the files before they have finished being written, which creates invalid images. I need to be able to identify when a file is no longer being written to. Any ideas?
Easiest way might just be to write the file to a temporary directory and then move it to the real directory after the write is finished.
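A minimal sketch of that idea in Perl, assuming you control the writing side and that both paths live on the same filesystem (paths here are hypothetical):
use File::Copy qw(move);

my $staging = '/data/incoming/.staging/photo.jpg';
my $final   = '/data/incoming/photo.jpg';

# write the upload into $staging first, then publish it in one step;
# on a single filesystem move() is a rename, so readers never see a
# half-written file
move($staging, $final) or die "move failed: $!";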
Or you could check the JPEG module on CPAN:
JPEG::Error
[arguments: none] If the file reference remains undefined after a call to new, the file is to be considered not parseable by this module, and one should issue some error message and go to another file. An error message explaining the reason of the failure can be retrieved with the Error method.
EDIT:
Image::TestJPG might be even better.
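If you would rather avoid a module dependency, one crude but useful check is that a complete JPEG starts with the SOI marker (FF D8) and ends with the EOI marker (FF D9); a file that is still being uploaded is usually truncated and so fails the tail check. A minimal sketch (note this is a heuristic: some writers append data after EOI, so treat a failure as "probably incomplete" rather than proof):
use strict;
use warnings;
use Fcntl qw(SEEK_END);

sub jpeg_looks_complete {
    my ($path) = @_;
    open my $fh, '<:raw', $path or return 0;
    read( $fh, my $head, 2 ) == 2 or return 0;   # first two bytes
    seek( $fh, -2, SEEK_END )     or return 0;   # jump to the last two bytes
    read( $fh, my $tail, 2 ) == 2 or return 0;
    close $fh;
    return $head eq "\xFF\xD8"    # SOI marker
        && $tail eq "\xFF\xD9";   # EOI marker
}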
You're solving the wrong problem, I think.
What you should be doing is figuring out how to tell when whatever FTPd you're using is done writing the file - that way when you come to have the same problem for (say) GIFs, DOCs or MPEGs, you don't have to fix it again.
Precisely how you do that depends rather crucially on what FTPd on what OS you're running. Some do, I believe, have hooks you can set to trigger when an upload's done.
If you can run your own FTPd, Net::FTPServer or POE::Component::Server::FTP are customizable to do the right thing.
In the absence of that:
1) try tailing the logs with a Perl script that looks for 'upload complete' messages
2) use something like lsof or fuser to check whether anything is locking a file before you try to copy it.
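For example (path hypothetical):
lsof /var/ftp/incoming/photo.jpg
fuser /var/ftp/incoming/photo.jpg
lsof lists any process that still has the file open, and fuser exits non-zero when nothing does, so either one can gate the copy.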
Again looking at the FTP issue rather than the JPG issue.
I check the timestamp on the file to make sure it hasn't been modified in the last X (here, 5) minutes; that way I can be reasonably sure it has finished uploading:
# time in seconds that the file was last modified
my $last_modified = (stat("$path/$file"))[9];
# get the time in secs since epoch (ie 1970)
my $epoch_time = time();
# ensure file's not been modified during the last 5 mins, ie still uploading
unless ( $last_modified >= ($epoch_time - 300)) {
# move / edit or what ever
}
I had something similar come up once; more or less what I did was poll the file size until it stopped changing between checks:
my $old_size = 0;
my $current_size;
# keep polling while the file size is still changing between checks
while ( ( $current_size = -s $image_file ) != $old_size ) {
    $old_size = $current_size;
    sleep 10;
}
process_image($image_file);
Have the FTP process set the readonly flag, then only work with files that have the readonly flag set.