What happens to performance.mark entries when the resource buffer gets full

I am building a large private (i.e. used behind a firewall) PWA and wondering how to improve diagnostics when my users hit issues. I already have an error manager which uses navigator.sendBeacon to log the error on the server, but that lacks detailed information about what led up to that point.
A thought I had was to liberally mark the code with performance.mark() statements and, on an error, dump the performance buffer to the server. It would give me an ordered list of recent activity.
However, it only makes sense to do this if the browser throws away the oldest entry to make way for a new one when the internal buffer is full, and none of the documentation I found with a Google search mentions this. I am aware I can get an event when the buffer is full and could use that to copy and clear it, but I can find no words on what happens if I ignore the event. Neither can I find a typical size. I don't want entries to keep accumulating and fill up the computer's memory either.
Can anyone give me a definitive answer?
Edit: The more I look into this, the more confused I become. It appears that you can control the size of the resourceTimingBuffer but "resource" performance entries are related to fetch and not Performance.mark(). I can't find any statement on limitations.

There are no meaningful limitations that I could find. I ran a test that generated more than 4,000 marks; they were all still there, and memory usage did not increase in any measurable way.
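To make the idea concrete, here is a minimal sketch of the approach in plain JavaScript; the /log endpoint, the track() helper and the payload shape are placeholders, not anything from the question:

// Sprinkle marks liberally through the code; recording them is cheap.
function track(label) {
  performance.mark(label);
}

// On an error, dump the recorded marks alongside the error report.
window.addEventListener('error', (event) => {
  const marks = performance.getEntriesByType('mark')
    .map((m) => ({ name: m.name, t: Math.round(m.startTime) }));
  navigator.sendBeacon('/log', JSON.stringify({ message: event.message, marks }));
  performance.clearMarks(); // keep the mark list bounded between reports
});

Clearing the marks after each report also addresses the memory concern: the list never grows beyond whatever accumulated since the previous dump.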

Related

Want to know what all the terms in WinDbg's "!analyze -v" output indicate?

What does the key value indicate? And which term helps me understand how WinDbg buckets the crashes, i.e. how it broadly classifies them?
Help me to understand the WinDbg bucket
IMHO, the idea of buckets was introduced for WER (Windows Error Reporting). WER was used by Microsoft but was also available to other companies. WER included a service where you could log in on a Microsoft website and get an overview of your application's crashes.
Of course, people were not interested in a flat list of crashes; they wanted to know how many crashes of the same type occurred. Thus Microsoft and other companies could focus first on fixing the bugs that affected the most users.
The bucket, as the name suggests, is a container where similar problems are grouped. The bucket ID is generated in two phases: a labeling process done on the client side and a classifying process done on the server side.
What you get from !analyze is the classification, so you basically have access via WinDbg to the functionality that Microsoft used on the server side to provide the WER services.
These WER services are not available any more. They have been replaced by something else, but I have forgotten the name.
How does it broadly classify the crashes?
An ideal bucketing algorithm would create a new bucket for each bug, so the number of buckets is limited only by the number of bugs you can code into your application.
The !analyze command implements more than 500 different heuristics. The combination of these can create more than 25,000,000 different buckets.
Buckets can differ because of
stack
modules
function name
function offset
corruptions (heap corruption, image corruption)
detected malware
known outdated programs or libraries
known defective hardware
exception codes
exception subcodes
...
The result of that bucketing process is this line of output:
FAILURE_BUCKET_ID: BREAKPOINT_80000003_ntdll.dll!LdrpDoDebuggerBreak
which is probably somehow equivalent to this hash:
FAILURE_ID_HASH: {06f54d4d-201f-7f5c-0224-0b1f2e1e15a5}
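Purely as a toy illustration (not the real !analyze heuristics), you can think of such a bucket label as a composite key built from a few of the signals listed above; the function below is hypothetical:

// Toy illustration only: compose a bucket label in the shape shown above.
// The real !analyze heuristics are far more elaborate than this.
function bucketLabel(failureType, exceptionCode, module, symbol) {
  return `${failureType}_${exceptionCode}_${module}!${symbol}`;
}

bucketLabel('BREAKPOINT', '80000003', 'ntdll.dll', 'LdrpDoDebuggerBreak');
// => 'BREAKPOINT_80000003_ntdll.dll!LdrpDoDebuggerBreak'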
I have read some of your previous questions in the windbg tag and I get the impression that you want to use the bucket ID to display some meaningful information to humans.
Actually, the WER system provided such a feature. It worked like this: a developer analyzes the crashes in a bucket and finds out what to do (e.g. update a driver, install a newer version of the application, etc.). He then assigns a text to that bucket ID. Any customer who later hit the same crash was redirected to a website at Microsoft containing the text written by the developer.
However, note that there is no magic involved that would translate a crash into something human-readable. That's the developer doing hard work and then creating a mapping from the bucket ID to some text that is displayed.
IMHO, the latter can easily be achieved. However, any new bug will require an analysis first. But, who knows, maybe we can train an AI that does better at this.
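A minimal sketch of such a mapping, assuming you collect the texts yourself as each bucket gets analyzed; the map contents here are made up:

// Hypothetical lookup from FAILURE_BUCKET_ID to developer-written advice.
const bucketNotes = new Map([
  ['BREAKPOINT_80000003_ntdll.dll!LdrpDoDebuggerBreak',
   'Initial breakpoint; expected when the process is started under a debugger.'],
]);

function adviceFor(failureBucketId) {
  return bucketNotes.get(failureBucketId)
    ?? 'No analysis for this bucket yet; a developer needs to look at it first.';
}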
For more on buckets etc., please read the Microsoft paper Debugging in the (Very) Large: Ten Years of Implementation and Experience.

IBM Window Services (DWS) csrevw function on MVS

I'm working on IBM MVS (z/OS) and trying to get Window Services working.
For the CSREVW function, I don't understand the purpose of the pfcount parameter.
According to the documentation, this asks window services to read more than one block after my program references a block that is not in my window.
But how is window services supposed to know that I tried to reference data that is not in my window? I mean, it can't know that I'm reading data outside my window if I don't call CSREVW or CSRVIEW again.
Maybe my main issue is that I have trouble understanding English, but this seems clear to me...
Here is the link to the documentation; this is explained on pages 23-24:
http://publibz.boulder.ibm.com/epubs/pdf/iea3c102.pdf
I know this is a very specific problem about an IBM service and I apologize about that.
Thank you!
Tim
I think the problem you're having is that you need to understand a little bit about how the underlying objects behind the windowing service work in virtual storage.
At the core, the various windowing services work to give you what amounts to a "private" page dataset. You allocate and reference storage, but the objects in that virtual space aren't really in memory - the system's page fault mechanism brings them in as you reference them. So yes, you're accessing data within a "window", but in reality, the data you expect to see may not be "paged in" at that moment.
Going a little deeper, when you first allocate the object, the virtual storage it's mapped to has all of its pages marked "invalid" in the underlying page table entries. That means that as soon as you touch this storage, a page fault interrupt occurs. At this point, the operating system steps in and resolves the page fault by bringing the necessary data into memory, then your program continues, oblivious to all of this processing on your behalf. You're correct that you're just referencing data within the window, but there's a lot going on under the covers to support this.
This is where PFCOUNT comes in...
Let's say you have structures that are, say, 64K long inside your virtual window. It would be sloppy and slow to reference each page of this structure and cause a page fault each time. Much better would be to use PFCOUNT to cause the page you reference and all 15 other pages needed by your object to be paged-in with a single operation. Conversely, if your data was small and you were highly random about how you access it, PFCOUNT isn't going to help you - the next page you reference could be anywhere, and it's actually wasteful to have a large PFCOUNT since you end up bringing in a lot of data you never use.
Hope that makes sense - if you'd like a challenge, take yourself a system dump and examine the system trace entries as you reference data...you'll see a very distinct pattern of page faults, I/O and resumption of your program, and hopefully it will all make sense to you.
From the manual:
,pfcount
Specifies the number of additional blocks you want window services to bring into the window each time your program references data that is not already in the window. The number you specify is added to the minimum of one block that window services always brings in. That is, if you specify a value of 20, window services brings in up to 21. The number of additional blocks ranges from zero through 255.
Note that you get 1 block without asking.
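As a quick back-of-the-envelope check (written in JavaScript purely for illustration, and assuming 4 KB blocks as in the 64 KB example above), the pfcount that covers a whole structure in a single fault is simply its block count minus the one block you get for free:

// How many *additional* blocks should pfcount request so that a single page
// fault brings in a whole structure? (One block always comes in for free.)
// Assumes 4 KB blocks and clamps to the documented 0-255 range.
function pfcountFor(structureBytes, blockBytes = 4096) {
  const blocksNeeded = Math.ceil(structureBytes / blockBytes);
  return Math.min(Math.max(blocksNeeded - 1, 0), 255);
}

pfcountFor(64 * 1024); // => 15, i.e. the faulting block plus 15 more = 16 blocks (64 KB)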

SimpleDB: Guaranteed to see all item attributes if we see the item? (non-consistent read)

I've just discovered an assumption in my use of SimpleDB. I suspect it's safe but would like other opinions since the docs don't seem to cover it.
So say Process 1 stores an item with x attributes. When Process 2 tries to access said item (without consistent read) and finds it, is it guaranteed to have all the attributes stored by Process 1?
I'm excluding the possibility that another process could have changed the data.
I also know that Process 2 has no guarantee of seeing the item unless consistent read is used, I'm just talking about the point when it does eventually see it.
I guess the question is: once I can get an item and am not changing it anywhere else, can I assume it has an ad-hoc fixed schema and access all my expected attributes without checking that they actually exist?
I don't want to be in a situation where I need to keep requesting items until they have all the attributes I need to use them.
Thanks.
Although Amazon makes no such guarantee in the documentation, the current implementation of their eventual consistency guarantees that you'll see all the attributes stored by Process 1 or none of them.
See this thread over at the AWS forums and more specifically this answer by an Amazon employee confirming the behavior (emphasis mine).
I don't think we make that guarantee in the documentation, but the current implementation treats each Put request as a bundle. It won't split the request up and apply the operations piecemeal. You'll get either step-1 responses or step-2 responses until eventual consistency shakes out and leaves you with step-2 responses.
While this is undocumented behavior, I suspect quite a few SimpleDB users are relying on it by now, and as such Amazon is unlikely to change it anytime soon, but that's just my guess.
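In practice this means reader code only needs to retry until the item becomes visible at all; per-attribute existence checks add nothing. A sketch, where getItemAttributes is a hypothetical stand-in for your SimpleDB client's GetAttributes call (non-consistent read):

// getItemAttributes is a hypothetical wrapper around your SimpleDB client's
// GetAttributes call. It should resolve to an object of attributes, or to
// null when the eventually consistent read does not see the item yet.
async function loadItem(getItemAttributes, itemName, retries = 5) {
  for (let attempt = 0; attempt < retries; attempt++) {
    const attrs = await getItemAttributes(itemName);
    if (attrs) {
      // All-or-nothing: if the item is visible at all, every attribute from
      // the original PutAttributes request is visible with it.
      return attrs;
    }
    await new Promise((resolve) => setTimeout(resolve, 200)); // back off and retry
  }
  throw new Error(`Item ${itemName} is not visible yet`);
}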

Line Level Profiling for iPhone

I'm looking for a way to find out how much time is spent in each of my program's source lines when running on the iPhone, similar to what Shark can provide at the method/function level. Is this possible with the standard tools? Are there 3rd-party tools that can provide this sort of granularity?
It wouldn't be necessary to collect profiling data for every line of source code in the project. Ideally one would be able to select specific methods or functions whose performance would be analyzed.
This link talks about how to gather trace data on an iPhone app, and that includes sampling the stack. Unfortunately, I could not tell from the doc if you can have samples drawn at random wall-clock times, or manually when you hit a key combination.
When you have traces, you can get a call tree, and that should get you line-level information. In fact the percent of time a line is responsible for is a simple number, the fraction of stack traces containing the line. The problem is, the UI may not show you that. The fact that that is a useful statistic is not well known.
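To make that concrete: if you have the raw stack samples, the per-line figure is just the share of samples whose stack contains that line. A small sketch (in JavaScript for illustration), with an assumed "file:line" frame format:

// Each sample is one stack trace: an array of 'file:line' frames (assumed format).
// Returns a Map from frame to the fraction of samples containing that frame.
function lineCosts(samples) {
  const counts = new Map();
  for (const stack of samples) {
    for (const frame of new Set(stack)) { // count a line at most once per sample
      counts.set(frame, (counts.get(frame) ?? 0) + 1);
    }
  }
  return new Map([...counts].map(([frame, n]) => [frame, n / samples.length]));
}

lineCosts([
  ['main.m:10', 'parse.m:42'],
  ['main.m:10', 'parse.m:42', 'lex.m:7'],
  ['main.m:12'],
]).get('parse.m:42'); // => 0.666..., the line is on 2 of the 3 sampled stacks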

How can I clear Class::DBI's internal cache?

I'm currently working on a large implementation of Class::DBI for an existing database structure, and am running into a problem with clearing the cache from Class::DBI. This is a mod_perl implementation, so an instance of a class can be quite old between times that it is accessed.
From the man pages I found two options:
Music::DBI->clear_object_index();
And:
Music::Artist->purge_object_index_every(2000);
Now, when I add clear_object_index() to the DESTROY method, it seems to run, but doesn't actually empty the cache. I am able to manually change the database, re-run the request, and it is still the old version.
purge_object_index_every says that it clears the index every n requests. Setting this to "1" or "0" seems to clear the index... sometimes. I'd expect one of those two to work, but for some reason it doesn't do it every time. More like 1 in 5 times.
Any suggestions for clearing this out?
The "common problems" page on the Class::DBI wiki has a section on this subject. The simplest solution is to disable the live object index entirely using:
$Class::DBI::Weaken_Is_Available = 0;
$obj->dbi_commit(); may be what you are looking for if you have uncompleted transactions. However, this is unlikely to be the case, as it tends to complete any lingering transactions automatically on destruction.
When you do this:
Music::Artist->purge_object_index_every(2000);
You are telling it to examine the object cache every 2000 object loads and remove any dead references to conserve memory use. I don't think that is what you want at all.
Furthermore,
Music::DBI->clear_object_index();
Removes all objects from the live object index. I don't know how this would help at all; it's not flushing them to disk, really.
It sounds like what you are trying to do should work just fine the way you have it, but there may be a problem with your SQL or elsewhere that is preventing the INSERT or UPDATE from working. Are you doing error checking for each database query as the perldoc suggests? Perhaps you can begin there or in your database error logs, watching the queries to see why they aren't being completed or if they ever arrive.
Hope this helps!
I've used remove_from_object_index successfully in the past, so that when a page is called that modifies the database, it always explicitly resets that object in the cache as part of the confirmation page.
I should note that Class::DBI is deprecated and you should port your code to DBIx::Class instead.