WebGPU shows scripting in Idle state - webgpu

I've been tinkering with WebGPU and, when profiling a couple of different pages, I see that the script is marked as Idle. This makes it hard to estimate the actual rendering budget: the time the JS spends executing is quite small, but it looks much larger because of the Idle marking. Is there any way to clear up this Idle time when using WebGPU? Thanks!

Related

Unusual spikes in CPU utilization in CentOS 6.6 while starting PyCharm

For the last couple of days my system has been behaving strangely. I am a regular user of PyCharm, and it used to run on my system very smoothly with no hiccups at all. But for the last couple of days, whenever I start PyCharm, my CPU utilization behaves strangely, as in the image: Unusual CPU util
I am confused because when I look at the processes or run ps/top in a terminal, there is no process using more than 1 or 2% CPU, so I am not sure where these resources are being consumed.
By unusual CPU utilization I mean that first CPU1 is pegged at 100% for a couple of minutes, then CPU2: only one CPU's utilization goes to 100% for some time, followed by the other's. This goes on for 10 to 20 minutes, then the system comes back to normal.
P.S.: I don't think this problem is specific to PyCharm, as I see similar issues while doing other work too; it's just that it always happens with PyCharm.
POSSIBLE CAUSE: I suspect you have a thrashing problem. The CPU usage of your applications is low because none of them are actually getting much useful work done; all the processing is being taken up by moving memory pages to and from the disk. Your CPU usage probably settles down after a time because your application has entered a state where its working set has shrunk to the point where it can all be held in memory at one time.
This has probably happened because one of the apps on your machine is handling a larger data set than before, and so requires more addressable memory. Another possibility is that, for some reason, a lot more apps are running on your machine.
POTENTIAL SOLUTION: There are several ways you can address this. The simplest is to put more RAM on your machine. If this doesn't work or isn't possible, you'll have to figure out which app is the memory hog. You may simply have to work with smaller problems/data-sets or offload some of the apps onto a different box.
MIGRATING CPU LOAD: Operating systems will move tasks (user apps, kernel) around for many different reasons. The reasons can range from it being just plain random to certain apps having more of their addressable memory in one bank vs. another. Given that you are probably doing a lot of thrashing, I'm not surprised that the processor your app runs on appears randomized over time.
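If you want to confirm the thrashing hypothesis, watch swap activity while the spikes are happening (vmstat's si/so columns show it). As a rough, purely illustrative sketch (Swift with Foundation, assuming a Linux /proc filesystem), the same check can be made by sampling the kernel's pswpin/pswpout counters; if they climb quickly between samples, the box is paging hard:

    import Foundation

    // Hypothetical thrashing check: sample the kernel's swap counters twice.
    // If pages are being swapped in/out at a high rate between samples, the
    // "idle-looking" CPUs are really waiting on the disk.
    func swapCounters() -> (swappedIn: Int, swappedOut: Int)? {
        guard let text = try? String(contentsOfFile: "/proc/vmstat", encoding: .utf8)
        else { return nil }
        var swapIn = 0, swapOut = 0
        for line in text.split(separator: "\n") {
            let parts = line.split(separator: " ")
            guard parts.count == 2, let value = Int(parts[1]) else { continue }
            if parts[0] == "pswpin"  { swapIn  = value }
            if parts[0] == "pswpout" { swapOut = value }
        }
        return (swapIn, swapOut)
    }

    if let before = swapCounters() {
        Thread.sleep(forTimeInterval: 5)
        if let after = swapCounters() {
            print("pages swapped in  over 5 s: \(after.swappedIn  - before.swappedIn)")
            print("pages swapped out over 5 s: \(after.swappedOut - before.swappedOut)")
        }
    }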

Create a background thread that executes a command every 4hrs

I am trying to figure out how to use a background thread to execute a command every 4 hours.
I have never created anything like this before, so I have only been reading about it so far. One of the things I have read is this:
"Threads tie up physical memory and critical system resources"
So in that case, would it be a bad idea to have a thread that checks the time and then executes my method, or is there a better option? I have read about GCD (Grand Central Dispatch), but I am not sure if it is applicable, as I think it's more for concurrent requests, not something that repeats over and over checking the time.
Or, finally, is there something I have completely missed that lets you execute a request every 4 hours?
Any help would be greatly appreciated.
There is a maximum time background processes are allowed to run (10 minutes), which would make your approach difficult. Your next best bet is to calculate the next event and save the timestamp somewhere. Then, if the app is launched at or after that time, it can carry out whatever action you want (a sketch of this check follows below).
This might help:
http://www.audacious-software.com/2011/01/ios-background-processing-limits/
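A minimal sketch of that timestamp idea (the defaults key and the refresh call are placeholders of my own, not anything from the answer above): check the saved deadline whenever the app becomes active, do the work if it has passed, and push the deadline forward.

    import Foundation

    // Sketch of the "save a timestamp, check it on launch" approach.
    // Call runIfDue(...) from applicationDidBecomeActive (or the scene
    // equivalent); nothing runs in the background at all.
    enum FourHourTask {
        static let nextRunKey = "nextRunTimestamp"        // assumed defaults key
        static let interval: TimeInterval = 4 * 60 * 60   // 4 hours

        static func runIfDue(_ work: () -> Void) {
            let defaults = UserDefaults.standard
            let nextRun = defaults.double(forKey: nextRunKey) // 0.0 if never set
            let now = Date().timeIntervalSince1970

            if nextRun == 0 || now >= nextRun {
                work()                                        // the 4-hourly job
                defaults.set(now + interval, forKey: nextRunKey)
            }
        }
    }

    // Usage (refreshData() is a placeholder):
    // FourHourTask.runIfDue { refreshData() }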
I think it would be good to make use of a timestamp and post a notification for when the time reaches four hours from now.
Multithreading is not a good way to do this, because essentially you would be running a loop for four hours, eating clock cycles. Thanks to the magic of operating systems this would not tie up an entire core or anything silly like that, but it would be continuously computed if it were allowed to run. That would be a vast waste of resources, so it is not allowed. GCD was not really meant for this kind of thing either: it was meant to allow concurrency to smooth out UI interaction and to complete tasks more efficiently, and a four-hour loop would be inefficient. Think of concurrency as a tool for something like being able to interact with a table while its content is being loaded or changed; GCD blocks make this very easy when used correctly. GCD and other multithreading facilities give you tools to do calculations in the background, interact with databases, and handle requests without ever affecting the user's experience. Many people who are much smarter than me have written extensively on what multithreading/multitasking is and what it is good for.
In a way, posting a notification for a time is a method of multitasking without the nastiness of constantly executing blocks through GCD to wait for the four-hour period. It is possible to do it that way: you could execute a block that monitors the time for less than the maximum lifetime of a thread, and when its execution ends, dispatch it again until the desired time is reached. But that is a bad way of doing this. Post a notification to the notification center; it's easy and will accomplish your goal without your having to deal with the complexity of multithreading yourself.
You can post a notification request observing for a time change and it will deliver its note; however, this requires your application to be active or in the background. I can't guarantee the OS won't kill your application, but if it is nice and quiet with a small memory footprint in the "background" state, its notification center request will remain active and function as intended.
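If the goal is to get the user's attention roughly four hours out even when the app is suspended, a local notification is one concrete way to do it. A rough sketch using the UserNotifications framework (an assumption on my part; the answers above don't name a specific API):

    import UserNotifications

    // Hypothetical sketch: ask the system to fire a local notification about
    // four hours from now. The app does not need to stay running; tapping the
    // notification brings it forward, where the pending work can run.
    func scheduleFourHourReminder() {
        let center = UNUserNotificationCenter.current()
        center.requestAuthorization(options: [.alert, .sound]) { granted, _ in
            guard granted else { return }

            let content = UNMutableNotificationContent()
            content.title = "Time to check in"           // placeholder copy
            content.body = "Four hours have passed."

            let trigger = UNTimeIntervalNotificationTrigger(
                timeInterval: 4 * 60 * 60, repeats: false)

            let request = UNNotificationRequest(
                identifier: "four-hour-task",            // arbitrary identifier
                content: content,
                trigger: trigger)
            center.add(request)
        }
    }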

What sort of things can cause a whole system to appear to hang for 100s-1000s of milliseconds?

I am working on a Windows game and while rendering, some computers will experience intermittent pauses ("hitches" for lack of a better term). When profiled they appear in seemingly random places in the code. Eventually I noticed that it wasn't just my process that was affected, but (seemingly) every process on the system. All of the threads in my application hitch at once. The CPU utilization drops during these hitches and it appears as if most processes make no progress.
This leads me to believe it may be an operating system or driver issue, but it only occurs while playing the game (and only on some systems). What sort of operations might the operating system be doing that would require the kernel to pause all user threads and block? Some kind of I/O? At first I thought of paging, but my impression is that would only affect a single process, no?
Some of the systems in use: Windows, DirectX (3D), NVIDIA cards (unknown whether it reproduces on ATI), overlapped I/O for streaming.
If you have a lot of graphics in use, it may be paging graphics memory into the swap file.
Or perhaps the stream is getting buffered on disk?
It's worth seeing if the hitches coincide with the PC's disk activity LED.
Heavy use of memory-mapped I/O. This of course includes the system pagefile, but it can also include user applications that use memory-mapped files heavily (gcc, for one).

How does Scala's Lift manage state?

I'm quite impressed by what Lift 2.0 brings to the table with Actors and StatefulSnippets, etc, but I'm a little worried about the memory overhead of these things. My question is twofold:
How does Lift determine when to garbage collect state objects?
What does the memory footprint of a page request look like?
If a web crawler dances across the site, is it going to open up enough state objects to swamp a modest VPS (512 MB)? The question is obviously application dependent, but I'm curious whether anyone has real-world figures they can throw at me.
Lift stores state information in a session, so once the session is destroyed the state associated with that session goes away.
Within the session, Lift tracks each page that state is allocated for (e.g., the mapping between an Ajax button in the browser and a function on the server) and has a heartbeat from the browser. Functions for pages that have not seen a heartbeat in 10 minutes are unreferenced so the JVM can garbage-collect them. All of this is tunable, so you can change the heartbeat frequency, function lifespan, etc., but in practice the defaults work quite well.
In terms of session explosion, yeah... that's a minor issue. Popular sites (including http://demo.liftweb.net/ ) experience it. The example code (see http://github.com/lift/lift/tree/master/examples/example/ ) detects sessions that were created by a single request and then abandoned, and expires those early. I'm running demo.liftweb.net with a 256 MB heap (which would fit in a 512 MB VPS); occasionally the session count rises over 1,000, but sessions created by search-engine traffic are quickly tamped down.
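To make the mechanism concrete, here is a toy model of what the answer above describes (in Swift, purely illustrative, not Lift's actual implementation): per-page callbacks keyed by an opaque ID, a heartbeat per page, and a reaper that unreferences callbacks for pages that have gone quiet for 10 minutes.

    import Foundation

    // Toy model only: each rendered page registers server-side callbacks
    // (e.g. the function behind an Ajax button), the browser heart-beats its
    // page ID, and a periodic reaper drops callbacks for pages that have gone
    // quiet so the runtime's garbage collector can reclaim them.
    final class PageStateTable {
        private var callbacks: [String: () -> Void] = [:]   // callbackID -> fn
        private var owningPage: [String: String] = [:]      // callbackID -> pageID
        private var lastHeartbeat: [String: Date] = [:]      // pageID -> last seen
        private let lifespan: TimeInterval = 10 * 60          // 10 minutes

        func register(pageID: String, callbackID: String, _ fn: @escaping () -> Void) {
            callbacks[callbackID] = fn
            owningPage[callbackID] = pageID
            lastHeartbeat[pageID] = Date()
        }

        func heartbeat(pageID: String) {
            lastHeartbeat[pageID] = Date()
        }

        func reap(now: Date = Date()) {
            let deadPages = Set(lastHeartbeat
                .filter { now.timeIntervalSince($0.value) > lifespan }
                .map { $0.key })
            let deadCallbacks = owningPage.filter { deadPages.contains($0.value) }.map { $0.key }
            for id in deadCallbacks {
                callbacks.removeValue(forKey: id)
                owningPage.removeValue(forKey: id)
            }
            for page in deadPages {
                lastHeartbeat.removeValue(forKey: page)
            }
        }
    }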
I think the question about memory footprint was once answered somewhere on the mailing list, but I can’t find it at the moment.
Garbage collection is done after some idle time. There is, however, an example on the wiki which uses some better heuristics to kill off sessions spawned by web crawlers.
Of course, for your own project it makes sense to check memory consumption with something like VisualVM while spawning a couple of sessions yourself.

How to reduce the startup time for a typical iPhone app?

To be clear, this is for a normal iPhone application, and not a game.
A few times I've read about developers mentioning that they were working hard to improve or reduce the startup time of their applications, but never with any good background information on how to do so.
So the question is simple: how can you reduce the startup of iPhone applications?
Same as any other performance issue: Use Shark and/or Instruments to identify bottlenecks in your code, and then focus on how you can speed things up there. Each tool will give you a picture of how much time was spent in what parts of your code, so the general scheme would be to run the tool while you start the app and then pore over the data to see where the performance hits occur.
At app startup time, the most likely candidates for improvement will be deferring data loading until later on when it's actually needed, variously described as "on demand" or "lazy" loading. Essentially, don't load any data at app startup unless it's actually needed right away when the app loads. In practice, lots of stuff that may be needed at some point doesn't have to be immediately available when the app starts. For example, if you have a database of N records but only one is visible at a time, don't load all N into memory at app startup time. Load whatever the current record is and then load the others when you actually need them.
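A small sketch of that on-demand pattern (the Record type and its load function are placeholders): keep only cheap identifiers in memory at launch, and fetch each record the first time it is actually requested.

    import Foundation

    struct Record {
        let id: Int
        let payload: String

        static func load(id: Int) -> Record {
            // Placeholder for the real (expensive) disk or database fetch.
            Record(id: id, payload: "placeholder")
        }
    }

    // Hypothetical sketch of on-demand ("lazy") loading: the store knows every
    // record ID at startup, but only fetches a record the first time it is
    // requested, and caches it after that.
    final class RecordStore {
        private let recordIDs: [Int]               // cheap to hold at launch
        private var cache: [Int: Record] = [:]     // filled as records are viewed

        init(recordIDs: [Int]) {
            self.recordIDs = recordIDs
        }

        var count: Int { recordIDs.count }

        func record(at index: Int) -> Record {
            let id = recordIDs[index]
            if let cached = cache[id] { return cached }
            let loaded = Record.load(id: id)       // only hit storage on demand
            cache[id] = loaded
            return loaded
        }
    }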
James Thomson did a nice blog post documenting his efforts to make PCalc launch faster.
Of particular interest is his use of an image containing a screenshot from the last app run to pull the same trick Default.png does while the rest of the app loads.
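A rough sketch of that trick, assuming UIKit (the file name and call sites are my own choices, not from the post): snapshot the window when the app goes to the background, and at the next launch show that image over the window until the real UI is ready, much as Default.png covers the very first frame.

    import UIKit

    // Hypothetical sketch of the screenshot-as-splash trick: capture the UI
    // when the app goes to the background, then show that image at the next
    // launch while the real interface finishes loading.
    enum LaunchSnapshot {
        static var fileURL: URL {
            FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
                .appendingPathComponent("last-screen.png")
        }

        // Call from applicationDidEnterBackground(_:).
        static func save(from window: UIWindow) {
            let renderer = UIGraphicsImageRenderer(bounds: window.bounds)
            let image = renderer.image { _ in
                _ = window.drawHierarchy(in: window.bounds, afterScreenUpdates: false)
            }
            try? image.pngData()?.write(to: fileURL)
        }

        // Call early in application(_:didFinishLaunchingWithOptions:);
        // remove the returned view once the real content is on screen.
        static func show(in window: UIWindow) -> UIImageView? {
            guard let image = UIImage(contentsOfFile: fileURL.path) else { return nil }
            let overlay = UIImageView(image: image)
            overlay.frame = window.bounds
            window.addSubview(overlay)
            return overlay
        }
    }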