I'm trying to figure out why my application performs so badly. I did a performance recording in DevTools and I can see that frames usually take about 150 ms, which is too long.
However, I don't understand why a frame takes so much time. There is some JavaScript handling an input event, some DOM manipulation, and some painting, which all together take about 60 ms. So why is the frame 150 ms long?
EDIT
I've enabled some timeline-related DevTools experiments as wOxxOm suggested. There is some kind of Update Layer Tree task.
I would have said it came from too many nodes in your layer, but with some research we can find that someone already had your problem before. To quote the answerer:
In your case, I would guess that you are triggering a fundamental layer invalidation that is forcing it to update a layer high up in the tree hierarchy, which then trickles down the tree and causes each of those layers to be updated. Although it's hard to say without looking at your code.
Either way, if this long Update Layer Tree is consistently happening before the Layout is recalculated, it's most certainly related to that.
I'd advise checking the resources pointed to by Alexander J. Ray, especially the HTML5 Rocks article.
OK,
So we want our robot, a Roomba (the nice vacuum cleaner), to know its location in a given room.
That means we have the map of the room, and the robot is put somewhere and needs to work out where it is located in a short time.
We looked at a lot of algorithms, and the most relevant one was MCL (Monte Carlo Localization) for localizing robots in space.
We are afraid that it is too big for us and don't know where to start.
We would like to write the code in MATLAB.
So if anyone has any idea where we can find code, we would appreciate it a lot.
We are open-minded about the algorithm, so if you have a better one or something else that might work, that would be great. The same goes for the language we write it in.
Thanks.
Liron.
Interesting.
I've read a lot about trying to keep track of where the Roomba is, but it seems like every system that has used only "internal" feedback from the Roomba has ended disastrously; by that I mean systems that try to keep track of wheel positions and so on. The main problem is that you can't account for wheel slip, which changes drastically based on the surface and other factors.
I would recommend using either a stationary sensor that the Roomba can locate itself from, on-board sensors (such as a camera, whiskers, or ultrasonics), or a combination of the two.
STAMP makes a great ultrasonic sensor package called the PING))) that can sense up to 6 ft. I've used it up to 15 feet, but it works great in close proximity for mapping.
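Since the question mentions MCL: to make the idea concrete, here is a rough C++ sketch of a single particle-filter correction step using one range reading. All the names are made up and the map lookup is stubbed; the same structure ports directly to MATLAB if that's your preferred language.

#include <cmath>
#include <random>
#include <vector>

struct Particle { double x, y, theta, weight; };

// Stubbed map query: the range the sensor *should* read if the robot
// were at p. A real version would ray-cast against the room map.
double expected_range(const Particle& p) { return std::hypot(p.x, p.y); }

// One MCL correction step: weight particles by the measurement, then resample.
void mcl_update(std::vector<Particle>& particles, double measured_range,
                double sigma, std::mt19937& rng) {
    // 1. Weight each particle by how well it explains the reading
    //    (Gaussian sensor model with standard deviation sigma).
    std::vector<double> weights;
    weights.reserve(particles.size());
    for (auto& p : particles) {
        double err = measured_range - expected_range(p);
        p.weight = std::exp(-err * err / (2.0 * sigma * sigma));
        weights.push_back(p.weight);
    }
    // 2. Resample in proportion to the weights, so unlikely poses die out
    //    and likely ones multiply.
    std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
    std::vector<Particle> next;
    next.reserve(particles.size());
    for (std::size_t i = 0; i < particles.size(); ++i)
        next.push_back(particles[pick(rng)]);
    particles = std::move(next);
}

int main() {
    std::mt19937 rng(42);
    std::vector<Particle> particles(1000);  // scatter these over the map first
    mcl_update(particles, 2.5, 0.3, rng);   // one update per sensor reading
}

A real implementation alternates this correction step with a motion-model prediction step (move every particle the way the robot moved, plus noise) each time the robot moves.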
hope this helps!
I don't see any way to start/stop profiling in Instruments from code, which kind of kills its usefulness for me in a large number of situations. Am I missing something? Does anyone know of a way to do this?
The fallback approach is to grab performance data on my own, without Instruments. Has anyone tried to do this before? By "performance data" I mean counts of events like cache misses, fills, missed branches, etc.
Thanks!
Update:
I looked into operating the performance-monitor hardware directly from code, but, unsurprisingly, it appears to be a no-go. USEREN, the "user enable register", controls access to the perfmon registers but is not enabled. It might be possible to run privileged or enable user access on a jailbroken phone, but that's a lot of work for some basic profiling... ugh.
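For what it's worth, the fallback that does work without special privileges is plain wall-clock timing of suspect sections with mach_absolute_time. It gives no event counts (cache misses and so on), only elapsed time, but it's enough for before/after comparisons. A minimal sketch:

#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t start = mach_absolute_time();

    // ... code under test ...

    uint64_t elapsed = mach_absolute_time() - start;

    // Ticks are not nanoseconds; convert with the timebase.
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    uint64_t ns = elapsed * tb.numer / tb.denom;
    printf("elapsed: %llu ns\n", (unsigned long long)ns);
    return 0;
}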
Is there a way to simulate user activity on the desktop on Windows? This is the situation: a friend of mine works from home. His company recently decided to provide their employees with a communication tool which they have to keep running in the background. Apart from its main functionality it also has a very intimidating side effect: it tracks user activity. This means that the program monitors keystrokes and mouse movements. If a user is idle for, say, 5 minutes, an icon next to his name indicates his idle status to all other users, much like instant messengers such as Skype. Now, while this may be useful in IM programs, we both find it a bit disturbing in a work-related context, for obvious reasons.
Doing some Google searching only gave me shareware links or cheating tools for MMORPGs, but maybe I searched for the wrong terms. My first guess would be to have a small process running in the background which imitates keystrokes or mouse movements at regular intervals. But maybe there is another way to deal with this. (Oh, and complaining about the lack of privacy to the employer is not an option ;) Also, please note that I don't want to promote laziness or question an employer's rights over his employees.)
Any comments and help appreciated. Thanks!
There is an easy way to make the cursor move in C++.
It's something like:

#include <windows.h>   // SetCursorPos

POINT pos = { 10, 10 };        // target screen coordinates
SetCursorPos(pos.x, pos.y);    // move the mouse cursor there

I don't know if this is the best way, but it works.
If you don't want to write your own program, I'm sure there are a lot of programs on the internet. You just need to google :).
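One caveat: SetCursorPos only repositions the cursor, so it's possible the monitoring tool won't count it as activity. SendInput injects an actual mouse event into the input stream, which hook-based trackers are more likely to see. A minimal sketch; the 4-minute interval is just a guess at the tool's idle threshold:

#include <windows.h>

int main() {
    for (;;) {
        INPUT in = {};
        in.type = INPUT_MOUSE;
        in.mi.dx = 1;                      // a 1-pixel relative nudge
        in.mi.dwFlags = MOUSEEVENTF_MOVE;  // inject a mouse-move event
        SendInput(1, &in, sizeof(in));
        Sleep(4 * 60 * 1000);              // repeat before the idle timeout
    }
}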
I was working on something for a client today when I found a way to break some functionality in our program.
(The code is really legacy code; it's been in development for about 10 years, and I've only been working here for about a year.)
It didn't cause an error or crash the program, but if a user duplicated the behavior, I'm pretty sure they'd be holding up their "WTF?" flag.
In our program we have named fields (textboxes) and static text (labels) that can be linked to the textboxes. When a textbox is not filled in, the label(s) linked to it disappear.
The functionality that I broke was this: if you change the name of a textbox that already has one or more labels linked to it, and save the file without re-associating those labels, the formerly-associated labels still appear when the textbox is blank.
Now my thinking on the matter is that a simple observer pattern could have solved this problem in the first place, but then I didn't write the code.
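To make that concrete, here's a minimal sketch of what I mean; the names are made up, not our actual code. Each label subscribes to its textbox, so a rename or a cleared value notifies every linked label instead of leaving a stale association behind:

#include <functional>
#include <iostream>
#include <string>
#include <vector>

class TextBox {
public:
    void subscribe(std::function<void(const std::string&)> onChange) {
        observers_.push_back(std::move(onChange));
    }
    void setValue(std::string v) {
        value_ = std::move(v);
        for (auto& obs : observers_) obs(value_);  // push new state to listeners
    }
private:
    std::string value_;
    std::vector<std::function<void(const std::string&)>> observers_;
};

int main() {
    TextBox box;
    // A "label" that hides itself whenever its textbox is blank.
    box.subscribe([](const std::string& v) {
        std::cout << (v.empty() ? "label hidden\n" : "label shown\n");
    });
    box.setValue("hello");  // label shown
    box.setValue("");       // label hidden
}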
I was thinking that if I could dig up more situations like this with the guys in my shop, maybe I could talk them into considering unit testing, decoupling, applying patterns where they are called for, and the like.
So I was wondering if anyone had any tips for finding broken (but not error-causing) functionality in any sort of app (web-based, desktop, etc.).
For an app to fail usability, it has to have a defined set of expected behaviors.
"Is this textbox SUPPOSED to do nothing when the enter key is pressed?" Maybe it is, maybe it isn't. I've seen apps where a tester/reviewer reports something that they ASSUME should work another way, when in actuality the client specifically asked that they DON'T want the form submitted on a return key press, but only a submit button click.
So basically you have to define correct behavior before you can determine incorrect behavior.
Hire some testers.
If it has an interface, then one of my favorite unconventional tests is putting 5-10 year old children in front of it. You'd be surprised what they can come up with (especially the younger ones). While this may sound like a joke, it isn't -- it really works, because children don't have the mindset of only going down the expected paths.
And yeah, children are the experts in "breaking things" xP.
Code inspections, i.e. reading the source code: if you had taken the time to read/inspect the source code, looking for "smells" or even just for code whose behavior you don't immediately understand and agree with, you might have been holding up your "WTF?" flag too.
Test, test, test.
Do unexpected things. Start doing one task and switch to another to see if anything goes haywire. Use the back button when you're not supposed to. Open it in two windows. Let it time out.
Test in all browsers, especially IE.
You can find out whether database connections/sessions aren't being released by:
working out the minimum number of connections you need to do something
setting the resource limit to that minimum number
ensuring one "run" of the scenario uses exactly that number (and releases them afterwards)
then running it again a few times... do you run out of connections?
I used to work in a company where programmers regularly forgot to de-allocate DB connections. The standard answer was to reduce the resource limit to a minimum to see if there's a leak, and to try to work out where it is by restarting the system and running different scenarios repeatedly.
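If the codebase is C++, the idiomatic way to make the de-allocation mistake impossible is an RAII guard. A minimal sketch against a placeholder driver API (the names here are made up, not a real library):

#include <cstdio>
#include <memory>

// Placeholder driver API.
struct Connection {};
Connection* acquire_connection() { return new Connection; }
void release_connection(Connection* c) { delete c; std::puts("released"); }

// RAII guard: the connection is released when the guard leaves scope,
// even if an exception is thrown mid-scenario.
using ConnectionGuard =
    std::unique_ptr<Connection, decltype(&release_connection)>;

int main() {
    ConnectionGuard conn(acquire_connection(), &release_connection);
    // ... run the scenario using conn.get() ...
}   // "released" is printed here, no matter how we exit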
The first hour of code review, with the first reviewer, will do the most to find quality problems. But here's the thing: You don't need to convince people of quality problems. You need to convince them of the value of fixing bugs, and of rewriting only when the present quality absolutely justifies it.
I've dealt with some seriously bad code in my time. But you can't just rewrite. You need a spec before you can even tell if the rewrite is an improvement.
Sometimes, you have to infer the spec from the code and then check it against some human somewhere. But by the time you've done that, you understand the code as written and are now better prepared to repair than to rewrite -- most of the time.
Repair proceeds by a process of small behavior-preserving modifications that render the spec more clear in the code. Then, when you find something that looks wrong, you don't just change it. You ask around until you find the person responsible for that decision, and you get them to show you where in the spec it says that behavior X is correct. (This conversation can take many forms.) If you're lucky, they'll tell you that behavior X is in fact incorrect, and then you've earned your pay.
assert()
Also unit testing with coverage analysis.
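For example, in C++ an assert documents and enforces an assumption: it fails loudly in debug builds and is compiled out entirely when NDEBUG is defined:

#include <cassert>

int divide(int a, int b) {
    // Fails loudly in debug builds if the precondition is violated;
    // removed entirely in builds compiled with -DNDEBUG.
    assert(b != 0 && "divide: b must be non-zero");
    return a / b;
}

int main() {
    return divide(10, 2) == 5 ? 0 : 1;
}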
This is particular to the Visual Studio IDE, although it probably also applies to others:
During testing, always at some point run in the debugger with "Break when an exception is thrown" turned on.
This can often expose exceptions that are incorrectly being silently caught; these represent bugs but may otherwise not be evident.
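The anti-pattern this setting exposes looks like this; a C++ sketch, but the same shape exists in any language with exceptions:

#include <stdexcept>

void doWork() { throw std::runtime_error("corrupt input"); }

int main() {
    try {
        doWork();
    } catch (...) {
        // Swallowed: without "break when thrown", the failure
        // silently disappears here and the bug stays hidden.
    }
    return 0;
}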
Code reviews should always also include reviews of the unit test code.
The problem is that with ad-hoc testing it's impossible to know how much, or how well, a developer has tested their code. So you're at the mercy of each developer's definition of the word "done".
If you include reviews of the unit test code at the same time you review the production code, you should have a good idea of whether the code is really complete, where "complete" includes "tested". Not just "Hey, I'll throw it over the wall to the testers!".