SWTBot doesn't work - Eclipse

I'm following the book Eclipse Plugin Development by Example: Beginner's Guide, and all of its examples are hosted on GitHub. However, I can't successfully run the SWTBot example.
The first time it takes a very long time to run, but in the end all test cases pass.
However, when I run the same code a second time, only testUI() passes; the other three fail with org.eclipse.swtbot.swt.finder.exceptions.WidgetNotFoundException: The widget was null.
Somewhere in the book it says:
If one (shell) is not currently visible, it polls (every 500 milliseconds by default) until one is found or the default timeout period (5 seconds) ends, at which point a WidgetNotFoundException is thrown
But I have no idea why all test cases pass the first time yet fail the second time.
I have also reported this as a GitHub issue, but so far no one has responded.

Did you interfere with your desktop while the test was running? I have found this can (!) cause problems with SWTBot.
Also, WidgetNotFoundException is an exception you'll be seeing a lot when using this framework. Sometimes it is due to bugs, sometimes to unusual underlying UI code. It should be reproducible in those cases, though.
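If the widgets are genuinely just slow to appear on the second run, one thing worth trying is raising SWTBot's default 5-second timeout before the tests start. This is only a minimal sketch: SWTBotPreferences and its TIMEOUT/PLAYBACK_DELAY fields are standard SWTBot API, but the concrete values and the base-class setup are assumptions on my part.

    import org.eclipse.swtbot.swt.finder.utils.SWTBotPreferences;
    import org.junit.BeforeClass;

    public class BaseSWTBotTest {

        @BeforeClass
        public static void raiseTimeouts() {
            // SWTBot polls every 500 ms until the widget appears or the timeout expires.
            // The default timeout is 5000 ms; 30000 ms is only an illustrative value.
            SWTBotPreferences.TIMEOUT = 30000;
            // Optional: add a small playback delay to reduce flakiness (value is a guess).
            SWTBotPreferences.PLAYBACK_DELAY = 10;
        }
    }

If the tests still fail with a larger timeout, the widget really is missing rather than late, which points back at test isolation (for example, state left over from the first run).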

Related

Program execution is too slow in Eclipse and was fast just yesterday for the same program

I am running a Java program via Eclipse. Yesterday the exact same program took only 10 minutes to execute; today it is taking more than an hour, and I have not changed a single thing in my code. Could you please give me a solution to get back to the execution time I had yesterday?
If you did not change anything in your source code, I see the following possible reasons for this:
Side effects on the machine you are running the program on, e.g. other (possibly hidden) processes soaking up CPU time and slowing down your program.
The machine itself could also be slower (thermal throttling from too much heat, etc.).
Your code is doing some "random" things that sometimes require longer runs (sounds unlikely, though; the timing sketch after this list can help check this).
Eclipse is somehow causing the issue (try running your program outside of it).
Your Java runtime might be causing a problem (sounds unlikely as well, but maybe updating it to the newest version helps).
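If you suspect the slowdown is inside the program rather than in the environment, a minimal sketch like the one below can narrow down which part of the run got slower. The phase names are hypothetical placeholders for whatever your program actually does:

    public class PhaseTimer {
        public static void main(String[] args) {
            long t0 = System.nanoTime();
            loadInput();                        // hypothetical phase
            long t1 = System.nanoTime();
            process();                          // hypothetical phase
            long t2 = System.nanoTime();
            writeOutput();                      // hypothetical phase
            long t3 = System.nanoTime();

            System.out.printf("load:    %d ms%n", (t1 - t0) / 1000000L);
            System.out.printf("process: %d ms%n", (t2 - t1) / 1000000L);
            System.out.printf("output:  %d ms%n", (t3 - t2) / 1000000L);
        }

        // Stubs standing in for the real work; replace them with your actual phases.
        private static void loadInput()   { }
        private static void process()     { }
        private static void writeOutput() { }
    }

Comparing these numbers between a fast run and a slow run tells you whether the extra hour is spent in your own code or somewhere outside it.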

Is GWTTestCase obsolete? Are there better alternatives?

I'm trying to figure out the status of the GWTTestCase suite/methodology.
I've read some things which say that GWTTestCase is kind of obsolete. If this is true, then what would be the preferred methodology for client-side testing?
Also, although I haven't tried it myself, someone here says that he tried it and that it takes seconds or tens of seconds to run a single test; is this true? (I.e. is it common for a test to take tens of seconds with GWTTestCase, or is it more likely a configuration error on our side, etc.?)
Do you use any other methodology for GWT client-side testing that has worked well for you?
The problem is that any GWT code has to be compiled to run within a browser. If your code is just Java, you can run it in a typical JUnit or TestNG test, and it will run as instantly as you expect.
But consider that a JUnit test must be compiled to .class, and run in the JVM from the test runner main() - though you don't normally invoke this directly, just start it from your build tool or IDE. In the same way, your GWT/Java code must be compiled into JavaScript, and then run in a browser of some kind.
That compilation is what takes time: for a minimal test, running in only one browser (i.e. one permutation), this is going to take a minimum of about 10 seconds on most machines (that includes compiling the host page for the GWTTestCase, which lets the JVM tell the browser which test to run and collect results, stack traces, or timeouts). Then add how long the tested part of your project takes to compile, and you should have a good idea of how long that test case will take.
There are a few measures you can take to minimize the time taken, though 10 seconds is pretty much the bare minimum if you need to run in the browser.
Use test suites - these tell the compiler to go ahead and make a single larger module in which to run all of the tests (see the sketch after this list). Downside: if you do anything clever with your modules, joining them into one might have other ramifications.
Use JVM tests - if you are just testing a presenter, and the presenter is pure Java (with a mock view), then don't bother running the code in the browser just to test its logic. If you are concerned about differences, consider whether the purpose of the test is to make sure the compiler works or to exercise the logic.
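As a rough illustration of the test-suite point, GWT ships a GWTTestSuite class (in com.google.gwt.junit.tools) that batches several GWTTestCase classes into one synthetic module, so the compile cost is paid once per suite rather than once per test class. The test class names below are hypothetical:

    import com.google.gwt.junit.tools.GWTTestSuite;
    import junit.framework.Test;

    public class AllClientTests {

        public static Test suite() {
            // One synthetic module is compiled for the whole suite, so the ~10 second
            // compile overhead is paid once rather than once per GWTTestCase class.
            GWTTestSuite suite = new GWTTestSuite("All GWT client-side tests");
            suite.addTestSuite(MyWidgetGwtTest.class);  // hypothetical GWTTestCase subclass
            suite.addTestSuite(MyFormGwtTest.class);    // hypothetical GWTTestCase subclass
            return suite;
        }
    }

For the JVM-test point, a presenter with a mocked view is plain Java, so it can be covered by an ordinary JUnit test with no GWT compilation at all.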

Part of DROOLS ruleflow not working randomly

We have a main ruleflow which calls 8 more rule flows (Rule1.rf to Rule8.rf) through an AND splitter. One of the rule flows, say Rule4.rf, sometimes fires and sometimes does not.
This is an online application and we use JBoss. When the server is started, everything works fine. After many hours, Rule4.rf is not fired at all for some requests, while for others it is fired properly.
We even posted the same request again and again, and the issue happens only some of the time. There is no difference in the logs between the successful and failing requests, except that the log entries from Rule4.rf are missing in the failing requests.
We are using Drools 5.1 and Java 6.
Please help me. This is creating a very big issue.
It is very difficult to figure out what might be going on without being able to look at the actual code and logs. Could you by any chance create a JIRA and attach the process and, if possible, (a part of) an audit log that shows the issue?
Kris
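If it helps with producing that audit log, below is a minimal sketch of how one is typically written out with the Drools 5.x knowledge API; the session variable, process id, and file name are placeholders for whatever your application already uses:

    import org.drools.logger.KnowledgeRuntimeLogger;
    import org.drools.logger.KnowledgeRuntimeLoggerFactory;
    import org.drools.runtime.StatefulKnowledgeSession;

    public class AuditLogExample {

        public static void runWithAudit(StatefulKnowledgeSession ksession) {
            // Writes an event log (audit.log) recording which rules fired and which
            // ruleflow nodes were triggered for this session.
            KnowledgeRuntimeLogger logger =
                    KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "audit");
            try {
                ksession.startProcess("MainRuleFlow");  // placeholder process id
                ksession.fireAllRules();
            } finally {
                logger.close();  // flush the log before inspecting it
            }
        }
    }

The resulting file can then be opened in the Drools Eclipse audit view to compare which ruleflow nodes were (or were not) triggered for a failing request versus a successful one.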

Any way to see method execution times in Xcode?

I need to debug a certain ViewController, and I can't seem to pinpoint exactly what is causing the lag before the view shows.
Is there any debugging tool in Xcode that will show me how long my methods take to run, so I can at least find the right place to start?
Instruments has had a profiler built in since iOS 4.0 (before that you used a stand-alone profiling tool called Shark).
Here's a quick little tutorial that will get you started: http://blancer.com/tutorials/flex/78335/apple-profiling-tools-shark-is-out-instruments-is-in/
If you don't know about Instruments, you should. It's how you know what's really going on inside your code while it runs.
Apart from Time Profiler, as suggested by Dan, you can also use the Sampler instrument, which stops a program at prescribed intervals and records the stack trace for each of the program's threads. You can use this information to determine where execution time is being spent in your program and improve your code to reduce running time.
The main difference between Sampler and Time Profiler:
The Sampler instrument operates on a single process, while Time Profiler can operate on a single process or on all processes.

Tips for finding things in your program that are broken that you don't know about?

I was working on something for a client today when I found a way to break some functionality in our program.
(The code is really legacy code, it's been in development for about 10 years and I've only been working here for about a year.)
It didn't cause an error or crash the program, but if a user was using the program and duplicated the behavior I'm pretty sure they'd be holding up their "WTF?" flag.
In our program we have named fields (textboxes) and static text (labels) that can be linked with the textboxes. When a textbox is not filled in, the label(s) linked to it disappear.
The functionality that I broke was this: if you change the name of a textbox that already has one or more labels linked to it and save the file without re-associating those labels, the formerly associated labels appear when the textbox is blank.
Now my thinking on the matter is that a simple observer pattern could have solved this problem in the first place, but then I didn't write the code.
I was thinking that if I could dig up more situations like this with the guys in my shop, that maybe I could talk them into considering unit testing, decoupling, applying patterns where they are called for and the like.
So for this reason I was wondering if anyone has tips for finding broken (but not error-causing) functionality in any sort of app (web-based, desktop, etc.).
For an app to fail usability, it has to have a defined set of expected behaviors.
"Is this textbox SUPPOSED to do nothing when the enter key is pressed?" Maybe it is, maybe it isn't. I've seen apps where a tester/reviewer reports something that they ASSUME should work another way, when in actuality the client specifically asked that they DON'T want the form submitted on a return key press, but only a submit button click.
So basically you have to define proper behaviour before you can determine incorrect behavior.
Hire some testers.
If it has an interface, then one of my favorite unconventional tests is putting 5-10 year old children in front of it. You'd be surprised what they can come up with (especially the younger ones). While this may sound like a joke, it isn't -- it really works, because children don't have the mindset of only going through the expected paths.
And yeah, children are the experts in "breaking things" xP.
Code inspections, i.e. reading the source code: if you had taken time to read/inspect the source code, looking for "smells" or even just looking for code whose behaviour you don't immediately understand and agree with, you might have been holding up your "WTF?" flag too.
Test, test, test.
Do unexpected things. Start doing one task and switch to another to see if anything goes haywire. Use the back button when you're not supposed to. Open it in two windows. Let it time out.
Test in all browsers, especially IE.
You can find database connections/sessions aren't released by:
working out the minimum number of connections you need to do something
setting resource limits to that minimum number
ensuring one "run" of the scenario uses exactly that number (and releases the connections afterwards)
then run it again a few times... do you run out of connections?
I used to work in a company where programmers regularly used to forget to de-allocate db connections. The standard answer was to reduce the resource to a minimum to see if there's a leak - and to try to work out where it is by restarting the system and running different scenarios repeatedly.
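As a minimal sketch of the same idea with a modern connection pool (HikariCP here, which is my assumption; any pool with a maximum-size and checkout-timeout setting works the same way): cap the pool at the minimum the scenario needs and give checkouts a short timeout, so a leaked connection becomes a fast, visible failure instead of slow starvation. The URL and credentials are placeholders.

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public class LeakHuntingPool {

        public static HikariDataSource createStrictPool(String jdbcUrl, String user, String pass) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(jdbcUrl);
            config.setUsername(user);
            config.setPassword(pass);
            config.setMaximumPoolSize(1);            // the minimum the scenario should need
            config.setConnectionTimeout(2000);       // fail fast if a connection was never returned
            config.setLeakDetectionThreshold(5000);  // warn when a checkout is held suspiciously long
            return new HikariDataSource(config);
        }

        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials; point this at a test database.
            try (HikariDataSource ds = createStrictPool("jdbc:h2:mem:test", "sa", "")) {
                for (int i = 0; i < 10; i++) {
                    // If any iteration forgets to close its connection, the next
                    // getConnection() call times out after 2 seconds and throws.
                    try (Connection c = ds.getConnection()) {
                        // run the scenario under test here
                    }
                }
            }
        }
    }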
The first hour of code review, with the first reviewer, will do the most to find quality problems. But here's the thing: You don't need to convince people of quality problems. You need to convince them of the value of fixing bugs, and of rewriting only when the present quality absolutely justifies it.
I've dealt with some seriously bad code in my time. But you can't just rewrite. You need a spec before you can even tell if the rewrite is an improvement.
Sometimes, you have to infer the spec from the code and then check it against some human somewhere. But by the time you've done that, you understand the code as written and are now better prepared to repair than to rewrite -- most of the time.
Repair proceeds by a process of small behavior-preserving modifications that render the spec more clear in the code. Then, when you find something that looks wrong, you don't just change it. You ask around until you find the person responsible for that decision, and you get them to show you where in the spec it says that behavior X is correct. (This conversation can take many forms.) If you're lucky, they'll tell you that behavior X is in fact incorrect, and then you've earned your pay.
assert()
Also unit testing with coverage analysis.
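For the assert() suggestion, here is a minimal Java sketch (run the JVM with -ea so assertions are enabled); the class and field names echo the textbox/label example from the question and are purely hypothetical:

    public class FieldLink {
        private String textboxName;
        private String labelText;

        public void linkLabel(String textbox, String label) {
            this.textboxName = textbox;
            this.labelText = label;
            // Invariant: a label is never left linked to a missing or unnamed textbox.
            assert textboxName != null && !textboxName.isEmpty()
                    : "label '" + label + "' is linked to an unnamed textbox";
        }
    }

Assertions like this turn silent "WTF?" states into immediate failures during testing, while costing nothing in production runs where assertions are disabled.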
This is particular to the Visual Studio IDE, although it probably also applies to others:
During testing, always at some point run in the debugger with "Break when an exception is thrown" turned on.
This can often help expose exceptions which are incorrectly being silently caught and which represent bugs, but otherwise may not be evident.
Code reviews should always also include reviews of the unit test code.
The problem is that with ad-hoc testing it's impossible to know how much, or how well, a developer has tested their code. So you're at the mercy of each developer's definition of the word "done".
If you include reviews of the unit test code at the same time you review the production code, you should have a good idea of whether the code is really complete; and here "complete" includes "tested", not just "Hey, I'll throw it over the wall to the testers!".