WatiN Unit Tests with NUnit - Timing Problem

Hi, I am using WatiN (version 2.0.10.928) with NUnit (2.5.2.9222).
If I have something like:
[Test]
public void WebPageTest()
{
    string url = "www.google.com";
    IE ie = new IE(url);
    ie.TextField(Find.ByTitle("Google Search")).TypeText("Watin");
    ie.Button(Find.ByName("btnG")).Click();
    ie.Element(Find.ByText("WatiN")).Click();
    // ie.WaitForComplete();
    Assert.IsTrue(ie.Text.Contains("Welcome at the WatiN"));
    ie.Close();
}
Usually this will work and the test will pass, but sometimes when I hit the assert it seems that WatiN hasn't finished loading the page and is still on the previous page. I have this problem when using the IE.Text or IE.Url properties. I tried using WaitForComplete() (even though that shouldn't be necessary) but still sometimes have the same problem.
Has anybody had this problem with WatiN before?
Has anybody successfully managed to use WatiN with NUnit like this? Or maybe it would work better with a different unit testing framework like MbUnit? Has anyone had better luck with MbUnit?

The test framework you use will make no difference, I'm afraid -- this is one of the "gotchas" of any screen-scraping test framework, and WatiN is no different.
The WaitForComplete() call is definitely necessary, I'm afraid.
Some of my colleagues have reported that the version of IE can make a difference; IE6 in particular has some internal timing issues that can cause problems. IE8 appears to be quite a bit better.

I've had the same problem with my tests; unfortunately it doesn't seem as though you can assume that the WaitForComplete() that's supposed to be inherent in the Click() method will function correctly. Even explicitly calling WaitForComplete() afterward hasn't always worked.
As a last resort we have used System.Threading.Thread.Sleep(int timeout_in_milliseconds) to force the browser to give the page time to load. This isn't a completely bulletproof means of doing it, but it has eliminated about 90% of these sorts of errors. For the timeout we have used anything from 500 to 2000 milliseconds, depending on how long it takes to load and how quickly we want the test to run.
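A slightly more robust alternative to a fixed sleep is to poll for the expected text with an upper bound, so the test continues as soon as the page is ready instead of always paying the full delay. This is only a sketch: the helper name and the timeout values are mine, and it assumes ie.Text reflects the currently loaded page, as in the question.
// Polls the page text until the expected string appears, or gives up after maxWaitMs.
private static bool WaitForText(IE ie, string expectedText, int maxWaitMs)
{
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    while (stopwatch.ElapsedMilliseconds < maxWaitMs)
    {
        if (ie.Text.Contains(expectedText))
        {
            return true;
        }
        System.Threading.Thread.Sleep(100); // short pause between checks
    }
    return false;
}
The assert then becomes Assert.IsTrue(WaitForText(ie, "Welcome at the WatiN", 5000)); and the worst case is bounded by the timeout rather than a guess.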

Try using:
[Test]
public void WebPageTest()
{
    string url = "www.google.com";
    IE ie = new IE(url);
    ie.TextField(Find.ByTitle("Google Search")).TypeText("Watin");
    var btnG = ie.Button(Find.ByName("btnG"));
    btnG.ClickNoWait();
    ie.WaitForComplete(400);
    var elementWatin = ie.Element(Find.ByText("WatiN"));
    elementWatin.ClickNoWait();
    ie.WaitForComplete(400);
    Assert.IsTrue(ie.Text.Contains("Welcome at the WatiN"));
    ie.Close();
}
Thanks
Gandhi Rajan

I have used ie.WaitForComplete(), but it still sometimes times out before the page has finished loading, so I use:
Settings.AttachToBrowserTimeOut = 200;
Settings.WaitForCompleteTimeOut = 200;
This worked for me.
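For what it's worth, these are static properties on WatiN's Settings class, so they apply to every IE instance created afterwards. One place to set them is a fixture-level setup method; this is a minimal sketch using NUnit 2.x attributes, with the values copied from the answer above.
[TestFixtureSetUp]
public void FixtureSetUp()
{
    // WatiN.Core.Settings is global; these values affect all IE instances created after this point.
    Settings.AttachToBrowserTimeOut = 200;
    Settings.WaitForCompleteTimeOut = 200;
}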

Related

MultiSphereTraceByChannel Issue in UnrealEngine 4.26.0 (Preview2)

I am currently learning Unreal Engine and have made huge progress in a short time.
I worked with version 4.25.1 for a while and created a project there without having any problems so far.
Yesterday I switched from version 4.25.1 to 4.26.0 (Preview 2) and observed some strange behaviour.
In the older version I used "MultiSphereTraceByChannel" in one of my Actor Blueprints and everything worked fine. Looping through the hit result array gave me all hits of the trace against specific objects.
Now in 4.26.0 it seems that it no longer works properly. Every hit returns an ImpactPoint of [0,0,0].
Here is a shortened example of my BluePrint:
So first I trace every hit with MultiSphereTraceByChannel against specific objects, and then I try to render a sphere at each ImpactPoint of the found hits.
This worked in the older version but does not anymore...
Does anyone have any ideas/suggestions/questions?
To me it seems that something changed in the newer version which prevents MultiSphereTraceByChannel from working like it did before.
Sincerely
OlsonLong
Edit:
This problem also happens with the regular SphereTraceByChannel function!
After some investigation I found the problem/solution, and it still bothers me that this happens in 4.26... The problem is the Start and End vectors of the sphere trace (it doesn't matter which tracer).
In my example I use ActorLocation as both the Start and(!) End vector, which leads to this struggle. If I shift the End vector by just 1 unit on Z, everything works fine again.
This is the corrected version of the example above:
It works with both MultiSphereTraceByChannel and SphereTraceByChannel.
But it still bothers me why this happens, because now I have to change ALL my sphere traces and shift them by only 1 unit.

condition not working sometimes with gwt

I'm having a rare issue in my code. I have a method that performs a very simple validation based on a string variable:
private void showNextStep(String psCondition, String poStepName){
    int liCurrentStep = Integer.valueOf(poStepName);
    String lsNextTrueStep = moSteps[liCurrentStep][4];
    String liNextFalseStep = moSteps[liCurrentStep][5];
    if ("Yes".equals(psCondition)){
        moFrmStepsContainer.getField(liNextFalseStep).hide();
        moFrmStepsContainer.getField(lsNextTrueStep).show();
    } else {
        moFrmStepsContainer.getField(liNextFalseStep).show();
        moFrmStepsContainer.getField(lsNextTrueStep).hide();
    }
}
Now, here is the tricky part: if I execute the application in debugging mode, it does the validation right all the time; however, if I don't, it always goes to the else block (or at least I think so). I tried using JS alerts (I have a class that calls JS methods) to debug manually and check the values of the variables; the values were all right and the validation was also good. This means that only when debugging, or when putting alerts at the beginning of the if block, does it do the validation right; otherwise it always goes to the else. What do you think it could be?
It might be worth mentioning that this is a web application made in NetBeans 6.9, using the GWT 2.1 framework. The application runs in Firefox 25.0.1.
Thank you!
UPDATE
Here is the code of the event that calls my method
final ComboBoxItem loYesNo = new ComboBoxItem("cmbYesNo" + moSteps[liStepIndex][0], "");
loYesNo.setValueMap("Yes", "No");
loYesNo.setVisible(false);
loYesNo.setAttribute("parent", liStepIndex);
loYesNo.addChangedHandler(new ChangedHandler() {
    public void onChanged(ChangedEvent poEvent){
        String lsStepName = loYesNo.getName();
        FormItem loItem = moFrmStepsContainer.getField(lsStepName);
        String liStepNumber = String.valueOf(loItem.getAttributeAsInt("parent"));
        showNextStep((String) poEvent.getItem().getValue(), liStepNumber);
    }
});

EF6/Code First: Super slow during the 1st query, but only in Debug

I'm using EF6 RC1 with the Code First strategy, without pre-generated views, and the problem is:
If I compile and run the exe application, it takes about 15 seconds to run the first query (that's okay, since I'm still working on the pre-generated views). But if I use Visual Studio 2013 Preview to debug the exact same application, it takes almost 2 minutes before running the first query:
Dim Context = New MyEntities()
Dim Query = From I in Context.Itens '' <--- The debug takes 2 minutes in here
Dim Item = Query.FirstOrDefault()
Is there a way to remove this extra time? Am I doing something wrong here?
P.S.: The context itself is not complicated; it's just full of 200+ tables.
Edit: Found out that the problem is that at debug time EF appears to be generating the views, ignoring the pre-generated ones.
Using the source code from EF I discovered that the property:
IQueryProvider IQueryable.Provider
{
    get
    {
        return _provider ?? (_provider = new DbQueryProvider(
            GetInternalQueryWithCheck("IQueryable.Provider").InternalContext,
            GetInternalQueryWithCheck("IQueryable.Provider").ObjectQueryProvider));
    }
}
is where the time is being consumed. But this is strange since it only takes time in debug. Am I missing something here?
Edit: Found more info related to the question:
Using Process Monitor (by Sysinternals) I found out that it's the 'devenv.exe' process that is consuming tons of time. To be more specific, it's consuming time on a 'Thread Exit'. It repeats the Thread Exit stack 36 times. I don't know if this info is very useful, but I saved a '.csv' with the stack; here is its body: [...] (edit: removed the '.csv' body, I can post it again in the comments if someone really thinks it's going to be useful, but it was confusing and too big.)
Edit: Installed VS2013 Ultimate and Entity Framework 6 RTM. Installed the Entity Framework Power Tools Beta 4 and used it to generate the Views. Nothing changed... If I run the exe it takes 20 seconds, if I 'Start' debugging it takes 120 seconds.
Edit: Created a small project to simulate the error: http://sdrv.ms/16pH9Vm
Just run the project inside the environment and directly through the .exe, click the button and compare the loading time.
This is a known performance issue in Lazy (which EF is using) when the debugger is attached. We are currently working on a fix (the current approach we are looking at is removing the use of Lazy). We hope to ship this fix in a patch release soon. You can track progress of this issue on our CodePlex site - http://entityframework.codeplex.com/workitem/1778.
More details on the coming 6.0.2 patch release that will include a fix are here - http://blogs.msdn.com/b/adonet/archive/2013/10/31/ef6-performance-issues.aspx
I don't know if you have found the solution, but in my case I had a similar issue which cost me close to a week of trying different suggestions. Finally, I found a solution by setting optimizeCompilations="true" in my web.config, and performance improved dramatically from 15-30 seconds to about 2 seconds.
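For reference, the setting that answer mentions lives on the compilation element in web.config, so it only applies when the application is hosted under ASP.NET; a minimal sketch of the relevant fragment (the rest of the config file is assumed):
<system.web>
  <!-- optimizeCompilations speeds up dynamic recompilation after a change -->
  <compilation debug="true" optimizeCompilations="true" />
</system.web>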

Perl CGI gets parameters from a different request to the current URL

This is a weird one. :)
I have a script running under Apache 1.3, with Apache::PerlRun option of mod_perl. It uses the standard CGI.pm module. It's a regularly accessed script on a busy server, accessed over https.
The URL is typically something like...
/script.pl?action=edit&id=47049
Which is then brought into Perl the usual way...
my $action = $cgi->param("action");
my $id = $cgi->param("id");
This has been working successfully for a couple of years. However we started getting support requests this week from our customers who were accessing this script and getting blank pages. We already had a line like the following that put the current URL into a form we use for customers to report an issue about a page...
$cgi->url(-query => 1);
And when we view source of the page, the result of that command is the same URL, but with an entirely different query string.
/script.pl?action=login&user=foo&password=bar
A query string that we recognise as being from a totally different script elsewhere on our system.
However crazy it sounds, it seems that when users are accessing a URL with a query string, the query string that the script is seeing is one from a previous request on another script. Of course the script can't handle that action and outputs nothing.
We have some automated test scripts running to see how often this happens, and it's not every time. To throw some extra confusion into the mix, after an Apache restart, the problem seems to initially disappear completely only to come back later. So whatever is causing it is somehow relieved by a restart, but we can't see how Apache can possibly take the request from one user and mix it up with another.
This, it appears, is an interesting combination of Apache 1.3, mod_perl 1.31, CGI.pm and Apache::GTopLimit.
A bug was logged against CGI.pm in May last year: RT #57184
Which also references CGI.pm params not being cleared?
CGI.pm registers a cleanup handler in order to clean up all of its cache.... (line 360)
$r->register_cleanup(\&CGI::_reset_globals);
Apache::GTopLimit (like Apache::SizeLimit mentioned in the bug report) also has a handler like this:
$r->post_connection(\&exit_if_too_big) if $r->is_main;
In mod_perl versions before 1.31, post_connection and register_cleanup appear to push onto a stack, while in 1.31 it appears that the GTopLimit handler clobbers the CGI.pm entry. So if your GTopLimit function fires because the Apache process has got too large, then CGI.pm won't be cleaned up, leaving it open to returning the same parameters the next time you use it.
The solution seems to be to change line 360 of CGI.pm to:
$r->push_handlers( 'PerlCleanupHandler', \&CGI::_reset_globals);
Which explicitly pushes the handler onto the list.
Our restart of Apache temporarily resolved the problem because it reduced the size of all the processes and gave GTopLimit no reason to fire.
And we assume it has appeared over the past few weeks because new developments have increased the size of the Apache processes, including something that wasn't there before.
All tests so far point to this being the issue, so fingers crossed it is!

Quickly Testing Database Connectivity within the Entity Framework

[I am new to ADO.NET and the Entity Framework, so forgive me if this question seems odd.]
In my WPF application a user can switch between different databases at run time. When they do this I want to be able to do a quick check that the database is still available. What I have easily available is the ObjectContext. The test I am performing is getting the count of the total records of a very small table; if it returns a result then it passes, and if I get an exception then it fails. I don't like this test, but it seemed the easiest to do with the ObjectContext.
I have tried setting the connection timeout in the connection string and on the ObjectContext, and neither seems to change anything for the first scenario, while the second one is already fast, so it isn't noticeable whether it changes anything.
Scenario One
If the connection was down before the first access, it takes about 30 seconds before it gives me the exception that the underlying provider failed.
Scenario Two
If the database was up when I started the application and I accessed it, and then the connection drops while in use, the test is quick and returns almost instantly.
I want the first scenario described to be as quick as the second one.
Please let me know how best to resolve this, and if there is a better way to quickly test connectivity to a DB, please advise.
There really is no easy or quick way to resolve this. The ConnectionTimeout value is getting ignored by the Entity Framework. The solution I used is creating a method that checks whether a context is valid by passing in the location you wish to validate and then getting the count from a known very small table. If this throws an exception the context is not valid, otherwise it is. Here is some sample code showing this.
public bool IsContextValid(SomeDbLocation location)
{
    bool isValid = false;
    try
    {
        context = GetContext(location);
        context.SomeSmallTable.Count();
        isValid = true;
    }
    catch
    {
        isValid = false;
    }
    return isValid;
}
You may need to use context.Database.Connection.Open()
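If the 30-second wait in scenario one is the main pain point, one option is to probe the database with a plain provider connection and a short connect timeout before touching the ObjectContext at all. This is only a sketch under the assumption of a SQL Server backend; the method name and the 2-second timeout are illustrative, and the connection string would come from wherever the application stores it per location.
public bool CanConnect(string connectionString)
{
    // Override the connect timeout just for this probe so an unreachable server fails fast.
    var builder = new System.Data.SqlClient.SqlConnectionStringBuilder(connectionString)
    {
        ConnectTimeout = 2
    };

    try
    {
        using (var connection = new System.Data.SqlClient.SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            return true;
        }
    }
    catch (System.Data.SqlClient.SqlException)
    {
        return false;
    }
}
A successful probe doesn't guarantee that the subsequent EF query will succeed, so the IsContextValid check above is still useful as a second line of defence; the probe just keeps the common "server unreachable" case from stalling for 30 seconds.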