How do I profile the performance of NUnit tests that are run through ReSharper?

I'm using ReSharper to run my NUnit tests, and I'd like to improve their performance.
I know that ReSharper uses a built-in version of NUnit. How do I set up ReSharper/NUnit so that I can run my unit tests through a profiler and see where I can best spend my time when optimizing?

Actually, with dotTrace you can profile NUnit tests right from Visual Studio or Rider; you don't need to attach to the process: https://www.jetbrains.com/help/profiler/Profiling_Guidelines__Profiling_Unit_Tests.html
Please note that dotTrace is part of the ReSharper Ultimate package, so you need the corresponding license.

I profile my code with dotTrace. You can start your tests and profile them with dotTrace (I start the test run and attach dotTrace to the process). A second option is to use Stopwatch: wrap the code you want to measure, print the elapsed time, and analyze the code based on the results.
// using System.Diagnostics;
Stopwatch sw = Stopwatch.StartNew();
// code that you want to measure
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
These are the things that come to my mind.

Related

How to search a test name in NUnit GUI

I have written many tests, and when I open the NUnit GUI to run some of them, it takes me a while to find the test name in the tree the GUI displays.
Is there an option like Ctrl+F in NUnit to find a test?
There is no option like that (yet) in either of the NUnit GUI applications.
When asking questions, it's a good idea to be very specific about what you are using. Of course, if you have only seen one "NUnit GUI", you might assume that there are no others. But it's still helpful to give the name of the command you execute and the version of the software (at a minimum) in case it turns out that there are more applications than you are aware of.
In this case, the answer is the same for both the NUnit version 2 GUI "nunit.exe" and the TestCentric GUI (currently version 1.2), which runs NUnit tests. There is a plan for the TestCentric GUI to eventually support such a search. The V2 GUI is legacy software and is no longer actively developed.

Testing autocompletion in an Eclipse plugin

I am working on an Eclipse plug-in that provides property auto-completions with an ICompletionProposalComputer contributed via the org.eclipse.wst.sse.ui.completionProposal extension point.
I'd like to create automated tests for the functionality but have no idea where to start. How can I write automated tests for my proposal computer?
Some time ago a colleague and I had a similar problem while implementing an IContentAssistProcessor for a SourceViewer-based editor in a console view.
We started with an integration test that simulated a Ctrl+Space key stroke within the console editor and expected a shell with a table that holds the proposal(s) to show up.
Here is such a test case: ConsoleContentAssistPDETest. It uses a ConsoleBot that encapsulates the key-stroke simulation, and a custom AssertJ assertion that hides the details of waiting for the shell to open, finding the table, etc. (ConsoleAssert)
With such a test in place we were able to implement a walking skeleton. We developed individual parts of the content proposal code test-driven with unit tests.
Instead of writing your own bot you may also look into SWTBot which provides an API to write UI/functional tests.
I ended up writing a simple SWTBot test. Once I have the editor open, it's pretty simple to get the list of autocompletion proposals:
SWTBotEclipseEditor editor = bot.editorByTitle("index.html").toTextEditor();
editor.insertText("<html>\n<div ></div>\n</html>");
editor.navigateTo(1, 5);
List<String> proposals = editor.getAutoCompleteProposals("");

Automated testing developer environments

We use gradle as our build tool and use the idea plugin to be able to generate the project/module files. The process for a new developer on the project would look like this:
Pull from source control.
Run 'gradle idea'.
Open IDEA and be able to develop without any further setup.
This all works nicely, but generally only gets exercised when a new developer joins or someone gets a new machine. I would really like to automate the testing of this more frequently in the same way we automate our unit/integration tests as part of our continuous integration process.
Does anyone know if this is possible and if there is any libraries for doing this kind of thing?
You can also substitute 'eclipse' for 'idea', as we have a similar process for those who prefer using Eclipse.
The second step (with or without step one) is easy to smoke test (just execute the task as part of a CI build), the third one less so. However, if you are following best practices and regenerate IDEA files rather than committing them to source control, developers will likely perform both steps more or less regularly (e.g. every time a dependency changes).
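As a minimal sketch of that CI smoke test, a small check run right after the `gradle idea` task could fail the build when the expected project files are missing. The class name, file extensions, and exit-code convention here are my assumptions; adjust them for your project layout.

```java
import java.io.File;

// Hypothetical CI smoke test: after running './gradlew idea', verify that
// the expected IDEA project files were actually generated.
public class IdeaFilesCheck {

    // Returns true if the directory contains at least one file with the
    // given extension (e.g. "ipr" or "iml").
    static boolean hasFileWithExtension(File dir, String ext) {
        File[] matches = dir.listFiles((d, name) -> name.endsWith("." + ext));
        return matches != null && matches.length > 0;
    }

    public static void main(String[] args) {
        File projectDir = new File(args.length > 0 ? args[0] : ".");
        for (String ext : new String[] {"ipr", "iml"}) {
            if (!hasFileWithExtension(projectDir, ext)) {
                System.err.println("missing ." + ext + " file - 'gradle idea' output incomplete");
                System.exit(1); // non-zero exit fails the CI build
            }
        }
        System.out.println("idea files present");
    }
}
```

Running this as a build step after `gradle idea` covers step 2 automatically; it still says nothing about whether the project opens cleanly in the IDE.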
As Peter noted, the real challenge is step 3. The first two are solved by your SCM and the Gradle task. You could try automating the last step by doing something like this:
Identify the proper command-line option, on your platform, that opens a specified IntelliJ project from the command line.
Find a simple, good-enough scenario that validates that the generated project works as it should, e.g. a clean followed by a build. Make sure you can reproduce these steps using keyboard shortcuts only. Validation can be done by checking the produced artifacts, test result reports, etc.
Use an external library, like Robot, to script starting IntelliJ and replaying your keyboard shortcuts. Here's a simple example with Robot. Use a dynamic language with a built-in console rather than pure Java; it will speed up your scripting a lot.
Another idea would be to write a small daemon plugin for IntelliJ that accepts commands from an external CLI. Otherwise, get in touch with the IntelliJ team; they may have something to ease your work here.
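The Robot approach above could be sketched roughly like this. Everything here is an assumption for illustration: the `--run` guard, the choice of Ctrl+F9 ("Build Project" in IntelliJ's default keymap), and the fixed delay; a real script would need proper waits and some way to validate the build result afterwards.

```java
import java.awt.AWTException;
import java.awt.GraphicsEnvironment;
import java.awt.Robot;
import java.awt.event.KeyEvent;

// Hypothetical sketch: drive an already-open IntelliJ instance by
// replaying keyboard shortcuts with java.awt.Robot.
public class IdeaSmokeDriver {

    // Press and release a key while holding Ctrl.
    static void pressWithCtrl(Robot robot, int keyCode) {
        robot.keyPress(KeyEvent.VK_CONTROL);
        robot.keyPress(keyCode);
        robot.keyRelease(keyCode);
        robot.keyRelease(KeyEvent.VK_CONTROL);
    }

    public static void main(String[] args) throws AWTException {
        // Safety guard: only touch the real keyboard when asked explicitly
        // and when a display is available.
        if (args.length == 0 || !"--run".equals(args[0]) || GraphicsEnvironment.isHeadless()) {
            System.out.println("dry run: pass --run (with a display) to drive the IDE");
            return;
        }
        Robot robot = new Robot();
        robot.setAutoDelay(250);               // give the IDE time to react between events
        pressWithCtrl(robot, KeyEvent.VK_F9);  // assumed: Ctrl+F9 = Build Project
        // ...then validate produced artifacts or report files externally.
    }
}
```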
Notes:
Beware of false negatives: any failure could be caused by external issues, like project instability. Try to make sure you only build from a validated, working project.
Beware of false positives: any assumption or unchecked result code could hide issues. Make sure you properly clean the workspace and installation, so that you have a repeatable state and a standard scenario matching first use.
Final thoughts: while interesting from a theoretical angle, this automation exercise may not bring all the required results, i.e. the validation of the platform. Still, it's an interesting learning experience and could serve as material for a nice short talk, especially if you find out interesting stuff. Make it a beer challenge with your team when you have a few idle hours, and see who can implement a working solution fastest ;) Good luck!

Show only specific Tests or TestFixtures in NUnit via a configuration file or another way

I have a bunch of NUnit tests in several TestFixtures. Currently, I just display all the tests for everyone. Is there a way to hide some tests and/or test fixtures? I have various "customers", and they don't all need to see every test. For example, I have engineers using low-level tests, and a QA department using higher-level tests. If I could have a configuration (XML?) file that I distributed with the DLL, that would be ideal. Can someone point me to documentation and an example? I did search the NUnit site and did not see anything.
I am aware of the [Ignore] attribute, and I suppose a somewhat acceptable solution would be a configuration file that can apply Ignore to various tests or test fixtures. I'd hand out a different version of the configuration file to each customer. At least that way certain customers would not be able to run certain tests.
I'm using version 2.5.5
Ideas?
Thanks,
Dave
Yes - if the tests are in separate assemblies, this can be accomplished by proper configuration of your NUnit projects. However, this is not an option if the tests are in one large test assembly; if that is the case, you may wish to break up the test assembly. Here is the documentation on the NUnit Project Editor: http://www.nunit.org/index.php?p=projectEditor&r=2.2.10
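With the separate-assemblies approach, a .nunit project file can define one configuration per "customer", each listing only the assemblies that customer should see. The assembly names below are made up for illustration; the file structure follows the NUnit 2.x project format.

```xml
<NUnitProject>
  <Settings activeconfig="QA" />
  <!-- Engineers see both the low-level and the high-level tests -->
  <Config name="Engineering" binpathtype="Auto">
    <assembly path="LowLevelTests.dll" />
    <assembly path="HighLevelTests.dll" />
  </Config>
  <!-- QA only sees the high-level tests -->
  <Config name="QA" binpathtype="Auto">
    <assembly path="HighLevelTests.dll" />
  </Config>
</NUnitProject>
```

You could then ship a different .nunit file to each customer, or keep one file and select a configuration when launching the runner (nunit-console 2.x has a /config option for this).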

Writing Logs/results or generating reports using Selenium C# API

I am testing a web application using Selenium RC. Everything works fine, and I have written many test cases that I execute with NUnit.
Now the hurdle I am facing is how to keep track of failures and how to generate reports.
Please advise on the best way to capture this.
Because you're using NUnit, you'll want to use NUnit's reporting facilities. If the GUI runner is enough, that's great. But if you're running from the command line, or from NAnt, you'll want to use the XML output; take a look at the NAnt documentation for more information.
If you're using NAnt, you'll want to look at NUnit2Report. It's no longer maintained, but it may suit your needs. Alternatively, you could extract its XSLT files and apply them against the XML output.
Selenium itself doesn't produce reports, because it is only a library used from many different languages.
For anyone else happening randomly into this question, the 'nunit2report' task is now available in NAntContrib.
NantContrib Nunit2report task
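A minimal NAnt target combining the two steps could look like this sketch. The paths and assembly name are assumptions, and the attribute names are from memory; check the NAnt `<nunit2>` and NAntContrib `<nunit2report>` documentation for the exact syntax of your versions.

```xml
<target name="test-report">
  <!-- Run the tests and write NUnit's XML results -->
  <nunit2>
    <formatter type="Xml" usefile="true" extension=".xml" outputdir="results" />
    <test assemblyname="build/MyApp.SeleniumTests.dll" />
  </nunit2>
  <!-- Turn the XML results into an HTML report -->
  <nunit2report out="results/report.html">
    <fileset>
      <include name="results/*.xml" />
    </fileset>
  </nunit2report>
</target>
```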