I've written some test scripts and run them using the test runner. This has worked great, but unfortunately I didn't record the bugs via the test runner at the time (I just failed the steps and added comments).
When I review the Test Run Results I can create a bug for the Dev Team, but the Repro Steps box doesn't get populated with anything meaningful (see image 1). I've spotted that the Dev Team can get to the test results via the 'Links' tab on the bug, but it's a bit clunky.
However, if I create a bug directly from the test runner, the bug's Repro Steps box gets populated with exactly what the Dev Team needs.
Does anyone know how/if I can generate the same nicely formatted test runner Repro Steps if I create a bug from the Test Results after testing? (I've got a few to do, so I'd prefer not to re-run the tests...)
You can write some code that reads the test run content and copies it into the bug, because the Repro Steps field is just HTML in the bug's content.
The best approach would be to create the bug first and then link the test run to it.
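Here is a rough sketch of that approach in Node.js, assuming the TFS / Azure DevOps REST API is reachable and a personal access token is available. The Repro Steps field ID (Microsoft.VSTS.TCM.ReproSteps) and the JSON-patch work item update are standard; the base URL, api-version values, the shape of the iteration details, and the IDs in the final call are assumptions you would need to adjust for your server:

const BASE = 'https://dev.azure.com/your-org/your-project'; // or http://yourtfs:8080/tfs/Collection/Project
const PAT = process.env.TFS_PAT; // personal access token
const auth = 'Basic ' + Buffer.from(':' + PAT).toString('base64');

async function copyResultToBug(runId, resultId, bugId) {
  // 1. Read the test result, asking for the per-step (iteration) details.
  const res = await fetch(
    `${BASE}/_apis/test/Runs/${runId}/results/${resultId}?detailsToInclude=Iterations&api-version=6.0`,
    { headers: { Authorization: auth } }
  );
  const result = await res.json();

  // 2. Build the Repro Steps HTML. The property names used here (iterationDetails,
  //    actionResults, outcome, errorMessage) are what newer API versions return;
  //    check the raw JSON from your own server and adjust as needed.
  const steps = (result.iterationDetails?.[0]?.actionResults ?? [])
    .map((a, i) => `<li>Step ${i + 1}: ${a.outcome}${a.errorMessage ? ' - ' + a.errorMessage : ''}</li>`)
    .join('');
  const html = `<p>Test case: ${result.testCase?.name ?? ''}</p><ol>${steps}</ol>`;

  // 3. Patch the bug: Repro Steps is just an HTML field on the work item.
  await fetch(`${BASE}/_apis/wit/workitems/${bugId}?api-version=6.0`, {
    method: 'PATCH',
    headers: { Authorization: auth, 'Content-Type': 'application/json-patch+json' },
    body: JSON.stringify([
      { op: 'add', path: '/fields/Microsoft.VSTS.TCM.ReproSteps', value: html }
    ])
  });
}

copyResultToBug(42, 100000, 1234); // hypothetical run, result and bug IDs

Run it once per failed result and the bug picks up much the same HTML the test runner would have written.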
I have a question; I have looked through a lot of posts and nothing seems to work.
I am trying to run my project on my Mac.
Whenever I use "<>" placeholders in my Cucumber feature file, only the lines that use them show the following message: "step does not have a matching glue code".
I am totally sure that the same step name is connected to its method in my page objects.
The weird thing is that on my Windows computer the project runs perfectly; I don't know why this happens.
I hope you can help me.
I added ember-cli-blanket to my project and managed to get it working fine; localhost:4200/tests?coverage shows the coverage data. However, it includes files such as 'project/components/modal-dialog' or 'project/components/modal-dialog-overlay' in the results, which are not files in the project but are included by Ember because the project uses a modal dialog in one of the template files. These extra files don't give me anything new, since I'm not testing the Ember codebase, and they actually muddle the results by mixing other code in with my own. The project is still small, and with ~11 actual files needing testing, there were around 12 files I had to add to the loaderExclusions in blanket-options.js. Some could be gotten rid of with an exclusion like:
loaderExclusions: ['project/initializers'],
But for the ones under project/components, I do want to test the components that are part of the project, so I had to exclude each one individually. And there's no guarantee that excluding all initializer files won't come back to bite me if I end up with files in there that I do want to test. Considering how small the project is so far, and the fact that there are more exclusions than actual files, this doesn't seem like a sustainable solution.
Am I doing something wrong in my setup? Is this something I can solve with my filter, which is currently at the default of:
filter: '/.*project/.*/',
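To give a fuller picture, the relevant parts of my blanket-options.js look roughly like this (a trimmed-down sketch assuming the usual module.exports shape; the real exclusion list has around 12 entries):

module.exports = {
  // only instrument modules under the project's namespace
  filter: '/.*project/.*/',
  loaderExclusions: [
    'project/initializers',
    // addon-provided modules pulled in by the modal dialog, excluded one by one
    'project/components/modal-dialog',
    'project/components/modal-dialog-overlay'
    // ...and so on for each of the remaining addon-provided files
  ]
};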
Any help would be appreciated.
As a sidenote, I've been looking into testem with Istanbul, as well as Karma, as other options for coverage data in Ember, but have been unable to get anywhere with them. If you have suggestions on the setup for those, that would also be fine.
It doesn't look like there's anything wrong with your setup. What you're seeing is apparently due to how blanket.js works. See this issue for more information: https://github.com/sglanzer/ember-cli-blanket/issues/17
I was using ember-cli-blanket, then found ember-cli-code-coverage. As of this writing, the sandersky fork, at commit 9f1dd33f, works great for me and solves the type of problem you're describing.
https://github.com/kategengler/ember-cli-code-coverage/pull/11
It solves the problem using Istanbul instead of Blanket.
I use karma to run jasmine specs. Every time a file changes the tests are run. This is awesome but it would be even better if the previous test output would be removed from the terminal.
Is there a way to clear the terminal before tests are run in karma?
It is not supported. I like this idea though. I created issue #1004 to support this.
The issue Sylvain created is still open, but there is a plugin, karma-clear-screen-reporter, which does the job. Internally it uses this bit of magic:
console.log('\u001b[2J\u001b[0;0H');
Alternatively, you can place this at the top of your test runner code to clear the console yourself.
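For example, it can go in whichever file bootstraps your specs (the filename below is just an example; any file Karma loads before the tests will do):

// test-main.js -- loaded by Karma before the specs run
// '\u001b[2J' clears the terminal and '\u001b[0;0H' moves the cursor to the top-left corner.
console.log('\u001b[2J\u001b[0;0H');

// ...the rest of the test bootstrap / spec imports follow as before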
Context
My answer is the same as bluenote10's.
The difference is that I made some improvements to the package he posted and published the result on my GitHub:
Karma Clear Screen
Improvements
An improved initial message.
The date of the update.
Improved spacing, in addition to the console.log('\u001b[2J\u001b[0;0H'); call.
I'm using Michael Romer's fantastic ZF-Boilerplate and have hit a snag when testing.
When I view the code coverage reports, it only shows the code coverage for the actual unit tests, not for the code being tested.
I've looked high and low for instances of this happening, but can't find anything. As far as I can see, the phpunit.xml file (https://github.com/michael-romer/zf-boilerplate/blob/master/tests/phpunit.xml) is configured correctly for the directory structure (https://github.com/michael-romer/zf-boilerplate).
Is there anyone that can see why it's not working?
Typical... a couple of minutes after I posted, I figured it out. I moved the phpunit.xml up a directory, modified the paths inside it to reflect this, tried again, and it worked as expected.
I've managed to set up unit tests for my library in Xcode 4. I've performed builds with tests that I know will pass and fail (i.e. STAssertTrue(YES) and STAssertTrue(NO)) just to make sure it's working. I'm using the default Apple SenTest libraries, following this document.
However, when my tests are running I'm getting this error in the build log :
An internal error occurred when handling command output: -[IDEActivityLogSectionRecorder endMarker]: unrecognized selector sent to instance 0x20310b580
To be clear, it's not affecting the running of the tests at all, just the output in the build window. All the tests run each time, so I can tell pass/fail by looking to see if the build succeeds or fails.
However, when my tests fail I can't find out which one failed, because the output seems to stop when it gets to that error.
Does anyone have experience with unit testing / Xcode 4 / this error?
I just posted this on another thread, but I'm going in the opposite direction for Xcode 4.
Please see my blog post exploring the topic, and leave a comment if you think I'm wrong.
I realise it doesn't directly answer your question, but forget SenTestingKit and use GHUnit. It'll take you about 10 minutes to figure out (much more straightforward than OCUnit) and will save you a lot of headaches. IMHO, Apple should be shipping it with Xcode instead of OCUnit.
GHUnit can run your tests in a true application environment (with a GUI), or on the command line. It literally just drops into your existing project as a separate target.
https://github.com/gabriel/gh-unit