Change location/file path of tests in Protractor

I am using Protractor to run end-to-end tests, and I was wondering whether it is possible to change the location of the tests it runs. I am hoping to pass the location in as a command-line parameter.
For example, this is the current setup:
Protractor currently looks for the tests in the path ./tests and then runs the features in the features folder.
I have done a lot of looking around and cannot find where this path is defined. I want to be able to pass a parameter on the command line, along the lines of --params.tests="C:\path\to\tests".
EDIT: I am using Mocha as my test framework

I am assuming you have configured Cucumber as a custom framework in your Protractor config file and that you trigger the tests by running conf.js.
Your current setup might be:
specs: ['tests/features/*.feature'],
cucumberOpts: {
    // Points to the step definition files and other dependencies the scenarios need
    require: 'tests/steps/*.js',
},
To accept these values at run time, pass them on the command line instead:
protractor conf.js --specs tests/features/*.feature --cucumberOpts.require tests/steps/*.js
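If you would rather keep a single conf.js and pass the location at run time, another option is to resolve the paths when the config file loads. Note that Protractor's --params.* values are exposed to the tests via browser.params, but the specs list is built when conf.js is evaluated, so an environment variable (or your own argument parsing) is the simpler hook. A minimal sketch, assuming the protractor-cucumber-framework package; TESTS_DIR is a hypothetical variable name:

// conf.js - resolves the test location at load time; TESTS_DIR is hypothetical
var path = require('path');
var testsDir = process.env.TESTS_DIR || 'tests'; // fall back to the default folder

exports.config = {
    framework: 'custom',
    frameworkPath: require.resolve('protractor-cucumber-framework'),
    specs: [path.join(testsDir, 'features/*.feature')],
    cucumberOpts: {
        require: path.join(testsDir, 'steps/*.js'),
    },
};

Run it with, for example, set TESTS_DIR=C:\path\to\tests before calling protractor conf.js on Windows (or TESTS_DIR=/path/to/tests protractor conf.js on Unix shells).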

Related

Can NUnit console runner pass some command line arguments to nunit-agent?

We use the NUnit console to run our tests in Jenkins and we have many projects that share some tests. We want to be able to run the tests concurrently and to do that we need the tests to look at different databases.
I would like to pass the project name in to nunit-agent, which wouldn't know how to use it, but we would be able to fetch it from the command-line arguments of the running test and decide which database to look at.
I am open to suggestions.
We currently use "C:\Program Files (x86)\NUnit.org\nunit-console\nunit3-console.exe" Path\Tests.dll --result=nunit-result1.xml to run the tests
nunit-agent's arguments are used to pass information that NUnit itself needs. For passing information to your tests, the standard way is the --params command-line option; the values can then be read inside your tests via TestContext.Parameters.
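A minimal sketch, assuming NUnit 3.x; the ProjectName parameter and ProjectA value are hypothetical:

"C:\Program Files (x86)\NUnit.org\nunit-console\nunit3-console.exe" Path\Tests.dll --params:ProjectName=ProjectA --result=nunit-result1.xml

Inside a test (C#), read the value back, optionally with a default for local runs:

// "LocalDb" is a hypothetical fallback for runs without the parameter
string project = TestContext.Parameters.Get("ProjectName", "LocalDb");

Each Jenkins job can then pass a different ProjectName and the tests pick the matching database.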

Karma spec classification

Right now when I run Karma it picks up all *.spec.js files and runs them. We have both unit and integration tests, and when I am running unit tests I don't want the integration specs to run. What is the best way to tell Karma which specs to load - a name pattern, for example? I need to know how to configure Karma so that it runs only a specific type of spec.
Any suggestion with example/link is highly appreciated.
Assuming you can identify your integration tests by filename (e.g. *.integration.spec.js), I would have two separate karma.conf.js files: one for integration (with ./*/*.integration.spec.js in the files list) and one for regular work (with the same pattern in exclude). Then just run the one you need - or run both in separate consoles.
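A minimal sketch of the two configs; the patterns, paths, and the mocha framework entry are assumptions to adapt to your project:

// karma.unit.conf.js - everything except the integration specs
module.exports = function (config) {
    config.set({
        frameworks: ['mocha'],
        files: ['src/**/*.spec.js'],
        exclude: ['src/**/*.integration.spec.js'],
        browsers: ['Chrome'],
    });
};

// karma.integration.conf.js - only the integration specs
module.exports = function (config) {
    config.set({
        frameworks: ['mocha'],
        files: ['src/**/*.integration.spec.js'],
        browsers: ['Chrome'],
    });
};

Then karma start karma.unit.conf.js runs just the unit tests, and karma start karma.integration.conf.js runs just the integration tests.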

Structuring webpack config for use with karma testing

I would like to create a test suite which will be run with Karma against my app, which uses webpack to build itself. I have two entry points, app and vendors, set up through my webpack.config.js file here. The resulting bundle.js should contain both of these entry points.
My karma (mocha) tests residing in test/spec/*_spec.js are currently pointing to specific components, via require statements like:
var app = require('../src/scripts/App')
They also use React/JSX, which seems to be causing problems during the test run, where I get JSX errors:
Module parse failed: /Users/dmarr/src/status/test/spec/app_spec.js Line 10: Unexpected token <
You may need an appropriate loader to handle this file type.
I'd like to keep test runs as quick as possible, and build times fast when testing with webpack-dev-server during development, by limiting Babel transforms to only where they are needed.
What do I need to do in karma.conf.js to get my builds working? Here is the karma.conf.js file I'm playing around with.
Note that I do have this working without breaking out the vendor bundle here: https://github.com/bitwise/status
Thanks for any help,
Dave
In a similar setup, disabling CommonsChunkPlugin (for testing only) worked for me. Give it a shot!
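For reference, a minimal sketch of that idea with karma-webpack, assuming webpack 1.x (where the plugin lives under webpack.optimize.CommonsChunkPlugin); the spec paths mirror the ones in the question:

// karma.conf.js - reuse webpack.config.js but strip CommonsChunkPlugin for tests
var webpack = require('webpack');
var webpackConfig = require('./webpack.config.js');

// karma-webpack bundles each spec file itself, so the app's entry points
// and chunk splitting are unnecessary here and can break the test build.
delete webpackConfig.entry;
webpackConfig.plugins = (webpackConfig.plugins || []).filter(function (plugin) {
    return !(plugin instanceof webpack.optimize.CommonsChunkPlugin);
});

module.exports = function (config) {
    config.set({
        frameworks: ['mocha'],
        files: ['test/spec/*_spec.js'],
        preprocessors: { 'test/spec/*_spec.js': ['webpack'] },
        webpack: webpackConfig,
        browsers: ['Chrome'],
    });
};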

Running NCover from code

Is it possible to run NCover automatically from code, instead of running NCover manually or via the command line?
Here is the scenario: I have written a few tests; I execute all of them, and after they complete, NCover should run automatically for that particular test project and store the coverage report as XML in a given location.
Is this possible to do? Kindly help.
Running NCover from the command line was the only option with NC3. When we released NC4, the default became this: you create a project, the NCover service watches for a process to start that meets the match rules defined in the project, and it then collects coverage on that process.
This doc may be of some help: http://www.ncover.com/support/docs/desktop/user-guide/coverage_scenarios/how_do_i_collect_data_from_nunit
If you have more questions, please reach out to us at support@ncover.com.

Change a value in a file and run all tests again

I wrote an integration test suite using NUnit. Since we're talking integration tests, test code uses configuration files, the file system, database and so on.
However, I noticed that it would be nice to change the test environment (i.e. change a value inside a configuration file, which would change the code's behavior in some cases), and then run the full test suite again using this new environment.
Is there a way to automate this using NUnit? I already have code that updates the file, so if I can somehow set things up programmatically, great.