I'm a JS developer learning Scala. I'm used to the Jest test framework in JS, which can watch for changed files and run only the tests for the changed modules.
Is there something similar in Scala?
I know that I can do:
~test to watch all the files and trigger all the tests
~test package/test_class to watch all the files and trigger only the specified test class.
Thanks a lot
I think you're looking for testQuick:
The testQuick task, like testOnly, allows filtering the tests to run down to specific tests or wildcards, using the same syntax to indicate the filters. In addition to the explicit filter, only the tests that satisfy one of the following conditions are run:
The tests that failed in the previous run
The tests that were not run before
The tests that have one or more transitive dependencies (possibly in a different project) that were recompiled.
You can combine it with ~ and the filtering syntax you already know.
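For example, assuming a made-up test class com.example.UserServiceSpec, any of these work in the sbt shell:

~testQuick
~testQuick com.example.UserServiceSpec
~testQuick com.example.*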
I have 5 test scenarios in my 5 different feature files.
TC-1
TC-2
TC-3
TC-4
TC-5
TC-3 and TC-4 are dependent test cases: when scenario TC-3 fails, TC-4 should be skipped automatically and TC-5 should still execute. How can we achieve this in Cucumber? Any suggestions?
Thanks in advance.
I've never worked with Cucumber closely, but I found out it's not possible in Jasmine or pytest, so I assume this is how most test frameworks work.
The problem is that both of them build the queue of tests to execute before the browser starts, and you can't modify it based on runtime results.
See this answer for Jasmine and check whether you can apply the same approach to Cucumber: Nested it in Protractor/Jasmine
To speed up our development workflow we split the tests and run each part on multiple agents in parallel. However, compiling the test sources seems to take most of the time in the testing steps.
To avoid this, we pre-compile the tests using sbt test:compile and build a Docker image with the compiled targets.
Later, this image is used on each agent to run the tests. However, sbt seems to recompile the tests and application sources even though the compiled classes exist.
Is there a way to make sbt use existing compiled targets?
Update: To give more context
The question strictly relates to scala and sbt (hence the sbt tag).
Our CI process is broken down into multiple phases. It's roughly something like this.
stage 1: We use sbt to compile the Scala project into Java bytecode using sbt compile, and compile the test sources in the same step using sbt test:compile. The targets are bundled into a Docker image and pushed to the remote repository.
stage 2: We use multiple agents to split and run tests in parallel.
The tests run from the built Docker image, so the environment is the same. However, running sbt test causes the project to recompile even though the compiled bytecode exists.
To make this clear: I basically want to compile on one machine and run the compiled test sources on another without re-compiling.
Update
I don't think https://stackoverflow.com/a/37440714/8261 is the same problem, because unlike it, I don't mount volumes or build on the host machine. Everything is compiled and run within Docker, but in two build stages. Because of this, the file modification times and paths stay the same.
The debug output has something like this:
Initial source changes:
removed:Set()
added: Set()
modified: Set()
Invalidated products: Set(/app/target/scala-2.12/classes/Class1.class, /app/target/scala-2.12/classes/graph/Class2.class, ...)
External API changes: API Changes: Set()
Modified binary dependencies: Set()
Initial directly invalidated classes: Set()
Sources indirectly invalidated by:
product: Set(/app/Class4.scala, /app/Class5.scala, ...)
binary dep: Set()
external source: Set()
All initially invalidated classes: Set()
All initially invalidated sources:Set(/app/Class4.scala, /app/Class5.scala, ...)
Recompiling all 304 sources: invalidated sources (266) exceeded 50.0% of all sources
Compiling 302 Scala sources and 2 Java sources to /app/target/scala-2.12/classes ...
It has no Initial source changes, but products are invalidated.
Update: Minimal project to reproduce
I created a minimal sbt project to reproduce the issue.
https://github.com/pulasthibandara/sbt-docker-recomplile
As you can see, nothing changes between the build stages, other than the second stage running as a new step (a new container).
While https://stackoverflow.com/a/37440714/8261 pointed at the right direction, the underlying issue and the solution for this was different.
Issue
SBT seems to recompile everything when it's run in different stages of a Docker build. This is because Docker compresses the images created in each stage, which strips the millisecond portion of the lastModifiedDate from the sources.
SBT relies on lastModifiedDate to determine whether sources have changed, and since it differs (in the milliseconds part), the build triggers a full recompilation.
Solution
Java 8:
Setting -Dsbt.io.jdktimestamps=true when running SBT, as recommended in https://github.com/sbt/sbt/issues/4168#issuecomment-417655678, works around this issue.
Newer versions:
Follow the recommendation in https://github.com/sbt/sbt/issues/4168#issuecomment-417658294
I solved the issue by setting the SBT_OPTS environment variable in the Dockerfile like:
ENV SBT_OPTS="${SBT_OPTS} -Dsbt.io.jdktimestamps=true"
The test project has been updated with this workaround.
Using SBT:
I think there is already an answer to this here: https://stackoverflow.com/a/37440714/8261
It looks tricky to get exactly right. Good luck!
Avoiding SBT:
If the above approach is too difficult (i.e. getting sbt test to consider that your test classes do not need re-compiling), you could avoid using sbt and instead run your test suite using java directly.
If you can get sbt to log the java command that it is using to run your test suite (e.g. using debug logging), then you could run that command on your test runner agents directly, which would completely preclude sbt re-compiling things.
(You might need to write the java command into a script file, if the classpath is too long to pass as a command-line argument in your shell. I have previously had to do that for a large project.)
This would be a much hackier approach than the one above, but might be quicker to get working.
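If it helps, here is a minimal sketch of that idea (the task name is made up): a build.sbt task that writes the resolved test classpath to a file during the image build, so the agents can launch the suite with plain java instead of sbt.

// Hypothetical helper: writes the full test classpath to target/test-classpath.txt.
lazy val dumpTestClasspath = taskKey[Unit]("Write the resolved test classpath to a file")

dumpTestClasspath := {
  val cp = (Test / fullClasspath).value
    .map(_.data.getAbsolutePath)
    .mkString(java.io.File.pathSeparator)
  IO.write(target.value / "test-classpath.txt", cp)
}

On an agent you could then run something like java -cp "$(cat target/test-classpath.txt)" org.scalatest.tools.Runner -R target/scala-2.12/test-classes (adjust the runner and the paths to your test framework and project layout).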
A possible solution might be defining your own sbt task without dependencies, or trying to change the test task. For example, you could create a task that runs a JUnit runner if that is your testing framework. To define a task, see the documentation on Implementing Tasks.
You could even go as far as compiling, sending the code, and running it on the remote agents from the same task, since a task can contain any Scala code you want. From the sbt reference manual:
You could be defining your own task, or you could be planning to redefine an existing task. Either way looks the same; use := to associate some code with the task key
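As a rough illustration only (the task name, the runner, and the test class are placeholders, and the classpath handling is simplified), such a task could look like this in build.sbt:

// Hypothetical task: runs an already-compiled JUnit suite directly,
// deliberately avoiding a dependency on the test/compile tasks.
lazy val runPrebuiltTests = taskKey[Unit]("Run pre-compiled tests without triggering compilation")

runPrebuiltTests := {
  val classpath = (Seq(
    (Compile / classDirectory).value,
    (Test / classDirectory).value
  ) ++ (Test / externalDependencyClasspath).value.map(_.data))
    .map(_.getAbsolutePath)
    .mkString(java.io.File.pathSeparator)
  val exit = Fork.java(
    ForkOptions(),
    Seq("-cp", classpath, "org.junit.runner.JUnitCore", "com.example.MyTest") // placeholder suite
  )
  if (exit != 0) sys.error(s"Tests failed with exit code $exit")
}

Because the task only reads the class directories and the external dependency classpath, it does not pull in the compile tasks the way sbt test does.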
tl;dr
Should I skip tests in Maven?
Long version
(I don't know whether this matters.)
I use Eclipse for Java EE, and I have the tests and the code in the proper Maven directories.
Why should I (not) skip the tests during build?
tl;dr
Never make skipping the tests a normal part of your build.
Doing it regularly creates a vicious circle that will make your tests useless and your application less robust.
Long version
Automatic builds from the IDE (Eclipse or any other)
IDEs that use Maven as a build tool don't execute unit tests on each build, because by default they invoke neither the test phase nor the package phase. Instead, they focus on the compiler plugin (compile and test-compile) as well as the resources plugin (resources and test-resources).
So unit tests are not executed by the automatic builds performed by your IDE.
Build with mvn package/install
Why should I (not) skip the tests during build?
Your question refers to Maven, but it could be any other build tool or any programming language and the consequences would be the same.
First, ask yourself why we write unit tests.
We write them to validate the behavior of our components, and also to be warned when our components no longer behave as expected (regression detection).
Regularly skipping test execution means you accept the risk that components covered by unit tests develop regressions in their behavior without you being aware of them.
Later, other regressions may occur and you will still be unaware of them.
Much later, many of your tests will fail because you never corrected them to stay in line with the expected behavior.
Since a test failure breaks the build, this will very probably lead you to skip/disable/comment out/delete unit tests because they fail too often and in too many cases.
Your unit tests will gather dust and you will lose all the benefits they provide.
So why does the skip-tests option exist?
I recommend/use it only for corner cases such as the following (not exhaustive), and only temporarily:
Legacy projects in which many tests were not maintained.
Since correcting them would take too much time, at first you have no choice other than skipping them in order to build your component.
As soon as possible, though, you should disable the unreliable tests (@Ignore with JUnit), correct those that can be corrected, or plan to fix them. That way you can re-enable test execution and have the working tests run regularly.
Projects where the tests may fail depending on the execution environment.
Here again it should be a temporary and isolated workaround.
Indeed, as soon as possible you should find a way to make the tests pass regardless of the environment in which they are executed.
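For reference, the actual switches are:
mvn package -DskipTests (compiles the test sources but does not run them)
mvn package -Dmaven.test.skip=true (neither compiles nor runs them)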
Right now when I run Karma it picks up all *.spec.js files and runs them. We have both unit and integration tests, and when I run unit tests I don't want the integration specs to run. What is the best way to tell Karma which specs to load (a name pattern, etc.)? I need to know how to configure Karma so that it runs only a specific type of spec.
Any suggestion with example/link is highly appreciated.
Assuming you can identify your integration tests by filename (e.g. *.integration.spec.js), I would have two separate karma.conf.js files: one for integration (with ./*/*.integration.spec.js in the files list), and one for regular work (with the same pattern in exclude). Then just run the one you need, or have them both running in separate consoles.
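For instance, the integration config could look roughly like this (the file name, the patterns, and the Jasmine framework are assumptions about your setup; the unit config would list the same pattern under exclude instead of files):

// karma.integration.conf.js (hypothetical name)
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'src/**/*.js',                    // application code
      'src/**/*.integration.spec.js'    // only the integration specs
    ],
    browsers: ['ChromeHeadless'],
    singleRun: true
  });
};

You would then run it with karma start karma.integration.conf.js.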
It would be great to improve test-driven development productivity by automatically firing off tests whenever there is a code change.
This is what I'm hoping for.
Whenever a Scala file is saved, SBT (or a shell script) should execute the ScalaTest or Specs2 specifications.
When the tests complete, the system should play a sound indicating success or failure.
I'm using Scala IDE for development and SBT to run my test specs at the moment, so just automating the steps above would save a person from switching to the console, running the test specs and waiting for the result.
Any ideas for automatically firing off the tests and playing a 'success' or 'failure' sound would be great.
Why not create a custom SBT task that depends on the test task? You could add the code for playing the sound to your build definition.
See here for how to run a custom task after another task.
To automatically re-run tests, simply prefix the newly defined task with a ~ in the SBT shell before running it.
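A minimal sketch of what that could look like in build.sbt (the task name and .wav paths are placeholders, and the playback code is deliberately crude):

// Hypothetical task: runs the tests, then plays a success or failure sound.
lazy val testWithSound = taskKey[Unit]("Run tests and play a sound for the result")

testWithSound := {
  def play(path: String): Unit = {
    import javax.sound.sampled.AudioSystem
    val clip = AudioSystem.getClip()
    clip.open(AudioSystem.getAudioInputStream(new java.io.File(path)))
    clip.start()
    Thread.sleep(2000) // give the clip a moment to play before the task returns
  }
  (Test / test).result.value match {
    case Value(_) =>
      play("sounds/success.wav")
    case Inc(_) =>
      play("sounds/failure.wav")
      sys.error("Tests failed")
  }
}

In the sbt shell, ~testWithSound will then re-run the tests (and the sound) on every source change.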