java.lang.Exception: No runnable methods (JUnit / Eclipse)

I have a suite, inside which I have added the test class.
I am using surefire to run my JUnits.
My test class name ends with "Test" and its methods have @Test annotations.
How can this problem be resolved?

Here are various suggestions for this incomplete question (for those unfortunate enough to be brought here by google looking for an answer to this common issue):
If using JUnit 4.x, just use annotations (@Test); don't create a test suite: see this question for details (a minimal sketch is shown after this list).
The original question said the @Test annotation is being used, which should prevent the error. But it can still happen if there are other errors that occur earlier, and JUnit can hide the original problem behind this message. E.g., see if there are problems with Spring configuration (unset @Required attributes), misconfigured mock objects, etc.
To avoid other frequent issues that may also generate this error (such as running classes suffixed with "*Test" that do not have any @Test methods), try updating to Surefire plugin 2.7+ (currently 2.8.1) and JUnit 4.7+ (currently 4.8.1). (I'm using Maven 3, btw; perhaps do a "mvn clean" before "mvn test" to be safe.)
Long shot: upgrade to at least Ant 1.7 (currently 1.8+) to avoid JUnit 4 test suite issues.
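For reference, a minimal JUnit 4 test class that avoids the "No runnable methods" error looks roughly like the following (class and method names are only illustrative); note there is no suite and no extending TestCase:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Surefire picks this class up by the *Test naming convention
public class CalculatorTest {

    // at least one public method annotated with @Test is required,
    // otherwise JUnit reports "No runnable methods"
    @Test
    public void addsTwoNumbers() {
        assertEquals(4, 2 + 2);
    }
}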

Are you using the correct version of JUnit (at least 4.x) to be able to use annotations for that? (Maven?)

Karate 1.0.1 Upgrade

I have recently upgraded from version 0.9.6 to 1.0.0 and noticed that, unlike in 0.9.6, the generated karate-summary.html file doesn't display all the tested feature files when using the JUnit 5 runner.
What it displays instead is the last tested feature file only.
The below screenshots are from the provided SampleTest.java sample code (excluding other Tests for simplicity).
package karate;

import com.intuit.karate.junit5.Karate;

class SampleTest {

    @Karate.Test
    Karate testSample() {
        return Karate.run("sample").relativeTo(getClass());
    }

    @Karate.Test
    Karate testTags() {
        return Karate.run("tags").relativeTo(getClass());
    }
}
This is from Version 0.9.6.
And this one is from Version 1.0.0
However, when running the test below in 1.0.0, all the features are displayed in the summary correctly.
@Karate.Test
Karate testAll() {
    return Karate.run().relativeTo(getClass());
}
Would anyone be kind enough to confirm whether they are getting a similar result? It would be very much appreciated.
What it displays instead is the last tested feature file only.
This is because each time you run a JUnit method, the reports directory is backed up by default. Look for other directories called target/karate-reports-<timestamp> and you may find your reports there. So what is probably happening is that you have multiple JUnit tests all running, which is why you see this behavior. You may be able to override it by calling the method .backupReportDir(false) on the builder. But it may still not work, because the JUnit runner has changed a little bit: it is designed to run one method at a time when you are in local / dev mode.
So the JUnit runner is just a convenience. You should use the Runner class / builder for CI execution, and when you want to run multiple tests and see them in one report: https://stackoverflow.com/a/65578167/143475
Here is an example: ExamplesTest.java
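A rough sketch of such a Runner-based test is shown below (the classpath location and thread count here are placeholder assumptions, not values from the question):

import static org.junit.jupiter.api.Assertions.assertEquals;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

class CiRunnerTest {

    @Test
    void runAllFeatures() {
        // run all features under the given classpath location in parallel,
        // keeping a single consolidated report directory
        Results results = Runner.path("classpath:karate")
                .backupReportDir(false)
                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}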
But in case there is a bug in the JUnit runner (which is quite possible), please follow the process and help the project developers replicate the issue so it can be fixed and released as soon as possible.

How to debug JavaFX application with jdk14/javafx14/Eclipse v.2020-03?

I'm trying to run a JavaFX application to test some custom controls based on jdk14 and JavaFX14. My operating system is Windows 10, the IDE is Eclipse 2020-03, and I use m2e Maven plugin. The controls are exact copies of controls developed under jdk8 and JavaFX8; the earlier controls pass all tests, there was no problem with debugging.
There is no problem getting a test application to run using jdk14 and JavaFX14, but breakpoints are ignored regardless of whether I run in debug mode or run mode, or whether I modify the Maven command from javafx:run to javafx:debug (that did NOT work) or to javafx:run@debug.
This issue seems to have been addressed several times in the context of the NetBeans IDE (see the Stack Overflow discussion); I copied in the text from the modified plugin as suggested, but to no effect.
I have the following questions:
What must be done in order to debug a JavaFX application under the conditions described above?
Who is responsible for dealing with this? Eclipse? OpenJFX? Somebody else?
Based on the principle that whatever solution is developed should be as user-friendly as the debugging process under jdk8 and JavaFX8 (i.e. before JavaFX and everything else got decoupled from Oracle), is it reasonable to expect that a solution along those lines will be available in the near future? Is anybody working on it now?
Thanks for feedback.

JAI can't execute in native spark - only in sbt and as a separate scala function

I want to use a library (JAI) with Spark to parse some spatial raster files. Unfortunately, there are some strange issues: JAI only works when running via the build tool (i.e. sbt run); it fails when executed in Spark.
When executed via spark-submit the error is:
java.lang.IllegalArgumentException: The input argument(s) may not be null.
at javax.media.jai.ParameterBlockJAI.getDefaultMode(ParameterBlockJAI.java:136)
at javax.media.jai.ParameterBlockJAI.<init>(ParameterBlockJAI.java:157)
at javax.media.jai.ParameterBlockJAI.<init>(ParameterBlockJAI.java:178)
at org.geotools.process.raster.PolygonExtractionProcess.execute(PolygonExtractionProcess.java:171)
That looks like some native dependency is not being called correctly.
Assuming something was wrong with the class path, I tried running the same code as a plain Java/Scala function, but that works just fine.
In fact, the exact same problem occurs when NiFi calls the parse function.
Is Spark messing with the class paths? What is different between running the jar natively via java -jar and running it through Spark or NiFi? Both show the same problem, even when concurrency is disabled and they run on only a single thread.
JAI vendorname == null is somewhat similar, as it shows what can go wrong when running a jar with JAI. I could not identify it as the exact same problem, though.
I created a minimal example here:
https://github.com/geoHeil/jai-packaging-problem
Due to the dependency on the build process & packaging of native libraries I think it will not be possible to include snippets directly in this posting.
edit
I am pretty convinced this has to do with the assembly merge strategy; so far I could not find one which works.
Below you can see that the Vectorize operation is missing from Spark's class path.
edit 2
I think Spark's / NiFi's class loader will not load some of the required registry files for JAI. A plain Java app works fine with these assembly / fat-jar settings.
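One way to check this suspicion from inside the Spark / NiFi process is to query JAI's operation registry directly; a small sketch follows (the operation name "Vectorize" and the "rendered" registry mode are assumptions based on the GeoTools process being used):

import javax.media.jai.JAI;
import javax.media.jai.OperationRegistry;

public class JaiRegistryCheck {

    public static void main(String[] args) {
        OperationRegistry registry = JAI.getDefaultInstance().getOperationRegistry();
        // prints null if the "Vectorize" operation was never registered,
        // e.g. because META-INF registry entries were dropped during assembly
        System.out.println(registry.getDescriptor("rendered", "Vectorize"));
    }
}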

How to run GWT RequestFactory Validation Tool on Eclipse project

I've got an Android AppEngine Connected Project that I'm trying to build using GWT 2.4 RequestFactory and Objectify in the Eclipse IDE.
Apparently I need to run the RequestFactory Validation Tool because I'm using the ServiceName and ProxyForName annotations (these are required especially when working on the Android client side). My problem is that Eclipse can't validate it, and the solution provided at http://code.google.com/p/google-web-toolkit/wiki/RequestFactoryInterfaceValidation#IDE_configuration is enough to make me rip my eyes out.
Since I'm working on a Windows machine, the shell script provided is not very useful. Trying to run the Validation Tool from a cmd prompt returns the error message: "This tool must be run with a JDK, not a JRE".
Can someone explain how this tool is supposed to be run? Is there a way to use it as an External Tool in Eclipse?
Normally, if you carefully follow the instructions in the link you show and run GWT Development Mode from Eclipse, the validation should be done automatically when you access the development URL with your browser.
For the record, I've actually had some problems with it, but launching the application several times made it work.
Well, I ran into the same problem as well. In my case annotation processing (under Java Compiler -> Annotation Processing) was disabled, so RequestFactoryDeobfuscatorBuilder was not being generated. Try enabling that and rebuilding your project.
I've just recovered from two days of hunting this bug down in a project that used to run validation properly but stopped.
In my case I had a new-ish generic BaseRequestContext and a specific sub-interface that extended it. My parent interface declared a method that didn't match the Locator's exactly (e.g. getThing(T) vs get(T)) and this wasn't reported as an error but did stop the validation tool from completing.
Apt was also removed in Java 8: http://openjdk.java.net/jeps/117 . So beware.
Switching back to Java 7 will fix the issue if you are using Java 8.
I figured out why the error happens sometimes in a project: the compiler was complaining that it cannot find the .apt directory. But when I tried to create it manually, it was not possible (under Windows). I think the validation tool mutes the exception about not being able to create the directory; try renaming .apt in your validation tool calls (do a text search in your project).

NUnit vs MSTest - a fickle TDD novice's experiences with both of them

There are a ton of questions here on SO regarding NUnit vs. MSTest, and I have read quite a few of them. I think my question here is slightly different enough to post separately.
When I started to use C#, I never even considered looking at MSTest because I was so used to not having it available when I was using C++ previously. I basically forgot all about it. :) So I started with NUnit, and loved it. Tests were very easy to set up, and testing wasn't too painful -- just launch the IDE and run the tests!
As many here have pointed out, NUnit has frequent updates, while MSTest is only updated as often as the IDE. That's not necessarily a problem if you don't need to be on the bleeding edge of TDD (which I'm not), but the problem I was having with frequent updates is keeping all of the systems up-to-date. I use about four or five different PCs daily, and while updating all of them isn't a huge deal, I was hoping for a way to make my code compile properly on systems with an older version of NUnit. Since my project referenced the NUnit install folder, when I upgraded the framework, any computers with the older framework installed would no longer be able to compile my project. I tried to combat the problem by creating a common folder in SVN that had just the NUnit DLLs, but even then it would somehow complain about the version number of the binary. Is there a way to get around this issue? This is what made me stop using it the first time.
Then one day I remembered MSTest, and decided to give it a try. I loved that it was integrated into the IDE. CTRL-R,CTRL-A, all tests run. How simple! But then I saw that the types of tests available in MSTest were pretty limited. I didn't know how many I'd actually really need, but I figured I should go back to NUnit, and I did.
About now I was starting to have to debug unit tests, and the only way I could figure out how to do it in NUnit was to set NUnit as the startup application, then set breakpoints in my tests. Then in the NUnit GUI, I would run the tests to hit the breakpoints. This was a complete PITA. I then looked at the MSTest GUI again, and saw that I could just click Debug there and it would execute my tests! WOW! Now that was the killer feature that swayed me back in favor of MSTest.
Right now, I'm back using MSTest. Unfortunately, today I started to think about daily builds and did some searching on Tinderbox, which is the only tool I had heard of before for this sort of thing. This then opened up my eyes to other tools like buildbot and TFS. So the problem here is that I think MSTest is guaranteed to lock me into TFS for automated daily builds, or continuous integration, or whatever the buzzword is. My company can't afford to get locked into MS-only solutions (other than VS), so I want to examine other choices.
I'm perfectly fine to go back to NUnit. I'm not thrilled about rewriting 100+ unit tests, but that's the way it goes. However, I'd really love for someone to explain how to squash those two issues of mine, which in summary are:
How do I set up NUnit and my project so that I don't have to keep upgrading it on every system to make my project build?
How do I get easier debugging of unit tests? My approach was a pain because I'd have to keep switching between NUnit and the default app to test / run my application. I saw a post here on SO that mentioned NUnitit on CodePlex, but I don't have any experience with it.
UPDATE -- I'm comparing stuff in my development VM, and so far, NUnitit is quite nice. It's easy to install (one click), and I just point it to whatever NUnit binaries are in my SVN externals folder. Not bad! I also went into VS -> Tools -> Options -> Keyboard and changed my mapping for CTRL-R,CTRL-A to map to NUnitit.Connect.DebugGUI. Not perfect since I haven't figured out how to make NUnit automatically run the tests when it's opened, but it's pretty good. And debugging works as it should now!
UPDATE #2 -- I installed TestDriven.Net and gave it a quick run through. Overall, I like it a lot better than NUnitit, but at the moment, NUnitit wins because it's free, and since it also works with NUnit, it will allow me to "upgrade" to TestDriven.Net when the time comes. The thing I like most about TestDriven.Net is that when I double click on the failed test, it takes me right to the line in the test that had failed, while NUnit + NUnitit doesn't seem to be capable of this. Has anyone been able to make this link between the NUnit GUI and the VS IDE happen?
Many projects I've worked on have included a copy of the specific version of NUnit (or xUnit.net, whatever) in a "lib" or "external" or "libraries" folder in their source control, and referenced that location for building all of their tests. This greatly reduces the "upgrade everyone" headache, since you really don't need to install NUnit or xUnit.net to use it.
This approach will still let you use something like TestDriven.Net to execute the tests, run the tests in a debugger, etc.
For easier debugging (and running, too) of unit tests I recommend checking out TestDriven.Net. The "Test With > Debugger" feature is so handy. The personal version is free.
Have you played with the "Specific Version" property on the nunit.framework reference? We keep ours set to true so that tests coded against a given NUnit version require that specific version to execute.
I'm not sure how it will handle, for example, having 2.5 on your machine while another machine only has 2.4 - would .NET happily bind to the 2.4 version, or will it only bind from earlier versions to later versions of an assembly (e.g. compiled against 2.4, but 2.5 available at runtime)?