Since JUnit 5 is out, how long is JUnit 4 going to be supported? Are there any plans for deprecation, and if so, when? We are trying to gauge whether we need to migrate our existing JUnit 4 test cases to JUnit 5 now or at some later point. The user guide says the following, but I wanted to know more concretely how long that means (a year, two, or more from now)? I appreciate your response.
"since the JUnit team will continue to provide maintenance and bug fix releases for the JUnit 4.x baseline, developers have plenty of time to migrate to JUnit Jupiter on their own schedule."
JUnit 4 hasn't seen any functional upgrades in a decade. What the JUnit team maintains is the JUnit 4 engine, called Vintage, which runs JUnit 4 tests on the JUnit 5 platform. As long as the platform's engine API stays backwards compatible, this is very likely to keep working. The two events that could break JUnit 4 are:
The platform changes in a highly incompatible way and the Vintage engine will no longer be supported. This is not to be expected in the near future.
Java changes in an incompatible way so that the original JUnit 4 code no longer works. I don’t see that looming either.
That said, sticking with just JUnit 4 decouples you from all innovation in the field of Java test automation. Many extensions already no longer support JUnit 4. My recommendation: start using the platform at once via Vintage. Write all new tests with Jupiter and whatever other test engines you need. Migrate old JUnit 4 tests when you have to touch them anyway.
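To make that setup concrete, here is a minimal sketch, assuming the junit-vintage-engine and junit-jupiter-engine artifacts are both on the test classpath (the class and method names are made up for illustration). The old test keeps using the JUnit 4 API and is executed by Vintage; the new one uses the Jupiter API; a single platform run picks up both:

    // LegacyCalculatorTest.java -- untouched JUnit 4 test, run by the Vintage engine
    import org.junit.Assert;
    import org.junit.Test;

    public class LegacyCalculatorTest {
        @Test
        public void addsTwoNumbers() {
            Assert.assertEquals(4, 2 + 2);
        }
    }

    // CalculatorTest.java -- new test written against the Jupiter API
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Test;

    class CalculatorTest {
        @Test
        void addsTwoNumbers() {
            Assertions.assertEquals(4, 2 + 2);
        }
    }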
I'm developing data-driven tests using NUnit 3 and .NET Core 3.1, and I have many tests with several different data sources, which sometimes contain complex logic. When I want to run only one test, I want only its data provider to run, but all of them run. Version 3.15.1 of the NUnit3TestAdapter introduced a PreFilter, which solves this problem.
But as I understand the docs, this feature is only available via a .runsettings file.
In this question Charlie Poole says that .runsettings is only for the VS adapter. But the VS adapter takes a long time to run my tests.
I found info about the configuration file, but I don't understand what I can configure in it.
Can I run my tests with NUnit Console Runner 3.12.0-beta1 and the PreFilter?
I'm afraid not, no.
There's an open issue to implement it here: https://github.com/nunit/nunit-console/issues/438. You'll see from the VS adapter docs that there are several edge-case bugs around this, which will be more visible in the adapter than in the console. At this point in time, nobody has taken on the task of implementing this feature in the console.
I'm developing an Eclipse plugin and I've run into this problem several times already.
I always keep my Target Platform updated to the latest (stable) Eclipse release so that I test my code against all the recent updates, fixes, etc.
However, this may result (and has resulted) in accidental breakage of backward compatibility in my plugin, e.g. when I accidentally use new API that did not exist in the Eclipse version I aim to support.
Or, a sneakier example: in 4.6 Eclipse moved to Java 8, and some interface methods got default implementations. Now when I implement these interfaces, my IDE doesn't automatically generate empty implementations for those methods, and no error is reported. If I install and run this code against a previous Eclipse version, those methods throw AbstractMethodError, since no implementation has been provided.
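Here is a minimal, self-contained sketch of that failure mode, using a made-up interface in place of the real Eclipse API:

    // How the interface looks in the NEW target platform: deactivate() gained
    // a default implementation in the newer release.
    interface Part {
        void activate();
        default void deactivate() { /* no-op default added in the newer API */ }
    }

    // Plugin code compiled against the new platform: the compiler no longer
    // forces us to implement deactivate(), so we silently omit it.
    class MyPart implements Part {
        @Override
        public void activate() {
            System.out.println("activated");
        }
    }

    public class DefaultMethodPitfall {
        public static void main(String[] args) {
            Part part = new MyPart();
            part.activate();
            // Compiled and run against the new platform this is fine. On an OLD
            // platform, Part.deactivate() is abstract, MyPart supplies no body,
            // and this call throws java.lang.AbstractMethodError at runtime.
            part.deactivate();
        }
    }

Nothing at compile time warns about this; it only surfaces when the method is actually invoked on the older platform.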
So my question is: is there a tool that can restrict the API my Target Platform provides to some earlier Eclipse API version?
Is API Baseline an appropriate tool for this? I couldn't get it to work that way. (It allowed even non-baseline method calls, not to mention the more complex default-methods example.)
You can use multiple target platforms, and switching between them doesn't take long. For testing Stack Overflow questions I have one Eclipse install with 10 target platforms.
So have a target platform for the oldest release you want to support as well as your current release target platform, and check that the code runs against it.
It is particularly important to test with the actual Target Platform if you want to support Eclipse 3 releases, as there were large changes going from Eclipse 3 to 4.
NuGet is filled with packages that are built with a dependency on NUnit >= 2.x. Can I use them with NUnit 3?
A specific example.
Create a new .Net project with a TestProject.
Via NuGet, add dependencies on the packages:
NUnit, noting that you're now on version 3
TestBase, which claims dependency on NUnit (>= 2.6.3)
And create some unit tests. This works until you actually invoke something in TestBase that calls into NUnit, e.g.
1.ShouldBeGreaterThan(0);
At this point the version mismatch breaks it.
"Assembly Binding Redirect!" I hear you cry. But NUnit 3 is signed with a different public key than NUnit 2, so that isn't possible.
Is it in fact possible to build something with a dependency on NUnit >= 2.x that will work with NUnit 3, given the change in public key?
NUnit 3 is basically a completely new product. In retrospect, we probably should have created a new NuGet package. Too late now.
Since the 3.0 framework works completely differently from v2, an assembly binding redirect would not help you.
Third party products that want to work with NUnit 3.0 usually need to be rewritten unless they only use a very small subset that hasn't changed.
If you want to use NUnit 3, you can only use third-party solutions that have been updated to work with it.
In addition to Charlie's answer, I would recommend that people try to contact the authors of broken packages and encourage them to either update their packages to work with NUnit 3, or change their dependency to NUnit >= 2.x and < 2.9.
The NUnit team has been announcing publicly for years that NUnit 3 would be a breaking change. Most of the packages that depend on NUnit tend to be test runners or testing extensions, so I would have hoped their authors would try to keep informed.
So I've been messing around with the packaging feature that NetBeans offers, following this tutorial: http://platform.netbeans.org/tutorials/nbm-nbi.html. I didn't like how I had to modify the platform my IDE was running on in order to customize the installer itself, so I decided to create a copy and just change the platform the application suite was using (Properties -> Libraries).
This seemed to work fine, and it even packaged that platform as part of the installer. However, during the packaging itself, I noticed that it was calling the IDE platform's build script to create the installer rather than the one I had customized. This defeats the purpose, at least in my case, of having the separate platform.
Within the platform manager, under the Harness tab, I made sure that the platform's harness was being used rather than the IDE's, but it didn't seem to make a difference.
I verified the behaviour by adding an echo to both the default IDE platform's build script and my customized platform's to see which was being called. I also noticed that the Ant call made at the start of packaging explicitly references the IDE platform as well.
I've tried this under 7.2 (currently using 7.3), since 7.3 has had some fairly nasty bugs and I thought perhaps the problem was recently introduced.
At this point I'm thinking it's a bug, but I was hoping that perhaps someone else had come across this and found some sort of solution or could shed some light on why it's doing what it's doing.
Thanks!
This is slated to be fixed for 7.4, in case anyone comes across it in the meantime.
Here's a link to the bug ticket: https://netbeans.org/bugzilla/show_bug.cgi?id=229478
We are currently working on a Salesforce.com custom Apex project that involves a lot of Apex classes, triggers and Visualforce pages. We also have numerous applications from AppExchange that are part of the system.
We develop all the Apex classes, Visualforce pages, etc. in a test environment and then deploy them to the live environment using the Eclipse IDE. The problem is that every time we deploy changes to the live environment, the test methods of all the classes (including those from AppExchange apps) seem to execute, so deploying even a simple change can end up taking a couple of minutes.
Is there a way in Apex to "package" classes by namespace or something similar, so that when we deploy a change, only the test methods relevant to that package are executed? If something like that exists, our deployments could be much faster.
Unfortunately no, there is no partial testing for deployment of Apex code; every change, no matter how minute or self-contained, triggers a full test run. Among other things, this enforces code metrics (a minimum total code coverage, for instance).
IMHO, this is proving to be a double-edged sword when it comes to enforcing code reliability. When we started using Apex, all of our tests were very comprehensive, performing actual testing of the code with lots of asserts and checks. Then we started having very, very long deploy times, so now our tests serve one and only one function: satisfying the minimum code coverage. Even with that simplification it takes almost 3 minutes to deploy anything, and we only use 20% of our Apex code allowance.
IMHO2, Apex is far too slow a coding platform to be enforcing this kind of testing. I can't even imagine how long the tests would run if we reached 50% of our allowance, let alone more.
This is possible, but you'll need to learn about Apache Ant and have a look at the Force.com Migration Toolkit. You can then use a build file to determine which files are deployed as well as which tests are run.
I'm busy writing a whitepaper that'll touch on this and other related development strategies... I'll post to my blog when it's done.
If you use the Apache Ant migration tool, you have several deployment options, such as deployCodeFailingTest, which will skip the test classes. If you want to run only specific test classes, use something similar to this in your build.xml:
<target name="deployCode">
    <sf:deploy username="${sf.username}"
               password="${sf.password}"
               serverurl="${sf.serverurl}"
               deployroot="codepkg">
        <runTest>SampleDeployClass</runTest>
    </sf:deploy>
</target>
For a detailed reference, please see this guide:
http://www.salesforce.com/us/developer/docs/daas/salesforce_migration_guide.pdf
I would recommend the following approach:
Git as the repository for all your SF code
Jenkins to deploy your code (CI/CD)
PMD as the static code analyser
sfdx as the deployment method in Jenkins
Refer to this Trailhead link: https://trailhead.salesforce.com/users/strailhead/trailmixes/architect-dev-lifecycle-and-deployment