How to merge a SoapUI (free) project into a ReadyAPI (SoapUI Pro) project?

I'm currently working on a ReadyAPI project with a colleague. Our projects have the same resources (the same Swagger definition has been used) and we are developing different testSuites.
We both have a testSuite in which the basic resources are unit-tested, plus our own testSuites that refer to that common testSuite's testCases.
Example:
TSuite_common/TCase1: request1
TSuite_my_Colleague/TCase1: run TSuite_common/TCase1 + some processing
TSuite_my_testSuite/TCase1: run TSuite_common/TCase1 + some other processing
We want to merge the projects in order to have all the testSuites, but when I export/import his testSuites, the references to resources are lost, i.e. the references to TSuite_common/TCase1, although I have the exact same TSuite_common/TCase1 in my project!
It is not possible to resolve all the links by hand as there are too many of them. Is there a particular option I have to set to do the merge properly?
Do I have to use Groovy scripting?
thanks in advance
Alexandre

Well, I finally found a way to proceed.
In SoapUI, a 'Run TestCase' step creates a reference based on the target test case's UID.
As the UID of TSuite_common/TCase1 is not the same in my project (destination) and in my colleague's project (source), I had to build a script that operates on the exported source testSuite.
This script:
- gets the referenced test case's name from its UID (in the source TSuite_common)
- gets the corresponding UID for that test case name in the destination TSuite_common (which contains the same APIs as the source)
- replaces the referenced test case's UID with the UID just found
- imports the testSuite into my project
There are no unresolved links left.
Note that this turned out to be fairly easy because the imported testSuites have different names from mine.
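The core of the remapping looks roughly like this; it is a minimal sketch written in Scala (the post above does not say which language the actual script used), and it assumes the exported suite and the destination project are plain SoapUI project XML files in which every test case carries "id" and "name" attributes that Run TestCase steps reference. The real element and attribute names may differ between SoapUI/ReadyAPI versions, so check them in your own files first.

import scala.xml.XML

// Sketch only, not the author's actual script: remap the test case ids referenced
// by an exported testSuite so that they point at the destination project's ids.
object RemapTestCaseIds extends App {
  val sourceSuitePath = args(0) // colleague's exported testSuite XML
  val destProjectPath = args(1) // my project XML (contains the same TSuite_common)
  val outputPath      = args(2) // remapped testSuite, ready to import

  // name -> id for every test case element found in a file
  def idsByName(path: String): Map[String, String] =
    (XML.loadFile(path) \\ "testCase")
      .map(tc => (tc \@ "name") -> (tc \@ "id"))
      .toMap

  val sourceIds = idsByName(sourceSuitePath)
  val destIds   = idsByName(destProjectPath)

  // old id -> new id for every test case name present in both projects
  val idMapping = for {
    (name, oldId) <- sourceIds
    newId         <- destIds.get(name)
  } yield oldId -> newId

  // Rewrite the exported suite as text: every occurrence of an old id
  // (including those inside Run TestCase steps) becomes the matching new id.
  val original = scala.io.Source.fromFile(sourceSuitePath, "UTF-8").mkString
  val remapped = idMapping.foldLeft(original) { case (xml, (oldId, newId)) =>
    xml.replace(oldId, newId)
  }

  val out = new java.io.PrintWriter(outputPath, "UTF-8")
  try out.write(remapped) finally out.close()
}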
Next step: merging an existing testSuite from another project (!)

How to exclude development.conf from the Docker image created for a Play Framework application artifact

Using Scala Play Framework 2.5,
I build the app into a jar using the sbt plugin PlayScala,
and then build and push a Docker image out of it using the sbt plugin DockerPlugin.
The source code repository contains conf/development.conf (in the same place as application.conf).
The last line in application.conf says include development, which means that if development.conf exists, its entries override some of the entries in application.conf, providing all the default values needed to make the application runnable locally right out of the box after the source is cloned from source control, with zero extra configuration. This technique lets every new developer slip right into a working application without wasting time on configuration.
The only missing piece to make that architectural design complete is finding a way to exclude development.conf from the final runtime of the app - otherwise these overrides leak into the production runtime and, obviously, the application fails to run.
That can be achieved in various ways.
One way could be to somehow inject logic into the build task (provided as part of the sbt plugin PlayScala, I assume) to exclude the file from the jar artifact.
Another way could be to inject logic into the Docker image creation process; this logic could manually delete development.conf from the existing jar prior to executing it (assuming that's possible).
If you have ever implemented one of the ideas offered, or maybe a different architectural approach that gives the same "works out of the box" feature, please be kind enough to share :)
I usually have the inverse logic:
I use the application.conf file (the one Play uses by default) with all the things needed to run locally. I then have a production.conf file that starts by including application.conf and then overrides the necessary stuff.
For deploying to production (or staging) I specify the production.conf (or staging.conf) file to be used.
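To pick the production (or staging) file at start-up, the packaged launcher can be pointed at it with a system property, for example (the script name here is hypothetical):

./bin/my-app -Dconfig.resource=production.conf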
This is how I solved it eventually.
conf/application.conf is the production-ready configuration; it contains placeholders for environment variables whose values will be injected at runtime by k8s according to the service's deployment.yaml file.
Right next to it sits conf/development.conf - its first line is include application.conf and the rest of it is overrides that make the application run out of the box, right after git clone, with a simple sbt run.
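For illustration, development.conf then looks something like this (a sketch: the override keys below are made-up examples, the real ones are whatever your app needs to run locally):

include "application.conf"

# local-only overrides so "sbt run" works right after git clone
db.default.url = "jdbc:h2:mem:local"
payment.gateway.url = "http://localhost:9001/fake-gateway"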
What makes the above work is the addition of the following to build.sbt (PlayKeys.devSettings only applies in Play's dev mode, i.e. to sbt run, so the packaged production build is unaffected):
PlayKeys.devSettings := Seq(
"config.resource" -> "development.conf"
)
Works like a charm :)
This can be done via the mappings config key of sbt-native-packager:
mappings in Universal ~= (_.filterNot(_._1.name == "development.conf"))
See the sbt-native-packager documentation on the mappings key for details.
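On sbt 1.x the same exclusion can also be written with the slash syntax (same behaviour, just the newer notation):

// drop development.conf from the universal/docker package
Universal / mappings ~= (_.filterNot(_._1.name == "development.conf"))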

How to use Jenkins Multi-Configuration (Matrix) type Projects?

The official Jenkins wiki page for Matrix projects isn't really helping me, so I have a few questions.
We're trying to build a couple of projects that are all essentially the same, just some are being branded differently for our customers. In other words, the software / tests / etc. are all identical, except for some tweaks to turn BrandA into BrandB (or BrandC, etc.)
I figure I should be using a Matrix project to create builds for BrandA, BrandB, etc. While I haven't figured out all my steps yet (including how to rename executables after they're built), I know that I will need to pass the brand name to many of my Jenkins PowerShell scripts during the build process, and then use that brand name in the script.
How do I get these variables into my scripts? Are they automatically passed in to every build step in Jenkins? What is the variable name to use?
Finally, is there a good resource on building these multi-configuration projects in Jenkins? I can't seem to find anything comprehensive online.
If you usually build the job for BrandA and only occasionally for BrandB and BrandC, a matrix project may not be what you want. I recommend, instead, using a parameterized job where the brand is a parameter whose default value is BrandA. If the parameter is named BRAND, it is accessible in all of the build and publish steps as ${BRAND} (or %BRAND% in Windows batch steps), and since it is also exported as an environment variable you can read it from a PowerShell script as $env:BRAND.
I refer you to the parameterized build wiki for more details.
Yes, ${BRAND} and %BRAND% should work fine.
If you're using Maven, ${env.BRAND} does this too.
There's a plugin that lets you see all the environment variables available to your job/build.
https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin
I'm not aware of that kind of process, but I suggest you use the Copy project functionality:
New Job
Copy From existing job
You will have a copy of your job and you'll be able to easily set up all the specific fields.

FitNesse Test History with Version Control

I'm starting some automated acceptance testing for our company, and have decided to use FitNesse.
I want to have FitNesse under source control - that is the FitNesse executable + plugins, the wiki pages and the test fixture source code.
Then anyone can get all they need from source control to build and run the acceptance tests locally, including a Continuous Integration server.
I have read that page versioning can be turned off using the -e 0 parameter.
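For reference, that switch goes on the FitNesse command line, for example (the jar name and port are just examples):

java -jar fitnesse-standalone.jar -p 8080 -e 0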
But what about Test History? Do I want the history of locally run tests to be checked in? And when someone gets the latest version, do they want their local test history to be overwritten?
I'm very grateful to anyone who can share their experiences of using FitNesse in similar scenarios to that described above.
Why not clear all test history before check-in?
In my current project, there are only 2 operations required for creating test cases in FitNesse: drawing the table (editing the page) and developing the API (output as .dll files).
We also developed a tool that automatically triggers FitNesse test runs from remote machines. After the tests finish, we get the results by processing the output Excel files.
The structure of our SVN:
-SVN
--FitNess
--- TestLib
--- FitNesse
---- FitNesseBin
---- TestCases
[Update]
Test fixture code should be finished and frozen before testers start writing test cases and running them. Certainly, when a test fixture needs bug fixing or enhancement, the code can still be changed. In my team, we ask different roles to handle different tasks: developers provide the API for testing use, a senior QA wraps the API in test fixtures, and QA write the tables/wiki. Each role only takes on the assigned parts. Before modifying anything, a team member should update their local copy of FitNesse and check out the file, and only check in the modified checked-out file.
Disabling and auto-purging the test history is still a valid requirement for those integrating FitNesse into a standard build (e.g. Maven) and for running FitNesse builds locally, despite the svn:ignore / .gitignore options.
The test history slows down the finalisation of the test, and when it is purged after a few runs you will certainly notice the difference.
Uncle Bob mentioned he was working on an option to keep only the test history for the latest test run (you always need at least 1 so you can show the results) here: http://tech.groups.yahoo.com/group/fitnesse/message/14306, but I cannot see such an option in the code. I got a Python error when trying to reply to the post, so unfortunately no answer on that option :-(

Moles "conflict" when using Moles with MsTest

I have found an explicable (but frustrating) behavior when working with Moles and MsTest.
Just imagine the following case:
"Test DLL A" is using Moles on mscorlib
"Test DLL B" is using Moles on mscorlib
To improve compilation time, in both cases we edit the .moles files to request generation of moles for a single class only.
When we do so, our projects compile perfectly fine.
But when we run the tests of our solution, the MsTest process will:
Copy all DLLs to the "Out" folder
Run the tests in the "Out" folder
As a consequence, the copy to the "Out" folder tries to copy two versions of mscorlib.Moles.dll (one generated for the first type, one for the second), and of course the second one overwrites the first.
And so my tests in "Test DLL A" fail because the mole assembly is not the correct one.
There are of course two simple workarounds:
- include all needed types (for all projects) in every .moles file, or
- do not use type filtering at all.
Have you ever faced this "problem" too? Is there any other solution?
Many thanks!
Pierre-Emmanuel
DotNetHub user group lead
This is a late reply I know, but we did run into the same thing here at my shop.
What we ended up doing was creating a project just for Moles, and then having all the other unit test projects reference the .dlls created in our MolesProject/Moles folder.
We were able to leverage that and improve build times.

Test projects not reading app.config in TeamCity -> NUnit phase

Well, we are facing a strange problem with unit tests run by JetBrains TeamCity on our main project: tests from a few library projects are failing regularly. Apparently, the config file is not being read (it comes from app.config and is nicely stored in project -> bin -> Debug -> projectName.dll.config).
Hints or tips on what could be the real issue would be highly appreciated.
I had the same problem and wasted a couple of hours figuring out what it was.
In our case, the NUnit plugin was configured to run the tests from:
**\*Tests.dll
Though this sounds OK, it turned out that this pattern matches not only MyTests.dll in the bin\Debug folder but also obj\Debug\MyTests.dll. The obj folder is used internally for compilation and does not contain the config file.
Finally the solution was to change the plugin configuration to
**\bin\Debug\*Tests.dll
Actually, we use a system variable for the build configuration, so we did not have "Debug" hard-coded. Using bin\* might also be dangerous when the workspace is used for both Debug and Release builds and you don't have a full clean-up specified.
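For example, with a hypothetical TeamCity configuration parameter named BuildConfiguration, the pattern would be written as follows (TeamCity expands %...% references in the runner settings):

**\bin\%BuildConfiguration%\*Tests.dll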
You might wonder why I did not notice the test count mismatch (it was actually doubled, because the tests ran once from bin and once from obj), but this is typical: while everything is green, you don't care about the count. When we introduced the first test depending on the config, we had only one failure (because the copy from bin was passing), so the duplication did not stand out.
In addition to Gaspar Nagy's accepted answer, check whether your project has multiple test dlls and one of them references another.
This causes the referenced dll to be run twice, and the copy sitting in the other dll's folder does not have the proper app.config entries. The proper fix is to remove any and all references from the other test project.
TeamCity (v6.5.4) has its own NUnit test runner, and there seems to be an inconsistency between it and the NUnit GUI test runner (2.5.10). The NUnit GUI test runner follows the long-standing convention of expecting the configuration file to be named after the project (something like <project name>.config). You can see this in NUnit by looking at Project -> Edit...
TeamCity, on the other hand, is looking for an app.config.
Your options are to either:
- set the NUnit GUI to point to app.config and include the resulting NUnit project in your source control;
- have both an app.config and a <project name>.config, syncing the two manually;
- add a step to your build process that copies <project name>.config to app.config (or vice versa).
I had similar woes.
This may help; additionally, we had issues where this still would not work, so we ended up copying the relevant config sections into the highest-level config file (i.e. for a web app, into Web.config). Fairly kludgy, but we had already wasted a few days on the issue.
I learned recently that app.config files are not read for a class library... Maybe this link could help :)
app.config for a class library
If you need a config file for your "unit" tests then you are doing it wrong. Proper unit testing never needs configuration or access to the database, file system, etc. You should change your testing strategy.
A good starting point is to mark the tests that need configuration with the [Category("Integration")] attribute and set the TeamCity test runner to ignore this category. Then you should focus on refactoring these tests.