Is it possible to get the test results of a job using the Java API for Jenkins? - rest

I'm looking at the Jenkins API for Java and I see there's a JobApi to start/stop jobs and a BuildInfo interface with information about a build in that job. I can't find anything to get the test results of a build though; is it not (yet) implemented, or did I miss it?
I mean the results that you would get by calling the endpoints:
http://<server>/job/<job_name>/<build_number>/testReport/api/json?pretty=true --> returns a hudson.tasks.junit.TestResult
http://<server>/job/<job_name>/<build_number>/testReport/<package>/BugReportsTest/<test_class>/api/json?pretty=true --> returns a <hudson.tasks.junit.CaseResult>

What you receive is a Java object; call its isPassed() method to find out whether this particular result passed.
Refer to the Javadoc here
https://javadoc.jenkins.io/plugin/junit/hudson/tasks/junit/TestResult.html
CaseResult works in much the same way: call isPassed() again to check whether the individual test case passed.
Refer to Case Result Javadoc here
https://javadoc.jenkins.io/plugin/junit/hudson/tasks/junit/CaseResult.html
You can find out whether the entire build passed by using the following URL:
http://<server>/job/<job_name>/<build_number>/api/json?pretty=true
In the JSON returned by that URL, the overall build status is reported in the result field (SUCCESS, UNSTABLE, FAILURE, and so on).
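If the Java client you are using does not expose these objects, one fallback is to call the same JSON endpoints from code. Below is a rough, untested sketch in Scala; it keeps the placeholder URLs from the question, omits authentication, and uses a crude string check rather than a proper JSON parser. The tree parameter simply trims the response to the fields of interest.

import scala.io.Source

object JenkinsTestReport {
  // Fetch a URL and return the response body as a string.
  def fetch(url: String): String = {
    val src = Source.fromURL(url)
    try src.mkString finally src.close()
  }

  def main(args: Array[String]): Unit = {
    val base = "http://<server>/job/<job_name>/<build_number>"

    // Overall build status: the "result" field is SUCCESS, UNSTABLE, FAILURE, ...
    val buildJson = fetch(s"$base/api/json?tree=result")
    println(buildJson)

    // Aggregated JUnit counts from hudson.tasks.junit.TestResult.
    val testJson = fetch(s"$base/testReport/api/json?tree=failCount,passCount,skipCount")
    println(testJson)

    // A real client would parse the JSON; a simple check is enough for a sketch.
    val buildPassed = buildJson.contains("\"result\":\"SUCCESS\"")
    println("build passed: " + buildPassed)
  }
}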

Related

Citrusframework - Java action - Get result

Besides REST API calls, I need to call my own Java class, which basically does something that I want to confirm later in the test via REST API calls.
When calling my Java class, there is an expected behavior: it may fail or not fail, depending on the actual test case.
Is there any way to code this expectation into my test class:
java("com.org.xyz.App").method("run").methodArgs(args).build();
As this is the main class, which should later be executed in an automated fashion, I would prefer to validate the return code.
However, I'm looking for any possible way (exception assertion, stdout check, ...) to verify the status of the program.
As you are using the Java DSL to write test cases, I would suggest going with a custom test action implementation and/or instantiating your custom class directly in the test method and calling the method as you would with any other API.
You can wrap the custom code in a custom AbstractTestAction implementation; you then have access to the TestContext, and your custom code is integrated into the test action sequence.
The java("com.org.xyz.App").method("run").methodArgs(args) API is just there for parity with the XML DSL, where you do not have the opportunity to instantiate your own Java classes. It is way too complicated for your requirement, in my opinion.

Scala inconclusive Assertion

I am writing an integration test in Scala. The test starts by searching for a configuration file to get access information for another system.
If it finds the file, the test should run as usual; however, if it does not find the file, I don't want to fail the test. I would rather mark it inconclusive, to indicate that the test could not run only because of the missing configuration.
In C# I know there is Assert.Inconclusive which is exactly what I want, is there anything similar in Scala?
I think what you need here is assume / cancel (from the "Assumptions" section of the ScalaTest Assertions documentation):
Trait Assertions also provides methods that allow you to cancel a test. You would cancel a test if a resource required by the test was unavailable. For example, if a test requires an external database to be online, and it isn't, the test could be canceled to indicate it was unable to run because of the missing database.
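A minimal sketch of that, using the AnyFunSuite style of ScalaTest 3.1+; the configuration file name is a placeholder:

import java.io.File
import org.scalatest.funsuite.AnyFunSuite

class ExternalSystemSpec extends AnyFunSuite {

  test("reads data from the other system") {
    val config = new File("conf/external-system.conf")

    // assume() throws TestCanceledException when the condition is false,
    // so the test is reported as canceled rather than failed.
    assume(config.exists(), "configuration file not found, cannot run the integration test")

    // ... the actual integration test, using the access information from the file ...
  }
}

If you only discover mid-test that the precondition does not hold, you can also call cancel("message") directly.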

Extracting facts/objects from DefaultFactHandle in Drools StatelessKieSession (via Scala)

I've been working with Stateful Sessions (KieSession) so far and have managed to get my project running as desired using Scala with a few Java wrappers. I am now trying to switch over to StatelessKieSessions. Based on the documentation I found, I've managed to run the following to insert objects/collections into the session, fire the rules on them and update the facts:
val cmd = CommandFactory.newInsert(myObject, "myObject")
val result = ksession.execute(cmd)
When I print result (which is of class org.drools.core.common.DefaultFactHandle), it shows the structure of the desired fact, updated as expected, preceded by something like "fact 0:1:2050275256:1971742898:2:DEFAULT:NON_TRAIT:".
The documentation says that I should be able to write something like result.getValue("myObject"); however, this option doesn't seem to be available in Scala. (https://docs.jboss.org/drools/release/6.0.0.Beta1/kie-api-javadoc/org/kie/api/runtime/StatelessKieSession.html)
I understand that Scala-Drools interoperability hasn't been provided in full; however, does anyone know of a way to extract updated facts from within a StatelessKieSession, or from a DefaultFactHandle containing it?
What you get from this execute command is the fact handle of the newly inserted fact. The object therein would still be the one you have inserted, updated or not. You'll have to investigate whether this is something you can use in Scala or not.
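For what it is worth, the getValue call shown in the linked documentation belongs to the ExecutionResults object that a batch execution returns, and that can be used from Scala. An untested sketch of that pattern, reusing the ksession and myObject from the question and following the standard kie-api batch-execution examples:

import org.kie.api.command.Command
import org.kie.api.runtime.ExecutionResults
import org.kie.internal.command.CommandFactory

// The two-argument newInsert registers "myObject" as an out-identifier,
// which is the key that getValue() looks up afterwards.
val cmds = new java.util.ArrayList[Command[_]]()
cmds.add(CommandFactory.newInsert(myObject, "myObject"))
cmds.add(CommandFactory.newFireAllRules())

// Executing a batch command (instead of a single insert) is what returns
// an ExecutionResults object rather than a fact handle.
val results: ExecutionResults = ksession.execute(CommandFactory.newBatchExecution(cmds))
val updated = results.getValue("myObject")   // the inserted object, as modified by the rules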
There is no command to retrieve all facts that have been changed during the execution of a session. You'll have to monitor this yourself, using one of the available techniques.
There's not much to be gained by running a "Stateless Session". If you can achieve what you want using a regular (stateful) session, leave it at that. The stateless session may have its advantages, but don't grapple with it from Scala.

Is there a way to handle passed assertions in pytest?

I am trying to adapt the pytest tool so that it can be used in my testing environment, which requires that precise test reports are produced and stored. The test reports are in XML format.
So far I have succeeded in creating a new plugin which produces the XML I want, with one exception:
I need to record passed assertions in my XML report, with the associated code if possible. I couldn't find a way to do so.
The closest thing I found is to overload pytest_assertrepr_compare in a pytest plugin, but it is called only on assertion failures, not on passed assertions.
Any idea how to do this?
Thanks for the help!
etienne
I think this is basically impossible without changing the assertion re-writing itself. py.test does not see things happening at an assert-level of detail; it just executes the test function, which either returns a value (ignored) or raises an exception. When an exception is raised, py.test inspects the exception information in order to provide a nice failure message.
The assertion re-writing logic simply replaces the assert statement with an if not <assert_expr>: create_detailed_assertion_info. I guess in theory it is possible to extend the assertion rewriting so that it would call hooks on both passing and failure of the <assert_expr>, but that would be a new feature.
Not sure if I understand your requirements exactly, but another approach is to produce a text/XML file with the expected results of your processing: the first time you run the test, you inspect the file manually to ensure it is correct and store it with the test. Further test runs then produce a similar file and compare it with the stored one, failing if they don't match (optionally producing a diff for easier diagnosis).
The pytest-regtest plugin uses a similar approach by capturing the output from test functions and comparing that with former runs.

How do I use multiple levels in my REST call?

I'm trying to create a REST service with the following signature for a GET call:
//somesite/api/customer/1/invoices
Of course, using the correct path I can get this to work, but all the documentation I look at for REST tells me how to query .../api/customer or .../api/customer/id; nothing tells me how to define and get to the level after id.
I suspect it will have something to do with the router code, but I could use some instruction on how to get to that next level.
Thanks