Trigger a build in Bamboo from a Jenkins job (REST)

Currently all our regression tests are configured in a Jenkins job. Once the regression tests are completed, we want the job to trigger a plan on our Bamboo server and also record the test results in Bamboo using the TestNG parser. Is this possible?
PS: I have already looked at the Bamboo REST API but cannot seem to find a solution. Any suggestions will be highly appreciated. Thanks.

It hasn't been updated for a while and I'm not using it currently, so I can't confirm it still works as desired (though the download statistics suggest it is being used), but given there's not much to it, you should be able to achieve the first part of your use case with the Bamboo Notifier, which allows you to trigger a Bamboo build upon successful completion of a Jenkins job.
The second part should be covered by the Bamboo TestNG Parser task, though you'll of course need to push your existing test result files to Bamboo by some means first, for example by using the SCP task in Bamboo.
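Alternatively, since you mention having looked at the REST API already: Bamboo exposes a queue resource, so the Jenkins job itself can queue the plan in a post-build step with a plain HTTP call. A minimal sketch as a declarative Jenkins pipeline, where the Bamboo URL, the plan key PROJ-PLAN, and the 'bamboo-creds' credential ID are placeholders for your setup:

// After the regression stage succeeds, queue the Bamboo plan via REST.
pipeline {
    agent any
    stages {
        stage('Regression') {
            steps {
                sh 'mvn test'   // runs the TestNG regression suite
            }
        }
    }
    post {
        success {
            // A POST to Bamboo's queue resource starts a build of the plan
            withCredentials([usernamePassword(credentialsId: 'bamboo-creds',
                    usernameVariable: 'BUSER', passwordVariable: 'BPASS')]) {
                sh 'curl -f -X POST -u "$BUSER:$BPASS" ' +
                   '"https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN"'
            }
        }
    }
}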

Related

How to do automated integration tests using XUnit (.Net Core 2.1) and AzureDevOps?

I'm using Team Foundation Version Control as a source control for my .NET Core 2.1 project.
Azure DevOps is configured for continuous integration to check out the code and build it.
We have 3 environments (Staging, PreProd, Prod). Staging is not identical to Prod, so it is unreliable for verification, and we have to execute our integration tests in each environment with environment-specific data.
My build is generated by an Azure DevOps agent on an on-premises server which can only reach the Prod environment.
I'd like to automate my XUnit integration tests in an Azure DevOps pipeline; however, I don't know where and how to do it. Am I supposed to execute the integration test step after building? Or after releasing?
It looks like I need to deploy my binaries to my environments first, then execute the integration tests, and, if they go wrong, roll back the release.
That seems weird. How can I unblock this situation?
Regards,
If you want to run integration tests, you need to deploy your binaries to the environment first. You can do that as a separate:
step,
stage, or
pipeline
after deploying the code.
How exactly you do it is up to you (to achieve the last option you need to use pipeline triggers).
If you follow the shift-left approach, which means you detect issues as early as possible, you shouldn't worry about breaking things. If it happens on Staging, I would rather encourage you to fix the issue than roll back the code, especially if it involves a data model change.
On production you can run only smoke tests, which are a kind of integration test that does not affect state. They are like GET in REST: smoke tests should be idempotent, so you can run them without worrying about changing state.
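To make the idempotent-smoke-test idea concrete, here is a tiny Groovy sketch; the health endpoint URL is an assumption about your service:

// A GET-style smoke test: it verifies the deployed service responds without
// mutating any state, so it is safe to run against any environment, even Prod.
def conn = new URL('https://myapp.example.com/health').openConnection()
conn.requestMethod = 'GET'        // read-only, hence idempotent
assert conn.responseCode == 200   // the service is up and healthy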
Since you use TFVC, you could define a build pipeline to build and test your code and then publish the artifacts. You would also define a release pipeline to consume those artifacts and deploy them to your deployment targets.
As you have to execute integration tests on each environment with environment-specific data, you can run your XUnit integration tests in the release pipeline via the VSTest task.

How to run Jenkins Groovy scripts directly from IntelliJ or Eclipse

I have a Groovy repository which contains my Jenkins pipeline's Groovy code.
Currently, I am making changes in an IDE, committing them to the repository, going to the Jenkins instance, manually triggering a Jenkins job, and checking whether all of the changes are working. This takes a lot of time.
Is there a way to do all of this from the IDE itself?
I would suggest treating your pipeline code like any other code. What you are doing now could be called "manual integration testing", because you make your code changes and then check how that code integrates with other components (shell commands, Jenkins plugins, etc.) on Jenkins itself. This development loop is long and inefficient. So my proposition is to write simple unit tests using this framework:
https://github.com/jenkinsci/JenkinsPipelineUnit
That way you can test your pipelines on your own machine, without any interaction with Jenkins.
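A minimal sketch of such a test (JUnit 4 plus JenkinsPipelineUnit); the Jenkinsfile path and the stubbed 'sh' step are assumptions about what your pipeline uses:

import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class JenkinsfileTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Stub the steps your pipeline calls so no real Jenkins is needed
        helper.registerAllowedMethod('sh', [String]) { cmd -> println "sh: $cmd" }
    }

    @Test
    void jenkinsfileLoadsAndRuns() {
        runScript('Jenkinsfile')   // executes the pipeline code on your machine
        assertJobStatusSuccess()
        printCallStack()           // prints every step the pipeline invoked
    }
}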
If you think that this isn't the proper way for you, I would suggest combining this plugin for running jobs directly from IntelliJ: https://github.com/programisci/jenkins-control-plugin/
with IntelliJ's Git integration to commit your changes to the repository.
For executing from the IDE, an option is to create some automation around using the Jenkins CLI. You should be able to see the CLI commands at http://your-jenkins-url/cli.
java -jar jenkins-cli.jar -s https://jenkins.physiq.zone/ replay-pipeline JOB [-n (--number) BUILD#] [-s (--script) SCRIPT]
Replay a Pipeline build with edited script taken from standard input
JOB : Name of the job to replay.
-n (--number) BUILD# : Build to replay, if not the last.
-s (--script) SCRIPT : Name of script to edit, such as Script3, if not the main Jenkinsfile.
For example, in IntelliJ you could use a Run Configuration that:
Downloads the CLI JAR
Executes it, passing the path to the local file and the appropriate parameters
You can also write a script, Gradle build, or something else that wires into the IDE to pull the CLI JAR and execute a job with your local pipeline code.
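As one sketch of that wiring, a hand-rolled Gradle task in the Groovy DSL; the Jenkins URL and job name are placeholders, and it assumes your instance serves the CLI JAR at the standard /jnlpJars/ path:

// Downloads jenkins-cli.jar from the Jenkins instance if not cached yet,
// then replays the job using the local Jenkinsfile as the edited script.
task replayPipeline(type: Exec) {
    def jenkinsUrl = 'https://your-jenkins-url'
    def cliJar = file("$buildDir/jenkins-cli.jar")
    doFirst {
        if (!cliJar.exists()) {
            cliJar.parentFile.mkdirs()
            cliJar.bytes = new URL("$jenkinsUrl/jnlpJars/jenkins-cli.jar").bytes
        }
        standardInput = file('Jenkinsfile').newInputStream()
    }
    // Add '-auth user:apitoken' before 'replay-pipeline' if your instance
    // requires authentication.
    commandLine 'java', '-jar', cliJar, '-s', jenkinsUrl,
            'replay-pipeline', 'my-pipeline-job'
}

After editing the Jenkinsfile locally, run it with ./gradlew replayPipeline.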
For testing, you may want to use https://github.com/jenkinsci/JenkinsPipelineUnit as already brought up, or a Gradle plugin that I maintain at https://github.com/mkobit/jenkins-pipeline-shared-libraries-gradle-plugin, which uses the previously mentioned library for unit testing and jenkinsci/jenkins-test-harness for integration testing.

Automated tests, code coverage, static analysis and code review

I used to be a developer long ago, but for the last 10 years I have been working in system ops. I am planning to move into DevOps and am trying to sharpen my saw. However, when it comes to Jenkins, and especially static code analysis, code coverage, automated tests, and code review, I get very confused.
Let's start with automated tests (for simplicity, take unit tests). I understand that we write a separate class file for unit tests, but how is that test carried out? Will Jenkins create a JVM where the newly built artifact is deployed and the tests are run against it? Or will the tests be run against the code (I don't think so, but I still want to clarify)?
I downloaded an example application with Maven and Cobertura from GitHub and built the project. When the build completed, it published a code coverage report.
I have not configured any post-build step for deploying the artifact, so I am not sure how this works, what it did, and how.
Thanks
J
Here is a common flow that you can follow to achieve your requirement.
Work with code --> push to Gerrit for review --> the Jenkins Gerrit Trigger plugin gets triggered --> the corresponding job checks out the code you committed and does the compile, package, unit test, and deploy to Artifactory --> execute the Sonar build to analyse code quality, static analysis, code coverage...
Note that the unit tests run inside the build tool's JVM on the Jenkins agent, against the compiled classes; the artifact does not have to be deployed anywhere for them to run.
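For illustration, the middle of that flow as a declarative Jenkins pipeline; the Maven commands and the SonarQube server name 'sonar' are assumptions about your setup:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build & Unit Test') {
            // The unit tests execute right here, in the Maven JVM on the
            // build agent -- the artifact is never deployed for this step.
            steps { sh 'mvn clean verify' }
        }
        stage('Static Analysis') {
            steps {
                withSonarQubeEnv('sonar') { sh 'mvn sonar:sonar' }
            }
        }
    }
}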
Br,
Tim

How to test Jenkins Workflow

Is there an example of how to do testing against the Jenkins Workflow groovy DSL?
Something similar to the example for the Jenkins Job DSL.
What I've done is create a complete dev-test environment, using a docker-compose file that includes Jenkins, GitLab, and Archiva. I push to a "jenkins-test" origin and run the workflow in that safe test environment.
Here's my docker-compose in case someone is interested as a starting point, or as a simple test env:
https://github.com/portenez/dry-dock
It's not fully automated, but it's a good start.
No, running a workflow script requires Jenkins to actually be running (since most of what it does is interact directly with Jenkins features like slaves and test results), so the only way to test it is to have a test Jenkins server and run it there. By far the most convenient ways to do that in a fully automated manner are:
Use JenkinsRule in the Jenkins test harness, like plugins do in their test sources (a minimal sketch follows after this list). Example
Use the acceptance-test-harness project as a dependency to create integration tests driven via Selenium. Example
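For the first option, a JenkinsRule-based test might look like the sketch below (Groovy, JUnit 4); it assumes the Jenkins test harness and the workflow (Pipeline) plugins are on the test classpath:

import org.junit.Rule
import org.junit.Test
import org.jvnet.hudson.test.JenkinsRule
import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition
import org.jenkinsci.plugins.workflow.job.WorkflowJob

class WorkflowSmokeTest {

    @Rule
    public JenkinsRule j = new JenkinsRule()   // boots a throwaway Jenkins

    @Test
    void runsTrivialWorkflowScript() throws Exception {
        WorkflowJob job = j.createProject(WorkflowJob, 'demo')
        job.definition = new CpsFlowDefinition("node { echo 'hello' }", true)
        j.assertLogContains('hello', j.buildAndAssertSuccess(job))
    }
}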

TeamCity: Best Practices to deploy produced installers (artifacts)

We have a TeamCity server which produces nightly deployable builds. We want our beta testers to have access to these nightly builds.
What are the best practices for doing this? The TeamCity server is not public, it is in our office, so I assume the best approach would be pushing the artifacts via FTP or something like that.
Also, I have no clue how to trigger a script when an artifact has been created successfully. Does TeamCity provide a way to do that?
I don't know of a way to trigger a script, but I wouldn't worry about that. You can retrieve artifacts via a URL. Depending on what makes sense for your project, you could have a script set up on a scheduler (cron or Windows Task Scheduler) that pulls the artifact and sends it to the FTP site for the beta testers. You can configure it to pull only the latest successful artifact. If you set up the naming right, the beta testers won't even notice a failed build: the new build number simply won't be there, and no bad builds will be pushed to them.
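A minimal Groovy sketch of such a puller; the build configuration ID, artifact name, and target path are placeholders, and it assumes TeamCity's guest-auth download URLs are enabled:

// Pull the latest successful nightly artifact from TeamCity and drop it
// into the directory the beta testers' FTP site serves.
def url = 'https://teamcity.example.com/guestAuth/repository/download/' +
          'MyProject_Nightly/.lastSuccessful/installer.exe'
new File('/srv/ftp/beta/installer.exe').bytes = new URL(url).bytes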
See the "Build Script Interaction with TeamCity" page in the documentation. It shows how to send commands (service messages) from your build script that tell TeamCity to publish the artifacts to a given path.
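For example, a build step only has to print a service message to stdout; a Groovy build script could publish an installer (the path and target directory are placeholders) like this:

// TeamCity parses this service message from the build log and publishes
// build/installer.exe under the 'installers' artifact directory.
println "##teamcity[publishArtifacts 'build/installer.exe => installers']"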
In TeamCity 7.0+ you can use the Deployer plugin. Installation steps can be found here. It also allows uploading artifacts via SMB and SSH.
I suggest you start looking at something like (n)Ant to handle your build process. That way you can handle the entire "build artifacts" -> "publish artifacts" chain in an automated manner. These tools are dependency-based, so the artifacts would only be published if the build succeeded.
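The same dependency-based idea sketched in Gradle's Groovy DSL instead of (n)Ant, with made-up task names and paths:

// 'publishToBeta' depends on 'build', so the installers are copied to the
// FTP-served directory only when the build (and its tests) succeeded.
task publishToBeta(type: Copy) {
    dependsOn build
    from 'build/distributions'
    into '/srv/ftp/beta'
}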