How to test Jenkins Workflow - jenkins-workflow

Is there an example of how to do testing against the Jenkins Workflow groovy DSL?
Something similar to the example for the Jenkins Job DSL.

What I've done is create a complete dev-test environment using a docker-compose file that includes Jenkins, GitLab, and Archiva. I push to a "jenkins-test" origin and run the workflow in that safe "test" environment.
Here's my docker-compose in case someone is interested in it as a starting point, or as a simple test environment:
https://github.com/portenez/dry-dock
It's not fully automated, but it's a good start.

No. Running a workflow script requires Jenkins to actually be running (since most of what it does is interact directly with Jenkins features such as slaves and test results), so the only way to test it is to have a test Jenkins server and run it there. By far the most convenient ways to do that in a fully automated fashion are:
Use JenkinsRule in the Jenkins test harness, as plugins do in their test sources.
Use the acceptance-test-harness project as a dependency to create integration tests driven via Selenium.
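To give a rough idea of the first option, here is a minimal sketch of a JenkinsRule-based test written in Groovy (the class, job and script names are made up for illustration, and it assumes the jenkins-test-harness and Pipeline plugins are on the test classpath):

import org.junit.Rule
import org.junit.Test
import org.jvnet.hudson.test.JenkinsRule
import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition
import org.jenkinsci.plugins.workflow.job.WorkflowJob
import org.jenkinsci.plugins.workflow.job.WorkflowRun

class MyWorkflowTest {

    @Rule
    public JenkinsRule j = new JenkinsRule()

    @Test
    void echoStepRuns() {
        // Create a Pipeline (Workflow) job inside the embedded test Jenkins
        WorkflowJob job = j.jenkins.createProject(WorkflowJob, 'demo')
        // The script under test; in a real project you would load your own DSL here
        job.definition = new CpsFlowDefinition("node { echo 'hello from test' }", true)
        // Run the build, then assert on its result and log output
        WorkflowRun run = j.assertBuildStatusSuccess(job.scheduleBuild2(0))
        j.assertLogContains('hello from test', run)
    }
}

The test boots a throwaway Jenkins instance, so it is slow compared to a plain unit test, but it exercises the real Pipeline engine.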

Related

How to do automated integration tests using XUnit (.Net Core 2.1) and AzureDevOps?

I'm using Team Foundation Version Control as a source control for my .NET Core 2.1 project.
AzureDevOps is configured in continuous integration to checkout the code and build it.
We have 3 environments (Staging, PreProd, Prod). Staging is not identical to Prod, so it is untrustworthy, and we have to execute our integration tests in each environment with environment-specific data.
My build is generated by an agent in AzureDevOps on an on-premise server which can only reach the Prod environment.
I'd like to automate my XUnit integration tests in an AzureDevOps pipeline; however, I don't know where and how to do it. Am I supposed to execute the integration test step after building, or after releasing?
It looks like I need to deploy my binaries to my environments first, then execute the integration tests and, if they go wrong, roll back the release.
Weird?!?
How can I unblock this situation?
Regards,
If you want to run integration tests, you first need to deploy your binaries to the environment. You can then run the tests as a separate:
step,
stage, or
pipeline
after deploying the code. How exactly you do it is up to you (to achieve the last option you need to use pipeline triggers); a rough sketch follows this answer.
If you follow the shift-left approach, meaning you detect issues as early as possible, you shouldn't worry about breaking things. If it happens on Staging, I would rather encourage you to fix the issue instead of rolling back the code, especially if it involves a data-model change.
And on Production you can run only smoke tests, which are a kind of integration test that doesn't change state. They are like GET in REST: smoke tests should be idempotent, so you can run them without worrying about changing state.
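For illustration only, here is a rough multi-stage YAML sketch of that idea; the stage, job and task names are hypothetical, and the deployment step is just a placeholder for whatever mechanism actually reaches your environment:

stages:
- stage: Build
  jobs:
  - job: BuildAndUnitTest
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: build
    - task: DotNetCoreCLI@2
      inputs:
        command: test            # unit tests only
- stage: DeployStaging
  dependsOn: Build
  jobs:
  - job: Deploy
    steps:
    - script: echo "deploy binaries to Staging here"    # placeholder
- stage: IntegrationTestsStaging
  dependsOn: DeployStaging
  jobs:
  - job: IntegrationTests
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: test
        arguments: '--filter Category=Integration'       # hypothetical test filter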
Since you use TFVC, you could define a build pipeline to build and test your code and then publish artifacts. You would also define a release pipeline to consume those artifacts and deploy them to your deployment targets.
As you have to execute integration tests in each environment with environment-specific data, you can run your XUnit integration tests in the release pipeline via the VSTest task.

How to run Jenkins Groovy scripts directly from Intellij or Eclipse

I have a Groovy repository which contains my Jenkins pipeline's Groovy code.
Currently, I am making changes in an IDE, committing them to the repository, going to the Jenkins instance, manually triggering a Jenkins job, and checking to see if all of the changes are working. This takes a lot of time.
Is there a way to do all of this from the IDE itself?
I would suggest treating your pipeline code like any other code in IT. What you are doing now could be called "manual integration testing": you make your code changes and then check how that code integrates with other components (like shell commands, Jenkins plugins, etc.) on Jenkins itself, and this development loop is long and inefficient. So my proposition is to write simple unit tests using this framework:
https://github.com/jenkinsci/JenkinsPipelineUnit
That way you can test your pipelines on your own machine without any interaction with Jenkins.
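A minimal sketch of such a test, assuming a scripted Jenkinsfile at the repository root (the stubbed step and assertions are illustrative, and the exact API can vary between JenkinsPipelineUnit versions):

import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class JenkinsfileTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp()
        // Stub out steps that would otherwise need a real Jenkins agent
        helper.registerAllowedMethod('sh', [String]) { String cmd -> println "stubbed: $cmd" }
    }

    @Test
    void jenkinsfileRunsSuccessfully() {
        // Loads and executes the pipeline script locally, without Jenkins
        runScript('Jenkinsfile')
        printCallStack()           // prints which steps were called, with which arguments
        assertJobStatusSuccess()
    }
}

Tests like this run in seconds from the IDE, which is exactly the feedback loop you are missing today.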
If you think that's not the right approach for you, I would suggest combining this plugin for running jobs directly from IntelliJ: https://github.com/programisci/jenkins-control-plugin/
with IntelliJ's Git integration to commit your changes to the repository.
For executing from the IDE, one option is to build some automation around the Jenkins CLI. You should be able to see the CLI commands at http://your-jenkins-url/cli.
java -jar jenkins-cli.jar -s https://jenkins.physiq.zone/ replay-pipeline JOB [-n (--number) BUILD#] [-s (--script) SCRIPT]
Replay a Pipeline build with edited script taken from standard input
JOB : Name of the job to replay.
-n (--number) BUILD# : Build to replay, if not the last.
-s (--script) SCRIPT : Name of script to edit, such as Script3, if not the main Jenkinsfile.
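As a concrete illustration (the job name is hypothetical, and depending on your security setup you may also need to pass credentials, e.g. via the CLI's -auth option), you could replay a job's last build with a locally edited script read from stdin:
java -jar jenkins-cli.jar -s http://your-jenkins-url/ replay-pipeline my-pipeline-job < Jenkinsfile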
For example, in IntelliJ you could use a Run Configuration that:
Downloads the CLI JAR
Executes it with the path to the local file with certain parameters
You can also write a script, Gradle build, or something else that wires into the IDE to pull the CLI JAR and execute a job with your local pipeline code.
For testing you may want to use https://github.com/jenkinsci/JenkinsPipelineUnit as already brought up, or a Gradle plugin that I maintain at https://github.com/mkobit/jenkins-pipeline-shared-libraries-gradle-plugin which uses the previously mentioned library for unit testing and the jenkinsci/jenkins-test-harness for integration testing.

Can Jenkins build code to remote servers when triggered by GitHub webhook?

I have a 'master' server (actually a docker container) where I want to install Jenkins in order to link it (with a webhook) to a GitHub repo, so that every time a developer pushes code, Jenkins will auto-pull and build it.
The thing is that there are an arbitrary number of extra 'slave' servers that need to have the exact same code as the master.
I am thinking of writing an Ansible playbook, to be executed by Jenkins every time the webhook fires, that sends the code to the slaves.
Can Jenkins do something like this?
Do I need to make the same setup to all the slaves with Jenkins and webhooks?
EDIT:
I want to run a locust.io master on the server that is going to have Jenkins. My load tests are going to be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.
The short answer to your question is that Jenkins certainly has the ability to run Ansible playbooks. You can add a build step to the project that receives the webhook which runs the playbook.
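As a rough sketch of what that could look like as a declarative pipeline (the playbook, inventory and file names are made up, and the job is assumed to be triggered by the GitHub webhook):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Pulls the revision that the GitHub webhook reported
                checkout scm
            }
        }
        stage('Distribute to slaves') {
            steps {
                // Hypothetical playbook/inventory: copies the freshly pulled
                // code out to every slave listed in the inventory
                sh 'ansible-playbook -i slaves.ini deploy-code.yml'
            }
        }
    }
}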
Jenkins can trigger another job, even on slaves. If I understand your issue correctly, you just need something like this: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
You can trigger the job by name. There is also another useful plugin called Artifactory, which manages and serves your packages. This means you can build your code once and share it with the slaves, and the slaves can access your build and run the job.

Trigger a build in Bamboo from a Jenkins job

Currently all our regression tests are configured in a Jenkins job. We want that, once the regression tests are completed, a plan on our Bamboo server is triggered and the test results are also recorded in Bamboo using the TestNG parser. Is this possible?
P.S.: I have already looked at the Bamboo REST API but cannot seem to find a solution. Any suggestions will be highly appreciated. Thanks.
It hasn't been updated for a while and I'm not using it currently, so I can't confirm it still works as desired (the download statistics suggest it is still being used, though), but given there's not much to it, you should be able to achieve the first part of your use case with the Bamboo Notifier, which allows you to trigger a Bamboo build upon successful completion of a Jenkins job.
The second part should be covered by the Bamboo TestNG Parser task, though you'll of course need to push your existing test result files to Bamboo by some means, possibly by using the SCP task in Bamboo.
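If the plugin route doesn't fit, the REST API you already looked at can also queue a plan from a post-build shell step; roughly, with a hypothetical host and plan key:
curl -X POST --user jenkins-user:password "https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN"
This only covers triggering the plan, not uploading the TestNG results.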

How to deploy artifacts of TeamCity to Amazon EC2 Server

We decided to use AMAZON AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: The EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: The EC2 instance hosting our Subversion repository.
CISERVER: The EC2 instance on which JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out the code from SVNSERVER, build it, unit test it if the build is successful, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test and produce artifacts, but I couldn't work out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for help.
P.S: I have read this Question and am not satisfied.
Update: There is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the fact that TeamCity doesn't have built-in artifact publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration which publishes artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless, they just deploy when they want. In terms of how to construct such a script, the TeamCity part involves retrieving the artifact. That is why my answer references getting the artifacts by URL: that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
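For example (the host, build configuration ID and artifact name below are placeholders), TeamCity serves the artifacts of the last successful build at a stable URL that a script can fetch:
wget --http-user=USER --http-password=PASS "https://teamcity.example.com/httpAuth/repository/download/MyProject_MainBuild/lastSuccessful/app.zip"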
If you want automated deployment, you can schedule a cron job (or a Windows scheduled task) to run the script at regular intervals; if nothing has changed, it doesn't matter much. I question the wisdom of this, though, given that it may disrupt someone's testing by restarting the system involved.
The solution of having TeamCity push the changes as they happen is not something that TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.