Start Sonar from a node in Jenkins - plugins

I wonder how to start Sonar from a node. I have the following restrictions:
Only the machine Jenkins is installed on has access to Sonar
The build machine has no access to Sonar
I have the following settings:
After the build I use a post-build step (copy files back to the job's workspace on the master node), so the target files are copied to the master under the same Jenkins project name but in another directory.
I use the "Restrict where this project can be run" option with the name of the node (slave) that performs the build.
Comments: I tried to use Flexible publish with the execution node option set to master and then Sonar, but I believe the restriction option must be cancelling it out.
My architecture is something like this:
build machine <----(Running Build)--------- Jenkins Machine
build machine ----(copying files to the master)-------> Jenkins Machine
Jenkins Machine <---> Sonar

One simple option would be to have a downstream job whose workspace is the area where your project code is available. You can restrict it to run on the master, and in its build config you can invoke the Sonar analysis.

OK, that would be a way, but there is a problem with it. When I use "copy files back to the job's workspace on the master node", the project name would have to be the same as the downstream job's. However, I cannot create two jobs with the same name.

Question resolved!
build machine ----(copying files to the master)-------> Jenkins Machine ---(clone workspace)---->sonar job
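For reference, here is a minimal sketch of the shell step such a sonar job (tied to the master node) could run against the cloned workspace. The host URL, project key and sources path below are placeholders, not values from the original setup:

cd "$WORKSPACE"     # workspace populated by the Clone Workspace SCM plugin
# host URL, project key and sources path are assumptions - adjust to your project
sonar-scanner \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.projectKey=my-project \
  -Dsonar.sources=.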

Related

Azure DevOps - Pipeline to pull code down to remote server

I am attempting to create a pipeline in our Azure DevOps org that will automatically 'pull' code down to a single remote server whenever a push request is sent to the master branch of my particular repo. I am having a difficult time understanding the entire process and what I actually need to accomplish this relatively simple pipeline.
Currently, my remote server has a folder on the C: drive with various .ps1 files. I would like to turn this into my repo and install the self-hosted agent on this same server, so that any time I push something to the master branch on my local server it will automatically be pulled down to my remote server and the scheduled tasks I have running will be running the most up-to-date code.
I believe what I need to do first is install a self-hosted agent on my remote server. I am not completely sure, though, if this agent is supposed to be a deployment agent or a build agent... or both? Since I am not technically building a project, but rather simply overwriting .ps1 files, I imagine it should only be given permissions for a deployment agent.
Something else I can't wrap my head around is how I specify the location of my repo on the remote server. Can I define this dynamically or do I need to specify in my path the target path of that specific repo?
According to your description, you could simplify your requirement to: copy files from a source folder to a target folder on a remote machine over SSH using the Copy Files Over SSH task, and then run the related Git commands, like the following.
cd repo_directory
git add .
git commit -m "update"
git push
Thus the remote repo is updated using the SSH Deployment task.
In addition, you need to deploy a self-hosted build agent, because that is what can be used in a build pipeline.
Finally, configure a scheduled trigger for this build pipeline.
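Put together, the script the SSH Deployment task runs on the remote server could look roughly like the sketch below; the path /c/Scripts is only a stand-in (in Git Bash style) for the folder holding the .ps1 files, and the commands simply mirror the ones above:

# assumed location of the repo folder on the remote server
cd /c/Scripts
git add -A
# commit only if the copied files actually changed something
git diff --cached --quiet || git commit -m "Update scripts from pipeline"
git push origin master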

mkdocs site doesn't exist after build on codeship

I'm trying to use codeship to automate building docs from a repository.
After executing the command mkdocs build --clean I get a path to where my site folder is supposed to be.
INFO - Cleaning site directory
INFO - Building documentation to directory: /home/rof/src/bitbucket.org/josephkobti/test/site
The thing is that I can't find that folder using the ssh console for debugging.
The reason for the folder not existing was a misunderstanding of Codeship's SSH Debug Build feature, documented here https://documentation.codeship.com/basic/builds-and-configuration/ssh-access/
The VMs started for the debug feature are not the actual VMs that run the automated builds. They are new VMs running the same initialization steps as the automated builds (i.e. cloning the repository, configuring project specific environment variables, ...) but none of the actual setup or test commands.
Because of this, the mkdocs build --clean command wasn't run either when Joseph connected to the debug VM, and as such the generated site wasn't available.
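If the generated site is needed inside an SSH debug session anyway, one option is to re-run the build step by hand. This is only a sketch, assuming mkdocs can simply be installed with pip on the debug VM:

# the debug VM has the clone, but none of the setup or test commands have run
cd /home/rof/src/bitbucket.org/josephkobti/test
pip install mkdocs
mkdocs build --clean
ls site        # the site directory from the build log should now exist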

Building multiple Gradle projects in Jenkins with AWS CodePipeline

I have a Gradle project that consists of a master project and 2 others that included using includeFlat directive. Each of these 3 projects has its own repo on GitHub. To build it I checkout all 3 projects into a common top folder then cd into the master project and run gradle build. And it works great!
Now I need to deploy the resulting app to AWS EB (Elastic Beanstalk) which is also works great when I produce the artifact locally and then deploy it manually. I want to automate the process so I'm trying to set it up using CodePipelines + Jenkins as described in this document adjusted for Gradle.
The problem is that if I specify 3 Sources in the pipeline I end up with my projects extracted on top of each other, creating a mess in the Jenkins workspace. I need to somehow configure each project to be output to its own directory within the Jenkins workspace, and I just don't see a way to do it (at least in the UI).
Then, of course, even if I achieve what I want, I need somehow to cd into the master directory to run gradle build, and again I'm not sure how to do that.
P.S. Great suggestions from #Phil, but unfortunately it seems that CodePipeline does not currently support Git submodules or subtrees.
I would start a common build when changes happen on any of the 3 repos, with, say, a 5-minute delay, so you get a single build even if changes are introduced to more than one repo.
I can't see a good way to deal with the deployment other than using eb deploy... the old way... Please install the AWS/EB tools on your Jenkins machine, create a deployment job triggered on a successful build, and put a bash script doing the deployment there. Please add more details about your deployment; that way I can help with the deployment script.
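As a starting point, a single Jenkins "Execute shell" step along those lines could look like the sketch below; the repository URLs, directory names and Elastic Beanstalk environment name are made up and need to be replaced with the real ones:

# check the three repos out side by side, the layout includeFlat expects
git clone https://github.com/example/master-project.git
git clone https://github.com/example/sub-project-a.git
git clone https://github.com/example/sub-project-b.git

# build from the master project
cd master-project
gradle build

# deploy with the EB CLI (assumes the aws/eb tools are installed and eb init has been run for the application)
eb deploy my-eb-environment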

Can Jenkins build code to remote servers when triggered by GitHub webhook?

I have a 'master' server (a Docker container, actually) where I want to install Jenkins in order to link it (with a webhook) to a GitHub repo, so every time a developer pushes code, Jenkins will auto-pull and build the code.
The thing is that there are an arbitrary number of extra 'slave' servers that need to have the exact same code as the master.
I am thinking of writing an Ansible playbook to be executed by Jenkins every time the webhook runs, sending the code to the slaves.
Can Jenkins do something like this?
Do I need to make the same setup to all the slaves with Jenkins and webhooks?
EDIT:
I want to run a Locust (locust.io) master server on the server that is going to have Jenkins. My load tests are going to be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.
The short answer to your question is that Jenkins certainly has the ability to run Ansible playbooks. You can add a build-step to the project that is receiving the web hook that will run the playbook.
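For example, a minimal "Execute shell" build step that runs a playbook after the webhook-triggered checkout could look like this; the inventory and playbook names are made up for illustration:

# $WORKSPACE is set by Jenkins to the checked-out repo
ansible-playbook -i inventories/locust-slaves.ini deploy-locust-code.yml \
  --extra-vars "src_dir=$WORKSPACE"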
Jenkins can trigger another job, even on slaves. If I understand your issue correctly, you just need something like this: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
You could trigger your build job by name. There is also another useful plugin called Artifactory, which manages your packages and serves them. This means you can build your code once and share it with the slaves, and the slaves can access your build and run their jobs.

Building and deploying from a remote server with Capistrano

I'm new to Capistrano and struggling a little to get started. A brief description of what I need to do:
git pull the latest code from our git repo, on a central build server. This build server's environment matches the deployment environment exactly. I need the code to be built here. I don't want to deploy a binary that was built on a Mac laptop, for example.
compile the binary on this machine.
deploy it from this machine to all the target machines.
There is a shared user we can all SSH into on the build machine to do the builds.
The build machine is behind a gateway machine, not directly accessible.
All of the deployment target machines also have this shared user and are also behind the gateway.
The deployed binary is a single executable, and there is an init script on the target machines. After deploying the binary and changing the symlink to it, restart the service via the init script.
Everyone has appropriate SSH keys and agent forwarding for all necessary tasks.
So in principle it seems rather simple, but Capistrano seems opinionated and a bit magical. As a result I'm not sure how to accomplish all of this. For example, it seems like it wants to check out my code and copy it to the remote machines without building it first.
I think I need to ignore all of Capistrano's default smarts and just make it run some shell commands on the appropriate servers. In pseudo-code:
ssh buildmachine via gateway "cd repo && git pull && make"
ssh targetmachine(s) via gateway "scp buildmachine:repo/binary .; <mv && symlink>; service foo restart"
Am I even using the right tool for the job? It seems a lot like a round peg in a square hole.
Can someone explain to me what the contents of the Capistrano configuration files should be, and what cap commands I'd run to accomplish this?
BTW, I've searched around and looked at questions like deploying with capistrano with remote git repo but without git running on production server and From manual pull on server to Capistrano
The question is rather old, but you never know when someone steps onto it in need of information...
First and foremost, consider that Capistrano might just not be the right tool for the job you want to do.
That said, it is not impossible to accomplish what you expect. While I would avoid it in projects that deploy a large number of files and modify them (like CSS/JS minification, JS builds, etc.), in your case you can consider running a "deployment repository" and configuring it in Capistrano as the source. Your process would look like this:
run the local build with whatever tools you need
upload resulting binary to a deployment repository
run Capistrano, which will connect to the application servers, fetch the fresh binary from the repository, perform any server-side tasks required and symlink it to "current"
As a side effect you end up with a full history of deployed binaries.
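A rough sketch of that flow from the outside, assuming OpenSSH's ProxyJump for the gateway and a standard Capistrano 3 "cap production deploy" task; host aliases, paths and the deployment repo name are placeholders:

# 1. build on the build machine, reached through the gateway
ssh -J gateway builduser@buildmachine "cd repo && git pull && make"

# 2. publish the resulting binary into the deployment repository
scp -o ProxyJump=gateway builduser@buildmachine:repo/binary deploy-repo/binary
git -C deploy-repo add binary
git -C deploy-repo commit -m "New binary"
git -C deploy-repo push

# 3. Capistrano fetches from the deployment repo on each target, symlinks to current and restarts the service
cap production deploy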