Can Jenkins build code and deploy it to remote servers when triggered by a GitHub webhook?

I have a 'master' server (a Docker container, actually) where I want to install Jenkins and link it (via webhook) to a GitHub repo, so that every time a developer pushes code, Jenkins will automatically pull and build it.
The thing is that there are an arbitrary number of extra 'slave' servers that need to have exactly the same code as the master.
I am thinking of writing an Ansible playbook for Jenkins to execute every time the webhook fires, pushing the code out to the slaves.
Can Jenkins do something like this?
Or do I need to do the same setup, with Jenkins and webhooks, on all the slaves?
EDIT:
I want to run a Locust master on the server that will host Jenkins. My load tests will be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.

The short answer is that Jenkins can certainly run Ansible playbooks. You can add a build step to the project that receives the webhook and have that step run the playbook.
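For illustration, a minimal sketch of what that build step could look like as an "Execute shell" step; the inventory and playbook names here (slaves.ini, deploy.yml) are placeholders, not anything from the question:

    # Hypothetical Jenkins "Execute shell" build step.
    # slaves.ini lists the slave servers; deploy.yml copies the workspace to them.
    ansible-playbook -i slaves.ini deploy.yml \
        --extra-vars "workspace=$WORKSPACE git_ref=$GIT_COMMIT"

The playbook itself only needs to copy (or check out) the freshly built code onto every host in the inventory, so the slaves always match the master.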

Jenkins can trigger another job, even on slave nodes. If I understand your issue correctly, something like the Parameterized Remote Trigger Plugin is what you need: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
It lets you trigger a job by name, with parameters. There is also another useful tool called Artifactory, which manages and serves your packages. That means you can build your code once, publish the result, and the slaves can pull that build and run their jobs against it.
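For a rough idea of what such a remote trigger boils down to, this is the plain Jenkins REST call that this kind of plugin wraps; the host, job name and credentials below are placeholders:

    # Trigger a parameterized job on another Jenkins instance by name.
    curl -X POST \
         --user ci-user:API_TOKEN \
         --data "GIT_REF=main" \
         "https://remote-jenkins.example.com/job/deploy-to-slaves/buildWithParameters"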

Related

How to use GitHub webhooks to deploy changes to a LEMP server running in individual Docker containers

I have a server running Ubuntu 18.04 with three Docker containers for Nginx, PHP, and MySQL. Everything seems to be working correctly within the application, which right now is just a test WordPress blog. However, I am attempting to add GitHub webhook deployments and I am a little lost as to how I should proceed. A few options:
Should I set up a web server on the host system and trigger a PHP file that runs git pull? I suppose I could put it on a subdomain to keep SSL validation consistent.
Is there a way to pass SSH keys to one of the containers, such as the PHP one, and let it pull from the repo? I tried this and ran into user and group permission (1000) issues.
Is there a way for the Dockerized Nginx application to execute code on the host server (the bare server running Docker)?
Is there a simpler deployment solution that I am not thinking of? I would prefer not to use a paid service.
Are you using Travis CI or Jenkins for continuous delivery?
These tools let you apply changes to your server whenever there is a new push or pull request on your GitHub repo.
I worked on a project using Travis CI where I could deploy the app to AWS, or connect to a host that has Docker installed and apply the new changes.
I'll share some continuous delivery articles below:
Travis continuous delivery
Jenkins SSH credentials setting
Jenkins from scratch CI/CD
Reading up on continuous integration and continuous delivery is the best way to learn to automate these kinds of processes.
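As a sketch of the kind of deploy step those CI tools end up running, assuming the CI server can SSH into the Docker host (the host name, path and compose service names below are made up):

    # Hypothetical deploy step run by the CI job after a successful build.
    ssh deploy@lemp-host '
      cd /srv/app &&
      git pull --ff-only &&
      docker-compose up -d --build nginx php
    '

Pulling on the host rather than inside a container is essentially option 1 above, and avoids having to bake SSH keys into the PHP container.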

Start Sonar from a node in Jenkins

I wonder how to start Sonar from a node. I have the following restrictions:
Only the machine Jenkins is installed on has access to Sonar.
The build machine has no access to Sonar.
I have the following settings:
After the build I use a post-build step ("Copy files back to the job's workspace on the master node"), so the target files are copied to the master under the same Jenkins project name but into another directory.
I use "Restrict where this project can be run" with the name of the node (slave) that performs the build.
Comments: I tried to use Flexible Publish with the execution node option set to master together with the Sonar step, but I believe the restriction option must be cancelling it out.
My architecture is something like this:
build machine <----(running build)---------- Jenkins machine
build machine ----(copying files to the master)----> Jenkins machine
Jenkins machine <---> Sonar
One simple option would be to have a downstream job whose workspace is the area where your project code is available; you can restrict it to run on the master, and in its build configuration invoke the Sonar analysis.
OK, that would be a way, but there is a problem with it: when I use "copy files back to the job's workspace on the master node", the project name would have to be the same as the downstream job's, and I cannot create two jobs with the same name.
Question resolved!
build machine ----(copying files to the master)----> Jenkins machine ----(clone workspace)----> Sonar job
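For reference, the downstream "Sonar job" on the master usually only needs to run the scanner against the cloned workspace. A minimal sketch, with a placeholder project key and Sonar URL:

    # Build step of the downstream job, restricted to the master node,
    # since only the master can reach the Sonar server.
    sonar-scanner \
      -Dsonar.projectKey=my-project \
      -Dsonar.sources=. \
      -Dsonar.host.url=http://sonar-host:9000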

Building and deploying from a remote server with Capistrano

I'm new to Capistrano and struggling a little to get started. A brief description of what I need to do:
git pull the latest code from our git repo, on a central build server. This build server's environment matches the deployment environment exactly. I need the code to be built here. I don't want to deploy a binary that was built on a Mac laptop, for example.
compile the binary on this machine.
deploy it from this machine to all the target machines.
There is a shared user we can all SSH into on the build machine to do the builds.
The build machine is behind a gateway machine, not directly accessible.
All of the deployment target machines also have this shared user and are also behind the gateway.
The deployed binary is a single executable, and there is an init script on the target machines. After deploying the binary and changing the symlink to it, restart the service via the init script.
Everyone has appropriate SSH keys and agent forwarding for all necessary tasks.
So in principle it seems rather simple, but Capistrano seems opinionated and a bit magical. As a result I'm not sure how to accomplish all of this. It seems like it wants to check out my code and copy it to the remote machines without building it first, for example.
I think I need to ignore all of Capistrano's default smarts and just make it run some shell commands on the appropriate servers. In pseudo-code:
ssh buildmachine via gateway "cd repo && git pull && make"
ssh targetmachine(s) via gateway "scp buildmachine:repo/binary .; <mv && symlink>; service foo restart"
Am I even using the right tool for the job? It seems a lot like a round peg in a square hole.
Can someone explain to me what the contents of the Capistrano configuration files should be, and what cap commands I'd run to accomplish this?
BTW, I've searched around and looked at questions like "Deploying with Capistrano with remote git repo but without git running on production server" and "From manual pull on server to Capistrano".
The question is rather old, but you never know when someone stumbles onto it in need of information...
First and foremost, consider that Capistrano might just not be the right tool for the job you want to do.
That said, it is not impossible to accomplish what you expect. While I would avoid it in projects that deploy a large number of files and modify them (CSS/JS minification, JS builds, etc.), in your case you can consider running a "deployment repository" and configuring it in Capistrano as the source. Your process would look like this:
run the local build with whatever tools you need
upload the resulting binary to the deployment repository
run Capistrano, which will connect to the application servers, fetch the fresh binary from the repository, perform any required server-side tasks, and symlink the release to "current"
As a side effect, you end up with a full history of deployed binaries.
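In shell terms, the process above might look roughly like this; the host names, paths and binary name are assumptions for illustration, not part of the answer:

    # 1. Build on the machine whose environment matches production.
    ssh build@buildmachine 'cd ~/repo && git pull && make'

    # 2. Upload the resulting binary to the deployment repository.
    ssh build@buildmachine 'scp repo/bin/myapp deploy-repo:/releases/myapp-latest'

    # 3. Run Capistrano; it connects to the application servers, fetches the
    #    binary from the deployment repository, runs any server-side tasks
    #    and updates the "current" symlink before restarting the service.
    bundle exec cap production deploy

The gateway hop is usually handled via SSH configuration (e.g. a ProxyJump entry in ~/.ssh/config), which Capistrano's SSH layer can pick up as well.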

How to perform automated deployment - with a Pull model

We're currently doing continuous deployment to our dev/QA servers, and manually triggered automated deployment to our production boxes. Currently we're using TeamCity/PowerShell/MSDeploy. We now have a requirement to deploy to a server that sits on an external network and cannot be reached from ours. Instead, it will have to "call home" for updates, and presumably report back whether the deployment succeeded or not.
I'm thinking we could write a service that requests a particular URL on our build server which delivers the artifacts that would have been used for deployment, pulls them down, and then fires off the build script.
However, I'm not entirely sure how we'd deal with updating the updater itself, or with failures when they occur. Does anyone have any recommendations on how to approach this?
Sounds like you need a release repository. The build server pushes files into it and each deploy job pulls from it. This neatly decouples the two processes.
A release repository could be as simple as a shared NAS, or something more sophisticated such as the Nexus repository manager.
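A "call home" script on the isolated server could be as simple as polling that repository over HTTPS. A rough sketch (in shell for brevity; a Windows box would use an equivalent PowerShell script), with a made-up repository URL and install script:

    # Hypothetical pull-based updater, run from a scheduler on the target server.
    LATEST_URL="https://releases.example.com/repository/releases/myapp/latest.zip"
    curl -fsSL -o /tmp/myapp-latest.zip "$LATEST_URL" || exit 1

    # Only redeploy when the artifact actually changed since the last run.
    if ! cmp -s /tmp/myapp-latest.zip /opt/myapp/current.zip; then
        cp /tmp/myapp-latest.zip /opt/myapp/current.zip
        /opt/myapp/install.sh /opt/myapp/current.zip   # unpack, deploy, report status back
    fi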

How to deploy TeamCity artifacts to an Amazon EC2 server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: the EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: the EC2 instance hosting our Subversion repository.
CISERVER: the EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out the code from SVNSERVER, build it, unit test it if the build succeeds, and, once all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test, and produce artifacts, but I haven't worked out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for help.
P.S: I have read this Question and am not satisfied.
Update: there is a deployer plugin for TeamCity which allows publishing artifacts in a number of ways.
Old answer:
Here is a workaround for the fact that TeamCity doesn't have built-in artifact publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can
create a configuration which produces build artifacts
create a configuration, which publishes artifacts via FTP
set an artifact dependency in TeamCity from configuration 2 to configuration 1
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) when a new build has happened, but regardless they just deploy when they want. In terms of how to construct such a script, the TeamCity part involves retrieving the artifact; that is why my answer references getting the artifacts by URL, which is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
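A sketch of such a script, assuming TeamCity's standard artifact-download URL scheme and placeholder build configuration ID, artifact name and credentials:

    # Fetch the latest successful build's artifact from TeamCity and unpack it.
    wget --user=qa --password=SECRET -O /tmp/app.zip \
      "http://teamcity.example.com/httpAuth/repository/download/MyProject_Build/.lastSuccessful/app.zip"
    unzip -o /tmp/app.zip -d /opt/app
    # Restart or redeploy the application as appropriate from here.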
If you want an automated deployment, you can schedule a cron job (or a Windows scheduled task) to run the script at regular intervals. If nothing has changed, it doesn't matter much, but I question the wisdom of this, given that it may disrupt someone's testing by restarting the system involved.
Having TeamCity push the changes as they happen is not something TeamCity does out of the box (as far as I know), but you could roll your own, for example by triggering something via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.