Central compiling on a server - Eclipse

We are a team working on Liferay in Eclipse.
We want to write code on our client machines, but have compilation and deployment done by a server (to reduce the load on the clients and move the heavy work to the server).
How can this be done?

You can use the Rundeck plugin for Jenkins to trigger a deploy based on the SCM commit message.
So your team would code and commit to a repository. Jenkins will then compile and deploy the latest build on the server.
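For illustration, the commit-side trigger can be as small as a post-commit hook that calls Jenkins' remote build URL. This is only a sketch: the Jenkins host, job name ("liferay-deploy"), and token are hypothetical placeholders, and the job must have "Trigger builds remotely" enabled.

    #!/bin/sh
    # Repository post-commit hook: ask Jenkins to build and deploy the latest commit.
    # Host, job name, and token below are placeholders for your own setup.
    curl -s "http://jenkins.example.com/job/liferay-deploy/build?token=DEPLOY_TOKEN"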
BUT: I think I'd prefer to run my code on my client before committing it.

Related

Publish try code in Jenkins before committing

I have this development environment with Eclipse as IDE, SVN as SCM and Jenkins as CI server.
Is there a way I could start a Jenkins job from Eclipse and tell Jenkins somehow to take some code from my Eclipse workspace instead of the SVN? Without committing that code into SVN?
I know how to do the first part (start a job via Mylyn / Builds), but not the second one...
Maybe something like the way TeamCity is integrated into IntelliJ IDEA and the way they have facilitated gated commits...
In my opinion, using a local workspace with a continuous integration tool is not a good idea. In a standard configuration, Jenkins runs on a server machine, not on your local machine. I think the best practice for your scenario is to use SVN branches for test commits. Configure a job in Jenkins that checks out the code from the SVN branch. Add an SVN hook so that Jenkins compiles after each commit. Then merge the branch into trunk after Jenkins builds successfully.
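For the SVN hook part, the Jenkins Subversion plugin documents a post-commit hook along these lines; it tells Jenkins which repository changed so that jobs watching it poll immediately (the Jenkins host name here is a placeholder):

    #!/bin/sh
    # Subversion post-commit hook (Subversion passes in the repo path and revision).
    REPOS="$1"
    REV="$2"
    # The Subversion plugin's notifyCommit endpoint is keyed by the repository UUID.
    UUID=`svnlook uuid $REPOS`
    /usr/bin/wget \
      --header "Content-Type:text/plain;charset=UTF-8" \
      --post-data "`svnlook changed --revision $REV $REPOS`" \
      --output-document "-" \
      "http://jenkins.example.com/subversion/${UUID}/notifyCommit?rev=$REV"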
You can do anything with Jenkins.
Building code from my local machine in Jenkins is not a good idea though.
If you want to achieve this anyway, you can have Jenkins poll a specific folder for changes and start a build when one is detected.

Can Jenkins build code to remote servers when triggered by GitHub webhook?

I have a 'master' server (a Docker container, actually) where I want to install Jenkins in order to link it (with a webhook) to a GitHub repo, so every time a developer pushes code, Jenkins will auto-pull and build the code.
The thing is that there are an arbitrary number of extra 'slave' servers that need to have the exact same code as the master.
I am thinking of writing an Ansible playbook to be executed by Jenkins every time the webhook fires, to send the code to the slaves.
Can Jenkins do something like this?
Do I need to do the same setup on all the slaves, with Jenkins and webhooks?
EDIT:
I want to run a locust.io master server on the server that is going to have Jenkins. My load tests are going to be pulled from GitHub there, but the same code needs to reside on the slaves in order to run in distributed mode.
The short answer to your question is that Jenkins certainly has the ability to run Ansible playbooks. You can add a build step that runs the playbook to the project that is receiving the webhook.
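As a rough sketch, such a build step could be an "Execute shell" step in the webhook-triggered job; the playbook name, inventory path, and host group here are hypothetical:

    # Jenkins "Execute shell" build step, run after the webhook-triggered checkout.
    # Pushes the freshly pulled code in $WORKSPACE out to the "locust-slaves" group.
    ansible-playbook deploy-locust.yml \
        -i /etc/ansible/hosts \
        --limit locust-slaves \
        --extra-vars "src_dir=$WORKSPACE"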
Jenkins can trigger another job, even on slaves. If I understand your issue correctly, you just need something like the Parameterized Remote Trigger Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Remote+Trigger+Plugin
You can trigger the remote job by name. There is also another useful plugin called Artifactory, which manages and serves your packages. This means you can build your code once and share it: the slaves can access your build and run their jobs from it.

Automated Building and Release Management VS2012

Trying to make my life easier: currently we have 4 developers working in Visual Studio 2012, and we are using TFS 2012 for source control. The project we work on is a multi-tenant web application (a single source directory with multiple DBs) that is a mixture of legacy ASP and VB6 COM components, coupled with new C# code. We use TFS for source control and for managing User Stories and Bugs. Because of the way our site works, it cannot be run or debugged locally, only on the server.
Source control is currently set up with a separate branch for each developer, whose working directory is mapped to a shared network path on the dev server that has a web site pointed at it in IIS (Dev01-Dev05, etc.). The developers work on projects in their branch, test them using their dev website, then check in changes to their own branch and merge those into the trunk. The trunk's workspace is mapped to the main dev website, so that the developers can test their changes against the other customers' dev domains, to cover customizations and variances in functionality based on the specific DBs they are connected to.
Very long explanation, but basically each dev has a branch and a site, which are then merged into the trunk with its own site.
In order to deploy to our staging server:
1. I compile the trunk's website via a bat file on the server.
2. Run a Windows app I built to query TFS for changesets associated with specific WorkItems in a certain status, and copy all the files for those changesets from the publish folder to a deployment folder.
3. Run another bat file on the server to use Red Gate's Deployment Manager to create a package from those new files.
4. Go to the DM site on our network to create and deploy that release (I haven't been able to get the command-line tools to work for this, so I have to do it manually).
5. Run any SQL scripts that have been saved in folders that match ticket numbers on each database (10 or so customer DBs) to support the release.
I have tried using TFS automated build stuff and never really got it to build the website correctly. Played around with Cruise Control also with little success. Using a mishmash of skunk works projects to do this is very time consuming and unreliable at best.
My perfect scenario would be:
1. Gated check-in: attempt a build/publish every time a developer merges into the trunk; reject and notify the developer if the build fails.
2. At the end of the day, collect the TFS items of a certain status and deploy the files associated with them to the staging site.
3. Deploy the SQL scripts for those TFS items across all the customer DBs in staging.
4. Eventually: run automated regression UI tests, and create new WorkItems or email devs on failure.
5. Update the TFS WorkItems to a new state so QA/customers know their items are ready to test in our staging environment.
6. Send a report of which items were deployed successfully.
How can I get there, so that I am not spending hours preparing and deploying releases to staging and eventually production? I'm pretty open to potential solutions; the one thing that would be hard to change is the source control we are using. We can't really switch to Subversion or something else, so we are pretty stuck with TFS.
Thanks
Went back in and started trying to get TFS to build/publish my web solution, and I was able to get a build to complete successfully. Adding the MSBuild argument /p:DeployOnBuild=True and setting the MSBuild platform to x86 seemed to do the trick.
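For reference, the command-line equivalent of that argument is roughly the following (the solution name and configuration are placeholders; the "MSBuild platform: x86" part is a build-definition setting that makes TFS run the 32-bit MSBuild.exe rather than a /p: switch):

    msbuild MyWebSolution.sln /p:DeployOnBuild=True /p:Configuration=Release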
Then I found https://github.com/red-gate/deployment-manager-tfs which gives you a build process template to do the package and deployment using the redgate tools. After playing with that for a bit I finally got it to create, package and deploy my build to our staging environment.
Next up will be to modify the template to run some custom scripts to collect only the correct items to deploy, deploy all the sql files and then to set the workitems to the appropriate statuses after completion.
Really detailed description of your process. Thanks for sharing!
I believe you can set up TFS to have gated check-in on a single branch; if you can set that up on trunk, it would make sure that the merges build successfully. That could trigger MSBuild, if you can get that working, or a custom build job.
If you can get that working then you'd be able to use that trunk code as the artifact to send to Deployment Manager. That avoids having to assemble the files for deployment through the TFS change sets, as you'd be confident that the trunk could always build.
Are you using Deployment Manager to deploy the database from source control as well as the application?
That could be a way to further automate the process. SQL Source Control and SQL CI allow you to source control the structure of a database, keep a database up to date on each check-in, and run database unit tests. They also produce database packages for Deployment Manager, so you can deploy a release that contains both the application and the database.
If you want to send me the command you're using in step 4 to deploy the release using Deployment Manager I can help out with that. The commands I use are:
    DeploymentManager.exe --create-release --server=http://localhost:81 --project="Project Name" --apiKey=XXXXXXXXXXX --version=1.1
    DeploymentManager.exe --deploy-release --server=http://localhost:81 --project="Project Name" --apiKey=XXXXXXXXXXX --version=1.1 --deployto=CI-Environment-Name
That will create a release version 1.1 using the latest available packages for that project. You can optionally specify the package to be used when creating the release with:
    --packageversion=<package name>=<version>
e.g. --packageversion="application=1.5"

Sonar + GitHub integration

I want to enable Sonar with Git, but is it necessary to first pull the project from the Git repository using Hudson (or something else), and then have Sonar analyse the code periodically on Hudson? Am I right that my steps would be:
1. Pull the project from Git using Hudson.
2. Sonar on Hudson analyses the code and sends the updates?
Or can we use Git + Sonar directly? How does that work? Can anybody guide me on getting it working?
Yes, you first need to pull your project from GitHub, and then launch a Sonar analysis on your local copy (Sonar needs the files to exist on the file system to be able to analyse them).
So you can pull your project manually, or obviously by using a CI server like Jenkins/Hudson.
The good news: yesterday (2015-07-08) SonarQube launched a GitHub plugin. Every time a pull request is submitted, the CI system launches a SonarQube preview analysis.
Reference:
http://www.sonarqube.org/github-pull-request-analysis-helps-fix-the-leak/
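For illustration, with the Maven analyser a pull-request preview analysis looked roughly like this at the time (the repository, pull request number, and token are placeholders):

    mvn sonar:sonar \
        -Dsonar.analysis.mode=preview \
        -Dsonar.github.repository=myorg/myrepo \
        -Dsonar.github.pullRequest=42 \
        -Dsonar.github.oauth=YOUR_GITHUB_TOKEN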

How to deploy artifacts of TeamCity to Amazon EC2 Server

We decided to use Amazon AWS cloud services to host our main application and other tools.
Basically, we have an architecture like this:
TESTSERVER: the EC2 instance our main application is deployed to. Testers have access to the application.
SVNSERVER: the EC2 instance hosting our Subversion repository.
CISERVER: the EC2 instance where JetBrains TeamCity is installed and configured.
Right now, I need CISERVER to check out code from SVNSERVER, build it, unit test it if the build is successful, and, after all tests pass, deploy the artifacts of the successful build to TESTSERVER.
I have finished configuring CISERVER to pull the code, build, test, and produce artifacts. But I couldn't work out how to deploy the artifacts to TESTSERVER.
Do you have any suggestion or procedure to accomplish this?
Thanks for help.
P.S: I have read this Question and am not satisfied.
Update: there is a deployer plugin for TeamCity which allows you to publish artifacts in a number of ways.
Old answer:
Here is a workaround for the issue that TeamCity doesn't have built-in artifacts publishing via FTP:
http://youtrack.jetbrains.net/issue/TW-1558#comment=27-1967
You can:
1. Create a configuration which produces build artifacts.
2. Create a configuration which publishes artifacts via FTP.
3. Set an artifact dependency in TeamCity from configuration 2 to configuration 1.
Use either manual or automatic triggering to run configuration 2 with the artifacts produced by configuration 1. This way, your artifacts will be downloaded from build 1 into configuration 2 and published to your FTP host.
Another way is to create an additional build step in TeamCity for configuration 1, which publishes your files via FTP.
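That extra step can be as simple as a command-line build step that uploads the artifact with curl; the host, credentials, and file paths below are placeholders:

    # TeamCity "Command Line" build step: upload the built artifact over FTP.
    curl -T build/output/app.zip \
         --user deploy:secret \
         "ftp://ftp.example.com/releases/app.zip"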
Hope this helps,
KIR
What we do for deployment is that the QA people log on to the system and run a script that deploys by pulling from the TeamCity repository whenever they want. They can see in TeamCity (and get an e-mail) when a new build happens, but regardless they just deploy when they want. In terms of how to construct such a script, the TeamCity component involves retrieving the artifact. That is why my answer references getting the artifacts by URL; that is something any reasonable script can do using wget (which has a Windows port as well) or similar tools.
If you want an automated deployment, you can schedule a cron job (or Windows Scheduler task) to run the script at regular intervals. If nothing changed, it doesn't matter much. I question the wisdom of this, given that it may disrupt someone's testing by restarting the system involved.
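A minimal sketch of such a script, using TeamCity's artifact download URLs (this assumes guest access is enabled; the host, build configuration ID, and paths are placeholders):

    #!/bin/sh
    # Fetch the artifact of the last successful build from TeamCity by URL.
    wget -O /tmp/app.zip \
      "http://teamcity.example.com/guestAuth/repository/download/MyProject_Build/.lastSuccessful/app.zip"
    # Unpack it into the application's deploy directory.
    unzip -o /tmp/app.zip -d /opt/myapp

    # For scheduled deployment, a cron entry could run this hourly:
    # 0 * * * * /usr/local/bin/deploy-from-teamcity.sh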
Having TeamCity push the changes as they happen is not something that TeamCity does out of the box (as far as I know), but you could roll your own, for example by having something triggered via one of TeamCity's notification methods, such as e-mail. I just question the utility of that. Do you want your system changing at random intervals just because someone happened to check something in? I would think it preferable to actually request the new version.