Using Capistrano to deploy (a non-Rails site) via FTP? - deployment

How would I go about this?
I have a mostly static site, hosted on a cheap web host that only allows FTP access. The site is tracked in git. I am using OS X.
I would like to upload a new version of the site by simply doing cap deploy

We use Capistrano to deploy our site, which is written in PHP.
From memory (I'm not at work right now), we overload deploy and use rsync to sync over SSH. Something like this:
desc "Sync"
namespace :deploy do
desc "Sync remote by default"
task :default do
remote.default
end
namespace :remote do
desc "Sync to remote server"
task :default do
`rsync -avz "/path/to/webapp" "#{remote_host}:#{remote_root}/path/to/webapp"`
end
end
end
I'm sure you could replace rsync with whatever FTP program you prefer and it should work fine.
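If the host really is FTP-only (so no SSH, and therefore no rsync), one option is to shell out to lftp's mirror command from the task instead. A rough sketch, with the host, credentials and paths as placeholders:

namespace :deploy do
  desc "Mirror the local site to the FTP host"
  task :default do
    local_dir  = "/path/to/site/"   # local working copy (placeholder)
    remote_dir = "/public_html"     # remote docroot (placeholder)
    # mirror -R uploads local changes; --only-newer skips unchanged files
    system("lftp", "-u", "user,secret", "ftp.example.com",
           "-e", "mirror -R --only-newer #{local_dir} #{remote_dir}; quit") or abort("lftp failed")
  end
end

With that in place, cap deploy works as usual.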

I've not tried it with Capistrano, but in my own shell scripts I've always used weex
(http://weex.sourceforge.net/)
to deploy sites over FTP. I imagine you could hook it up with Capistrano too.
It keeps a local cache of the state of the FTP server so that it only uploads changed files. This massively speeds things up, but (obviously?) it will go wrong if your files get changed by some other means, so the caching can be disabled if need be.
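If you did want to drive it from Capistrano, the simplest hack would be a task that just shells out to weex, assuming you have already configured the site in ~/.weexrc (the site name here is a placeholder):

namespace :deploy do
  desc "Push the site over FTP with weex"
  task :default do
    # weex reads host, login and directory settings from its own
    # config file (~/.weexrc), so the task only needs the site name
    system("weex", "mysite") or abort("weex failed")
  end
end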

Related

Play framework: too large executable

For deployment, I use the "dist" command of the Play console; it produces a zip, which I copy to the server and run.
The problem is that the produced archive is large (~40 MB), so it takes time to copy the file to the remote server, and that slows me down. (I need to update the server frequently, since I often have to show results to designers and other people during conversations, etc.)
I came from the PHP world, where deployment is a simple copy (or git push/pull) of the source files.
What is the best practice in the Play framework to achieve a faster deployment cycle?
Why don't you set up the Play environment on the remote server, copy over just the classes that changed with your update, and run it over SSH?
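For illustration, a quick-deploy script along those lines might look like this. This is a hedged sketch: the host, paths and restart command are all assumptions about your setup, and it only works if the server runs the app from a classes directory rather than the packaged jars.

#!/usr/bin/env ruby
# Hypothetical fast-deploy: push only the compiled classes, then restart.
host    = "user@myserver"   # assumed SSH login
app_dir = "/opt/myapp"      # assumed remote install directory

system("rsync -avz target/scala-2.10/classes/ #{host}:#{app_dir}/classes/") or abort("rsync failed")
system(%(ssh #{host} "#{app_dir}/restart.sh")) or abort("restart failed")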

Puppet - recognize new build versions and deploy

I have a Puppet master that sources my application builds into a master folder, e.g. xxxxx_v1.0.0.zip and yyyyy_v1.0.8.zip (xxxxx gets deployed to one set of servers and yyyyy to another set).
What is the best way to handle new versions of my application builds on the Puppet master without editing the .pp files on the master to reference the new build number in the filename? Preferably automatically.
Thanks
A good way is to build a suitable package for your operating system instead. Puppet can use those with:
package { 'application-x': ensure => latest }
Failing that, you can solve this
on the agent side, by fetching your application metadata from somewhere, e.g. with an exec of wget, then having it run a script to perform the deployment if necessary; or
on the master side, by using an ENC like the Puppet Dashboard, or better yet Hiera, to hold your latest-version information.
If you really want to do this through Puppet's fileserver, without touching any metadata and just dropping the files into your modules, you can try the generate function (note that it takes the command and its arguments as separate parameters):
$latest_zip_application_x = generate('/usr/local/bin/find_latest', 'application_x')

file { 'application_x.zip':
  ...
  source => "puppet:///modules/application_x/path/to/${latest_zip_application_x}",
}
where /usr/local/bin/find_latest is a script that will find the most recent version of your package and write it to stdout.
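For illustration, find_latest could be as simple as this. It's a hypothetical sketch: the modules path is an assumption, and it uses print rather than puts because a trailing newline in the output would end up inside the source URL.

#!/usr/bin/env ruby
# Hypothetical find_latest: print the newest versioned zip for an application.
app = ARGV[0] or abort("usage: find_latest <application>")
dir = "/etc/puppet/modules/#{app}/files/path/to"   # assumed fileserver path
latest = Dir.glob("#{dir}/#{app}_v*.zip").max_by do |f|
  Gem::Version.new(f[/_v([\d.]+)\.zip\z/, 1])
end
abort("no builds found for #{app}") unless latest
print File.basename(latest)   # no trailing newline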
This is pretty horrible practice though - you are really not catering to Puppet's strengths with constructs like these.

How can I utilize source control when my working copy needs to be on a shared host without SSH access?

I'm trying to develop a little toy PHP project, and the most convenient location to run it is on a shared host I happen to have for my ill-maintained blog. The problem with this is that I have no way to run Subversion on this shared host, nor do I even have SSH access to be able to access an external repository from the host. Had I been thinking straight a few months ago when the hosting was up for renewal, I probably should have paid a couple extra bucks to switch to something a bit better, but for now I can't justify throwing money at having a second host just for side projects.
This means that a working copy of my project would need to be checked out to my laptop, while the project itself would need to be uploaded to the shared host to run. My best option seems to be creating a virtual machine running Linux and developing everything from in there, but I know from past experience that the extra barrier that creates, small though it may be, is enough that it puts me off firing the VM up just to do a couple minutes work to make some minor change I just thought up. I'd much prefer to just be able to fire up my editor and get to work.
While I'd imagine I'm not the first to encounter such a problem, I haven't had much success finding a solution online. Perhaps there isn't one beyond the VM or "manual mirroring" options, but if there is I'd expect StackOverflow to be the place to find it.
Edit: There's some confusion, it seems, so let me attempt to clarify. The shared host here is basically my dev server, but it has no svn or ssh. In other words, I can svn checkout to my laptop, but I can't run that on my shared host. Similarly, I can run/test my code on the shared host, but I can't do that on my laptop (well, I technically could, but it's Windows, and I don't want to worry about Win-vs.-Linux differences with PHP, since I do want this to become public at some point, and it will certainly be Linux-based at that point).
You might consider writing a post-commit hook to automatically upload the code to your host, so that any time you commit a change, a script executes that:
Checks out a copy of the code into a temporary directory
Uploads that code via FTP (or whatever your preferred method is) to the shared host
Cleans up after itself, optionally informing you via e.g. email when the transfer is successful
Subversion makes enough information available to these scripts at runtime that you could get more sophisticated and opt only to upload the files that changed or alter behavior based on specific property changes, for instance, but for a small project the brute force "copy it all" approach should be fine.
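A minimal sketch of such a hook in Ruby (Subversion passes the repository path and revision as arguments; the FTP host, credentials and docroot are placeholders):

#!/usr/bin/env ruby
# Hypothetical post-commit hook: export the committed revision and FTP it up.
require 'net/ftp'
require 'tmpdir'
require 'find'

repo, rev = ARGV[0], ARGV[1]

Dir.mktmpdir do |tmp|
  # 1. Export a clean copy of the committed revision
  system("svn", "export", "-q", "-r", rev, "file://#{repo}", "#{tmp}/site") or abort("export failed")

  # 2. Upload it over FTP
  Net::FTP.open("ftp.example.com", "user", "secret") do |ftp|
    Dir.chdir("#{tmp}/site") do
      Find.find(".") do |path|
        next if path == "."
        remote = File.join("/public_html", path.sub(%r{\A\./}, ""))
        if File.directory?(path)
          ftp.mkdir(remote) rescue nil   # ignore "directory exists"
        else
          ftp.putbinaryfile(path, remote)
        end
      end
    end
  end
end
# 3. Dir.mktmpdir removes the temporary copy on the way out

An email notification step could be bolted onto the end in the same way.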

How to run Selenium RC + PHPUnit + NetBeans remotely?

I have Selenium Server, PHPUnit and NetBeans up and running on a machine that I want to be my dedicated testing box. How can I set it up so that I modify test cases (I already figured that part out) and then tell the test machine to run the tests remotely?
I'd use a continuous integration server like Jenkins. Usually CI servers are used to build an application on every commit to a repository, but it's just as easy to manually start a "build" that just consists of running all your tests (and recording the results, and running code coverage if you want, etc).
I found Jenkins to be really easy to set up (I followed a nice tutorial at http://blog.jepamedia.org/2009/10/28/continuous-integration-for-php-with-hudson) - the only extra work I had to do besides creating a build script was to make sure that Selenium RC is running on the test machine, and it sounds like you've already done that.
To make it even easier, if you set up Jenkins (or any other CI server, I'm sure) to build on a commit to your repository, then you don't even have to log onto the test machine to edit the tests - anybody can commit test changes, the CI server will run the tests, and everybody can see the results. Not quite as important if you're developing solo, but still a handy trick.
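The "build" itself can stay trivial; for a PHP project it can be little more than a script that runs PHPUnit and leaves a JUnit-style report for Jenkins to chart. A sketch as a Rakefile (the paths are placeholders):

# Hypothetical Rakefile for the Jenkins job to invoke with "rake".
# Assumes Selenium RC is already running and phpunit is on the PATH.
task :test do
  sh "phpunit --log-junit build/phpunit.xml tests/"
end

task :default => :test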
You can run the test cases from the remote server and have them execute on the local machine. Follow these steps:
Install PHPUnit and the necessary packages on the server
Edit the test case and set the host to the local machine's IP address (use a static IP address)
Run the Selenium RC on the local machine
Run the test case on the server
The test case will then execute on the local machine.

Setting Hudson's process user on Mac+Tomcat

Synopsis
I have Hudson set up on our Mac OS Server (Snow Leopard 10.6.5), running under the standard Tomcat (so, Tomcat 6), which is enabled using the Server Admin application.
I'd like to be able to run my Hudson scripts as a Unix user/login which is not the Tomcat login.
Details
My Hudson job is a freestyle project that runs bash scripts which invoke xcodebuild (it's an iPhone project) to clean and build the product.
The problem is that using this standard set-up, Hudson (as far as I can see) runs with Tomcat's Unix user, which is _appserver.
This means that _appserver is the user which is invoking xcodebuild and all the scripts that make up the job.
I would prefer for Hudson to have its own Unix login, complete with home directory etc. Aside from being a bit happier about the permissions of the login that is trying to do the build, Xcode itself seems to prefer the user to have a home directory, and the build logs are filled with warnings such as:
2010-11-11 17:29:11.729 Interface Builder Cocoa Touch Tool[58771:1903] CFPreferences: user home directory at file://localhost/var/empty/Library/Application%20Support/iPhone%20Simulator/User/ is unavailable. User domains will be volatile.
and
Couldn't open shared capabilities memory GSCapabilities (No such file or directory)
Plus I suspect getting the provisioning profiles to work to build device builds would be a lot easier if the login was a standard login that can build the targets from Xcode.
BUT I totally can't find any way to set the login account! It seems like exactly the sort of thing you'd want to do, but I have scoured the web for info to no avail. The tomcat-users.xml file felt like it might be useful, but it doesn't seem to link to a "real" (Unix) user.
Another approach may be to live with Hudson being _appserver, but have the scripts themselves run as my build user. This seems to point to using sudo, but everything is so locked down that I can't find a way to run a script as another user, even one I can restrict to minimal security access.
Hope you can help folks!
The simplest way to achieve what you want (and the only way I know how) would be:
Create a new slave in Hudson and point it at the Hudson server (your master system will now also be a slave); have it log in using SSH, but with the user credentials that you want to use for the build (let's say 'hudson').
Point your project to build on the slave. This way, your job does not depend on Tomcat (or its user), but on the slave login.
In steps:
1) Click on 'Build Executor Status'
2) On the left sidebar, click on New Node
3) Give the slave a name, click "Dumb Slave", and "OK"
4) Number of executors = 1
5) remote FS root = /home/<hudson_user>
6) Launch method = UNIX SSH or JNLP
7) If launch = SSH: host = ip address of master,
username = hudson, password = whatever_password
8) If launch = JNLP: log in as the hudson user on the build machine, open
your Hudson site, and launch the slave agent from the new node's page
9) Configure your job to use your slave (restrict where this project can be run)
9a) Possibly, under configuration, turn off all executors on your master,
and use the new slave for anything you need to build.
I know it sounds a bit convoluted, but if you need any more explanation or have any questions, let me know.