Jenkins: FTP / SSH deployment, including deletion and moving of files

I was wondering how to get my web-projects deployed using ftp and/or ssh.
We currently have a self-made deployment system which is able to handle this, but I want to switch to Jenkins.
I know there are publishing plugins and they work well when it comes to uploading build artifacts. But they can't delete or move files.
Do you have any hints, tips, or ideas regarding my problem?

The Publish Over SSH plugin lets you run commands on the remote server over SSH. This works very well: we also move and delete some files before deploying the new version, and we have had no problems whatsoever with this approach.
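As a hedged illustration (the paths below are placeholders, not taken from the question), the plugin's exec-command step could run something like this before the new artifacts are copied over:
# keep the currently deployed version aside and start with an empty target directory
mv /var/www/myapp /var/www/myapp.previous
mkdir -p /var/www/myapp
# drop an older backup that is no longer needed
rm -rf /var/www/myapp.obsolete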

The easiest way to handle deletions and moves is to delete everything on the server before you deploy a new release with one of the 'Publish over ...' plugins. That really is the only way to be sure the deployed version is exactly the one you want. If you want behaviour closer to a version control system, you either need an actual version control system on the server, or perhaps rsync, which covers part of it.
If your requirements are very specific, you could define your own convention for marking deletions and have a separate script perform them (much like you would handle database changes with Liquibase or a similar tool); a minimal sketch of that idea follows.
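For example, a hypothetical deletions.txt checked into the repository could list one server-relative path per line, and a small script run on the server could apply it (the file name and directory here are assumptions for the sketch, not an established convention):
#!/bin/sh
# apply-deletions.sh - remove paths listed in deletions.txt from the release directory
RELEASE_DIR=/var/www/myapp   # placeholder
while IFS= read -r path; do
  # ${path:?} aborts on an empty line, so we never run "rm -rf $RELEASE_DIR/"
  rm -rf "$RELEASE_DIR/${path:?}"
done < deletions.txt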
By the way: I would recommend not updating your live sites automatically after every build with the 'Publish over ...' plugins. Where we really do want a live site updated automatically, we rely on the Promoted Builds Plugin to keep the process nearly fully automated while adding a little safety.

I came up with a simple solution to remove deleted files and upload changes to a remote FTP server as a build step in Jenkins, using a simple lftp mirror script (see the lftp manual page for details).
In short, create a ~/.netrc config file in the Jenkins user's home directory and populate it with your FTP credentials:
machine ftp.remote-host.com
login mySuperSweetUsername
password mySuperSweetPassword
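Since this file holds plain-text credentials, it is worth locking down its permissions; some classic FTP clients even refuse to read a .netrc that other users can read, and it is good practice for lftp too:
chmod 600 ~/.netrc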
Create an lftp script, deploy.lftp, and drop it in the root of your Git repo:
set ftp:list-options -a
set cmd:fail-exit true
open ftp.remote-host.com
mirror --reverse --verbose --delete --exclude .git/ --exclude deploy.lftp --ignore-time --recursion=always
Then add an "Execute shell" build step that runs lftp against the script:
lftp -f deploy.lftp
The lftp script will:
mirror: copy all changed files
reverse: push local files to the remote host (a regular mirror pulls from the remote host to local)
verbose: write notes about which files were copied where into the build log
delete: remove remote files no longer present in the Git repo
exclude: don't publish the .git directory or the deploy.lftp script
ignore-time: don't decide what to publish based on file timestamps. Without this, in my case, every file got published, because a fresh clone of the Git repo resets the file timestamps. It still works well regardless: even files that differed only by a single added space were identified as changed and uploaded.
recursion: analyse every file rather than relying on folder timestamps to decide whether anything inside them might have changed. This isn't strictly necessary since we're already ignoring timestamps, but I keep it in anyway.
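If you want to preview what would be transferred or deleted before trusting the job with a live site, lftp's mirror accepts a --dry-run flag (check your lftp version's man page); a throwaway variant of the script's mirror line would be:
mirror --reverse --verbose --delete --dry-run --exclude .git/ --exclude deploy.lftp --ignore-time --recursion=always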
I wrote an article explaining how I keep FTP in sync with Git for a WordPress site I could only access via FTP. It explains how to sync from FTP to Git and then how to use Jenkins to build and deploy back to FTP. This approach isn't perfect, but it works: it only uploads changed files, and it deletes files from the host that have been removed from the Git repo (and vice versa).

Related

rsync'ed outputs are always marked as outdated

Using {targets} to manage a workflow, which is great.
We don't have a proper cluster setup, but I have access to remote machines with much better specs than my laptop, so I can use git to keep the plan in sync locally and remotely.
When I want to work with something locally, I use rsync to move the files over.
rsync -avxP -e "ssh -p ..." remote:path/to/_targets .
When I query the remote cache with tar_network, I see that a bunch of my targets are "uptodate".
When I query the local cache after the rsync above, those same targets are "outdated".
I'm wondering whether there are better ways to call rsync, or certain arguments to tar_network(), or whether this is a bug and the targets should stay "uptodate" after an rsync like this?
OK, so I figured this out.
It was because I was being foolish about what I used as a target in this case. To make sure that a package dependency was being captured, I was using something that grabbed the package's entire DESCRIPTION (packageDescription(), I think). The problem is that when you install the package using remotes::install_github(), it adds some extra information to the DESCRIPTION at install time (the Packaged and Built fields), and that information differed between the installation on my local machine and the installation on the remote machine.
What I really wanted was just the GithubSHA1 entry from packageDescription(), to verify that I was using the same package at the same commit from my GitHub repo. Once I switched to using that instead of the entire DESCRIPTION, targets had no issue with the metadata and things stayed current when rsync'ing them between machines.

Git actions, ignoring specific folders on .yml file

I'm new to GitHub Actions.
I'm using Actions to automate deployment from my repository to my SFTP server. Everything works fine, but the action takes too long to run, around 20 minutes. My repo contains directories like system, application, and dist that I don't want to re-upload every time.
Doing a little research, I found that certain paths can be ignored with "paths-ignore", but for some reason it's not working; this is my file.
I want to ignore all the content inside application/cache, application/config, application/core, etc., or those whole folders ("application/cache", "application/config", and so on).
What am I doing wrong? Is it possible to do that?
First, a suggestion:
Don't put all these files inside your repository; Git is for versioning code, not for storage.
paths-ignore is a setting on the on: event. It means: when there are changes in your repository, except changes only to the paths listed in paths-ignore, start the workflow (see the sketch below).
There is no way to cut the workflow time by much, because every job spins up a new machine that fetches your entire Git repository before running the workflow. But have a look at the SFTP-Deploy-Action documentation and its local-path parameter so that you upload only what you want; that might drop your workflow to around 10 minutes.
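As a rough sketch (the branch name and exact paths are assumptions, since the original workflow file isn't shown here), paths-ignore sits under the push trigger like this:
on:
  push:
    branches: [ main ]
    paths-ignore:
      - 'application/cache/**'
      - 'application/config/**'
      - 'application/core/**'
Note that this only decides whether the workflow runs at all; it does not exclude those folders from the upload itself, which is what the deploy action's local-path parameter is for.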

Use Git list output to copy files for archiving

I'm currently helping to maintain a project for a client remotely. I'm the only developer, hence some of my unorthodox approaches/thinking.
the problem
The client is using Visual Studio 2010 + Team Foundation Server for their source control. I am working on a Mac over VPN and have tried several approaches to make committing to their TFS workable. I've tried the TFS plugin for Eclipse with no luck (the VPN really hoses the connection to TFS). Currently I have to do a full "checkout for edit" through a virtual machine connected to TFS, then transfer the project over the VPN to overwrite those files. Not a sustainable solution, to say the least.
the solution?
I'm wondering if there is a way to:
get a list of changed files from Git (I think this is the solution: How to list all the files in a commit?)
then use that list to go in and fetch those files, maintaining their folder structure
from there I can do my dump over VPN into the VM that has the project mapped in TFS.
Or if there is something I've overlooked or hadn't thought of, please do recommend it; I'm all ears.
First, I'm assuming you are running the VM on or near the TFS server, not on your Mac. If not, you can just share a directory using VMware/VirtualBox and edit away on your Mac...
It sounds like you could achieve what you want with plain old Git. If you:
Create a bare repository on the VM (git init --bare)
Add a post-receive hook that copies the files from the master branch (for example) into the TFS directory, overwriting merrily; see the hook sketch after this list and http://git-scm.com/book/en/Customizing-Git-Git-Hooks
Initialise your local copy of the source as a Git repository (git init)
Add the remote repository. Assuming it's a Windows box, you can use an SMB shared folder over the VPN so the remote is "local" as far as Git is concerned (git remote add tfsserver file:///Volumes/tfsmount/code)
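A minimal post-receive hook along those lines might look like this; the target path is a placeholder for wherever TFS has the project mapped on the VM:
#!/bin/sh
# post-receive: check out the latest master into the TFS-mapped working directory
TARGET=/path/to/tfs/workspace   # placeholder
GIT_WORK_TREE="$TARGET" git checkout -f master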
Your first push will be expensive (though you could pre-populate the remote repo to get around that), but subsequent pushes will only send the changesets. The post-receive hook then takes care of updating the files, and you're laughing.
Of course, you then get to impress them with how amazing Git is, get them to migrate, and your problem goes away forever :).
Update: Here's a link which describes these steps in more detail, under the guise of updating a remote website: http://toroid.org/ams/git-website-howto.

How Do I Update a Live Server Correctly Using Mercurial?

I'm new to Mercurial and version control, and although I'm only working on personal PHP application projects (until I hopefully get a job soon) I'm well overdue learning how it all works.
I've been reading about Mercurial all day, but I'm still confused on a few elements...
Firstly, I understand Mercurial CAN push my files straight to my live server, but I don't see many tutorials or examples explaining how this is done, which leads me to think it's not used often? I currently upload my files via FTP, and it's error-prone trying to keep track of which files have been modified, so I'd obviously like to eliminate that.
I also see services like BitBucket being mentioned a lot, but if I'm pushing to BitBucket how do I then get my files to my live server? Can I get only the changed files to upload via FTP, or do I need to install Mercurial on my server too or something?
Apologies if this is a basic question; I'm just a little lost as to how companies would/should use this service, and how files and uploads are handled elegantly. How should I go about version control on a personal project?
There are many ways to do that, but I'll try to narrow it down to the basic steps involved in a scenario using BitBucket:
1) Install Mercurial on both your dev machine and your live server.
2) Create a repository in BitBucket.
3) Clone the repository to your dev machine using the URL that appears in BitBucket, e.g.:
hg clone https://your_user@bitbucket.org/your_account/your_repos .
4) Clone the repository to your live server in the same way.
5) Do your dev and commit your code to the local repository on your dev machine (using hg commit). Then push the changesets to BitBucket using hg push.
6) Once you're ready to deploy the changes to your live server, log in to your live server and run hg pull -u.
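Put together, the day-to-day loop looks roughly like this (the repository URL and paths are whatever you used in the clones above):
# on the dev machine
hg commit -m "describe the change"
hg push

# on the live server
hg pull -u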
I just use rsync to upload everything. If you're working by yourself, it's simple and works fine.
I set up SSH key authentication, and then made a bash shortcut ,p (target directory):
,p() { rsync -avz --delete ./ "user@server.com:/var/www/html/$@/"; }
Then, on my local host I can type ,p images and the current directory will be uploaded to mysite/images.
If you're always uploading to the same place, you can make a shortcut with no argument:
alias ,pm='rsync -avz --delete ./ user@server.com:/var/www/html/'
Finally, if you just want to type the command:
rsync -avz --delete ./ user@server.com:/var/www/html/

Drupal 6: using bitbucket.org for my Drupal projects as a real version control system dummy

Here's a real version control system dummy, a proper new starter!
The way I have worked so far:
I have a Drupal 6 web project, www.blabla.com, and do development under www.blabla.com/beta. I work directly on blabla.com/beta on the server: nothing on my local machine, nothing anywhere else, only the occasional backup taken to local. I know, a horrible and unsafe way to work :/
The new way I want to work from now on:
I have decided to use Mercurial, and I have one more developer working on the same project with me. I have the blabla.com Drupal 6 project on Bluehost and do development at blabla.com/beta. I found http://bitbucket.org/ for Mercurial hosting and have created an account.
So now how do I set things up? I'm totally confused after reading tens of articles :/
Is Bitbucket only for hosting revised files? So if I or my developer friend edit index.php, will Bitbucket host only index.php?
From now on, do I have to work at localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
When I need to edit any file, do I first download the update from Bitbucket, make my change at localhost, update Bitbucket with the edited files, and then upload to Bluehost?
Sorry for the silly questions, I really need some guidance...
I appreciate the help so much, thanks a lot!
Is Bitbucket only for hosting revised files?
The main service of bitbucket is to host files under revision control, but there is also a way to store arbitrary files there.
So if I or my developer friend edit index.php, will Bitbucket host only index.php?
In a typical project, every file which belongs to the product is checked into revision control, not only index.php; see this example.
From now on, do I have to work at localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
Mercurial does not dictate a fixed workflow, but I recommend having Mercurial installed wherever you edit the files. For example, you can then see directly which changes you have made since the last commit, without needing to copy the files from your server to your local repository.
I absolutely recommend a workflow where somewhere in the repository there is a script which generates the archive that gets transmitted to the server, and which records the revision of the repository the archive was created from. That revision information should also be stored somewhere on the server (not necessarily in a publicly accessible area), since it can come in very handy when something goes wrong; a sketch of such a script follows.
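A minimal sketch of such a packaging script, assuming it lives in the repository root (the file names are placeholders):
#!/bin/sh
# build-archive.sh - package the working copy and record which revision it came from
REV=$(hg identify --id)
hg archive "release-$REV.tar.bz2"
echo "$REV" > DEPLOYED_REVISION
# upload both the archive and DEPLOYED_REVISION to the server with whatever transfer you use
(hg archive also writes a .hg_archival.txt with the revision hash into the archive itself, which covers the same need.)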
When I need to edit any file, do I first download the update from Bitbucket, make my change at localhost, update Bitbucket with the edited files, and then upload to Bluehost?
There are several different approaches to get the data to the server:
export the local repo into an archive and transmit it onto the server (hg archive production.tar.bz2); this is the most secure variant, since it does not depend on any extra software on the server, but depending on how big the archive is, it can waste a lot of bandwidth
work on the server and copy changed files back, but I don't recommend this, since it is very easy to miss something important
install Mercurial on the server, work in a working copy there, and hg export locally from there into the production area
install Mercurial on the server and hg fetch from Bitbucket (or any other server-accessible repository)
install Mercurial on the server and hg push from your local working copy to the server (and hg update on the server afterwards); see the sketch after this list
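A rough sketch of that last approach, with the host and paths as placeholders:
# one-time setup on the server
hg init /var/www/blabla.com/beta

# from the local working copy, push over SSH
hg push ssh://user@your-bluehost-server//var/www/blabla.com/beta

# on the server, bring the working directory up to the pushed revision
hg update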
The last two points can expose the repository to the public. This exposure can be either good or bad, depending on what your repository contains and whether you want to share its content. If you do want to share the content, or you can limit access to www.blabla.com/beta/.hg, you can clone directly from your web server.
Also note that you should not check in any files containing passwords or critical secrets, even when you access-limit the repository. It is much safer to check in template files (with a different name than in production) and copy-and-edit those files on the server.