Tool to upload modified files over FTP - command-line

I know there are similar questions on here, but they all seem to end up with a recommended answer that doesn't suit my (possibly more specific) requirements; I'm dealing with a rubbish shared host.
I'm wondering if there is a command-line tool which can do a one-way sync over FTP, i.e. upload new or modified files, given a host, username, password, etc.
I know rsync can do this, but unfortunately in this instance I can't use it.
If there is no such tool, does FTP support any kind of hashing (MD5, SHA-1, etc.) so I could build my own?
The ultimate goal here is to have this running automatically, as a git hook or makefile script.
Thanks

Since asking this question, I've found git-ftp, which works really well, assuming you can keep all your files under git.
I'm using something along the lines of git add . ; git commit -m "Recent changes" ; git ftp push to upload modified files.
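For completeness, the first-time setup looks roughly like this (the URL and credentials are placeholders; option names may vary between git-ftp versions, so check its documentation):
$ git config git-ftp.url "ftp://ftp.example.com/public_html"
$ git config git-ftp.user "ftpuser"
$ git config git-ftp.password "secret"
$ git ftp init    # first run: uploads everything and records the pushed commit
$ git ftp push    # later runs: uploads only files changed since the last push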

Related

Protecting the sensitive files from pushing to version control

Here's the problem: I'm developing a few projects that need to somehow store credentials used for external services - for example, an e-mail address and password. I figured it falls into "configuration" and decided to move the variable data into a separate file the program reads from. I don't want this file to be pushed to upstream though, because in some of my smaller programs, written for personal needs, I test on a production environment and the data are usually quite sensitive.
On the other hand, I don't want a basic .gitignore solution or its equivalent - instead of not uploading the file at all, I'd prefer to send another file in its place, an example configuration file, while keeping the "real" file on its place on my computer. Is there any simple way of achieving it?
If you need more details to answer the question, I'd prefer an answer regarding Git VCS, Python scripts and Linux OS.
One possible solution:
Commit and push a "sample" config file. Then, make the modifications you want to the local config file. It will now be marked as modified in Git.
Use git update-index --assume-unchanged config to permanently ignore future local modifications to the config file (use --no-assume-unchanged to resume tracking modifications).
This way, you will have a sample config in the upstream repo, a customized config in your repo, and you will not accidentally commit the changed config because Git will not mark it as modified.
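A sketch of that sequence, assuming the file is simply called config (the name is illustrative):
$ git add config                                  # commit the file with placeholder values
$ git commit -m "Add sample config"
$ # now put your real credentials into config; Git marks it as modified
$ git update-index --assume-unchanged config      # hide local changes from now on
$ git update-index --no-assume-unchanged config   # run later if you need to track it again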
Another thing you can do is use .gitattributes filters. They let you specify certain files to be piped through a Unix command. You could write a Ruby script to scrub out your passwords and replace them with dummy values, or just use sed or awk. You could go a lot crazier than that and use it for some really dangerous things too, heh. See this other answer for some details.
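As a rough illustration of that clean-filter approach (the file name and sed pattern are made up; adapt them to your config format):
$ # tell Git which files go through the filter:
$ echo 'config filter=scrubpass' >> .gitattributes
$ # the clean filter runs on checkin and swaps the real password for a dummy:
$ git config filter.scrubpass.clean "sed 's/^password=.*/password=DUMMY/'"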

Which version control system should I use for my small personal code files?

I have some general scripts that I use and they keep getting modified over time. Right now, I do not use any version control software for them so basically the old files are lost unless I explicitly save them.
I need a good minimal version control system that I can use on a single machine. Which one do you use for such projects?
Git or Mercurial both work great. No server required.
I've used Subversion for this in the past, mostly because I'm on Windows and TortoiseSVN is a dead simple UI for my repo.
For a scenario like yours, which is relatively simple, I'd recommend using either what you're familiar with, or what is easy to use on your platform.
Git is actually really easy to use in such a setting, and it scales just as well to really small repositories with a few commits a month as it does to huge ones with a hundred a day. Here's how you would set up such a repository:
$ cd ~/your-scripts
$ git init
$ git add .
$ git commit -m 'Start script repository'
Ta-da!
As a hosting solution we make use of http://codesion.com/free_cvs_svn; you will note they also support Git hosting. They also host a bunch of other services that go hand-in-hand with versioning.
Check out some of the personal version control systems. Here is a short list:
FileHamster
History Explorer
FolderTrack
Oops! Backup
They are super easy to use and "automatically check in" whenever you modify your files.
Note: I am the author of FolderTrack and recommend it for software because it can treat a bunch of files as one big project. Therefore, if you need to revert your project to where it was yesterday, it will revert the 8, 10, or however many other files you modified since that time.
Free code: BOS

Drupal 6: using bitbucket.org for my Drupal projects as a real version control system dummy

I'm a real version control system dummy here, a proper new starter!
The way I have worked so far:
I have a Drupal 6 web project at www.blabla.com and do development under www.blabla.com/beta. I work directly on blabla.com/beta on the server: nothing on my local machine, nothing anywhere else. I only take a backup to local from time to time. I know, a horrible and unsafe way to work :/
The new way I want to work from now on:
I decided to use Mercurial. I have one more developer who will work on the same project with me. I have the blabla.com Drupal 6 project on Bluehost and do development at blabla.com/beta. I found http://bitbucket.org/ for Mercurial hosting and have created an account.
So now how do I set things up? I'm totally confused after reading tens of articles :/
Is bitbucket only for hosting revised files? So if I or my developer friend edits index.php, will bitbucket host only index.php?
From now on, do I have to work at localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
When I need to edit any file, do I first download the update from bitbucket, make my change at localhost, update bitbucket with the edited files, and then upload to Bluehost?
Sorry for the silly questions, I really need some guidance...
I appreciate the help so much! Thanks a lot!
Is bitbucket only for hosting revised files?
The main service of bitbucket is to host files under revision control, but there is also a way to store arbitrary files there.
So if I or my developer friend edits index.php, will bitbucket host only index.php?
In a typical project, every file which belongs to the product is checked into revision control, not only index.php. See this example.
From now on, do I have to work at localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
Mercurial does not dictate a fixed workflow, but I recommend that you have Mercurial installed wherever you edit the files. For example, you can then see directly which changes you have made since the last commit, without needing to copy the files from your server to your local repository.
I absolutely recommend a workflow where somewhere in the repository there is a script which generates the archive file that is transmitted to the server, and which records the revision of the repository at the time the archive was created. This revision information should also be stored somewhere on the server (not necessarily in a publicly accessible area), since it can come in very handy when something goes wrong.
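A minimal sketch of such a script, assuming a Unix shell and an FTP upload via curl (host, credentials, and file names are placeholders; note that hg archive also embeds the revision itself in a .hg_archival.txt file inside the archive):
$ hg id -i > revision.txt               # record the current revision
$ hg archive production.tar.bz2         # snapshot of the working copy's revision
$ curl -T production.tar.bz2 ftp://user:pass@ftp.example.com/deploy/
$ curl -T revision.txt ftp://user:pass@ftp.example.com/deploy/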
When I need to edit any file, do I first download the update from bitbucket, make my change at localhost, update bitbucket with the edited files, and then upload to Bluehost?
There are several different approaches to get the data to the server:
export the local repo into an archive and transmit it onto the server (hg archive production.tar.bz2); this is the most secure variant, since it does not depend on any extra software on the server. Depending on how big the archive is, though, this approach can waste a lot of bandwidth.
work on the server and copy changed files back, but I don't recommend this, since it is very easy to miss something important
install mercurial on the server, work in a working copy there, and hg export locally there into the production area
install mercurial on the server and hg fetch from bitbucket (or any other server-accessible repository)
install mercurial on the server and hg push from your local working copy to the server (and hg update on the server afterwards)
The last two points can expose the repository to the public. This exposure can be either good or bad, depending on what your repository contains and whether you want to share its content. If you want to share the content, or if you can restrict access to www.blabla.com/beta/.hg, you can clone directly from your web server.
Also note that you should not check in any files with passwords or critical secrets, even when you access-limit the repository. It is much safer to check in template files (with a different name than in production), and to copy and edit these files on the server.

Is version control possible on a shared host w/o shell access?

I have a client whose host doesn't allow shell access. Is there any multi-user revision control system that can work in that situation (on Linux)? He's reluctant to switch hosts.
Yes, because you don't do development directly on the production server! The content of your production server is just a view of your source repository, which is kept elsewhere so that work can be done on a separate dev server. This way, a stupid mistake on the dev server won't hose your production system. If that means doing a manual checkout to transfer the files, so be it.
Not the answer you're looking for, but get a better hosting provider. Is there something special your hosting provider is doing for you that makes you want to put up with no shell access, or even not just preinstalling SVN for you? There's a ton of really good hosts for really cheap that will give you SVN already installed, and shell access.
I use Bazaar for exactly this reason. If the server supports ftp or ftps, it supports Bazaar.
http://bazaar.canonical.com/
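For example, pushing a branch straight into FTP space looks something like this (host and path are placeholders; Bazaar accepts ftp:// URLs directly):
$ bzr push ftp://user@ftp.example.com/path/to/branch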
I've been looking for the same thing: I have a no-shell-access hosting provider with no included source control, and I don't want to change.
Currently, I'm using git. But instead of using git push to update the remote repository, I use a script and FTP to update the server's copy.
git pull works normally from any client, if the git directory on the FTP host is accessible over HTTP.
git push replacement:
git update-server-info
perl ftpsync.pl -v .git ftp://ftp.example.com/gitrepo/project.git \
    ftpuser=user#example.com ftppasswd=*
That's using ftpsync, from the SourceForge ftpsync page. It's an imperfect replacement for git push: it mirrors the local repo instead of merging it with the remote, so make sure the local repo is up to date with git pull first.
git-ftp purports to do the same thing. See GitHub's git-ftp page. It probably works better than ftpsync, because it's designed for the purpose, but I haven't tried it.
Sure, SVN can have multiple users and multiple repositories, depending of course on whether your host is willing to install it. If that doesn't work, maybe you'd consider hosting your version control somewhere else?
Do you mean that you want to store your version control repository on the host and then access it from multiple clients? If yes, then all modern version control systems can work like that.
I just posted this answer on a Mercurial-specific question, but it applies here too. I use Mercurial, and I found a guide that let me install it with only FTP/control-panel access (no shell).
http://javadocs.wordpress.com/2010/04/27/set-up-mercurial-1-5-1-on-a-shared-host-simplified/

How do you update your web application on the server?

I am aware of Capistrano, but it is a bit too heavyweight for me. Personally, I set up two Mercurial repositories, one on the production server and another on my local dev machine. Regularly, when a new feature is ready, I push the changes from the repository on my local machine to the one on the server, then run an update on the server. This is a pretty simple and quick way to keep files in sync on several computers, but it does not help with updating databases.
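Concretely, the loop looks something like this (host and paths are placeholders; the double slash in the URL marks an absolute path on the server):
$ hg push ssh://user@example.com//var/www/myapp       # from the dev machine
$ ssh user@example.com 'cd /var/www/myapp && hg update'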
What is your solution to the problem?
I used to use git push to publish to my web server but lately I've just been using rsync. I try to make my site as agnostic about where it's running as possible (using relative paths, etc) and so far it's worked pretty well. The only challenge is keeping databases in sync, and for that I usually use the production database as the master and make regular backups and imports into my testing database.
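The rsync invocation itself is nothing fancy; roughly this (host, paths, and the exclude are placeholders, and --delete mirrors deletions, so drop it if that's not what you want):
$ rsync -avz --delete --exclude='.git' ./ user@example.com:/var/www/mysite/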
Or Fabric, if you prefer Python.
What's heavyweight about Capistrano? If you just want to sync files, then sure, rsync is great. But if you're then going to need to do DB updates, maybe cap isn't so bad?
I'm assuming you're speaking of Ruby on Rails.
Check out the HowTo wiki:
http://wiki.rubyonrails.com/rails/pages/Howtos#deployment
@Andrew
To use git push to deploy your site, you will first need to set up a remote in your .git/config file to push to. Then you need to configure a hook that will basically perform a git reset --hard so that the code you just pushed to the repository is copied over to the working directory.
I know this is a little vague, but I actually deleted the server-side .git folder once I switched to rsync, so I don't have the exact scripts that I used to make the magic happen. That might be a good candidate for a full question though, so you might get more responses that way.
edit: I know it's been a while, but I eventually found what I was using again:
Deploy a project using Git push
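For reference, the recipe in that link boils down to something like the following (paths and the remote name are hypothetical; this variant uses git checkout -f in a post-receive hook, a common alternative to git reset --hard):
$ # on the server:
$ git init --bare ~/site.git
$ mkdir -p /var/www/site                 # the deployed working tree
$ printf '#!/bin/sh\nGIT_WORK_TREE=/var/www/site git checkout -f\n' > ~/site.git/hooks/post-receive
$ chmod +x ~/site.git/hooks/post-receive
$ # on the dev machine:
$ git remote add production user@example.com:site.git
$ git push production master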