Drupal 6: using bitbucket.org for my Drupal projects as a real version control system dummy

Here is a real version control system dummy, a proper new starter!
The way I have worked so far:
I have a Drupal 6 web project at www.blabla.com and do development under www.blabla.com/beta. I work directly on blabla.com/beta on the server: nothing on my local machine, nothing anywhere else. I only take a backup to local from time to time. I know, a horrible and unsafe way to work :/
The new way I want to work from now on:
I decided to use Mercurial. One more developer will work on the same project with me. I have the blabla.com Drupal 6 project on Bluehost and do development at blabla.com/beta. I found http://bitbucket.org/ for Mercurial hosting and have created an account.
So now how do I set things up? I'm totally confused after reading tens of articles :/
Is bitbucket only for hosting revised files? So if I or my developer friend edits index.php, will bitbucket host only index.php?
From now on, do I have to work on localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
When I need to edit any file, do I first download the update from bitbucket, make my change on localhost, update bitbucket with the edited files, and upload to Bluehost?
Sorry for the silly questions, I really need some guidance...
I appreciate any help so much! Thanks a lot!

Is bitbucket only for hosting revised files?
The main service of bitbucket is to host files under revision control, but there is also a way to store arbitrary files there.
So if I or my developer friend edits index.php, will bitbucket host only index.php?
In a typical project, every file which belongs to the product is checked into revision control, not only index.php. See this example.
From now on, do I have to work on localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
Mercurial does not dictate a fixed workflow, but I recommend that you have Mercurial installed where you edit the files. Then, for example, you can see directly which changes you have made since the last commit, without needing to copy the files from your server to your local repository.
I absolutely recommend a workflow where the repository contains a script which generates the archive file that is transmitted to the server, and which records the revision of the repository at the time the archive was created. This revision information should also be stored somewhere on the server (not necessarily in a publicly accessible area), since it can come in very handy when something goes wrong.
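A minimal sketch of such a script (the file names, user, host, and deploy path are assumptions):
#!/bin/sh
# Hypothetical deploy script: archive the current revision and record it.
REV=$(hg identify -i)                      # revision hash of the working copy
hg archive "production-$REV.tar.bz2"       # hg also stores the revision in .hg_archival.txt inside the archive
echo "$REV" > last-deployed-revision.txt   # revision info to keep on the server
scp "production-$REV.tar.bz2" last-deployed-revision.txt user@blabla.com:/home/user/deploy/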
When I need to edit any file, do I first download the update from bitbucket, make my change on localhost, update bitbucket with the edited files, and upload to Bluehost?
There are several different approaches to get the data to the server:
Export the local repo into an archive and transmit this onto the server (hg archive production.tar.bz2). This is the most secure variant, since it does not depend on any extra software on the server. On the other hand, depending on how big the archive is, this approach can waste a lot of bandwidth.
Work on the server and copy changed files back; I don't recommend this, since it is very easy to miss something important.
Install Mercurial on the server, work in a working copy there, and hg export locally from there into the production area.
Install Mercurial on the server and hg fetch from bitbucket (or any other server-accessible repository).
Install Mercurial on the server and hg push from your local working copy to the server (and hg update on the server afterwards); see the sketch below.
The last two points can expose the repository to the public. This exposure can be both good and bad, depending on what your repository contains and whether you want to share its content. When you want to share the content, or when you can limit access to www.blabla.com/beta/.hg, you can clone directly from your web server.
Also note that you should not check in any files with passwords or critical secrets, even when you access-limit the repository. It is much safer to check in template files (with a different name than in production) and copy-and-edit these files on the server.
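To illustrate the last variant, the round trip could look like this (user, host, and paths are assumptions):
# On your local machine, after committing:
hg push ssh://user@blabla.com//home/user/repos/blabla
# On the server:
cd /home/user/repos/blabla
hg update   # bring the working copy up to the newly pushed revision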

Related

Use Git list output to copy files for archiving

I'm currently helping to maintain a project for a client remotely. I'm the only developer, ergo some of my unorthodox approaches/thinking.
the problem
The client is using Visual Studio 2010 + Team Foundation Server for their source control. I am working on a Mac over VPN and have tried several approaches to make committing to their TFS workable. I've tried the TFS plugin for Eclipse with no luck (the VPN really hoses the connection to TFS). Currently I am having to do a full "checkout for edit" through a virtual machine to the TFS, then transfer the project over the VPN to overwrite those files. Not a sustainable solution, to say the least.
the solution?
I'm wondering if there is a way to:
get a list of changed files from Git (I think this is the solution: How to list all the files in a commit?)
then use that list as a means to go in and fetch those files, maintaining their folder structure
from there I can do my dump over VPN into the VM that has the project mapped in TFS.
Or if there is something I've overlooked or hadn't thought of, please do recommend it; I'm all ears.
First, I'm assuming you are running the VM on or near the TFS server, not on your Mac. If not, you can just share a directory using VMware/VirtualBox and edit away on your Mac...
It sounds like you could achieve what you want with plain old Git. If you:
Create a bare repository on the VM (git init --bare)
Add a post-receive hook to copy the files from the master branch (for example) into the TFS directory, overwriting merrily (http://git-scm.com/book/en/Customizing-Git-Git-Hooks)
Initialise your local copy of the source as a Git repository (git init)
Add the remote repository. Assuming it's a Windows box, you can use an SMB shared folder over the VPN so your remote is "local" as far as Git is concerned (git remote add tfsserver file:///Volumes/tfsmount/code).
Your first push will be expensive (but you could prepopulate the remote repo to get around that), but subsequent pushes would be just the changesets. The post-receive hook would then take care of updating the files, and you're laughing.
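A minimal sketch of such a post-receive hook (the TFS-mapped path is an assumption):
#!/bin/sh
# Hypothetical hooks/post-receive in the bare repo on the VM:
# force-check-out the latest master into the TFS-mapped directory.
GIT_WORK_TREE=/path/to/tfs-workspace git checkout -f master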
Of course, you then get to impress them with how amazing Git is, get them to migrate, and your problem goes away forever :).
Update: Here's a link which describes these steps in more detail, under the guise of updating a remote website: http://toroid.org/ams/git-website-howto.

GitHub and coding processes

I've been programming for a little while now and have built a little application which is now hosted on a dedicated server.
Now I have been rolling out different versions of my app with no real understanding of how to manage the process properly.
Is this the proper way to manage a build of an application when using a product like GitHub?
Upload my entire application onto GitHub.
Each time I work on it, download it and install it on my dev server.
When I'm done working on it and it appears to be OK, do I then upload only the changed files from the current project I am working on, or am I meant to update the entire lot, or am I meant to create a new version of the project?
Once all my changes are updated, is there any way of pushing these to a production machine from GitHub, or generating a listing of the newly changed files so I can update the production machine easily with a checklist of some kind?
My application has about 900 files stored in various folder structures and is a server-based app (ColdFusion, to be precise), and as I work alone the majority of the time, I'm struggling to understand how to manage the development of an app...
I also have no idea about using the command line, and my desktop machine is a Mac with a VM running all my required server apps (Windows Server 2012, MSSQL 2012, etc.).
I really want to keep my dev process in order, but I've struggled to understand how to manage a server-side app's development when my desktop is a Mac and my dev machine is a Windows VM; I feel like I'm stuck in the middle.
You make it sound more complicated than it is.
Upload my entire application onto GitHub.
Well, this is actually two steps: first, create a local git repo (git init), then push your repo up to GitHub.
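In commands, those two steps could look like this (the repository URL is a placeholder):
cd /path/to/your/app
git init                       # create the local repository
git add .                      # stage all 900-odd files
git commit -m "Initial import"
git remote add origin https://github.com/you/yourapp.git
git push -u origin master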
Each time I work on it, download it and install it on my dev server.
Well, you only need to "download" it once to a new dev box. After that, just git pull (or git fetch, depending on your workflow), which ensures any changes on the remote are pulled down. Only the deltas are sent.
Git is a distributed version control system. That means every git repo has the full history of the entire project. So only deltas need to be sent. (This really helps when multiple people are hacking on a project).
When I'm done working on it and it appears to be OK, do I then upload only the changed files from the current project I am working on, or am I meant to update the entire lot, or am I meant to create a new version of the project?
Hmm, you are using fuzzy terminology here. When you are done editing, you first commit locally (git add ...; git commit), then you push the changes to GitHub (git push). Only the deltas are sent. Every commit is "a new version", if you squint.
Later on, if you want to think in terms of "software releases" (i.e. releasing "version 1.1" after many commits), you can use git tags. But don't worry about that right away.
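For example (the version number is hypothetical):
git tag -a v1.1 -m "Release 1.1"   # mark the current commit as a release
git push origin v1.1               # publish the tag to GitHub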
Once all my changes are updated, is there any way of pushing these to a production machine from GitHub, or generating a listing of the newly changed files so I can update the production machine easily with a checklist of some kind?
Never mess around with files manually on your server. The server should ONLY be allowed to run a valid, checked-out version of your software. If your production server is running random bits of code, nobody will be able to reproduce problems, because those bits aren't in the version control system.
The super-simple way to deploy is to do a git clone on your server (one time), then git pull to update the code. So you push a change to GitHub, then pull the change down on your server.
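In commands (URL and paths are placeholders):
# Once, on the server:
git clone https://github.com/you/yourapp.git /var/www/yourapp
# Every deploy afterwards, on the server:
cd /var/www/yourapp
git pull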
More advanced: you will want something like Capistrano, which will manage the checkouts for you and separate "checking out" from "deploying" to allow for easier rollback, etc. There may be Windows-specific ways of doing that too. (Sorry, I'm a Linux guy.)

Using GitHub to listen to changes made to files on remote server

I know there are a lot of posts about running GitHub on a remote server, but I can't find any that I understand or can follow. Command line stuff and all this talk about SSH completely befuddle me, so I am hoping for a step-by-step answer which is literally written for a dummy and hopefully provides an easy solution (I have my fingers and toes crossed).
My scenario:
I have built a site using Statamic as a CMS, which uses text files to manage the site's content. I also have a GitHub repository which contains most of the site's files here:
https://github.com/katrinkerber/katrinkerber
I am using the GitHub app on OS X to push any changes I make to, for example, my local CSS or HTML files to the remote GitHub repository. That is as far as my basic understanding of Git takes me, really.
Whenever existing content is edited or a new page/entry is published through the CMS's Control Panel, a file is updated or created inside the _content folder on the server where the site is hosted.
What I want is for GitHub to listen to and keep track of any changes made on the server, particularly in that _content folder.
One of my attempts was to just upload the .git folder in my local files to my server and change the Primary remote repository path, but that didn't work.
What do I need to do?
Really the only way to run Git (the version control system, not GitHub the web application/network) is via SSH.
Here's a good article: http://git-scm.com/book/en/Getting-Started-Installing-Git#Installing-on-Linux
And if you get that up and running, here's a good way to set up deployments: http://blog.ekynoxe.com/2011/10/22/automated-deployment-on-remote-server-with-git/
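To give a rough idea of what those articles boil down to, here is a sketch only, with the commands run over SSH on the server (paths are assumptions):
# On the server: clone the existing repository (or turn the site folder into a clone).
git clone https://github.com/katrinkerber/katrinkerber.git
# Later, whenever the CMS has written new files into _content:
cd katrinkerber
git add _content
git commit -m "Content updated via the CMS"
git push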

Mercurial. Version control and deployment. Different config files. How to?

I have a setup as follows.
A private repository at bitbucket where I keep the 'master' repository.
A repository on my server which acts as the 'live' website.
A repository on my laptop which acts as my working copy.
My process is as follows: I make a change to a file in my local repository and commit it locally. I push these changes to bitbucket. I then pull these changes from bitbucket to the webserver.
The problem that I have, however, is that my local copy uses different configuration settings for databases, paths, etc. What I want is for my config.php file at bitbucket to contain the server settings, and the config.php on my localhost to contain local settings.
I believe this can be achieved with .hgignore, but I have had no success researching it.
The problem I encounter is that I make my server settings file, push it to bitbucket, 'forget' the file in my local repository, create a .hgignore, and then recreate the file. However, when I 'forget' the file, TortoiseHg notices and asks me to commit the change to bitbucket....
Any ideas would be greatly appreciated.
Thanks
Additional Points.
Following the advice below I have developed a setup as follows:
I have my local repository on my laptop, where I do my edits.
I have bitbucket, which is essentially the 'main' repository; if any other developers join the team, they clone this.
I have my live repository on my web host.
On my live repository I have a .hgignore file which ignores the respective config files.
As such, when I do hg pull on my host, it pulls the repository as is, with the localhost configuration files; but when I type hg update (to the live working copy), these files are ignored/not updated.
Could someone clarify whether I have understood this correctly, and whether this is a suitable way of achieving what I want?
Thanks
.hgignore only ignores files if they are not versioned already, so I don’t think your idea in the question will work.
The common approach regarding local configuration is generally a variation on the same theme, like one of the following:
Do not check in the config.php at all. You can check in a config.example.php with the most common settings, and document in the README that users have to copy it to config.php and then edit it.
Put any shared settings in config.php, and add an include statement pointing to an unversioned file with settings specific to the machine, e.g. config.local.php. You can also provide a config.local.example.php file for this.
Like 2, but the config.php contains all default settings and the local file has the ability to override them.
Check in config.dev.php and config.server.php files containing the settings for both environments, and then have an unversioned config.php which includes one of the above files. The advantage is that the configurations themselves are versioned, and you can update them.
Which of these variations to pick, or whether you make another variation, depends on your environment.
The basic idea for working with version control and different configuration files is always the same, but I don't know enough PHP to give a detailed answer how you can do this in PHP.
I answered a similar question for .NET/Visual Studio a few months ago, so I'll just give you the link to that answer and try to describe the basic idea again, but this time language-agnostic:
For your use case, the basic idea is to have two config files in the repository, one with your local data and one with your server data, for example like this:
config.local.php
config.server.php
The "real" config.php is not in the repository, and it should be in .hgignore, so it never will be in the repository either.
After pulling, you need to run something that copies one of these files (the "correct" one depending on the current environment, local or server) to config.php.
And exactly this last part is the part that I cannot answer in detail, because I don't know how to do that in PHP and/or on a web server; I'm a .NET/Windows guy.
As far as I know, deploying a PHP site is just copying the files onto the web server, so there is no "build/compile" step where the copying/renaming of the config file could be done (which is where I would do it in .NET). Correct me if I'm wrong...
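That said, as a rough sketch: the copy step could be a small shell script run after every pull (the hostname check is an assumption; any way of telling the two environments apart works):
#!/bin/sh
# Hypothetical select-config script, run after hg pull/update.
# Assumes config.local.php and config.server.php are versioned
# and config.php is listed in .hgignore.
if [ "$(hostname)" = "my-laptop" ]; then
    cp config.local.php config.php
else
    cp config.server.php config.php
fi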
EDIT:
Thomas, I'm not sure if I understood your edits correctly. Your "local" repository on your laptop and your "live" repository on your webserver are basically clones of your "main" repository on Bitbucket, correct?
If yes, are you saying that you have different .hgignore files in the different clones? That's the part that confuses me.
No matter how you actually do it in the end (there are several possibilities to deal with configuration files, see below), the .hgignore file should be the same in all clones of your repository.
So all your repositories (no matter which clone on which machine) should all contain the same configuration file(s).
Then you only need to make sure that different configurations are used in different environments. There's already an excellent list of different ways to achieve this in Laurens Holst's answer, so I'll just point you there.
As Laurens Holst already said, we can't tell which of these ways is the best for you - it depends on your environment.
You might want to check here. If both the config file and .hgignore are committed, the .hgignore will have no effect. You could also add a domain-check conditional:
<?php
// Pick the configuration based on the host name of the current request.
$domain = $_SERVER['HTTP_HOST'];
if ($domain == "localhost") {
    // local copy config
} elseif ($domain == "yourdomain.com") {
    // webserver config
}

How do you update your web application on the server?

I am aware of Capistrano, but it is a bit too heavyweight for me. Personally, I set up two Mercurial repositories: one on the production server and another on my local dev machine. Regularly, when a new feature is ready, I push changes from the repository on my local machine to the repository on the server, then run an update on the server. This is a pretty simple and quick way to keep files in sync on several computers, but it does not help with updating databases.
What is your solution to the problem?
I used to use git push to publish to my web server, but lately I've just been using rsync. I try to make my site as agnostic as possible about where it's running (using relative paths, etc.), and so far it's worked pretty well. The only challenge is keeping databases in sync; for that I usually use the production database as the master and make regular backups and imports into my testing database.
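For reference, such an rsync invocation might look like this (host and paths are placeholders):
# Mirror the working copy into the web root, excluding the Git metadata.
rsync -avz --delete --exclude '.git' ./ user@yourserver.com:/var/www/site/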
Or Fabric, if you prefer Python.
What's heavyweight about Capistrano? If you want to sync files, then sure, rsync is great. But if you're then going to need to do DB updates, maybe Cap isn't so bad?
I'm assuming you're speaking of Ruby on Rails.
Check out the HowTo wiki:
http://wiki.rubyonrails.com/rails/pages/Howtos#deployment
@Andrew
To use git push to deploy your site, you will first need to set up a remote server in your .git/config file to push to. Then you need to configure a hook that will basically perform a git reset --hard to copy the code you just pushed to the repository into the working directory.
I know this is a little vague, but I actually deleted the server-side .git folder once I switched to rsync, so I don't have the exact scripts I used to make the magic happen. That might be a good candidate for a full question, though, so you might get more responses that way.
Edit: I know it's been a while, but I eventually found what I was using again:
Deploy a project using Git push
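For the record, a minimal reconstruction of that kind of setup (paths hypothetical): on the server you would allow pushes to the checked-out branch with git config receive.denyCurrentBranch ignore, and then a .git/hooks/post-receive script (made executable) resets the working directory:
#!/bin/sh
# The hook runs inside .git, so step up into the working directory
# and hard-reset it to the code that was just pushed.
cd ..
GIT_DIR='.git'
export GIT_DIR
git reset --hard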