Mercurial deployment

I'm quite new to Mercurial, so this might sound silly...
I am developing a PHP application locally and pushing changes to a remote Linux server. I have an hgweb.wsgi script publishing my repository, which is accessible via a URL (hg.example.com/repository).
Now I am wondering: what is the best way to automate deployment of the app so I can see it in action on the same server as the repository? Obviously I can't just go to hg.example.com/repository, since that shows the web interface of the repository, not the app.

You're mixing up two things which are not necessarily related:
the path part of the URL of your repository, and
the path of the actual repository on your server.
To get a better picture, consider that we use SSH, not HTTP, for repository access. This means we specify the full server file-system path to the repo in the URL; e.g., to sync with my server in a similar setup, I push to ssh://example.com//var/www/wsgi/example.com (I have a WSGI app, not a PHP one, but that's not important now). The app itself is available at http://example.com/, i.e., the site root is /var/www/wsgi/example.com.
Now, nothing prevents me from also setting up HTTP access to this repo using hgweb on a hg.example.com subdomain, so the repo push path becomes http://hg.example.com/example.com.
Thus:
I push to http://hg.example.com/example.com (the repo is published at this URL)
The repo is located under /var/www/wsgi/example.com (server file system path)
This directory is set up to be treated as the site root by the web server
Site root = http://example.com/
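To make the mapping concrete, here is a minimal sketch assuming the paths from the example above; this is the hgweb side:

# hgweb.config used by hgweb.wsgi at hg.example.com (paths assumed)
[paths]
example.com = /var/www/wsgi/example.com

With that in place, both of these push to the same repository:

hg push ssh://example.com//var/www/wsgi/example.com
hg push http://hg.example.com/example.com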
P.S. Don't forget about the changegroup hook Ton mentioned.

There are two things you'll need to do:
Add a webserver that can serve PHP pages (like Apache) and let it serve the repository root (as always, make sure it's safe)
Add a changegroup hook on the Mercurial server. In the repository's .hg/hgrc it can be as simple as:
[hooks]
changegroup.update = hg update
That will update the working directory of the repository to the latest version.
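On the "make sure it's safe" point above: the repository's .hg directory must not be served to visitors. A minimal sketch for Apache (the path layout is an assumption; these are standard directives):

# in the Apache configuration for the site: never serve the .hg directory
<DirectoryMatch "/\.hg">
    Require all denied
</DirectoryMatch>

(On Apache 2.2 the equivalent is Order deny,allow plus Deny from all.)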

Related

Synchronize GitHub with GoDaddy account

I develop the front-end and back-end of many websites hosted on GoDaddy. I was looking for a way to synchronize the GoDaddy file manager with my local repository so that I don't have to upload the edited files each time. I push my code to GitHub directly, but is there a way to push the code directly to the GoDaddy account without using its file manager?
Also, sometimes I edit the code directly on the server when I run into problems, and it then becomes difficult to get those changes back onto my local system.
It would be of great help to push directly without using the file manager each time.
It would be best to:
install Git on GoDaddy (as in this blog post)
set up a bare repo on the upstream side (i.e., the GoDaddy side, the one you would push your code to)
add a post-receive hook on that upstream repo so that a non-bare repo updates itself: see the links in the "Is --bare option equal to core.bare config in Git?" answer, and the sketch below.
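As a sketch, that post-receive hook can be very short (the work-tree path is an assumption; nothing here is GoDaddy-specific):

#!/bin/sh
# hooks/post-receive in the bare upstream repo; make it executable (chmod +x)
# check the freshly pushed code out into the live site directory
GIT_WORK_TREE=/home/youruser/public_html git checkout -f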

Using GitHub to listen to changes made to files on remote server

I know there are a lot of posts about running GitHub on a remote server, but I can't find any that I understand or can follow. Command line stuff and all this talk about SSH completely befuddle me, so I am hoping for a step-by-step answer which is literally written for a dummy and hopefully provides an easy solution (I am having my fingers and toes crossed).
My scenario:
I have built a site using Statamic as a CMS, which uses text files to manage the site's content. I also have a GitHub repository which contains most of the site's files here:
https://github.com/katrinkerber/katrinkerber
I am using the GitHub app on OS X to push any changes I make to, for example, my local CSS or HTML files to the remote GitHub repository. That is as far as my basic understanding of Git takes me, really.
Whenever existing content is edited or a new page/entry is published through the CMS's Control Panel, a file is updated or created inside the _content folder on the server where the site is hosted.
What I want is for GitHub to listen to and keep track of any changes made on the server, particularly in that _content folder.
One of my attempts was to just upload the .git folder in my local files to my server and change the Primary remote repository path, but that didn't work.
What do I need to do?
Really the only way to run Git (the version control system, not GitHub the web application/network) is via SSH.
Here's a good article: http://git-scm.com/book/en/Getting-Started-Installing-Git#Installing-on-Linux
And if you get that up and running, here's a good way to set up deployments: http://blog.ekynoxe.com/2011/10/22/automated-deployment-on-remote-server-with-git/
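Once Git is installed on the server, a rough sketch of tracking that _content folder over SSH might look like this (the site path is an assumption; the repository URL is taken from the question):

# on the server, inside the site root (path assumed)
cd /home/youruser/public_html
git init
git remote add origin git@github.com:katrinkerber/katrinkerber.git
git add _content
git commit -m "track CMS-generated content"
git push -u origin master

A cron entry could then pick up CMS edits periodically, e.g.:

*/15 * * * * cd /home/youruser/public_html && git add -A _content && git commit -m "CMS update" && git push origin master

(Pushing from cron assumes an SSH key the server can use non-interactively; when there is nothing to commit, the chain simply stops before the push.)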

DVCS, Databases, and User Generated Content?

I want to create a development environment with my central repository hosted somewhere like bitbucket/github. Then on my dev server and my production server I will have clones.
I will work on new features and make local commits on the dev server. Once this is at a stage that it can be pushed to production, I will push from the development clone to the central repository, then pull from the central repo to the production server.
All this makes sense, but there are 2 parts I cannot figure out.
How do I keep the database and user-generated content (file uploads, etc.) in sync?
Also, will user-generated content get wiped out when I do my next pull+update on the production server?
How do others address this?
Additional info:
This is going to be a MySQL/PHP website. I am also planning on using an MVC framework (probably CakePHP), and I haven't firmly decided which DVCS to use, but so far Mercurial is what I am thinking. Not sure if this info matters, but I'm adding it just in case.
That is why a DVCS is not always the right tool for release management: once your code is in the remote repo on the server, you need a separate "rsync"-style mechanism (sketched below) to:
extract the right tag (the one to put into prod)
transform/copy the right files
leave the other files and the database intact.
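A minimal sketch of such a mechanism with Mercurial (the tag name, paths, and uploads directory are all assumptions):

# export the tagged release, then sync it to production,
# leaving user uploads (and the database) untouched
hg archive -r v1.2 -t files /tmp/release-v1.2
rsync -av --delete --exclude 'uploads/' /tmp/release-v1.2/ user@prod:/var/www/site/

The --exclude (and keeping the database entirely outside the source tree) is what prevents the next deploy from wiping user-generated content.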

Drupal 6: using bitbucket.org for my Drupal projects as a real version control system dummy

Here is a real version control system dummy! A proper new starter!
The way I have worked so far:
I have a Drupal 6 web project at www.blabla.com and do development under www.blabla.com/beta. I work directly on blabla.com/beta on the server: nothing on my local machine, nothing anywhere else. I only take a backup to local from time to time. I know, a horrible and unsafe way to work :/
The new way I want to work from now on:
I decided to use Mercurial. I have one more developer working on the same project with me. I have the blabla.com Drupal 6 project on Bluehost and do development at blabla.com/beta. I found http://bitbucket.org/ for Mercurial hosting and have created an account.
So now how do I set things up? I'm totally confused after reading tens of articles :/
Is bitbucket only for hosting revised files? So if I or my developer friend edits index.php, will bitbucket host only index.php?
From now on do I have to work at localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
When I need to edit any file, do I first download the update from bitbucket, make my change at localhost, update bitbucket with the edited files, and then upload to Bluehost?
Sorry for silly questions, I really need a guidance...
Appreciate helps so much! thanks a lot!
Is bitbucket only for hosting revised files?
The main service of bitbucket is to host files under revision control, but there is also a way to store arbitrary files there.
So if I or my developer friend edits index.php, will bitbucket host only index.php?
In a typical project, every file which belongs to the product is checked into revision control, not only index.php. See this example.
From now on do I have to work at localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
Mercurial does not dictate a fixed workflow, but I recommend having Mercurial installed wherever you edit the files. For example, you can then see directly which changes you have made since the last commit, without needing to copy the files from your server to your local repository.
I absolutely recommend a workflow where somewhere in the repository there is a script which generates the archive file that is transmitted to the server, embedding the revision of the repository at the time the archive was created. This revision information should also be stored somewhere on the server (not necessarily in a publicly accessible area), since it can come in very handy when something goes wrong.
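A minimal sketch of such a script (file names and the upload target are assumptions):

#!/bin/sh
# package the current revision and record which revision it was
REV=$(hg identify --id)
hg archive production.tar.bz2
echo "$REV" > DEPLOYED_REVISION
# transmit both; keep DEPLOYED_REVISION outside the public docroot if possible
scp production.tar.bz2 DEPLOYED_REVISION user@blabla.com:deploy/

(Note that hg archive also embeds the revision itself in a .hg_archival.txt file inside the archive.)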
When I need to edit any file, do I first download the update from bitbucket, make my change at localhost, update bitbucket with the edited files, and then upload to Bluehost?
There are several different approaches to get the data to the server:
export the local repo into an archive and transmit it to the server (hg archive production.tar.bz2); this is the most secure variant, since it does not depend on any extra software on the server. However, depending on how big the archive is, this approach can waste a lot of bandwidth.
work on the server and copy changed files back, but I don't recommend this, since it is very easy to miss something important
install Mercurial on the server, work in a working copy there, and hg export locally there into the production area
install Mercurial on the server and hg fetch from bitbucket (or any other server-accessible repository)
install Mercurial on the server and hg push from your local working copy to the server (and hg update on the server afterwards); a sketch of these last two variants follows at the end of this answer
The last two points can expose the repository to the public. This exposure can be both good and bad, depending on what your repository contains and whether you want to share the content. If you want to share the content, or if you can limit access to www.blabla.com/beta/.hg, you can even clone directly from your web server.
Also note that you should not check in any files with passwords or critical secrets, even when you access-limit the repository. It is much safer to check in template files (with a different name than in production) and copy-and-edit those files on the server.
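As an illustration of those last two variants, the server side might look like this (paths and repository URL are assumptions; this uses plain pull/update rather than the fetch extension):

# on the Bluehost server, assuming Mercurial is installed there
cd ~/public_html/beta
hg pull https://bitbucket.org/youruser/yourproject
hg update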

Heroku and GitHub integration (how to structure the project)

I'm creating a web service and I want to store the source on GitHub and run the app on Heroku. I haven't seen my exact scenario addressed anywhere on the net so far, so I'll ask it here:
I want to have the following directory structure:
/project
.git
README <-- project readme file
TODO.otl <-- project outline
... <-- other project-related stuff
/my_rails_app
app
config
...
README <-- rails' readme file
In the above, project corresponds to http://github.com/myuser/project, and my_rails_app is the code that should be pushed to Heroku. Do I need a separate branch for the Rails app, or is there a simpler way that I'm missing?
I guess my project-related non-Rails files could live in my_rails_app, but the Rails README already lives there, and it seems inconsistent to overwrite it. However, if I leave it, my GitHub page for the Rails app will show the Rails README, which makes no sense.
Also ... I tried just setting it up as described above and running
git push heroku
from the main project folder. Of course, Heroku doesn't know I want to deploy the subfolder:
-----> Heroku receiving push
! Heroku push rejected, no Rails or Rack app detected.
Here's a simple solution that may or may not work for you.
Create two projects on GitHub. One project should be just the Rails app (i.e. everything inside the Rails app directory). The other project should be everything outside the Rails app directory.
Add the Rails app project as a git submodule within the "container" project.
Now you can add Heroku as a remote on the Rails app repository separately and push it to heroku. Heroku will accept the push because it is just a Rails app with the expected directories and files.
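A rough sketch of those steps (the repository names and the Heroku remote URL are assumptions based on the question):

# in the container project: reference the app repo as a submodule
git submodule add git@github.com:myuser/my_rails_app.git my_rails_app
git commit -m "add Rails app as a submodule"

# in a standalone clone of the app repo: push to Heroku separately
cd my_rails_app
git remote add heroku git@heroku.com:myapp.git
git push heroku master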
A solution for the Heroku situation (not the README file):
If you're using the new Heroku Cedar stack (I believe it wasn't available when you first asked your question), then your processes (like the Rails server process) start up using Foreman. Thus, you can place a Procfile in the root GitHub directory that looks like this:
web: my_rails_app/script/runserver.sh
And then my_rails_app/script/runserver.sh could be a simple
#!/bin/sh
# start the Rails app from inside its subdirectory
cd my_rails_app
bundle exec rails server -p $PORT
Locally, you should also create a file called .env (note the . at the beginning), which contains
PORT=3000
This file is read by Foreman and used to set environment variables, so that the port is set when you execute foreman start on your machine (from the root GitHub directory, where the Procfile lives). The Heroku server takes care of the .env file on your dyno. The big advantage is that you can set up multiple processes on the dyno that way!
Just overwrite Rails' default README file. There's no reason to keep it around. Put your other project-management-related stuff in the doc directory. While you certainly have valid reasons for wanting to set it up the way you did, you're just creating a headache for yourself by going against convention, and it's probably not worth the benefit.
I would add everything underneath /my_rails_app to the Heroku git repository. Then add GitHub as a remote and add everything underneath /project to the GitHub repository. Then you can push the Rails application to Heroku (from /my_rails_app) and push the full project to GitHub (from /project).
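Sketched out, that looks something like this (remote names and URLs are assumptions):

# the Rails app gets its own repository, pushed to Heroku
cd /project/my_rails_app
git init
git add .
git commit -m "Rails app"
git remote add heroku git@heroku.com:myapp.git
git push heroku master

# the full project gets its own repository, pushed to GitHub
cd /project
git init
git add .
git commit -m "full project"
git remote add origin git@github.com:myuser/project.git
git push origin master

Note that the outer repository will record my_rails_app as an embedded repository (a gitlink) rather than tracking its files; the submodule approach above handles that relationship more cleanly.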