DVCS, Databases, and User Generated Content? - version-control

I want to create a development environment with my central repository hosted somewhere like bitbucket/github. Then on my dev server and my production server I will have clones.
I will work on new features and make local commits on the dev server. Once this is at a stage that it can be pushed to production, I will push from the development clone to the central repository, then pull from the central repo to the production server.
All this makes sense, but there are 2 parts I cannot figure out.
How do I keep the database and user-generated content (file uploads, etc.) in sync?
Also, will user-generated content get wiped out when I do my next pull+update on the production server?
How do others address this?
Additional info:
This is going to be a MySQL/PHP website. I am also planning on using an MVC framework (probably CakePHP), and I haven't firmly decided which DVCS to use, but so far Mercurial is what I am thinking. Not sure if this info matters, but adding it just in case.

This is why a DVCS is not always the right tool for release management: once your code is in the remote repo on the server, you still need a separate "rsync"-like mechanism to:
extract the right tag (the one to put into production)
transform/copy the right files
leave the other files and the database intact.
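For example, a minimal deploy step along those lines might look like the following (the tag name v1.2, the paths, and the uploads/ directory are assumptions; adjust them to your own layout). It exports a tagged revision from the Mercurial repository and rsyncs it into the production document root while leaving the uploads directory and the database alone:
# export the tagged revision into a clean directory (no .hg metadata)
hg archive -r v1.2 /tmp/release-v1.2
# copy it to production, excluding user uploads; --delete removes files
# dropped from the release, but excluded paths are never touched
rsync -av --delete --exclude='uploads/' /tmp/release-v1.2/ /var/www/example.com/
# the MySQL database is not touched at all; schema changes are applied
# separately, e.g. with a hand-written migration script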

Related

GitHub and coding processes

I've been programming for a little while now and have built a little application which is now hosted on a dedicated server.
Now I have been rolling out different versions of my app with no real understanding of how to manage the process properly.
Is this the proper way to manage a build of an application when using a product like GitHub?
Upload my entire application onto GitHub.
Each time I work on it, download it and install it on my dev server.
When I'm done working on it and it appears to be OK, do I then upload the changed files from the current project I am working on, am I meant to update the entire lot, or am I meant to create a new version of the project?
Once all my changes are updated, is there any way of pushing these to a production machine from GitHub, or of generating a listing of the newly changed files so I can update the production machine easily with a checklist of some kind?
My application has about 900 files associated with it, is stored in various folder structures, and is a server-based app (ColdFusion to be precise). As I work alone the majority of the time, I'm struggling to understand how to manage the development of an app...
I also have no idea about using the command line, and my desktop machine is a Mac, with a VM running all my required server apps (Windows Server 2012, MSSQL 2012, etc.).
I really want to make sure I can keep my dev process in order, but I've struggled to understand how to manage a server-side app's development when I'm using a Mac while my dev environment is a Windows VM; I feel like I'm stuck in the middle.
You make it sound more complicated than it is.
Upload my entire application onto GitHub.
Well, this is actually two steps: first, create a local Git repo (git init), then push your repo up to GitHub.
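In command-line terms that first step looks roughly like this (the repository URL is just a placeholder for whatever GitHub gives you):
cd /path/to/my-app
git init                          # turn the folder into a local Git repo
git add .                         # stage all existing files
git commit -m "Initial import"    # record the first version locally
git remote add origin https://github.com/youruser/my-app.git
git push -u origin master         # upload the history to GitHub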
Each time I work on it, download it and install it on my dev server.
Well, you only need to "download" it once to a new dev box. After that, just git pull (or git fetch depending on workflow), which ensures any changes on the server are pulled down. Just the deltas are sent.
Git is a distributed version control system. That means every git repo has the full history of the entire project. So only deltas need to be sent. (This really helps when multiple people are hacking on a project).
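So updating an existing dev box is just the following (assuming the remote is called origin and the branch is master):
git pull origin master          # fetch the new commits and merge them in
# or, if you prefer to look before you merge:
git fetch origin
git log master..origin/master   # see what came down
git merge origin/master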
When I'm done working on it and it appears to be OK, do I then upload the changed files from the current project I am working on, am I meant to update the entire lot, or am I meant to create a new version of the project?
Hmm, you are using fuzzy terminology here. When you are done editing, you first commit locally (git add ...; git commit), then you push the changes to GitHub (git push). Only the deltas are sent. Every commit is "a new version" if you squint.
Later on, if you want to think in terms of "software releases" (i.e. releasing "version 1.1" after many commits), you can use git tags. But don't worry about that right away.
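A typical edit-commit-push cycle, with an optional release tag at the end, might look like this (the file names, commit message, and tag are only examples):
git add file1.cfm file2.cfm      # stage just the files you changed
git commit -m "Fix login redirect"
git push origin master           # send the new commits to GitHub
# much later, when you decide a set of commits is "version 1.1":
git tag -a v1.1 -m "Release 1.1"
git push origin v1.1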
Once all my changes are updated, is there any way of pushing these to a production machine from GitHub, or of generating a listing of the newly changed files so I can update the production machine easily with a checklist of some kind?
Never mess around with files manually on your server. The server should ONLY be allowed to run a valid, checked-out version of your software. If your production server is running random bits of code, nobody will be able to reproduce problems because they aren't in the version control system.
The super-simple way to deploy is to do a git clone on your server (one time), then git pull to update the code. So you push a change to GitHub, then pull the change on your server.
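A sketch of that simple deployment, assuming the server has Git installed and /var/www/myapp (a made-up path) is where the code should live:
# one time, on the production server:
git clone https://github.com/youruser/my-app.git /var/www/myapp
# every deployment afterwards, on the production server:
cd /var/www/myapp
git pull origin master    # fast-forward to whatever you pushed to GitHub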
For something more advanced, you will want a tool like Capistrano that manages the checkouts for you and separates "checking out" from "deploying" to allow for easier rollback, etc. There may be Windows-specific ways of doing that too. (Sorry, I'm a Linux guy.)

How to set up GIT as version control tool for a small team

We are using Eclipse with an SVN client plug-in. This client needs a server running; what about Git? We need to work in a LAN environment without internet access. I have read some basic tutorials about using Git with Eclipse. If I have a Java project in my Git repository, how can I share it with my teammate?
Even though you can share your local repositories, I would suggest setting up a server. There are many free alternatives, like:
gitlab (http://gitlab.org)
gitorious (http://gitorious.org)
gitolite (https://github.com/sitaramc/gitolite)
gitblit (http://gitblit.com/)
But IMO the best one is Atlassian Stash, which for a small team will cost you only $10.
If you need to share it, you need some way to access it from each other. Bitbucket is great for small teams who need private code.
If you are always using it from inside a LAN, one of you should set up a shared location which you can all push your Git changes to (a shared folder or shared drive is good enough), but I would recommend using GitHub/Bitbucket if possible.
From a command line (you can probably do it within Eclipse too):
git clone file:////192.168.1.100/code
and then you can push and pull from 192.168.1.100/code, assuming you have write permissions there.
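To set that shared location up in the first place, one of you would create a bare repository in the shared folder and everybody else would clone it; the paths below are just examples:
# on the machine that exports the \\192.168.1.100\code share,
# create a bare repository (no working copy, just history) in that folder:
git init --bare /path/to/exported/code
# on each developer's machine:
git clone file:////192.168.1.100/code myproject
cd myproject
# ... edit, git add, git commit ...
git push origin master    # publish your commits back to the shared repo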
If you're coming from Subversion to Git, you will be faced with the concept of a local repository vs. a shared repository. You will be able to have a local repository on your computer where you can do as many commits as you want, and then only push the relevant changes to the shared repository (the one that your teammates will be able to see).
Here's a useful link on the possibilities for sharing a repository: http://www.jedi.be/blog/2009/05/06/8-ways-to-share-your-git-repository/ (ignore the last one, GITHUB, which will require internet access).
In your particular situation I would recommend sharing via SSH or via GIT daemon.
I also really recommend you take a look at Eric Sink's book here. He's even offering hard copies for free!
As suggested, you can run your own instance of gitolite or gitlab, but for a rudimentary solution I suggest you just check the following answer:
https://serverfault.com/a/113688/181010
Basically you can use any folder as a shared repository as long as all users can access the files either locally or via SSH. That link describes how to tell Git to create its files with rights that are appropriate for usage by all users of one Unix group (instead of only the single user owning the files).
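The essence of that approach is to initialise the shared repository so Git keeps its files group-writable; the group name and paths below are assumptions:
# on the shared machine, as a user in the 'developers' group:
mkdir -p /srv/git/project.git
cd /srv/git/project.git
git init --bare --shared=group       # new files get group read/write permissions
chgrp -R developers /srv/git/project.git
# each teammate can then clone over SSH:
git clone ssh://user@192.168.1.100/srv/git/project.git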

How do you use Git within Eclipse as it was intended?

I've recently been looking at using Git to eventually replace the CVS repository we have at work. However, after watching Linus Torvalds' video on YouTube about Git, it seems that every tutorial I find suggests using Git in the same way CVS is used, except that you have a local repository, which I agree is very useful for speed and distribution.
However, the tutorials suggest that you each clone the repository you want to develop on from a remote location, and that when changes are made you commit locally, building up a history to help with merge control. When you are ready to commit your changes you then push them to the remote location, but first you fetch changes to check for merge conflicts (just like CVS).
However, in Linus' video he describes the functionality of Git as a group of developers working on some code, pushing and fetching from each other as needed, not using a remote, i.e. centralized, location. He also describes people pushing their changes out to verifiers who fetch and push code as well. So you can see it's possible to create a scalable structure within a company too.
My question is: can anybody point me in the direction of some tutorials that actually explain how to do this distributed development of code using Git, so that developers push and fetch code from each other without committing to the remote repository? If possible, it would be very nice to have these tutorials Eclipse-based.
Thanks in advance,
Alexei Blue.
I don't know of any specific tutorial about this. In general, to connect to a repository, you have to be running a Git server that listens for (and authenticates) Git requests.
Having a specific repository for each developer is possible, but each repository needs that server component. It is possible to store multiple repositories on the same computer, which reduces the number of servers required.
However, for other reasons it is beneficial to have some kind of central structure (e.g. a repository for stuff to be released, or a repository for stuff not yet verified). This structure does not have to be a single central repository; it can be multiple repositories with well-defined workflows for how data moves between them (e.g. once code from the verification repository is validated, it should be pushed to the release repository).
In other words, you should be ready to create Git servers (e.g. see http://tumblr.intranation.com/post/766290565/how-set-up-your-own-private-git-server-linux for details; there are other tutorials for this as well), and define workflows for your own company to use them.
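A bare-bones version of such a server is just a bare repository reachable over SSH; the host name and paths below are placeholders:
# on the machine acting as the "release" server:
sudo mkdir -p /srv/git/release.git
sudo git init --bare /srv/git/release.git
# each developer adds it as a remote and pushes verified work to it:
git remote add release ssh://git.example.local/srv/git/release.git
git push release master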
Additionally, I recommend looking at AlBlue's blog series called Git Tip of the Week.
Finally, to ease the transition, I suggest first introducing Git as a direct replacement for CVS, and then presenting the other changes one by one.
Take a look at AlBlue's blog entry on Gerrit.
This shows a workflow different from the classic centralized server such as CVS or SVN. Superficially it looks similar, as you pull the source from a central Git server, but you push your commits to a Gerrit server that can compile and test the code to make sure it works before eventually pushing the changes to the central Git server.
Now instead of pushing the changes to Gerrit, you could have pushed the code to your pair programming buddy and he could have manually reviewed and tested the code.
Or maybe you're going on holiday and a colleague will finish the task you've started. No problem, just push your changes to their Git repo.
Git doesn't treat any of these other Git instances any differently from each other. From Git's perspective, none of them are special.
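In Git terms, exchanging work with a buddy is nothing more than adding their repository as another remote and fetching from it (or letting them fetch from you); none of the names below are special, they are just examples:
# fetch a colleague's work directly from their machine:
git remote add alice ssh://alice-pc.local/home/alice/project.git
git fetch alice
git log ..alice/feature-x        # review what they did
git merge alice/feature-x        # and merge it when you're happy
# likewise they can add your repository as a remote and pull from you;
# the "central" server is just one more remote with no special status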

How to configure rights on a web server for a bzr repository

I'm trying to get a better workflow with bzr between the deploy server and my local development machine.
The web server runs as www-user, and the bzr login uses a local account, wdev.
On the server, groups are set up as www-user: www-user,wdev and vice versa.
For simplicity, urgent bugfixes are made in trunk on the server through SSH.
What would be the recommended setup? Trunk in /home/wdev?
Should the deployment be a trunk or branch?
Currently, I have to su to root to commit, which puzzles me. I can su to www-data and have read/write access. Still, the web server doesn't have write permissions from PHP.
The current solution is chowning everything to www-data, a dislikable solution since any merge would mix in new files with wdev ownership.
Thankful for any basic howto regarding the preferred setup.
(This is more or less an x-post from https://serverfault.com/questions/299436/directory-rights-with-webserver-and-bzr but it didn't get much response there.)
regards,
By trunk I assume you mean the master branch? With distributed revision control systems there is no distinction between a trunk and a branch; it is just a convention, so organise them in whatever way is convenient for you.
Maybe you can use sticky bits:
http://michael.lustfield.net/content/creating-your-own-bazaar-server
I have never had it working nicely (maybe I did it wrong), so I use a cron job to fix up permissions hourly.
If you use a smart server, all the users commit through one OS user, so the permissions are handled at another level.
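A hedged sketch of the group-permission approach (the 'web' group name and the trunk path are assumptions; the question's own users www-user and wdev would both be members): give the branch directory a common group, set the setgid bit so new files inherit that group, and grant the group write access. This is roughly what the hourly cron job above would enforce:
# assuming trunk lives in /home/wdev/trunk and both wdev and www-user
# are members of a common group, here called 'web':
sudo chgrp -R web /home/wdev/trunk
sudo chmod -R g+rwX /home/wdev/trunk
sudo find /home/wdev/trunk -type d -exec chmod g+s {} \;   # new files inherit the directory's group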

Drupal 6: using bitbucket.org for my Drupal projects as a real version control system dummy

Here is a real version control system dummy! A proper new starter!
The way I have worked so far:
I have a Drupal-6 web project, www.blabla.com, and I do development under www.blabla.com/beta. I'm working directly on blabla.com/beta on the server. Nothing on my local machine, nothing anywhere else. Only taking backups to local from time to time. I know, a horrible and unsafe way :/
The new way I want to work from now on:
I have decided to use Mercurial. I have one more developer working on the same project with me. I have the blabla.com Drupal-6 project on Bluehost and do development at blabla.com/beta. I found http://bitbucket.org/ for Mercurial hosting and have created an account.
So now how do I set things up? I'm totally confused after reading tens of articles :/
Is Bitbucket only for hosting revised files? So if I or my developer friend edits index.php, will Bitbucket host only index.php?
From now on do I have to work at localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
When I need to edit any file, do I first download updates from Bitbucket, make my changes at localhost, update Bitbucket with the edited files, and then upload to Bluehost?
Sorry for the silly questions, I really need guidance...
Appreciate the help so much! Thanks a lot!
Is Bitbucket only for hosting revised files?
The main service of Bitbucket is to host files under revision control, but there is also a way to store arbitrary files there.
So if I or my developer friend edits index.php, will Bitbucket host only index.php?
In a typical project every file which belongs to the product is checked into revision control, not only index.php. See this example.
From now on do I have to work at localhost and upload the changes to Bluehost? No more editing directly at blabla.com/beta? Or can I still work on Bluehost, maybe under blabla.com/beta2?
Mercurial does not dictate a fixed workflow, but I recommend that you have Mercurial installed where you edit the files. For example, you can then see directly which changes you have made since the last commit, without needing to copy the files from your server to your local repository.
I absolutely recommend a workflow where somewhere in the repository there is a script which generates the archive file that is transmitted to the server, and which records the revision of the repository at the time the archive was created. This revision information should also be stored somewhere on the server (not necessarily in a publicly accessible area), since it can come in very handy when something goes wrong.
When I need to edit any file, do I first download updates from Bitbucket, make my changes at localhost, update Bitbucket with the edited files, and then upload to Bluehost?
There are several different approaches to get the data to the server:
export the local repo into an archive and transmit it to the server (hg archive production.tar.bz2); this is the most secure variant, since it does not depend on any extra software on the server. Depending on how big the archive is, this approach can waste a lot of bandwidth (see the sketch at the end of this answer).
work on the server and copy changed files back, but I don't recommend this since it is very easy to miss something important
install mercurial on the server, work in a working copy there and hg export locally there into the production area
install mercurial on the server and hg fetch from Bitbucket (or any other server-accessible repository)
install mercurial on the server and hg push from your local working copy to the server (and hg update on the server afterwards)
The last two points can expose the repository to the public. This exposure can be good or bad, depending on what your repository contains and whether you want to share the content. When you want to share the content, or when you can limit access to www.blabla.com/beta/.hg, you can clone directly from your web server.
Also note that you should not check in any files with passwords or critical secrets, even when you access-limit the repository. It is much safer to check in template files (with a different name than in production) and copy-and-edit those files on the server.
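A sketch of the first option in the list above (the 'stable' revision name, host names, and paths are all assumptions): generate an archive of a specific revision locally, record exactly which revision it was, and copy both to the server:
# locally, in the working copy:
hg archive -r stable -t tbz2 production.tar.bz2      # archive the 'stable' revision
hg identify -r stable > REVISION                     # remember exactly what was shipped
# transmit and unpack on the Bluehost account:
scp production.tar.bz2 REVISION user@blabla.com:~/
ssh user@blabla.com 'mkdir -p ~/public_html/beta && tar xjf production.tar.bz2 -C ~/public_html/beta --strip-components=1'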