How can I know how much disk space a Plastic SCM repository occupies?
It depends on the backend you have configured (SQL Server, MySQL, Firebird, etc.).
Once you know which backend you're using, the problem is as simple as finding the size of the databases in that backend. Plastic SCM databases start with the prefix:
rep_X.plastic
where X is a numeric id.
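For example (a rough sketch, not from the Plastic SCM docs; the server path and credentials below are assumptions): with the default Firebird backend each repository is a rep_<id>.plastic file on the server, so du over the data directory gives the totals, while on a MySQL backend you can sum the sizes from information_schema.

# Firebird backend: repository files on disk (server path is an assumption)
du -ch /opt/plasticscm/server/rep_*.plastic

# MySQL backend: sum data + index size per rep_* database
mysql -u root -p -e "SELECT table_schema, ROUND(SUM(data_length + index_length)/1024/1024, 1) AS size_mb FROM information_schema.tables WHERE table_schema LIKE 'rep\_%' GROUP BY table_schema;"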
I have a small Debian VPS-box on which I host and develop a few small, private PHP websites.
I develop on a Windows desktop with PHPStorm.
Most of my projects only have a few dozen source files but also contain a few thousand lib files.
I don't want to run a webserver on my local machine because this creates a whole set of problems I don't want to be bothered with for such small projects (e.g. setting up another webserver; syncing files between my desktop and the VPS box; managing different configurations for Windows and Debian (different hosts, paths, ...); keeping DB schema and data in sync).
I am looking for a good way to work with PHPStorm on a large amount of remote files.
My approaches so far:
Mounting the remote file system in Windows (tried via pptp/smb, ftp, webdav) and working on it with PHPStorm as if the files were local.
=> Indexing, syncing, and PHPStorm's VCS support became unusably slow. This is probably due to the high latency of file access.
PHPStorm offers the possibility of automatically copying the remote files to the local machine and then syncing them when changes are made.
=> After the initial copying, this is fast. Unfortunately, with this setup, PHPStorm is unable to provide VCS support, which I use heavily.
Any ideas on this are greatly appreciated :)
I use PhpStorm in a very similar setup to your second approach (local copies, automatically synced changes) AND, importantly, with VCS support.
Ideal; easiest: In my experience the easiest solution is to check out/clone your VCS branch on your local machine and use your remote file system as a staging platform which remains ignorant of VCS; a plain file system.
Real world; remote VCS required: If, however (as in my case), it is necessary to have VCS on each system (perhaps your remote environment is the standard for your shop, or your shop's proprietary review/build tools are platform specific), then a slightly different remote setup is required; treating your remote system as staging is still the best approach.
Example: Perforce - centralized VCS (client work-space)
In my experience, workspace-based VCS systems (e.g. Perforce) are best handled by sharing the same client workspace between the local and remote systems, which has the benefit that VCS file status changes only have to be applied once. The disadvantage is that file system changes on the remote system typically must be handled manually. In my case I manually chmod (or OS equivalent) my remote files and wash my hands (problem solved). The alternative (dual-workspace) approach requires more moving parts, which I do not advise.
Example: Git - distributed VCS
The easier approach is certainly Git, which has its wonderful magic of detecting file changes without file permissions being directly coupled to the VCS. This makes life easy: you can simply start with a common working branch and create two separate branches, for example "my-feature" and "my-feature-remote-proxy". Once you decide to merge your changes upstream, you do so (ideally) from your local environment; the remote proxy branch can then be reverted or whatever you want. NOTE: in the case of Git I always have two branches because it's easy, and when your hard drive melts in a freak lightning strike you have extra redundancy :|
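For illustration, the two-branch flow might look roughly like this (branch names are the ones from above; the remote name "origin" and base branch "master" are assumptions):

# on the local machine: the branch you actually merge upstream from
git checkout -b my-feature origin/master
# on the remote/staging clone: its own throwaway branch off the same base
git checkout -b my-feature-remote-proxy origin/master
# when you're done, merge upstream from the local environment
git checkout my-feature
git merge my-feature-remote-proxy   # assumes the proxy branch has been fetched locally
git push origin my-feature
git branch -D my-feature-remote-proxy   # the proxy branch can simply be thrown away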
Hope this helps.
How do I take a backup of repositories in Plastic SCM?
Is taking a manual backup of the database enough?
Will I be able to restore and start using the repositories in the future if the server crashes?
Or is there another method or interface for doing this job?
That would be covered by the chapter "Backup and restore" of the Plastic SCM (4.x) Administration guide:
The backup and restore procedures are closely related to the database backend used in Plastic SCM
so yes, backing up the database seems enough.
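As a concrete (hedged) example, assuming a MySQL backend and databases named rep_<id>, a nightly dump could be as simple as the following; run it while the server is idle, or stopped, so the dump is consistent:

# dump every rep_* database to its own file (credentials omitted; backup path is an assumption)
for db in $(mysql -N -B -e "SHOW DATABASES LIKE 'rep\_%'"); do
    mysqldump "$db" > "/backup/${db}_$(date +%F).sql"
done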
We are a small team of three developers (my boss, me, and another developer working mostly remotely), and I have been tasked with setting up a repository server for Mercurial (hg).
It seems like I could simply put our centralized repository on a shared network drive. That would be extremely easy to set up, but there seems to be a risk that any one of us could abuse the convenience of working on/modifying the central repository directly. That is why I am thinking about using an HgWebdir server as a way to control access to the central repository, so that direct access to the central repository is discouraged, with the shared drive kept just in case.
I guess this is a question about defining our in-house version-control procedure rather than a real version-control question, but I will still go ahead and ask it. I don't feel experienced enough to make the decision, and if I am not 100% sure that my reasoning and means are valid, it will probably be hard for me to enforce the way the version-control system should be used by the other developers.
Edit:
I can see that there are potential issues with version-control software working on a shared folder, but would anyone care to explain a bit more about what happens behind the scenes when pushing to a shared folder? My understanding is that a shared drive is essentially a shared link/shortcut, so Mercurial on the local machine only holds the lock for that link; each user's machine could have a different Mercurial instance holding the link's lock, while the server's Mercurial instance holds its own lock on the physical drive. I can see it is complicated, but how is it going to fail? I can understand the conclusion, but I can't link the facts to the conclusion myself.
You should not place the Mercurial repository on a shared folder on a network server because Mercurial cannot reliably hold locks in all situations in such a setup, and during pushes to that central repository, locks are crucial to avoid corrupting the repository.
In fact, I would remove the "not encouraged" and replace it with "not possible", and only serve the repository either with hgweb or hg serve, the former being the recommended setup for long-running servers.
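For reference, the two options look roughly like this (repository path and port are placeholders):

# quick, ad-hoc sharing of a single repo
hg serve -R /srv/hg/myrepo -p 8000

# long-running: hgweb behind a proper web server, driven by a config like
#   [paths]
#   / = /srv/hg/*
# and served through the hgweb.cgi / hgweb.wsgi script that ships with Mercurial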
If you have a centralized server you can install hgweb there and push to and pull from it as a central and BACKED-UP source. We still have Windows 2003 servers (I am in no position to change that), and with a little searching on the web I was able to find info on how to set up hgweb on a Windows server, though most of it referred to Windows Server 2008.
After reading Storing Images in DB - Yea or Nay? I think that the file system is the right place for storing images. But I would like to know how you handle backup/version control of uploaded images across your different environments (dev/stage/prod) and with network load balancing.
These problems are pretty easy to handle when working with a database, e.g. making a backup from the production environment and restoring the DB in the development environment.
What do you think of using, for example, Git to handle version control of the uploaded files?
Production Environment:
An image is uploaded to a shared folder on the web server.
Metadata is stored in the database.
The image is automatically added to a git repository.
Developer at work:
Checks out the source code.
Runs a script to restore the database.
Runs a script to get the latest images.
I think the solution above is pretty smooth for the developer: the images will be under version control and the environments can be isolated from each other.
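To make the idea concrete, the server-side step might be as small as this (paths, branch name, and trigger are assumptions on my part, not a tested setup):

# run from cron or an upload hook on the web server
cd /var/www/uploads                                    # the shared image folder, itself a git repo
git add -A
git commit -m "uploaded images $(date +%F)" || true    # no-op if nothing new
git push origin master

# developer side, after restoring the database
git pull origin master                                 # fetch the latest images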
For us, the version control isn't as important as the distribution. Metadata is added via the web admin and the images are dropped on the admin server. Rsync scripts push those out to the cluster that serves prod images. For dev/test, we just rsync from the prod master server back to the dev server.
Rsync is great for load balancing and distribution. If you sub in git for the admin/master server, you have a pretty good solution.
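Roughly what those scripts boil down to (the hostnames and paths here are made up):

# push from the admin/master server out to the serving cluster
rsync -az --delete /data/images/ web01:/var/www/images/
rsync -az --delete /data/images/ web02:/var/www/images/

# dev/test: pull back from the prod master
rsync -az prodmaster:/data/images/ /var/www/images/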
If you're OK with a backup that preserves file history as of the time of backup (as opposed to version control with every revision), then some adaptation of this may help:
Automated Snapshot-style backups with rsync.
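The core trick in that approach is rsync's --link-dest: every run produces what looks like a full snapshot, but unchanged files are hard links into the previous one, so only changed files cost space. A bare-bones sketch with assumed paths:

today=$(date +%F)
rsync -a --delete --link-dest=/backup/latest /data/images/ "/backup/$today/"
ln -sfn "/backup/$today" /backup/latest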
It can work, but I would store those images in a git repository which would then be a submodule of the git repo holding the source code.
That way, a strong relationship exists between the code and the images, even though the images are in their own repo.
Plus, it avoids issues with git gc or git prune being less efficient with a large number of binary files: if the images are in their own repo, with few variations for each of them, the maintenance on that repo is fairly light, whereas the source code repo can evolve much more dynamically, with the usual git maintenance commands in play.
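Wiring that up is only a couple of commands (the submodule URL and path below are placeholders):

cd /path/to/source-repo
git submodule add git@example.com:images.git assets/images
git commit -m "track image repository as a submodule"

# fresh clones then pull the images with
git submodule update --init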
Some of our projects are still on CVS. We currently use tar to back up the repository nightly.
Here's the question:
What is the best practice for backing up a CVS repository?
Context: We're combining several servers from across the country onto one central server. The combined repository size is 14 GB (yes, this is high, most likely due to lots of binary files, many branches, and the age of the repositories).
A 'straight tar' of the CVS repository yields a ~5 GB .tar.gz file. Restoring files from 5 GB tar files will be unwieldy, and we fill up tapes quickly.
How well does a full-and-incremental approach work, i.e. weekly full backups with nightly incremental backups? What open source tools solve this problem well (e.g. Amanda, Bacula)?
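To make the question concrete, the kind of scheme I mean is what GNU tar's --listed-incremental option implements, and what tools like Amanda or Bacula wrap with scheduling, catalogs, and tape handling; a sketch with placeholder paths:

# weekly full: start from a fresh snapshot file
rm -f /backup/cvs.snar
tar --listed-incremental=/backup/cvs.snar -czf /backup/cvs-full-$(date +%F).tar.gz /var/cvsroot

# nightly incremental: reuse the snapshot file, only changed files are archived
tar --listed-incremental=/backup/cvs.snar -czf /backup/cvs-incr-$(date +%F).tar.gz /var/cvsroot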
thanks,
bill
You can use rsync to create a backup copy of your repo on another machine if you don't need a history of backups. rsync works in incremental mode, so bandwidth is consumed only for sending changed files.
I don't think you need a full history of backups, since the VCS provides its own history management and you need backups ONLY as a failure-protection measure.
Moreover, if you worry about the consistent state of the backed-up repository, you MAY want to use filesystem snapshots; e.g. LVM can produce them on Linux. As far as I know, ZFS from Solaris also has a snapshot feature.
Snapshots are unnecessary only if you run the backup procedure in the dead of night, when no one touches your repo and your VCS daemon is stopped during the backup :-)
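A rough sketch of the snapshot-then-rsync idea on LVM (volume group, snapshot size, and backup host are assumptions):

lvcreate --snapshot --size 1G --name repo-snap /dev/vg0/repos
mount -o ro /dev/vg0/repo-snap /mnt/repo-snap
rsync -az /mnt/repo-snap/ backuphost:/backups/repos/
umount /mnt/repo-snap
lvremove -f /dev/vg0/repo-snap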
As Darkk mentioned, rsync makes for good backups since only changed things are copied. Dirvish is a nice backup system based on rsync. Backups run quickly, restores are extremely simple since all you have to do is copy things, and multiple versions of the backups are stored efficiently.