I'm forced to use SourceSafe at my job. There is no way this is going to change. I would like to use another source control system for my own needs in parallel. I want to be able to keep a history of my modifications, branch easily, and merge. I can install any application that doesn't require admin rights. I cannot install Python or anything that integrates into File Explorer.
I'm not much of a command-line guy, so a GUI is a must. I managed to install Mercurial but not TortoiseHg. There is a chance msysgit would install, but the GUI isn't very good.
Any suggestions?
You can install the svn command-line client just by unzipping it, but if you want TortoiseSVN for the GUI you may need admin rights, I'm not sure. You don't need a separate GUI if your IDE supports SVN, though, like Eclipse or any other Java IDE does.
Git has a pretty nice command-line interface with color and auto-completion. After reading the Pro Git Book I found the command-line is great.
There is a GUI bundled with it. It is nice for viewing logs and merges but may not be to your taste. There is also a TortoiseGit shell extension (like the famous TortoiseSVN), but that would require admin privileges to install (as opposed to Git portable).
Install and use a Virtual Machine product and go crazy with whatever you want, then look for another job.
I would check out SourceGear Vault; it has SourceSafe Import and SourceSafe Feature support. This may need admin rights, though...
Another tack is to synchronize a copy of the directory to another machine where you do have some rights. I would recommend rsync -- I think there are several Windows versions available.
On this other machine you can now use whatever tools you like. I know it's a kludge, but then so is working on a system where you aren't even trusted enough to install something like Python.
AFAIK any TortoiseXX will need admin rights, as it has to hook into explorer.exe. You should still be able to use hgtk, the GUI part of TortoiseHg, to get at the windows, though.
"There is no way this is going to change"
I don't mean to say that you should shout your head off about how svn, or whatever, is great, and moan about VSS all the time. But I find it hard to believe that a well-reasoned proposal to switch to a newer, better version control system, outlining the pitfalls of VSS (no security - all users have write access to the history of everything, for example), would be ignored.
If you can't install programs that integrate with explorer, then using any version control system is going to mean learning to use it from the command line!
Check out Bazaar (bzr). I've not used it personally, but it claims to have an excellent GUI, which may mean you don't need a TortoiseXX install.
Just use a version control tool from the command line. It isn't painful and it can be automated quite easily via your existing tools. SVN isn't going to be great when interacting with an existing version control system (it is finicky about files/directories being deleted, renamed etc), whereas the DVCS tools (I prefer Mercurial) are much smarter about it.
My recommendation: Use Mercurial. It has sane ignore rules (so it can be trained to ignore VSS cruft), a single .hg directory that contains all the VC data, and easy branching (which will help you change gears more often). git is fine too, but has a steeper learning curve.
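A minimal setup on top of the VSS working folder might look like this, assuming a Unix-style shell such as the one msysgit provides (otherwise just create .hgignore in a text editor). The path and the ignore patterns are only guesses at typical SourceSafe droppings; adjust them to whatever cruft your tree actually contains:

    cd /c/work/myproject      # your VSS working folder (example path)
    hg init

    # tell Mercurial to ignore SourceSafe bookkeeping files (patterns are illustrative)
    cat > .hgignore <<'EOF'
    syntax: glob
    *.scc
    vssver*.scc
    EOF

    hg add
    hg commit -m "Initial import of the VSS working copy"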
Currently I am working on a project that involves the following daily workflow:
Update local code and edit
Commit to Subversion repository
FTP to a testing server
I have been using NetBeans to handle all of this, but frankly it, combined with the other stuff I am running, eats up all of my machine's resources, frequently leaving it sluggish. By switching to a lighter text editor, a standalone FTP client, and a standalone SVN client I avoid the slowdowns and resource hogging, but working becomes clunkier as I move between apps. Basically I really like NetBeans, but until I can get a more powerful machine (MacBook Pro next week?) I am stuck.
What is your workflow? Any suggestions on how I can improve mine? Can I cut out FTP with Subversion in some way?
p.s. Subversion use is cast in stone so no git. Also, I'm on a Mac.
On Mac, I use TextMate as my editor of choice. Lots of language goodies for speeding development in whatever language you're doing via Bundles. It has an SVN bundle, which lets you update/checkout/commit directly. I use that for quick updates/checkouts. On my test server, I have another SVN working directory. I set up an SVN Post Commit hook to 1) automatically update the test server with the latest code, and then 2) send a twitter message to inform other developers of the change.
If I want to do more in depth work on the SVN repository (tags, commit logs, diffs) I tend to use the command line, or use a dedicated client like Cornerstone.
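For illustration, a post-commit hook along these lines could handle the test-server update. All paths and host names here are made up, and the notification step is sketched with plain mail rather than Twitter, which would need real API credentials:

    #!/bin/sh
    # hooks/post-commit on the SVN server; Subversion passes the repo path and revision
    REPOS="$1"
    REV="$2"

    # 1) update the working copy the test server serves
    #    (drop the ssh if the repository lives on the test server itself)
    ssh deploy@test.example.com "svn update /var/www/testsite"

    # 2) tell the other developers (placeholder notification command)
    echo "Committed r$REV to $REPOS" | mail -s "svn commit r$REV" dev-team@example.com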
Eclipse is an IDE that also includes syncing with version control and FTP.
Maybe install svn on the testing machine and have it do an update automatically every ten minutes or so, or at a specific time.
Just an idea.
Sascha
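If polling like that is acceptable, a cron entry on the testing machine is all it takes (the path is an example):

    # crontab on the testing machine: pull the latest code every 10 minutes
    */10 * * * *  svn update -q /var/www/testsite

A post-commit hook, as sketched above, pushes instead of polling, so changes show up immediately.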
Almost all programming editors (Vim, Emacs, etc.) support Subversion integration.
The only missing link is the FTP to test server. You can do this easily with a post-commit hook in subversion.
If you want to run some pre-commit tests as well, check out this script I had written some time back:
http://code.google.com/p/svn-pre-check/
In case someone is still looking for an SVN-to-FTP connection, I would suggest svn2ftp.
What options are there for saving and retrieving documents to and from the cloud, from within Emacs?
I use Emacs at work, on a Windows machine, and at home, on a Linux box, so ideally I would want a solution that works more or less out of the box for both operating systems.
I touched on g-client, but could not quite get it to work. Obviously, if there are no other, simpler options, I'm just going to have to spend a couple more hours on it.
Many thanks,
Andreas
Dropbox is pretty universal. I even store my Emacs config files there. Works on Windows, Linux, OS X, and iPhone. Syncs automatically. Stores history. Is free. What else do you want? :-)
Two options that I can think of:
If you have access to a server somewhere that runs ssh, then use ssh with Tramp. You can also run an ssh server on your home Linux box and access your home files from work. Tramp works perfectly fine on Windows with ssh from Cygwin. It will automatically grab a file (provided that you give Emacs something like /ssh:yourusername@yourserverhost:~/yourfile), put it in a temporary file on your computer, then copy it back to the host when you save it.
Use a source control system like SVN or Git. Again you can host the server at your home or you can find online hosts (most are for open source and are thus public, but some are free and private; I use unfuddle.com). You would have to regularly commit/update, but you can easily automate that if you want, and the source control system gives you a nice history of your files and a safety net in case you did something very wrong.
Emacs has excellent integration with source control systems. If you find the built-in one not sufficient (it is quite generic and thus does not offer an interface to some specific features of a particular source control system), there are plenty of good alternatives (psvn for SVN and magit for Git, for example).
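If you take the second route and want to automate the commit/update cycle on the Linux side, a cron entry roughly like this would do it (the path is an example, the sketch assumes Git, and Windows would need Task Scheduler or similar instead):

    # every 30 minutes: commit any local changes, then sync with the remote (illustrative path)
    */30 * * * *  cd "$HOME/docs" && git add -A && { git diff --cached --quiet || git commit -q -m "auto-sync"; } && git pull -q --rebase && git push -q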
sshfs, if you have good connection speed.
Otherwise there's always tramp-mode for Emacs.
Edit: Just saw you are using Windows.
It's been some years since I used Windows as my desktop, but I used WebDrive back then. It sort-of works, although it always was a bit unstable.
Emacs has great support for remote file systems via Tramp. So the real question is what you should use as a remote FS. There are a bunch of them, and as long as they can be mounted or logged into via ssh (for Tramp), you should be OK.
I use JungleDisk - works great for Windows, Linux and Mac. Starts around $2 per month and there's a cap of around $90 per year. You can back up to S3 or to Rackspace.
It integrates at the file system level so you can either read/write directly to it or create links from it to your local file system. I use that to share my .emacs, .bash etc between multiple machines.
Chris
I'm using TkCVS as the GUI front-end for a Cygwin CVS client on a Windows XP machine.
It's a good compromise, since on my Linux machine I'm also running TkCVS (the same machine that runs the CVS server, BTW...).
I'm interested in replacing the diff utility (which has a tkdiff.tcl GUI front-end for TkCVS) with a commercial product (like Beyond Compare or ExamDiff...).
Does anyone have a way to do this?
Thanks!
From the tkcvs FAQ:
Q4. Can I use a diff tool other than tkdiff with tkcvs?
A. Yes, by changing cvscfg(tkdiff). You usually have to write a wrapper for your diff tool to get it to check out the versions and deal with its particular command-line options, which are probably different from tkdiff's. In the contrib directory, there is a gvim wrapper called "cvsdiff" which can be used as-is or as a model for wrapping your favorite diff tool.
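As a rough sketch, such a wrapper might simply forward the two files tkcvs hands it to your commercial tool. The Beyond Compare path and the exact arguments tkcvs passes are assumptions here; if your version passes revision flags instead of plain files, the wrapper also has to check those revisions out first, as the FAQ notes:

    #!/bin/sh
    # bcompare-wrapper.sh -- hand the two files tkcvs gives us to Beyond Compare
    # (convert Cygwin paths to Windows paths so the Windows binary can find them)
    LEFT="$(cygpath -w "$1")"
    RIGHT="$(cygpath -w "$2")"
    exec "/cygdrive/c/Program Files/Beyond Compare 3/BCompare.exe" "$LEFT" "$RIGHT"

Then point cvscfg(tkdiff) at the wrapper in your tkcvs user configuration file (usually ~/.tkcvs).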
What about just using the cvs diff command?
Or download the cvscommand plugin module for vim.
I am using Ubuntu 8.04 and Windows XP. I mount the FAT32 disk which contains my Eclipse workspace in Ubuntu, but I find I cannot use the workspace; maybe I don't have the rights to use it.
The FAT32 disk I mounted has 755 permissions. I tried to use chmod to change it to 777 but failed. I tried to mount it in 777 mode, but I can't find anything about a mode in the vfat options.
What should I do next? How can I share the workspace? Help me, thanks.
Instead of trying to share the raw workspace data between two different systems, I suggest to do it like in typical big software development projects. Use a version control system to store your code and commit/update to and from that version control system instead of sharing files.
This may not be the answer you were originally interested in, but rest assured, you will notice many advantages of that version control system after some time, including:
Easily get back to the code version before today's "genius" changes which didn't really work in the end
There is a backup of your project in case your workstation dies
You may even access your project from a completely different machine/location.
If your project is going to be open source, you can even use public services like Sourceforge.net.
I believe FAT32 doesn't support the same kind of permissions as the Linux filesystems you are familiar with. Once you have sorted out the mount options (in /etc/fstab rather than /etc/mtab, which is auto-generated) I think you will have a better time.
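For what it's worth, vfat has no per-file Unix permissions, so chmod will never work there; you set the permissions at mount time instead. Something like this should make the whole partition writable for your user (device, mount point, and uid/gid values are examples; check yours with the id command):

    # remount the FAT32 partition so your user owns everything on it
    sudo mount -t vfat -o uid=1000,gid=1000,umask=000,rw /dev/sda5 /media/winshare

Putting the same options in /etc/fstab makes this permanent.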
However, the step after that is to have two different installations of Eclipse working on the same workspace.
I haven't had a lot of success with this (though I haven't tried your exact scenario), but I would be careful to:
keep the Eclipse versions in sync
only use relative paths, and relative to the workspace. This is probably good practice anyway, but is worth repeating.
If all goes well, then you should be sharing everything, including preferences across both installations.
There are two refinements I can think of, which may be useful to reason about, if not actually do:
you could probably share most of the Eclipse installation (the plugins and features directories, if not the config.ini and eclipse.ini files). If you can't put both executables in the same directory, consider the -install and -configuration runtime options (a rough example of these launch options follows this list).
if you can't do any of these things, then you may need to work on two parallel workspaces. You can keep them in sync with tools such as rsync or even a distributed source control system like Mercurial.
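To illustrate the first refinement, launching the Linux install against a shared plugin set but with a private configuration area might look roughly like this. All paths are made up, and the exact behaviour of -install and -configuration is worth checking in the Eclipse runtime-options documentation for your version:

    # Linux side: local Eclipse launcher, shared install area on the mounted disk,
    # per-OS configuration area (the shared plugins must include the Linux fragments too)
    ~/eclipse/eclipse \
        -install /media/winshare/eclipse \
        -configuration "$HOME/.eclipse-linux-config" \
        -data /media/winshare/workspace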
I agree with bananeweizen.myopenid, and have the following tip to add:
When creating your build path entries, reference all outside resources (e.g., JAR files) using classpath variables. This will allow you to move the .classpath file between environments (or even check it into source control, if you're the sole developer) without running into problems with pathnames.
To reference a JAR file via a variable, go into the "Libraries" tab of the Build Path, remove any existing reference to the library, and click "Add Variable...". You will need to define common variables, such as M2_REPO or LOCAL_LIBS, and you will need to make sure that those definitions are available in all your environments.
Perhaps the problem you're having is with capitalization. Be sure to create the workspace in Ubuntu first. This should rule out any filename capitalization issues.
I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
There are several categories of tools here. Some people use a combination of tools from these categories. I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rollback to a previous version. So they'll check out your source to a releases/ directory, and create a symbolic link from "current" to the new releases/ subdirectory if all goes well. If there's a problem, you can revert to the previous version by running a command that removes "current" and links it to the previous releases/ subdirectory.
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
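If you go the write-your-own route, the releases/ plus "current" symlink pattern described above is only a few lines of shell (the paths and the release-naming scheme are purely illustrative):

    #!/bin/sh
    # deploy.sh -- minimal "releases/ + current symlink" deployment sketch
    set -e
    APP_ROOT=/var/www/myapp                              # example path
    RELEASE="$APP_ROOT/releases/$(date +%Y%m%d%H%M%S)"

    mkdir -p "$RELEASE"
    rsync -a ./build/ "$RELEASE/"                        # copy the freshly built tree

    # switch "current" to the new release in one step
    ln -sfn "$RELEASE" "$APP_ROOT/current"

    # rolling back is just re-pointing the symlink at an older releases/ entry:
    #   ln -sfn "$APP_ROOT/releases/<previous>" "$APP_ROOT/current"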
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Checkout source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Checkout source locally, then rsync it over to the production box.
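Since the questioner is already leaning toward rsync, the heart of that last option can be as small as this (the staging script, host name, and paths are all hypothetical):

    # assemble the per-machine tree, then push only what changed
    ./make_staging_tree.sh batch01            # hypothetical script that fills staging/batch01/
    rsync -az --delete staging/batch01/ deploy@batch01.example.com:/opt/batch/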
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customer has to do is type apt-get upgrade and they get the latest version.
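Building a simple binary package by hand is less work than it sounds; here is a sketch (the package name, file names, and control-file fields are all examples):

    # lay out the package contents
    mkdir -p mytools_1.0/DEBIAN mytools_1.0/usr/local/bin mytools_1.0/etc/mytools
    cp build/batch_runner build/report_gen  mytools_1.0/usr/local/bin/
    cp conf/batch.conf                      mytools_1.0/etc/mytools/

    # minimal control file
    cat > mytools_1.0/DEBIAN/control <<'EOF'
    Package: mytools
    Version: 1.0
    Architecture: amd64
    Maintainer: You <you@example.com>
    Description: batch-processing scripts and binaries
    EOF

    # build the .deb; on each target host: sudo dpkg -i mytools_1.0.deb
    dpkg-deb --build mytools_1.0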
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster and is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine: you create a model of the desired deployment, and Puppet figures out how to get the environment into that state.