Share settings between multiple machines with a source control system - version-control

I have multiple Linux machines and want to share my .vimrc and .hgrc between them, but there are small differences between the machines. Is there a smart method?
I'm using Mercurial.
mkdir settings
ln ~/.vimrc settings/vimrc
ln ~/.hgrc settings/hgrc
then use Mercurial to keep the settings under version control.
But there are small differences in the hgrc file on different machines,
so I always have to branch and merge.
I wonder, is there a better method?

What I typically do is have a .bashrc, etc. file that is common to all systems, and then source a local version called .bashrc.local or something like that, which holds the machine-specific settings and doesn't get saved in source control.
In .bashrc:
. ~/.bashrc.local
In .vimrc
:source ~/.vimrc.local
etc.
If you want to get fancy, you can store your local settings files in source control too, by using the hostname as a discriminator in the local file name. So instead of .vimrc.local you could use .vimrc.machine1.local, etc.
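A minimal sketch of both ideas combined in a ~/.bashrc (the file names are assumptions):
# source the generic local file first, then a per-host one so host settings win
for f in ~/.bashrc.local ~/.bashrc.$(hostname).local; do
    [ -r "$f" ] && . "$f"
done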

There is another (overkill?) technique that may complement the other answers.
I use Puppet, which is a tool for automatically configuring machines. I write my Puppet config (stored in version control), then deploy it on the target machines. It offers a "template" feature which works for any file.
For example, you write a .vimrc.erb file like this:
ENV_VAR=<%= varvalue %>
with "varvalue" depending on the target machine. With correct puppet declarations ("for machine xyz, put a .vimrc file in home generated from template .vimrc.erb with varvalue=xyz") you then deploy the config to the target machine.
It uses the powerful "embedded ruby" templating mechanism, so you can even write some ruby code to generate the values. You may want to use it directly without puppet : erb
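For illustration, such a template can be rendered without Puppet using the erb command that ships with Ruby. The plain erb CLI cannot bind local variables, so this sketch assumes the template reads the value from the environment instead:
# assumes .vimrc.erb contains: ENV_VAR=<%= ENV['varvalue'] %>
varvalue=xyz erb .vimrc.erb > ~/.vimrc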

Eclipse project.properties backslash paths considered harmful

I am working in a team that is developing Android software. Some team members use Windows, some use Macs, and I have been known to use Linux. Everyone uses Eclipse.
Eclipse writes a file called project.properties; here's an example. The important part is the last three lines, the android library reference paths.
# This file is automatically generated by Android Tools.
# Do not modify this file -- YOUR CHANGES WILL BE ERASED!
#
# This file must be checked in Version Control Systems.
#
# To customize properties used by the Ant build system edit
# "ant.properties", and override values to adapt the script to your
# project structure.
#
# To enable ProGuard to shrink and obfuscate your code, uncomment this (available properties: sdk.dir, user.home):
#proguard.config=${sdk.dir}/tools/proguard/proguard-android.txt:proguard-project.txt
# Project target.
target=android-17
android.library.reference.1=../private-code/lib/SomeLibrary
android.library.reference.2=../google-play-services_lib
android.library.reference.3=../FacebookSDK
The above is what the file looks like when Eclipse on Mac or Linux writes it. When Eclipse on Windows writes it, the library reference lines are written with backslashes.
Of course on Windows, backslashes are acceptable path separators. But on Mac and Linux such paths do not work. The thing is, on Windows, forward slashes work perfectly well. So, our policy now is always to commit the file with forward slashes, so that it will work for everyone.
But this is a pain for our Windows users, and it's a pain for the rest of us when the Windows users make a mistake, so I'm looking for a technical solution. I have two ideas:
Find a setting somewhere in Eclipse on Windows, telling it to use forward slashes when saving paths in files like project.properties. (Why the heck isn't that the default?!?)
We use Mercurial, so: install some sort of "hooks" that will solve the problem.
Install a commit hook on the Windows computers, so that the file is committed into the repository with the backslashes replaced by forward slashes.
Install a pull hook on the Mac and Linux computers; so if the file gets committed with backslashes, they get fixed up by the time the files are written.
The commit hook seems cleaner, so if both are available I'd take a commit hook over a pull hook.
I found a Mercurial extension that edits tabs to spaces, which is at least sort of similar to what I want. It's complex enough that I'm a bit leery of trying to modify it into what I need.
https://www.mercurial-scm.org/wiki/CheckFilesExtension
The other strategy is to add a hook that detects backslashes in the paths, and simply aborts the commit, forcing the Windows user to fix the file by hand before committing. That would be better than nothing.
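For illustration, a minimal pretxncommit hook along those lines might look like this (the hook name, script path, and grep pattern are assumptions, not a tested recipe).
In .hg/hgrc:
[hooks]
pretxncommit.slashcheck = ~/bin/check-slashes.sh
In ~/bin/check-slashes.sh:
#!/bin/sh
# Abort the pending commit if project.properties uses backslash paths.
if hg cat -r "$HG_NODE" project.properties 2>/dev/null \
        | grep -q 'android\.library\.reference\..*\\'; then
    echo "abort: project.properties contains backslash paths" >&2
    exit 1    # a non-zero exit from pretxncommit rolls back the commit
fi
exit 0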
I would keep both versions in the project (as project.properties.windows and project.properties.linux) and create a symbolic link pointing to the right file depending on the OS. Call this symbolic link project.properties and have it ignored by version control.
Obviously the disadvantage of this setup is that when Windows users update their project.properties file (which points to project.properties.windows), the Linux version must be updated manually, and vice versa. But that doesn't sound like a big deal; I presume you don't update this file very often.
To create the links:
Create a file make_link.sh to setup Linux environments, with the following command:
ln -s "$(readlink -m project.properties.linux)" "$(readlink -m .)/project.properties"
Create a file make_link.bat to setup Windows environments, with the following command:
mklink project.properties project.properties.windows
You can commit those scripts as well.
We faced a similar situation because the path of the local library varies between machines. After searching for a while, we found that the best practice with a centralized repository tool (Git, in our case) is to remove all Eclipse-dependent/specific settings files from the repository, and that works fine for us. This way, changes to Eclipse settings files do not affect the central repository or get committed.

Netbeans: Remote project w/source files over SSH?

Is it possible to set up a remote NetBeans C++ project where the source files are only accessible via SSH?
My project needs to build on a Linux box, but I'd like to develop it on a Windows machine.
Checking out the code via SVN to my Windows machine is not an option since there are a few files that differ only by case, and NTFS is not case sensitive (unfortunately, I can not change them).
I'm well aware that Windows can be kind-of forced to be case-aware, and that the ideal solution is to just rename those files to something sane.
However, I'm just trying to solve this using NetBeans. Since it's a remote project anyway, why bother keeping any files locally?
Thanks
Currently, no. In general, having program files whose names differ only by case is bad practice.
You can enable case sensitivity in Windows - you may need to have a Professional version or better.
For Windows XP: http://support.microsoft.com/kb/817921
For Windows 7: http://technet.microsoft.com/en-us/library/cc732389.aspx
See also: Windows Services for Unix
Another solution would be to set up VNC/RDP on the remote Unix system. The overall solution, though, should be to conform to a better file naming convention:
Programmer 1: "Hey man, take a look at noCamelCase.cs - I just rewrote it."
Programmer 2: "Um, nocamelcase.cs is blank."
There are two ways of doing remote builds with NetBeans. In the first, the project is stored locally: you create a regular project, and on the 2nd page of the wizard you specify the network directory with the source and the remote build host. I've used this from a Solaris client to a Linux server, but not from Windows, as we don't have the mounts exported over SMB. This approach uses ssh and some shared-library interposers to gather the build info.
The second way is to create a remote project. In this case the project is created on the remote host and data is copied on demand to the client. I've only done a few tests with this, as I preferred the first method; it had much better latency.
Lastly, you could either use VNC or install an X server on your Windows machine and do everything on the Linux machine.

Should I check in *.mo files?

Should I check in *.mo translation files into my version control system?
This is a general question. But in particular I'm working on Django projects with git repositories.
The general answer is:
if you do need those files to compile or to deploy (in short: to "work" with) your component (the set of files queried from your VCS), then yes, they should be stored in it (here: in Git).
This is the same for other kinds of files (like project files, for instance).
.mo files are particular: they are generated by the django-admin.py compilemessages utility.
This tool runs over all available .po files and creates .mo files, which are binary files optimized for use by gettext
Meaning:
you should be able to rebuild them every time you need them (guaranteeing, in effect, that they stay in sync with their .po counterparts)
Git does not handle binary files that well, and skipping them avoids storing a full copy of the file for every change
So the specific answer is not so clear-cut:
if your .po files are stable and will not evolve too often, you could definitely store the .mo files
you should absolutely store a big README file explaining how to generate the .mo files from the .po files.
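Such a README can be as short as naming the command (for a Django project, as above):
# regenerate the compiled catalogs after editing any .po file
django-admin.py compilemessages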
The general answer is to not store generated contents in version control.
You can include it in tarball, if it requires rare tools, or even have separate repository or disconnected branch with only those generated files (like 'html' and 'man' branches in git.git repository).
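If you follow that rule here, ignoring the compiled catalogs is a one-line entry in a Git project's .gitignore:
# keep compiled gettext catalogs out of the repository
*.mo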
For the question as asked, Jakub's answer is pretty neat.
But one could ask:
So where should I store such files? Should I generate them every time I deploy my code?
And for that... it depends. You could deploy it in a tarball (as Jakub suggested) or, even better, create a pip or system package (RPM for Fedora, DEB for Debian, etc.).

how could I share workspace between ubuntu and windows xp?

I am using Ubuntu 8.04 and Windows XP. I mounted the FAT32 disk which contains my Eclipse workspace in Ubuntu, but I find I cannot use the workspace; maybe I don't have the rights to use it.
The FAT32 disk I mounted has 755 permissions. I tried to use chmod to change it to 777 but failed. I tried to mount it in 777 mode, but I can't find anything about modes in the vfat options.
What should I do next? How can I share the workspace? Help me, thanks.
Instead of trying to share the raw workspace data between two different systems, I suggest doing it like typical big software development projects: use a version control system to store your code, and commit/update to and from that version control system instead of sharing files.
This may not be the answer you were originally interested in, but rest assured, you will notice many advantages of that version control system after some time, including:
Easily get back to the code version before today's "genius" changes, which didn't really work in the end
There is a backup of your project in case your workstation dies
You may even access your project from a completely different machine/location.
If your project is going to be open source, you can even use public services like Sourceforge.net.
I believe FAT32 doesn't support the same kind of permissions as the Linux filesystems you are familiar with. Once you have sorted out the mount options (the rw and umask options in /etc/fstab), I think you will have a better time.
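For example, an /etc/fstab entry along these lines mounts the whole partition world-readable and writable (the device name and mount point are assumptions):
# FAT32 has no per-file permissions; umask=000 exposes everything as mode 777
/dev/sda5   /media/shared   vfat   rw,user,umask=000   0   0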
However, the step after that is to have two different installations of Eclipse working on the same workspace.
I haven't had a lot of success with this (though I haven't tried your exact scenario), but I would be careful to:
keep the Eclipse versions in synch
only use relative paths, relative to the workspace. This is probably good practice anyway, but is worth repeating.
If all goes well, then you should be sharing everything, including preferences across both installations.
There are two refinements I can think of, which may be useful to reason about, if not actually do:
you could probably share most of the installation of Eclipse (the plugins and features directories, if not the config.ini and eclipse.ini files). If you can't put both executables in the same directory, consider the -install and -configuration runtime options (see the sketch after this list).
if you can't do any of these things, then you may need to work on two parallel workspaces. You can keep them in synch with tools such as rsync or even a distributed source control like Mercurial.
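For instance, each OS could keep its own configuration area while pointing at the shared workspace (the paths are assumptions; -data selects the workspace):
# Linux launch; on Windows the same flags apply to eclipse.exe
eclipse -configuration ~/eclipse-config-linux -data /media/shared/workspace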
I agree with bananeweizen.myopenid, and have the following tip to add:
When creating your build path entries, reference all outside resources (e.g., JAR files) using classpath variables. This will allow you to move the .classpath file between environments (or even check it into source control, if you're the sole developer) without running into problems with pathnames.
To reference a JAR file via a variable, go into the "Libraries" tab of the Build Path, remove any existing reference to the library, and click "Add Variable...". You will need to define common variables, such as M2_REPO or LOCAL_LIBS, and make sure that those definitions are available in all your environments.
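The resulting .classpath entry then records the variable rather than an absolute path; a sketch (the variable name and JAR path are assumptions):
<classpathentry kind="var" path="M2_REPO/junit/junit/4.11/junit-4.11.jar"/>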
Perhaps the problem you're having is with capitalization. Be sure to create the workspace in Ubuntu first. This should rule out any filename capitalization issues.

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
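That staging-plus-rsync plan might look something like this minimal sketch (the host names and directory layout are assumptions):
#!/bin/sh
# deploy.sh -- build per-host trees under stage/ and rsync them out
set -e
for host in batch01 batch02; do
    rm -rf "stage/$host"
    mkdir -p "stage/$host/bin" "stage/$host/etc"
    cp tools/*        "stage/$host/bin/"     # tools every machine gets
    cp "conf/$host"/* "stage/$host/etc/"     # per-host scripts and config
    rsync -az --delete "stage/$host/" "$host:/opt/batch/"
done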
There are several categories of tool here. Some people use a combination of tools from these categories. I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rollback to a previous version. So they'll check out your source into a new directory under releases/, and create a symbolic link named "current" pointing at that release directory if all goes well. If there's a problem, you can revert to the previous version by running a command that re-points "current" at the previous release directory.
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Checkout source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth (see the sketch after this list).
Checkout source locally, then rsync it over to the production box.
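A sketch of the tarball option (the host name and paths are assumptions; hg archive produces a clean export without repository metadata):
# package a clean export of the source and ship it to the production box
hg archive -t tgz app.tgz
scp app.tgz prod:/tmp/
ssh prod 'mkdir -p /opt/app/releases/next && tar -xzf /tmp/app.tgz -C /opt/app/releases/next'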
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
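As a sketch, the meta package itself can be nearly empty, little more than a control file with a Depends line (the package names are assumptions; the equivs tool can build such a package from a file like this):
Package: site-batch-meta
Version: 1.0
Architecture: all
Depends: site-batch-scripts, site-batch-bin, site-batch-config
Description: meta package pulling in all the batch-processing deployables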
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customer has to do is type apt-get upgrade to get the latest version.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster and is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine: you create a model of the desired deployment, and Puppet figures out how to get the environment to that state.