Should I check in *.mo files? - version-control

Should I check in *.mo translation files into my version control system?
This is a general question. But in particular I'm working on Django projects with git repositories.

The general answer is:
If you do need those files to compile or to deploy (in short: to "work" with) your component (the set of files retrieved from your VCS), then yes, they should be stored in it (here: in Git).
The same goes for other kinds of files (project files, for instance).
.mo files are a special case: they are produced by the django-admin.py compilemessages utility.
This tool runs over all available .po files and creates .mo files, which are binary files optimized for use by gettext.
Meaning:
you should be able to rebuild them whenever you need them (guaranteeing, in effect, that they stay in sync with their .po counterparts)
Git does not handle binary files well, so leaving them out avoids storing a full copy for every change
So the specific answer is not so clear-cut:
if your .po files are stable and will not change too often, you can certainly store the .mo files
you should absolutely store a prominent README explaining how to regenerate the .mo files from the .po files (a sketch follows below).
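For example, such a README could boil down to something like this (a sketch, assuming a standard Django layout); the same command plus a .gitignore entry covers the opposite decision of not committing the .mo files at all:
# regenerate the binary catalogs from the committed .po sources
django-admin.py compilemessages
# optional: keep the generated files out of Git entirely
echo '*.mo' >> .gitignore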

The general answer is not to store generated content in version control.
You can include it in a tarball if it requires rare tools to generate, or even keep a separate repository or a disconnected branch with only those generated files (like the 'html' and 'man' branches in the git.git repository).

For the question as asked, Jakub's answer is pretty neat.
But one could ask:
So where should I store such files? Should I generate them every time I deploy my code?
And for that... it depends. You could deploy them in a tarball (as Jakub suggested) or, even better, create a pip or system package (RPM for Fedora, DEB for Debian, etc.).
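For example, a deploy step (a sketch only; the paths are hypothetical) could simply regenerate the .mo files after pulling the code:
cd /srv/myproject                  # hypothetical checkout location
git pull
django-admin.py compilemessages    # rebuild *.mo from the committed *.po files
# ...then restart your application server as usual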

Related

How to install vim which is cloned from github.com?

I've cloned it, but I didn't find any .exe file, nor do I see it in the programs list in the Windows Control Panel. I'm a bit confused as to what cloning means. I know that there is a direct .exe download on the vim.org website. I'm certainly a beginner at all this. Please help. Thanks in advance.
Reading the "installation" section found in the README.md of the Vim repo, you can see the filenames containing the instructions that will help you with the installation, depending on your OS.
README_ami.txt Amiga
README_unix.txt Unix
README_dos.txt MS-DOS and MS-Windows
README_mac.txt Macintosh
README_haiku.txt Haiku
README_vms.txt VMS
So, for the full information, I suggest you read those files, or go to the Vim website, which also has good information about installation.
Anyway, below I will briefly summarize what those files and the Vim website say for the most common operating systems.
If you're on Unix:
git clone https://github.com/vim/vim.git
cd vim/src
make
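If you also want the freshly built Vim installed system-wide, a rough follow-up on a Debian/Ubuntu-style system could look like this (the package names are assumptions; other distributions have their own equivalents):
# prerequisites for building Vim from source (Debian/Ubuntu package names assumed)
sudo apt-get install build-essential libncurses-dev
# after make has finished in vim/src, install to /usr/local (the default prefix)
sudo make install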
If you're on a Mac:
The Macintosh binaries are not on the Vim ftp site. They are produced by a few Macintosh lovers. Often they lag behind a few versions.
MacVim has more of a Mac look and feel, is actively developed, and most people prefer this version. Most of MacVim was made by Björn Winckler.
MacVim can be downloaded here: link
Or if you prefer, here is the MacVim homepage.
If you're on Windows:
The next instructions were copied from here.
Option A: Using the self-installing .exe
Go to vim.org/download.php and click on self-installing executable (or just click here) and follow the prompts.
Watch out for:
When an existing installation is detected, you are offered the option to remove it first. The uninstall program is then started while the install program waits for it to complete. Sometimes the windows overlap each other, which can be confusing. Be sure to complete the uninstallation before continuing with the installation. Watch the taskbar for uninstall windows.
When selecting a directory to install Vim, use the same place where other
versions are located. This makes it easier to find your _vimrc file. For
example "C:\Program Files\vim" or "D:\vim". A name ending in "vim" is
preferred.
After selecting the directory in which to install Vim, clicking on "Next" will start the installation.
Option B: Using .zip files
Go to the directory where you want to put the Vim files. Examples:
cd C:\
cd D:\editors
If you already have a "vim" directory, go to the directory in which it is
located. Check the $VIM setting to see where it points to:
set VIM
For example, if you have
C:\vim\vim82
do
cd C:\
Binary and runtime Vim archives are normally unpacked in the same location,
on top of each other.
Unpack the zip archives. This will create a new directory "vim\vim82",
in which all the distributed Vim files are placed. Since the directory
name includes the version number, it is unlikely that you overwrite
existing files.
Examples:
pkunzip -d gvim82.zip
unzip vim82w32.zip
You need to unpack the runtime archive and at least one of the binary archives. When using more than one binary version, be careful not to overwrite one version with the other, since the names of the executables "vim.exe" and "gvim.exe" are the same.
After you unpacked the files, you can still move the whole directory tree
to another location. That is where they will stay, the install program
won't move or copy the runtime files.
Change to the new directory:
cd vim\vim82
Run the "install.exe" program. It will ask you a number of questions about
how you would like to have your Vim setup. Among these are:
You can tell it to write a "_vimrc" file with your preferences in the
parent directory.
It can also install an "Edit with Vim" entry in the Windows Explorer
popup menu.
You can have it create batch files, so that you can run Vim from the
console or in a shell. You can select one of the directories in your
$PATH. If you skip this, you can add Vim to the search path manually:
The simplest is to add a line to your autoexec.bat. Examples:
set path=%path%;C:\vim\vim82
set path=%path%;D:\editors\vim\vim82
Create entries for Vim on the desktop and in the Start menu.
That's it!
Vim is open-source software, and its source code, i.e. all the technical files that make up Vim, is (nowadays) hosted on GitHub.
Cloning that repository means you download all of those files to your computer (and, with Git as the underlying revision control system, you even get the full history of every change ever made). As Vim supports a large set of very diverse platforms (Windows, Linux, Mac, ...), the repository itself does not (and should not) contain pre-built binaries, nor the full installer that most users expect to run. So, unless you intend to actively contribute to Vim by submitting bug fixes or enhancements, you don't need to clone or do anything with GitHub.
If you do want to get technical, src/INSTALLpc.txt contains the instructions for building Vim on Windows. This includes choosing a compiler, installing it and the required dependencies, configuring the build, building, and finally copying the files to a permanent location on your PC, either manually or by building and then running an installer.
For plain passive consumption of Vim (which is rewarding in itself, but may even lead you to eventually also programming it), the Downloading Vim page on vim.org has all the information that you need, with links to the most popular installers right at the top.
A word on versions
For a casual user, using the latest stable version is recommended; this is 8.2 right now, and gvim82.exe is the corresponding installer for Windows. This offers the best compromise between stability and the latest features. In the case of Vim, expect a new release roughly every year.
You'll also find development builds (something like 8.2.0740); these usually function as well and have the very latest features under development, but often are less stable. I would use these only if you really need a leading-edge feature, or want to report a bug. You should then probably update very frequently, and from there it's only a small step to actually cloning the repository and building everything on your own!

Eclipse project.properties backslash paths considered harmful

I am working in a team that is developing Android software. Some team members use Windows, some use Macs, and I have been known to use Linux. Everyone uses Eclipse.
Eclipse writes a file called project.properties; here's an example. The important part is the last three lines, the android library reference paths.
# This file is automatically generated by Android Tools.
# Do not modify this file -- YOUR CHANGES WILL BE ERASED!
#
# This file must be checked in Version Control Systems.
#
# To customize properties used by the Ant build system edit
# "ant.properties", and override values to adapt the script to your
# project structure.
#
# To enable ProGuard to shrink and obfuscate your code, uncomment this (available properties: sdk.dir, user.home):
#proguard.config=${sdk.dir}/tools/proguard/proguard-android.txt:proguard-project.txt
# Project target.
target=android-17
android.library.reference.1=../private-code/lib/SomeLibrary
android.library.reference.2=../google-play-services_lib
android.library.reference.3=../FacebookSDK
The above is what the file looks like when Eclipse on Mac or Linux writes it. When Eclipse on Windows writes it, the library reference lines are written with backslashes.
Of course on Windows, backslashes are acceptable path separators. But on Mac and Linux such paths do not work. The thing is, on Windows, forward slashes work perfectly well. So, our policy now is always to commit the file with forward slashes, so that it will work for everyone.
But this is a pain for our Windows users, and it's a pain for the rest of us when the Windows users make a mistake, so I'm looking for a technical solution. I have two ideas:
Find a setting somewhere in Eclipse on Windows, telling it to use forward slashes when saving paths in files like project.properties. (Why the heck isn't that the default?!?)
We use Mercurial, so: install some sort of "hooks" that will solve the problem.
Install a commit hook on the Windows computers, so that the file is committed into the repository with the backslashes replaced by forward slashes.
Install a pull hook on the Mac and Linux computers; so if the file gets committed with backslashes, they get fixed up by the time the files are written.
The commit hook seems cleaner, so if both are available I'd take a commit hook over a pull hook.
I found a Mercurial extension that converts tabs to spaces, which is at least somewhat similar to what I want. It's complex enough that I'm a bit leery of trying to modify it into what I need.
https://www.mercurial-scm.org/wiki/CheckFilesExtension
The other strategy is to add a hook that detects backslashes in the paths, and simply aborts the commit, forcing the Windows user to fix the file by hand before committing. That would be better than nothing.
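As a rough illustration of that last idea, a hook along these lines could go into each Windows clone's .hg/hgrc (a sketch only: the hook name and script name are arbitrary, and it assumes a Bourne shell and grep are available on the Windows machines, e.g. via Cygwin or MSYS; it checks the working-copy file rather than the pending changeset, which is usually good enough here):
[hooks]
# run a small script that aborts the commit when project.properties contains backslashes
precommit.checkslashes = sh check_slashes.sh

# contents of check_slashes.sh, kept in the repository root
#!/bin/sh
# exit non-zero (aborting the commit) if any backslash is present
if grep -q '\\' project.properties; then
    echo "project.properties contains backslashes; please use forward slashes" >&2
    exit 1
fi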
I would keep both versions in the project (as project.properties.windows and project.properties.linux) and create a symbolic link pointing to the right file depending on the OS. Call this symbolic link project.properties and let it be ignored by the version control.
Obviously the disadvantage of this setup is that when Windows users update their project.properties file (which points to project.properties.windows), the Linux version must be updated manually, and vice versa, but that doesn't sound like a big deal; I presume you don't update this file very often.
- To create the links -
Create a file make_link.sh to set up Linux environments, with the following command:
ln -s "$(readlink -m project.properties.linux)" "$(readlink -m .)/project.properties"
Create a file make_link.bat to set up Windows environments, with the following command:
mklink project.properties project.properties.windows
You can commit those scripts as well.
We faced a similar situation because the path of the local library varies. After searching for a while, we found that the best practice with centralized repository tools (Git for us) is to remove all Eclipse-specific settings files from the repository, and that works fine for us. This way, changes to the Eclipse settings files do not affect the central repository or get committed.

Internal CPAN - what module

I want to set up an in-house CPAN for distributing our internal code.
So I was looking at CPAN::Mini, as recommended here. But it looks like there are other options, such as CPAN::Site, CPAN::Dark, Dist::Zilla ...
I'm a little bit overwhelmed by all these options. What do people mostly use/recommend?
What I need is a way to push internal modules to a repository that can be accessed from several machines.
The quick answer is that you want to use CPAN::Mini to create a local mirror of all that is current on the CPAN, and then CPAN::Mini::Inject to add your own distributions to it.
The long answer is that it helps to understand how a CPAN mirror is constructed. Broadly speaking, it is simply a directory that contains two sub-directories.
The 'modules' directory in turn contains two files. The first is 03modlist.data.gz, whose contents are ignored by modern CPAN clients, but there is legacy code that assumes this file exists, so just copy it from an existing mirror. The other is 02packages.details.txt.gz, which I shall describe later.
The 'authors' directory contains a file '01mailrc.txt.gz', another relic of the past whose contents can be ignored, so just copy it from another mirror; it also contains the 'id' directory. This in turn contains sub-directories and distributions, whose names follow a pattern. For example, my PAUSE id is DCANTRELL, and one of my distributions is XML-Tiny-2.06.tar.gz, so that file lives at .../authors/id/D/DC/DCANTRELL/XML-Tiny-2.06.tar.gz.
The 02packages.details.txt.gz file is the index that maps module names to distributions, and this must be up to date for your mirror to work properly. It consists of a few header lines, which must be present and correct, followed by a blank line, followed by one line for each module. Those lines are three fields separated by spaces:
module name
module version
distribution filename
e.g.
XML::Tiny 2.06 D/DC/DCANTRELL/XML-Tiny-2.06.tar.gz
(you may also see .tgz, .zip, and a couple of others)
A distribution may appear on several lines, once for each module it contains, e.g.
XML::Tiny::DOM 1.1 D/DC/DCANTRELL/XML-Tiny-DOM-1.1.tar.gz
XML::Tiny::DOM::Element 1.1 D/DC/DCANTRELL/XML-Tiny-DOM-1.1.tar.gz
In a normal CPAN mirror, there may be several versions of a distribution, and several versions of a module - for example, the current version and a few older ones, or the current stable one and a dev one. The index file contains the most recent stable version. You can tell dev versions of distributions because they have an underscore in their version, or contain the string '-TRIAL'.
So, knowing all that, you can construct a CPAN-a-like that contains only your code. But using CPAN::Mini and CPAN::Mini::Inject to add your stuff to a "real" CPAN is less work.
Once you've created your CPAN-a-like, you can either expose it on HTTP and access it using any client as normal, or you could just have it in the filesystem and configure the CPAN client to access it using a file:/// URL.
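In practice the short-cut route looks roughly like this, using the minicpan and mcpani command-line tools that ship with CPAN::Mini and CPAN::Mini::Inject (the paths, author id and module name below are assumptions, and mcpani additionally needs a small config file pointing at the mirror; see its documentation):
# build/refresh a local mirror of the real CPAN
minicpan -r http://www.cpan.org/ -l /var/cpan/minicpan
# register an internal distribution and inject it into the mirror
mcpani --add --module Our::Internal::Module --authorid INHOUSE \
       --modversion 0.01 --file Our-Internal-Module-0.01.tar.gz
mcpani --inject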
You might also consider Pinto. Pinto allows you to curate your own stable CPAN repository, which can contain any number of both public and private distributions. Pinto also helps you to manage change as your dependencies evolve over time.
DrHyde gave a very nice answer to the question. But if you don't want to maintain a CPAN mirror, you can use MyCPAN::App::DPAN together with MyCPAN::Indexer.
Caveat: both distributions are under development. Not all combinations will work. What I use is the latest version of MyCPAN::App::DPAN on GitHub (1.28_11) and MyCPAN::Indexer version 1.28_10 (later versions don't work with MyCPAN::App::DPAN).
MyCPAN::App::DPAN will create a CPAN-like directory structure on your local disk from the distributions you feed it. You will need to create a config file for it (say, .dpanrc):
# contents of .dpanrc
indexer_id Edward Baudrez <my.email.address@example.org>
dpan_dir /home/ebaudrez/rsync.net/dpan
merge_dirs /home/ebaudrez/rsync.net/dpan/dists
report_dir /home/ebaudrez/rsync.net/dpan/indexer_reports
Put your distribution tarballs in the directory merge_dirs (I think there's no reason that directory should reside under dpan_dir, but I'm too lazy to figure it out right now). Then call dpan:
dpan -f $HOME/.dpanrc
dpan will create a CPAN-like structure in dpan_dir (containing, in particular, authors and modules). This directory can then be used with cpanm (for instance):
cpanm --mirror $HOME/rsync.net/dpan --mirror http://search.cpan.org/CPAN
Note that I use the real CPAN as a fallback, because the DarkPAN is by definition incomplete. If you also happen to have a mini CPAN mirror, you can use it here as well:
cpanm --mirror $HOME/rsync.net/dpan --mirror $HOME/mirrors/minicpan --mirror-only
Do note that, for this scheme to work, you will need to create distribution tarballs from your source code. I like and use Dist::Zilla, but note that you can also generate tarballs from Makefile.PL, so you definitely don't need to use Dist::Zilla. But it takes care of a lot of the details.
Creating a real distribution from your source code may seem like a lot of work, but Dist::Zilla helps lift the burden, and the transition to a real CPAN module, some day in the future ;-), is also simplified a lot when you already have a distribution.
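For example (a sketch with a made-up distribution name), a Dist::Zilla-managed distribution can be built and fed to dpan like this:
# build a release tarball from the dist.ini-managed source tree
dzil build                                    # produces e.g. Our-Module-0.01.tar.gz
cp Our-Module-0.01.tar.gz /home/ebaudrez/rsync.net/dpan/dists/
dpan -f $HOME/.dpanrc                         # re-index the DarkPAN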

How do I use rpm to update/replace existing files?

I have several applications that I wish to deploy using rpm. Some of the files in my application deployments override files from other deployed packages. Simply including the new files in the deployment package will cause rpm conflicts.
I am looking for the proper way to use rpm to update/replace already installed files.
I have already come up with a few solutions but nothing seems quite right.
Maintain custom versions of the rpms containing the original files.
This seems like a large amount of work for a relatively small reward even though it feels less like a hack than some of the other possible solutions.
Include the files in the rpm with another name and copy them over in the post section.
This would work but will mean littering the system with multiple copies of the files. Also it means additional maintenance in the rpm build spec for each file.
Use wget in the post section to replace the original files from some known server.
This is similar to the copy technique but the files wouldn't even live in the rpm. This might act like a nice central configuration authority though.
Deploy the files as new files, then use symlinks to override the originals.
This is also similar to the copy technique but with less clutter. The problem here is that some files don't behave well as symlinks.
To the best of my knowledge, RPM is not designed to permit updating / replacing existing files, so anything that you do is going to be a hack.
Of the options you list, I'd choose #1 as the least bad hack if the target systems are systems that I admin (as you say, it's more work but is the cleanest solution) and a combination of #2 and #4 (symlinks where possible, copies where not) if I'm creating the RPMs for others' systems (to avoid having to distribute a bunch of RPMs, but I'd make it very clear in the docs what I'm doing).
You haven't described which files need to be updated or replaced and how they need to be updated. Depending on the answers to those questions, you may have a couple of other options:
Many programs are designed to use a single default configuration file and also to grab configuration files from a .d subdirectory. For example, Apache uses /etc/httpd/conf/httpd.conf and /etc/httpd/conf.d/*.conf, so your RPMs could drop files under /etc/httpd/conf.d instead of modifying /etc/httpd/conf/httpd.conf. And if the files that you need to modify are config files that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability; this wouldn't help you immediately but would make future releases easier.
For command-line utilities like sendmail and lpr that can be provided by multiple packages, the alternatives system (see man alternatives) permits more than one RPM that provides these utilities to be installed side by side. Again, if the files that you need to modify are command-line utilities that don't follow this pattern but could be made to, you can suggest to the package maintainers that they add this capability.
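For example, registering and selecting a hypothetical tool through the alternatives system looks roughly like this (all names and paths are made up):
# register /opt/mypkg/bin/mytool as a candidate for /usr/bin/mytool with priority 50
alternatives --install /usr/bin/mytool mytool /opt/mypkg/bin/mytool 50
# explicitly select that candidate
alternatives --set mytool /opt/mypkg/bin/mytool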
Config file changes on systems that you administer are better managed through a tool like Cfengine or Puppet rather than through custom RPMs. I think that Red Hat favors Puppet.
If I were creating the RPMs for systems I don't administer, I'd consider using a third-party tool like Bitrock and dumping all of my stuff under /opt just so I wouldn't have to stomp on files installed by other admins' RPMs.
Edit (2019): Nowadays, Software Collections offers a useful alternative. You can create packages that install somewhere under /opt, and the Software Collections tools offer a standardized way for users to opt in to using those instead of whatever's normally installed under /usr. Red Hat uses this to distribute newer versions of tools for their otherwise stable and long-lived (i.e., older) Red Hat Enterprise Linux distributions.
You can also execute rpm -U --replacefiles --replacepkgs ..., which will give you what you want.
See here for more info on RPM %files directives:
http://www.rpm.org/max-rpm/s1-rpm-inside-files-list-directives.html
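For example (hypothetical package name):
rpm -Uvh --replacefiles --replacepkgs mypackage-1.0-1.noarch.rpm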
You can use the arguments from the %post and %pre sections in the RPM scriptlets to determine if you are installing, upgrading or removing packages.
If $1 is 0, then we're removing the old package; zero packages of this name will remain installed.
If $1 is 1, then we're installing fresh; a total of one package will be installed.
If $1 is 2 or more, then we're upgrading this package, and $1 represents the number of packages already installed.
These sections help with managing files among the versions.
Keep track of what you're doing between versions and consider what one might do if they were to skip a version or two.
Have consideration for these things and you should be good to go!
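A minimal sketch of how those arguments are typically tested inside a spec file (the actions here are placeholders):
%post
if [ "$1" -eq 1 ] ; then
    : # fresh install: put the replacement files in place
elif [ "$1" -ge 2 ] ; then
    : # upgrade: clean up anything left behind by the previous version
fi

%preun
if [ "$1" -eq 0 ] ; then
    : # full removal (not an upgrade): restore the original files
fi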

How could I share a workspace between Ubuntu and Windows XP?

I am using Ubuntu 8.04 and Windows XP. I mount the FAT32 disk that contains my Eclipse workspace in Ubuntu, but I find I cannot use the workspace; maybe I don't have the rights to use it.
The FAT32 disk I mounted has 755 permissions. I tried to use chmod to change it to 777, but that failed. I tried to mount it in 777 mode, but I couldn't find anything about a mode in the vfat options.
What should I do next? How can I share the workspace? Thanks.
Instead of trying to share the raw workspace data between two different systems, I suggest doing what typical large software development projects do: use a version control system to store your code, and commit/update to and from that version control system instead of sharing files.
This may not be the answer you were originally interested in, but rest assured, you will notice many advantages of that version control system after some time, including:
Easily get back to the code version before today's "genius" changes, which didn't really work out in the end
There is a backup of your project in case your workstation dies
You may even access your project from a completely different machine/location.
If your project is going to be open source, you can even use public services like Sourceforge.net.
I believe that FAT32 doesn't support the same kind of permissions as the Linux filesystems you are familiar with. Once you have sorted out the rw and permission-related mount options in /etc/fstab, I think you will have a better time.
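For the record, vfat does let you control ownership and permissions at mount time through the uid, gid and umask options; a sketch of a manual mount (the device name and mount point are assumptions):
# give your own user full access to the FAT32 partition (device name is hypothetical)
sudo mount -t vfat -o rw,uid=$(id -u),gid=$(id -g),umask=022 /dev/sda5 /mnt/shared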
However, the step after that is to have two different installations of Eclipse working on the same workspace.
I haven't had a lot of success with this (though I haven't tried your exact scenario), but I would be careful to:
keep the Eclipse versions in sync
only use relative paths, relative to the workspace. This is probably good practice anyway, but is worth repeating.
If all goes well, then you should be sharing everything, including preferences across both installations.
There are two refinements I can think of, which may be useful to reason about, if not actually do:
you could probably share most of the Eclipse installation (the plugins and features directories, if not the config.ini and eclipse.ini files). If you can't put both executables in the same directory, consider the -install and -configuration runtime options.
if you can't do any of these things, then you may need to work in two parallel workspaces. You can keep them in sync with tools such as rsync, or even a distributed source control system like Mercurial.
I agree with bananeweizen.myopenid, and have the following tip to add:
When creating your build path entries, reference all outside resources (eg, jarfiles) using classpath variables. This will allow you to move the .classpath file between environments (or even check it into source control, if you're the sole developer) without running into problems with pathnames.
To reference a JAR file via a variable, go into the "Libraries" tab of the Build Path, remove any existing reference to the library, and click "Add Variable...". You will need to define common variables, such as M2_REPO or LOCAL_LIBS, and you will need to make sure that those definitions are available in all your environments.
Perhaps the problem you're having is with capitalization. Be sure to create the workspace in Ubuntu first. This should rule out any filename capitalization issues.