Merging two files in Windows - merge

I was playing GTA San Andreas and I want to install some mods which require replacing
root-directory\data\script\main.scm and
root-directory\data\script\script.img
but I already have a mod which replaces these files, so I want to combine the files from both mods.

Related

How to create many sub packages automatically in rpm spec

I want to create a very large number of sub-packages (over 300).
As I understand it, to build a sub-package the files first have to be installed (%install).
So I installed all of the files into specific directories.
Now I want to package the files by directory name.
In summary:
Is it possible to repeat rpm macros (e.g. %package, %description, %files)?
If it is possible, what should I use to repeat them (a for loop?)?
As far as I know, to use the %files macro the actual files must already be installed, so where should I put that code?
Natively, no, there isn't. You'll have to use an external templating language like jinja2 to create the spec file on the fly.
That being said, having 300 subpackages is going to be an absolute nightmare for both your CM folks and for your users. You might want to ask another question explaining the use case to see if there are better alternatives.
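To make the templating suggestion concrete, here is a minimal jinja2 sketch; the package names, summaries and the /opt/myapp install paths are invented for illustration and would need to match your actual %install layout:
from jinja2 import Template

SPEC_TEMPLATE = Template("""
{% for pkg in subpackages %}
%package {{ pkg }}
Summary: Files installed under /opt/myapp/{{ pkg }}

%description {{ pkg }}
Everything installed under /opt/myapp/{{ pkg }}.

%files {{ pkg }}
/opt/myapp/{{ pkg }}
{% endfor %}
""")

# Roughly 300 directory names, collected however suits you.
subpackages = ["dir%03d" % n for n in range(1, 301)]

# Write the generated stanzas to a file that can be pasted or included into the spec.
with open("subpackages.spec.inc", "w") as out:
    out.write(SPEC_TEMPLATE.render(subpackages=subpackages))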

Why are there several modules in Perl with the same name but different file size?

I am trying to install GeneMark-ES, but when I try to run it as specified in the documentation, several Perl modules are missing. I have tried to point to all the necessary files by defining the PERL5LIB variable.
However, I have come across several modules that are installed in more than one directory, and each copy has a different file size.
Why is that happening? Which file should I use?
Here is a GUI search for files named Simple.pm
Those Simple.pm files are module files, and each one is for a different "distribution".
For example, the one highlighted in the image is for Locale::Maketext::Simple. Simple.pm is the actual module file itself. The first one in the image is for Bio::Location::Simple etc. The Bio/Location part of the path signifies the actual name of the distribution as you can see.
The installation instructions should outline exactly which distributions it requires. You don't just use the Simple.pm file directly.
You need to read Perl Modules in the documentation to understand how Perl uses module names.
After absorbing that, you will see that there are in reality only three different library locations, which together contain ten module files ending in Simple.pm:
/home/pollo/perl5/lib/perl5
/usr/share/perl5/core_perl
/usr/share/perl5/vendor_perl
Nowhere is there anything that looks like GeneMark-ES, and it seems unlikely that it would end with ::Simple even if it were there.
Please open a new question and describe your experience trying to install the module that you require, instead of offering misleading and irrelevant facts.

iOS Localization - Updating Localizable.strings with just new strings

I have searched Google and Stack Overflow and still have no clear answer on an easy and automated way of doing this, but here is the scenario:
I have an app with 1000 strings localized into en, fr, de, es, it.
I build a new feature that makes 10 distinctly new NSLocalizedString() keys.
I just want those 10 new strings appended onto the ends of the files:
en.lproj/Localizable.strings
fr.lproj/Localizable.strings
es.lproj/Localizable.strings
de.lproj/Localizable.strings
it.lproj/Localizable.strings
genstrings will retrieve all 1010 distinct strings. This is a pain since I'll need to "needle in a haystack" find those 10 strings every time I do an update.
UPDATE 19-SEP-2014 -- Xcode 6: Apple has finally released support for XLIFF export and import of your .strings files.
What's new in Xcode 6? Localisation
Linguan (v1.1.3): whilst it is a lovely tool most of the time, it is starting to be a tool in the other sense. It merges the changes, but some strings aren't matched correctly when it merges, so every time it does a Scan Sources it creates 100 new duplicate keys as well as the 10 strings I am after, which makes more work.
FileMerge: as suggested below, try doing a diff between the old and new versions of the genstrings output files. The genstrings output has the strings sorted alphabetically, so 10 strings scattered throughout 1000 means there are 200 differences to review. It keeps matching the /*...*/ against the "..." = "..." lines and saying that the entry has been updated, when it hasn't been updated, just shifted to a new location in the file. More and more it is looking like I am going to have to write a custom tool.
MacHG + FileMerge: on a side note, for some strange reason this combination doesn't like doing diffs out of the repository against the working copy of Localizable.strings. Both the left and right panes appear empty.
UPDATE: it turns out that some changesets being saved as UTF-16 and some as UTF-8 were preventing it from doing a proper diff.
Bash Script + FileMerge: I have written the following script to help maintain my English reference file each time I add new NSLocalizedString entries:
#LOCALISATION UPDATE SCRIPT
#
#This will create a temporary copy of the current 'en' reference file then generate the
#latest reference file using the 'genstrings' tool. Finally forcing FileMerge to launch
#and diff the changes.
#
#Last Updated: 2014-JAN-06
#Author(s): Josh Wilson
clear
#assuming this script is run from $SRCROOT
#Backup Existing 'en' reference
cp "en.lproj/Localizable.strings" "en.lproj/Localizable-src.strings"
#Scan source files for 'NSLocalizedString' macros
genstrings -q -u -o en.lproj Classes/*.{m,mm}
genstrings -q -u -a -o en.lproj Classes/iPad/*.{m,mm}
genstrings -q -u -a -o en.lproj Classes/iPhone/*.{m,mm}
#Force FileMerge to launch and diff the update (NOTE: piping to cat forces GUI to open)
opendiff "en.lproj/Localizable-src.strings" "en.lproj/Localizable.strings" | cat
#Cleanup up temporary file
rm "en.lproj/Localizable-src.strings"
But this only updates the EN file, and I am lacking a way of having the other language files updated with the new keys. It has been good for instances where I don't have an English word as the key and genstrings would otherwise clobber my
"welcome_message" = "Welcome!" entry with "welcome_message" = "welcome_message".
POEditor (http://poeditor.com/): this is an online tool, subscription-based after 1000 strings. It seems to work well, but it would be good if there were a non-subscription-based tool.
Traducto Pro: seems to do an alright job of integrating with Xcode, extracting the strings and merging things together. But it is impossible to get anything back out of it until it is fully translated, so you are coerced into using their translation services.
Surely this functionality has been implemented before. How does Apple keep their Apps localised?
Script junkies, I call upon thee! iOS development has been going on for some time now and localisation is kind of common, surely there is a mature solution to this by now?
Python Script update_strings.py: Stack Overflow finally recommended a related question, and the Python script in this answer to Best practice using NSLocalizedString looks promising...
I tested it, and in its current form (31-MAY-2013) it doesn't handle multiline comments if you have duplicate comment entries (it expects single-line comments).
It might just need the regexes tweaked a bit.
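For anyone wanting to roll their own, here is a rough sketch of the "append only the new keys" idea; it is not the linked update_strings.py script. It assumes genstrings has just written a fresh reference file into a hypothetical tmp/ directory, that every file is UTF-16, and that untranslated keys are an acceptable placeholder for the translators:
import io, re

KEY_RE = re.compile(r'^"(.*?)"\s*=', re.MULTILINE)

def read_strings(path):
    # genstrings writes UTF-16; io.open handles the BOM for us.
    with io.open(path, encoding="utf-16") as f:
        return f.read()

# Full genstrings output for the current sources (all 1010 keys).
fresh_keys = set(KEY_RE.findall(read_strings("tmp/Localizable.strings")))

for lang in ("en", "fr", "es", "de", "it"):
    path = "%s.lproj/Localizable.strings" % lang
    content = read_strings(path)
    missing = fresh_keys - set(KEY_RE.findall(content))
    for key in sorted(missing):
        # Append the key untranslated so it is easy to spot later.
        content += u'\n"%s" = "%s";\n' % (key, key)
    with io.open(path, "w", encoding="utf-16") as f:
        f.write(content)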
Check out BartyCrouch; it solves exactly this problem. It is also open source, actively maintained, and can easily be installed and integrated into your project.
Install BartyCrouch via Homebrew:
brew install bartycrouch
Alternatively, install it via Mint:
mint install Flinesoft/BartyCrouch
Incrementally update your Localizable.strings files:
$ bartycrouch update
This will do exactly what you were looking for.
In order to keep your Storyboards/XIBs Strings files updated over time I highly recommend adding a build script (instructions on how to add a build script here):
if which bartycrouch > /dev/null; then
bartycrouch update -x
bartycrouch lint -x
else
echo "warning: BartyCrouch not installed, download it from https://github.com/Flinesoft/BartyCrouch"
fi
In addition to incrementally updating your Storyboards/XIBs Strings files this will also make sure your Localizable.strings files stay updated with newly added keys in code using NSLocalizedString and show warnings for duplicate keys or empty values.
Make sure to check out BartyCrouch on GitHub for additional information.
If you have the genstrings output from the previous version, a simple diff between the new and old files should do the trick.
EDIT: it's best to use vimdiff to deal with UTF-16 files.
You can check out this Xcode plugin I built for OneSky; it aims to improve the localization workflow for iOS/Mac OS X developers.
The string generation feature of the plugin runs genstrings and ibtool --export-strings-file against the selected source/IB files; new files will be added to the project and target automatically, and new strings will be merged into existing files together with their comments.
It will only generate/update strings for the base language, but you can make use of other features of the plugin to automate translation export and import with the OneSky platform, which is free for crowdsource projects.
You may want to check out my solution here: SwiftyLocalization
With a few setup steps, you will have very flexible localization in a Google Spreadsheet (comments, custom colors, highlighting, fonts, multiple sheets, and more).
In short, steps are: Google Spreadsheet --> CSV files --> Localizable.strings
Moreover, it also generates Localizables.swift, a struct that acts as an interface for key retrieval and decoding (you have to manually specify how to decode a String from a key, though).
Why is this great?
You no longer need to have keys as plain strings all over the place.
Wrong keys are detected at compile time.
Xcode can do autocomplete, so you can do something like this:
// It's defined as computed static var, so it's up-to-date every time you call.
// You can also have your custom retrieval method there.
button.setTitle(Localizables.login.button_title_login, forState: .Normal)
The project uses a Google Apps Script to convert Sheets --> CSV, and a Python script to convert the CSV files --> Localizable.strings.
You can have a quick look at this example sheet to know what's possible.
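If you only need the last step of that pipeline, a toy sketch of the CSV -> Localizable.strings conversion might look like the following; it is not SwiftyLocalization's actual script, and the file name and the "key", "en", "fr" column names are invented for illustration:
import csv

# One row per key, one column per language.
with open("Localizables.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for lang in ("en", "fr"):
    with open("%s.lproj/Localizable.strings" % lang, "w", encoding="utf-8") as out:
        for row in rows:
            out.write('"%s" = "%s";\n' % (row["key"], row[lang]))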

Multiple repositories in one directory (same level) - is it possible?

My original problem is that I have a directory where I write various scripts. Each of them is independent of the others, and usually one file long. I want to have some versioning applied to them, but I have the following problems/requirements:
I don't want to have to store each small script in a separate directory!
I don't want to store them all in one repository OTOH, as they are completely unrelated, and:
some of them may later grow to more files (and then they will need a separate dir),
I sometimes want to copy one of them to a different machine (and I want to clone the whole repo).
I want to benefit from (distributed) version control mechanisms -- at least:
"infinite" number of revisions,
ability to clone repositories on different computers,
ability to do "atomic" multi-file commits.
Is it possible?
I'd prefer to do it in some mainstream distributed VCS (a solution using Mercurial would be preferable, but I'm not fixed).
EDIT: the solution has to be free (at least "as in beer") and cross-platform (at least Win32 & Linux).
Related, but didn't help:
"two-git-repositories-in-one-directory" -- didn't find it helpful: the accepted answer looks like point 2. (above) to me; the current "community voted" answer sounds like 1.
"Version control of single files using Subversion" -- also too much of 2. or 1.
These requirements seem pretty "special" to me, so here is a solution on par with them ^^
You may use two completely different VCSs in the same directory. Even two "instances" of SVN might work: SVN stores its metadata in a directory called .SVN and has (for historical reasons regarding ASP) the option to use _SVN instead. The directory listing would look like this:
.SVN // Metadata for rep1
_SVN // Metadata for rep2
script1 // in rep1
script2 // in rep2
...
Of course, you will need to hide or ignore the foreign scripts or folders from each VCS...
Added:
This only accounts for two scripts in one folder and needs one additional VCS per script beyond that, so if you even consider this route and need more repositories, rename each metadata directory and use a script to rename it back before updating:
MOVE .SVN-script1 .SVN
svn update
MOVE .SVN .SVN-script1
Why don't you simply create a separate branch (in the git sense) for each (group of) script(s)?
You can develop them individually as you please. Switching to a branch will show you only the scripts from that branch. It's sort of like directories, but managed by the version control system. If you later want to pluck a branch out into another repository, you can do that, and if you want to combine two scripts into a single project, you can do that as well. The copying-them-to-a-different-machine point might be a problem, but you can clone just the branch you're interested in and it should work for you.
Another proposition for my own consideration is "Using Convert to Decompose Your Repository" article on hgtip.com. It fails as a "standalone" solution, but could be helpful as an addition to the "mv .hgN .hg / MOVE .SVN-script1 .SVN" idea.
You can create multiple hidden repository directories and symlink .hg to whichever one you want to be active. So if you have two repositories, create directories for them:
.hg_production
.hg_staging
Then to activate either of them just do:
ln -sf .hg_production .hg
You could easily create a bash command to do this. So instead you could write something like activate-repo production, which would run ln -sf .hg_production .hg.
Note: Mac doesn't seem to support ln -sf so instead you'll need to do:
rm .hg; ln -s .hg_production .hg
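If you prefer not to remember the ln incantations, here is a tiny sketch of such an "activate-repo" helper written in Python instead of the bash command suggested above; the repository names are placeholders:
import os, sys

name = sys.argv[1]                  # e.g. "production" or "staging"
if os.path.islink(".hg"):
    os.remove(".hg")                # same effect as the rm in the Mac note above
os.symlink(".hg_%s" % name, ".hg")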
I can only think of these two lightweight versioning systems:
1) Using Dropbox with the Pack-Rat upgrade to keep a full history of versions for each file, automatically backed up and with the possibility of sharing with multiple Dropbox users: https://www.dropbox.com/help/113
If you have multiple machines managed by the same user (you), the syncing would be automatic. Also, if the machines are on the same LAN, Dropbox is smart enough to sync the files over the local network, so big files shouldn't be a worry.
2) Using a 'Versions' aware text editor for Mac OS X Lion. I'd expect TextMate, Coda and other popular Mac code editors to be updated to support this feature when Lion is released.
How about a compromise between 1 and 2? Instead of a folder+repo for each script, can you bundle them into loosely related groups, such as "database", "backup", etc. and then make one folder+repo for each group? Then if you clone a repo on another machine, you're only pulling down a smaller number of unrelated files. (Is the bandwidth/drivespace really a concern?) To me, this sounds WAAAY simpler than all of the other suggestions so far.
(Technically this approach meets your requirements because (1) each script isn't in its own directory, (2) not all scripts are in the same repository, and (3) you can easily do this with any popular DVCS. :D)
UPDATE (2016): Apparently, a guy named Cosmin Apreutesei created a tool named multigit, which seems to implement what I wished for in this question! If you ever read it, thanks a lot Cosmin! I've started using your tool this year and find it awesome.
I'm starting to think of some kind of overlay over Mercurial/git/... which would keep a couple of "disabled" repository meta-directories, let's say:
.hg1/
.hg2/
.hg3/
etc., and then hg commit FILENAME would find the particular .hgN that is linked to FILENAME and temporarily do:
mv .hgN .hg
hg commit FILENAME
mv .hg .hgN
The main disadvantage is that it would require me to spend some time writing the tool. Or does anybody know of some ready-made one like this? If you do, please post as a full-featured answer (not a comment), I'm more than willing to accept it.
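For what it's worth, the core of such a wrapper could be quite small. Here is a rough Python sketch of the idea, not a ready-made tool; the file-to-metadir mapping is hypothetical and would have to be maintained by hand:
import os, subprocess, sys

# Hypothetical mapping from each script to its disabled metadata directory.
REPO_FOR_FILE = {"script1.sh": ".hg1", "script2.py": ".hg2"}

def commit(filename, message):
    meta = REPO_FOR_FILE[filename]
    os.rename(meta, ".hg")                          # enable this file's repository
    try:
        subprocess.check_call(["hg", "commit", "-m", message, filename])
    finally:
        os.rename(".hg", meta)                      # always put the metadata back

if __name__ == "__main__":
    commit(sys.argv[1], sys.argv[2])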

How do you compare the content of two archive files programmatically?

I'm doing some testing to ensure that the all-in-one zip file I created using a script produces the same output as the contents of a few zip files that I must manually click and create via a web interface. As a result, the zips will have different folder structures.
Of course I could manually extract them and scan them using my powerful eyeball technique, or, even lazier, write a script to do that, but before I invest more time and get accused by my boss of company time robbery, I'm asking if there's a better way to do this.
I'm using a Perl LAMP stack, by the way.
Thanks.
You can use Perl's Archive::Zip or Python's zipfile to extract the filenames, sizes and CRC checksums of the files in the archives. Create a file which contains the results sorted by file name (ignoring the path).
For your smaller ZIPs, merge the results of the script (cat list1 list2 list3 | sort).
Now, you can use diff to compare the results.
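Here is a minimal sketch of that idea using Python's zipfile (the archive names are placeholders); it compares the name/size/CRC listings directly instead of writing them to files and running diff:
import zipfile

def listing(*archives):
    entries = []
    for archive in archives:
        with zipfile.ZipFile(archive) as z:
            for info in z.infolist():
                if not info.filename.endswith("/"):          # skip directory entries
                    name = info.filename.rsplit("/", 1)[-1]  # ignore the path
                    entries.append((name, info.file_size, info.CRC))
    return sorted(entries)

# Compare the one big archive against the merged listing of the smaller ones.
print(listing("all-in-one.zip") == listing("part1.zip", "part2.zip", "part3.zip"))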
I can wholeheartedly recommend Beyond Compare. Unless you're really getting underpaid, it's the biggest bang for your (boss's) buck.
[Edit] I seem to have skimmed over the different folder structure, sorry about that. Beyond Compare can compare all files in folders with the same folder structure. It does not have (I believe) the intelligence to go searching for matches between files in different folders.
Regards,
Lieven
Create a CRC checksum for your files.
If the checksums are the same for the original files and the unzipped files, you can be sure the files are the same. This even works for non-text data.
A checksum can easily be created with an external program such as "SFV Checker" or programmatically (.NET/Java, for example, include libraries to do this).
Taking a cue from Carra's answer... if A.zip is your single big archive and B.zip is the archive generated through the web interface, then use the following algorithm (a rough sketch follows at the end of this answer):
Extract all files from A.zip and recursively (w.r.t. folders) compute the checksums of the files in the folder where the contents were extracted (using cksum, md5sum, etc.), then sort this information (pipe it through sort) and save it to a file (say A.txt).
Do the same for B.zip and generate B.txt.
Compare A.txt with B.txt; they should be exactly the same.
OR
Use unzip -l to get the file/directory lists for both (zip) archives, then flatten the hierarchy of the user-generated zip file and compare it with the contents of your script-generated zip file using something like diff. By flattening the hierarchy I mean you may need to do some kind of pre-processing on one or both lists before you can do a meaningful comparison with diff.
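As promised, a rough sketch of the first approach, using Python's hashlib and os.walk instead of the cksum/md5sum command-line tools; the extracted_A and extracted_B directory names are placeholders for wherever you extracted the two archives. Only the base file names are kept, since the folder structures differ:
import hashlib, os

def tree_checksums(root):
    lines = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            lines.append("%s  %s" % (digest, name))   # file name only, path ignored
    return "\n".join(sorted(lines))

# A.zip and B.zip were extracted into these directories beforehand.
print(tree_checksums("extracted_A") == tree_checksums("extracted_B"))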