Copying sections in hexl-mode - emacs

When using hexl-mode in emacs to view a binary file, is there any way of copying and pasting a section into another file?
I have tried the standard approach: C-SPC, select a region, M-w.
But pasting this into a new file treats the whole thing as normal text, i.e. I get a text file full of lines like this:
000159a0: 6e00 1295 00e0 3400 0a51 0942 0701 1295 n.....4..Q.B....
i.e. it is making a literal copy of the displayed text, not a copy of the binary data it represents.
What I want to do is copy a section and paste it into a new file such that I get the binary representation of that section.
In other words, I want to be able to generate new binary files from parts of an original binary file, using hexl-mode to view the original.
Hope that makes sense.

That sounds like a cool feature, but unfortunately hexl-mode doesn't do that. The next best thing you can do is clip the file with head and tail; for example, to yank file.txt from offset 000002a0 to 00000340, you'd run
head file.txt -c $((0x00000340)) | tail -c +$((0x000002a0)) | xclip -sel c
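If the goal is a new binary file on disk rather than the clipboard, dd can carve out the same byte range directly. A minimal sketch, where file.bin, section.bin and the offsets are placeholders read off the hexl display:
# copy bytes 0x2a0 up to (but not including) 0x340 into a new file
dd if=file.bin of=section.bin bs=1 skip=$((0x2a0)) count=$((0x340 - 0x2a0))
bs=1 is slow on large files, but it keeps the offsets exact; the result can be opened in hexl-mode to check it matches the original section.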

Related

I would like to extract the pspictures from a TeX file and put them in another file so they can be processed into PS or PDF files really easily.

I have a list of .tex files that contain fragments which build PS pictures and can be slow to process.
There are multiple fragments across multiple files, and the end delimiter is \end{pspicture}. For example:
% this is the beginning of the fragment
\begin{pspicture}(0,0)(23,5)
\rput{0}(0,3){\crdKs}
\rput(1,3){\crdtres}
\rput(5,3){\crdAh}
\rput(6,3){\crdKh}
\rput(7,3){\crdsixh}
\rput(8,3){\crdtreh}
\rput(12,3){\crdQd}
\rput(13,3){\crdeigd}
\rput(14,3){\crdsixd}
\rput(15,3){\crdfived}
\rput(16,3){\crdtwod}
\rput(20,3){\crdKc}
\rput(21,3){\crdfourc}
\end{pspicture}
I would like to extract the fragments.
I am not sure how to go about this. Can awk or sed do this?
They seem to work line by line, rather than on the whole fragment.
I am not really looking for a complete solution, just a good candidate tool.
sed -En '/^\\begin\{pspicture\}.*$/,/^\\end\{pspicture\}.*$/p' file
This uses sed with -E for extended regular expressions and -n to suppress automatic printing.
The /start/,/end/ address range selects every line from a line matching the start expression through the next line matching the end expression, and the p command prints them.
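awk can do the same job, and it can also drop each fragment into its own file, which is handy if the pictures are to be processed separately. A rough sketch; the fragment-%03d.tex naming is just an assumption:
awk '
  index($0, "\\begin{pspicture}") == 1 { n++; out = sprintf("fragment-%03d.tex", n); p = 1 }
  p                                    { print > out }       # inside a fragment: copy the line
  index($0, "\\end{pspicture}")   == 1 { p = 0; close(out) } # fragment finished
' file.tex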

en masse inline editing in an uncompressed PDF

I have a large PDF (~20 MB compressed, ~160 MB uncompressed).
I need to do a find and replace in the text in it, about 1000 times.
Here is what I tried.
Via SVG
Transform to SVG (Inkscape)
Read SVG line by line and do the replace in the file
Transform back to PDF
=> bad output; probably due to some geometric transform matrices in the SVG, the text is not rendered well
Creating ~1000 sed commands
Uncompress PDF
Perform each replace with a sed command
Recompress PDF
=> way too long; each sed command takes about 20 seconds, leading to several hours of processing
Read line-by-line and replace
Uncompress PDF
Read line by line the PDF
find text to be replaced
replace using perl
write line to a new file
Compress the new file
=> because of the leftover binary data streams in the uncompressed PDF, the new file is apparently damaged (binary data was written back as lines of text)
I wonder if it would be possible to read the uncompressed PDF line by line, but do the editing directly in place. How could I do this?
I have searched for Perl in-place editing, but it performs the changes on the whole file at once, while I'd like to edit a single line at a time.
Other ideas are more than welcome ;)
Following the advice given, I used CAM::PDF; it was the most efficient and simplest solution.
There is no difference between 2. and 3. sed reads the input file line by line and writes the changed lines to the output file. If you pass the -i switch, sed just opens the input file, unlinks it (which is what rm does), then opens an output file with the same name and writes into it. That's it; no magic involved. So if the content was damaged by Perl but not by sed, you must be doing something differently from what sed does. The main difference is that you can make a Perl script much faster at replacing many strings. See Using sed on text files with a csv.
The main trick is that you can compile one regexp for all the search-and-replace pairs, which then works in linear time:
my %replace = ( foo => 'bar' );                    # map of search string => replacement
my $re = join '|', map quotemeta, keys %replace;   # escape metacharacters, join the alternatives
$re = qr/($re)/;                                   # compile the combined pattern once
while (<>) {
    s/$re/$replace{$1}/g;                          # replace every match on the current line
    print;                                         # write the (possibly modified) line to stdout
}
You can use this with your original approach, but I would recommend doing it in a Perl script, which lets you keep the regexp and the replacement hash across PDF files. You can also try combining it with CAM::PDF; it ships with the example script changepagestring.pl. You could also look at PDF::API2, which would require more work but may give a better result. But remember, the PDF format is not intended for this kind of modification.
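As a rough end-to-end sketch (replace.pl stands for the loop above saved as a script; the file names are placeholders, and pdftk is only one way to uncompress and recompress the streams):
pdftk input.pdf output uncompressed.pdf uncompress   # expand the content streams to plain text
perl replace.pl uncompressed.pdf > patched.pdf       # run all replacements in a single pass
pdftk patched.pdf output final.pdf compress          # recompress the streams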
You can follow the pdftk steps as described in
How to find and replace text in a existing PDF file with PDFTK (or other command line application)
You can first split the PDF into smaller documents with a few pages each, replace the text, and then merge them back together, all using pdftk.
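For example (a sketch; the page_%04d.pdf naming is just a convention, and the replacement step in the middle is whatever tool you settle on):
pdftk big.pdf burst output page_%04d.pdf   # split into one single-page PDF per page
# ... run the find-and-replace over each page_*.pdf here ...
pdftk page_*.pdf cat output merged.pdf     # stitch the edited pages back together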
There is also the PDFEdit software (http://pdfedit.cz/en/index.html). It is a GUI app with a scripting interface. You can process individual pages and then do a find-and-replace using scripting commands. See if it loads your PDF.

Extract texts from XIB files for translation in human readable way

ibtool is the tool to extract strings from XIB files. Example command:
find . -name \*.xib | xargs -t -I '{}' ibtool --generate-strings-file '{}'.strings.txt '{}'
But the output generated by ibtool is NOT READABLE for 'normal' (read: non-developer) human beings.
Example:
/* Class = "IBUILabel"; text = "Regards:"; ObjectID = "201"; */
"201.text" = "Regards:";
There are a few problems with it (from the perspective of a translator):
This format is different from the one expected by Localizable.strings.
It is confusing which texts to translate: a) the one in the commented line, b) the one in the uncommented line, c) or maybe both.
It just has a lot of clutter.
I need the XIB strings extracted in Localizable.strings format (like strings extracted from NSLocalizedString macros using genstrings):
"key"="to translate";
Is there a way to make ibtool output text this way?
I wrote my own script in bash, which does the following:
Extracts all strings from all xib files in the directory
Converts them to the .strings format (as expected by the NSLocalizedString macro): "text to translate" = "text to translate"; It can be handed directly to your translator; only the right-hand side, the text in quotes after the =, needs to be edited.
Removes duplicates (I agree, sometimes this is not the best idea, as translations may differ depending on context)
Removes comments (leaves just the juice)
Sorts alphabetically (optional)
Saves the result to one output file (all merged and ready for translation)
The script is not perfect but it works quite well.
It is a bit too long to paste here (90 lines), so here is the direct GitHub link:
https://github.com/lukaszmargielewski/extract-strings-from-xib-files/blob/master/extract-strings-from-xib-files.sh
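In essence, the conversion step can be sketched in a few lines of shell. This is only a rough equivalent of what the linked script does, and it assumes the ibtool output has already been converted to UTF-8 (ibtool normally writes UTF-16, so an iconv pass may be needed first):
# turn lines like:  "201.text" = "Regards:";   into:  "Regards:" = "Regards:";
# comments and blank lines are dropped, duplicates removed, the result sorted
find . -name '*.strings.txt' -exec cat {} + \
  | grep '" = "' \
  | sed -E 's/^"[^"]*" = "(.*)";$/"\1" = "\1";/' \
  | sort -u > Localizable.strings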

How can I force emacs (or any editor) to read a file as if it is in ASCII format?

I could not find this answer in the man or info pages, nor with a search here or on Google. I have a file which is, in essence, a text file, but it somehow got screwed up upon saving. (I think a few strange bytes accidentally ended up at the front of the file.)
I am able to open the file, and it makes sense, using head or cat, but not using any sort of editor.
In the end, all I wish to do is open the file in emacs, delete the "messy" characters, and save it once cleaned up. The file, however, is huge, so I need something powerful like emacs to be able to open it.
Otherwise, I suppose I can try to create a script to read this in line by line, forcing the script to read it in text format, then write it. But I wanted something quick, since I won't be doing this over & over.
Thanks!
Mike
perl -i.bk -pe 's/[^[:ascii:]]//g;' file
This strips every non-ASCII byte in place and leaves a backup in file.bk. Found this Perl one-liner here: http://www.perlmonks.org/?node_id=619792
Try M-x find-file-literally in Emacs.
You could edit the file using hexl-mode, which lets you edit the file in hexadecimal. That would let you see precisely what those offending characters are, and remove them.
It sounds like you either got a different line ending in the file (e.g. carriage returns on a *nix system) or it got saved in an unexpected encoding.
You could use strings to grab the printable characters in the file. You might have to play with the --encoding option, though I have only ever used it to grab ASCII strings from executable files.
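A quick way to see, and then drop, the stray bytes from the shell (a sketch; broken.txt is a placeholder, and the byte count assumes the junk turns out to be something 3 bytes long, such as a UTF-8 BOM):
head -c 16 broken.txt | xxd         # show the first bytes in hex to identify the junk
tail -c +4 broken.txt > clean.txt   # drop the first 3 bytes (tail -c +N starts at byte N)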

Rename file containing '©' character

We received as input in our application (running on Windows) a list of files. These files were automatically extracted from a database with a script.
Apparently some of the names contain special characters (like accents), and these characters are rendered as '©' on our side.
How can we programmatically rename these text files (around 900'000) to get rid of this character?
We cannot change the source, nor re-extract the files.
The problem is that, because of this character, another program involved in our system does not accept the files.
Have a look at the Unix command rename. It allows you to apply a Perl regex to the names of a bunch of files. In this case you might want something like:
$ rename 's/[^a-zA-Z0-9]//' *
In Debian the rename command is part of the perl package. It should also be available on CPAN.
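For example, to target just the offending character (a sketch; this assumes the Perl-based rename, whose syntax differs from the util-linux rename, and that your shell and the filenames agree on the encoding of '©'):
# dry run: -n only prints what would be renamed
find . -type f -name '*©*' -exec rename -n 's/©//g' {} +
# when the output looks right, drop -n to rename for real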
I ended up creating a new script that reads the input files and searches for the special character in their names.
It was quite easy indeed:
filename = filename.Replace("©", "e");  // replace the garbled character with a plain "e"
Since the '©' is in the filename, the script (in C#) is able to recognize it and replace the match accordingly. This way I can loop through all the folders and subfolders, simply reading each filename and changing the special characters.
Thank you all for the contributions!