perl seek and remove at varying offsets in binmode

This is the script I am writing:
#!/usr/bin/perl
use strict;
use warnings;
open(my $infile, '<', "./file1.bin") or die "Cannot open file1.bin: $!";
binmode($infile);
open(my $outfile, '>', "./extracted data without 00's.bin") or die "Cannot create extracted data without 00's.bin: $!";
binmode($outfile);
local $/; my $data = <$infile>;               # slurp the whole file
print $outfile substr($data, 0, 0x840, '');   # copy the first 0x840 bytes untouched
$data =~ s/\0{16}//;
print $outfile $data;
I'm loading a binary file in Perl.
I have been able to seek and patch at certain offsets. What I would like to do now is find any instance of "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00" (16 bytes) and remove it from the file, but nothing shorter than 16 bytes; anything less than that I want to leave alone. In some of the files the 00's start at different offsets, but if I am thinking correctly, as long as I can search for that 16-byte run of 00's and remove every instance of it, it won't matter what offset the 00's are at. I would extract the data from the specific offsets first, then search the extracted file and prune the 00's from it. I can already extract the specific offsets I need; I just need to open the extracted file and shave off the runs of 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00, for example:
EF 39 77 5B 14 9D E9 1E 94 A9 97 F2 6D E3 68 05
6F 7B 77 BB C4 99 67 B5 C9 71 12 30 9D ED 31 B6
AB 1F 81 66 E1 DD 29 4E 71 8D 54 F5 6C C8 86 0D
5B 72 AF A8 1F 26 DD 05 AF 78 13 EF A5 E0 76 BB
8A 59 9B 20 C5 58 95 7C E0 DB 44 6A EC 7E D0 10
09 42 B1 12 65 80 B3 EC 58 1A 2F 92 B9 32 D9 07
96 DE 32 51 4B 5F 3B 50 9A D1 09 37 F4 6D 7C 01
01 4A A4 24 04 DC 83 08 17 CB 34 2C E5 87 26 C1
35 38 F4 C4 E4 78 FE FC A2 BE 99 48 C9 CA 69 90
33 87 09 A8 27 BA 91 FC 4B 77 FA AB F5 1E 4E C0
F2 78 6E 31 7D 16 3B 53 04 8A C1 A8 4B 70 39 22 <----- I want to leave everything from here up
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <----- I want to prune everything
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 from here on
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <---- this IS the end of the file; I just need to prune these last few rows of 00's
Say that the "F2 78 6E" row from the example above is at offset 0x45000, but in another file the 00's start at a different offset. How could I code it so the 00's get pruned in any file that I open?
If I need to be more specific, just ask.
Seems like I would peek into the file until I hit a long string of 00's, then prune everything that remains. Does that make sense at all?
All I want to do is search the file for any instances of 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 and delete/prune/truncate them. I want to keep everything but the 00's.
EDIT #2
This did it:
open(my $infile, '<', './file1') or die "Cannot open file1: $!";
binmode $infile;
open(my $outfile, '>', './file2') or die "Cannot open file2: $!";
binmode $outfile;
local $/; my $file = <$infile>;   # slurp the whole file
$file =~ s/\0{16}//g;             # remove every 16-byte run of 00's
print $outfile $file;
close($infile);
close($outfile);
Thank you ikegami for all your help and patience :)

There is no such thing as removing bytes from the middle of a file. You have to either
copy the file without the undesired bits, or
read the rest of the file, seek back, print over the undesired bits, then truncate the file.
I went with option 1.
$ perl -e'
binmode STDIN;
binmode STDOUT;
local $/; $file = <STDIN>;
$file =~ s/\0{16}//g;
print $file;
' <file.in >file.out
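A note on the pattern: s/\0{16}//g removes NULs sixteen at a time, so a run of, say, 20 NULs would leave 4 behind. If the intent is to drop any run of 16 or more NULs entirely, which is how I read your description, then \0{16,} is probably closer to what you want:
$file =~ s/\0{16,}//g;   # strip any run of 16 or more NULs completely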
I'm loading the entire file into memory. Either option can be done in chunks, but it complicates things because your NULs could span two chunks.
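For what it's worth, here is a rough, untested sketch of the chunked version of option 1. The trick is to hold back any trailing NULs from each buffer so a run of NULs is never split across a chunk boundary; the chunk size is arbitrary, and if the file ends in a very long run of NULs the held-back buffer grows to hold it.
#!/usr/bin/perl
use strict;
use warnings;
binmode STDIN;
binmode STDOUT;
my $carry = '';
while (read(STDIN, my $chunk, 64 * 1024)) {
    my $buf = $carry . $chunk;
    $buf =~ s/(\0*)\z//;       # defer trailing NULs to the next pass
    $carry = $1;
    $buf =~ s/\0{16}//g;       # same substitution as above
    print $buf;
}
$carry =~ s/\0{16}//g;         # deal with whatever was held back at EOF
print $carry;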
In a poorly phrased update, you seem to have asked to avoid changes in the first 0x840 bytes. Two solutions:
$ perl -e'
binmode STDIN;
binmode STDOUT;
local $/; $file = <STDIN>;
substr($file, 0x840) =~ s/\0{16}//g;
print $file;
' <file.in >file.out
$ perl -e'
binmode STDIN;
binmode STDOUT;
local $/; $file = <STDIN>;
print substr($file, 0, 0x840, '');
$file =~ s/\0{16}//g;
print $file;
' <file.in >file.out
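If option 2 is ever needed, a rough sketch (untested, still slurps the whole file, and the file name is taken from your script) would be to open the file read-write, write the cleaned data back over it, then truncate to the new length:
#!/usr/bin/perl
use strict;
use warnings;
open(my $fh, '+<', './file1.bin') or die "Cannot open file1.bin: $!";
binmode $fh;
my $file = do { local $/; <$fh> };   # slurp
$file =~ s/\0{16}//g;                # drop the 16-byte runs of NULs
seek($fh, 0, 0) or die "Cannot seek: $!";
print $fh $file;                     # overwrite from the start
truncate($fh, length $file) or die "Cannot truncate: $!";
close($fh);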

Related

How to filter not simple binary data column in PySpark?

Sample of my df is:
+------------------------------------------------------------------------------------------------------------------+
|id | binary_col |
+------------------------------------------------------------------------------------------------------------------+
| 1 | [08 01 10 0D 00 0E CC 93 01 00 00 00 01 00 00 00 00 00 00 00 80 FF BF 40 00 00 00 00 00 00 F0 3F BE 2B 00 00]|
| 2 | [08 01 10 0D 00 0E CC 93 01 00 00 00 01 00 00 00 00 00 00 00 F0 FF BF 40 00 00 00 00 00 00 F0 3F 57 66 00 00]|
| 3 | [08 01 10 0D 00 0E CC 93 01 00 00 00 01 00 00 00 00 00 00 00 C0 FF BF 40 00 00 00 00 00 00 F0 3F D5 69 00 00]|
| 4 | [08 01 10 0D 00 0E CC 93 01 00 00 00 01 00 00 00 00 00 00 00 80 FF BF 40 00 00 00 00 00 00 F0 3F 5A 60 00 00]|
+------------------------------------------------------------------------------------------------------------------+
with this schema (df.printSchema()):
|-- id: int (nullable = true)
|-- binary_col: binary (nullable = true)
And I want to keep only the rows whose binary_col equals [08 01 10 0D 00 0E CC 93 01 00 00 00 01 00 00 00 00 00 00 00 80 FF BF 40 00 00 00 00 00 00 F0 3F BE 2B 00 00] (filtering on id=1 doesn't work because there are other ids in the df).
I've tried to cast binary to bigint to filter later like here: Spark: cast bytearray to bigint
by doing df.withColumn('casted_bin', F.conv(F.hex(F.col("binary_col")), 16, 10).cast("bigint")).show(truncate=False) but it didn't work.
How can I filter any kind of binary data type?
Note: I had asked previously here (How to filter Pyspark column with binary data type?) but it was a very simple binary data and the answer generated the binary from a numeric value while now I don't know how to generate the numeric value.

How to import a key into a MUSCLE card?

I am trying to import a key into the card, but it gives the response 6F00 (UNKNOWN ERROR). The procedure I followed to import the key is:
Load the (MUSCLE) applet
Initialize the applet
Verify the PIN
Create the object with ID FF FF FF FE:
-> B0 5A 00 00 0E FF FF FF FE 00 00 00 44 00 00 00 00 00 00 00
<- 90 00
Write into the object:
-> B0 54 00 00 8D FF FF FF FE 00 00 00 00 84 00 01 00 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
<- 90 00
Import the key:
-> B0 32 04 00 07 00 00 FF FF 00 00 00 00
<- 6F 00
Please provide a solution for the above problem.
If you are still looking for the solution: 7 bytes seems to be a bit high for importing a key... ;)
The ACL in the data block is only six bytes, so this might cause your error. The following "optional parameters" are AFAIK completely unused.

Shell magic wanted: format output of hexdump in a pipe

I'm debugging the output of a program that transmits data via TCP.
For debugging purposes I've replaced the receiving program with netcat and hexdump:
netcat -l -p 1234 | hexdump -C
That outputs all data as a nice hexdump, almost like I want. Now, the data is transmitted in fixed blocks whose lengths are not multiples of 16, leading to shifted lines in the output that make spotting differences a bit difficult:
00000000 50 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |P...............|
00000010 00 50 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |.P..............|
00000020 00 00 50 00 00 00 00 00 00 00 00 00 00 00 00 00 |..P.............|
How do I reformat the output so that after 17 bytes a new line is started?
It should look something like this:
50 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |P...............|
00 |. |
50 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |P...............|
00 |. |
50 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |P...............|
00 |. |
Using hexdump's -n parameter does not work since it will exit after reaching the given number of bytes. (Unless there is a way to keep the netcat program running and seamlessly pipe the next bytes to a new instance of hexdump.)
Also it would be great if I could use watch -d on the output to get a highlight of changes between lines.
For hexdump output without the characters part:
hexdump -e '16/1 "%0.2x " "\n" 1/1 "%0.2x " "\n"'
I use this:
use strict;
use warnings;
use bytes;
binmode STDIN;
my $N = $ARGV[0];
$/ = \$N;                      # read fixed-size records of $N bytes
while (<STDIN>) {
    my @bytes = unpack("C*", $_);
    my $clean = $_;
    $clean =~ s/[[:^print:]]/./g;
    print join(' ', map { sprintf("%02x", $_) } @bytes),
        " |", $clean, "|\n";
}
Run it as perl scriptname.pl N where N is the number of bytes in each chunk you want.
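For the 17-byte blocks from the question, netcat -l -p 1234 | perl scriptname.pl 17 should then print one block per line.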
You can also use xxd -p to produce a plain hexdump.

Extracting "plaintext" header from HEX file using Perl

I have a file that appears to have plaintext headers in it that I would like to extract and convert to plain text.
Using hexedit, this is what I'm seeing in the file:
3a40 - 31 65 33 38 00 00 00 00 00 00 00 00 00 00 00 00 - 1e38............
3a50 - 00 00 00 00 00 00 00 00 00 00 0a 00 74 00 65 00 - ............t.e.
3a60 - 78 00 74 00 2f 00 61 00 73 00 63 00 69 00 69 00 - x.t./.a.s.c.i.i.
3a70 - 00 00 18 00 61 00 66 00 66 00 79 00 6d 00 65 00 - ....a.f.f.y.m.e
3a80 - 74 00 72 00 69 00 78 00 2d 00 61 00 72 00 72 00 - t.r.i.x.-.a.r.r
3a90 - 61 00 79 00 2d 00 62 00 61 00 72 00 63 00 6f 00 - a.y.-.b.a.r.c.o.
3aa0 - 64 00 65 00 00 00 64 00 40 00 35 00 32 00 30 00 - d.e...d.#.5.2.0.
3ab0 - 38 00 32 00 36 00 30 00 30 00 39 00 31 00 30 00 - 8.2.6.0.0.9.1.0.
3ac0 - 37 00 30 00 36 00 31 00 31 00 31 00 38 00 31 00 - 7.0.6.1.1.1.8.1.
3ad0 - 31 00 34 00 31 00 32 00 31 00 33 00 34 00 35 00 - 1.4.1.2.1.3.4.5.
3ae0 - 35 00 30 00 39 00 38 00 39 00 00 00 00 00 00 00 - 5.0.9.8.9.......
3af0 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 - ................
3b00 - 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0a 00 - ................
and this is the output I'd like to get:
text/ascii affymetrix-array-barcode d#52082600910706111811412134550989
Try with the iconv command. Something like this should work:
tail -c +6 input.txt | iconv -f UTF16 -t ASCII >output.txt
Then split on the null bytes.
Granted, I'm no wiz, but this does the job if all your files look very similar to the one you just posted:
use strict;
use warnings;
open FILE, '<', 'file.dat' or die "Cannot open file.dat: $!";
binmode FILE;
my ($chunk, $buf) = ('', '');
seek FILE, 28, 0;
while (read FILE, $chunk, 16) { $buf .= $chunk; }
my @s = split(/\0\0/, $buf, 4);
print "$s[0] $s[1] $s[2]\n";
close(FILE);
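Since the dump looks like little-endian UTF-16 (every other byte is 00), a slightly cleaner variant is to decode the region with Encode and keep the printable runs. This is only a sketch: the offset (28) is copied from the script above and the 0x100 read length is an arbitrary guess, so adjust both to your file.
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode);
open my $fh, '<:raw', 'file.dat' or die "Cannot open file.dat: $!";
seek $fh, 28, 0 or die "Cannot seek: $!";
my $raw;
read($fh, $raw, 0x100) or die "Cannot read: $!";
close $fh;
my $text = decode('UTF-16LE', $raw);                        # "t\0e\0x\0t\0" -> "text"
my @fields = grep { length } split /[^[:print:]]+/, $text;  # keep the printable runs
print join(' ', @fields), "\n";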
A Perl solution might be interesting, but wouldn't the Unix strings command give you the plaintext portion of the file?

How to convert/manipulate BINARY file to ASCII file?

I'm looking for a way to take the TEXT characters from a 4-byte BINARY file into an array or a TEXT file.
Let's say my input file is:
00000000 2e 00 00 00 01 00 00 00 02 00 00 00 03 00 00 00 |................|
00000010 04 00 00 00 05 00 00 00 06 00 00 00 07 00 00 00 |................|
00000020 08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000070 00 00 00 00 00 00 00 00 |........|
00000078
And my desired output is:
46,1,2,3,4,5,6,7,8,9,0,0...
The output can be a TEXT file or an array.
I notice that the pack/unpack functions may help here, but I couldn't figure out how to use them properly.
An example would be nice.
Use unpack:
local $/;
@_ = unpack("V*", <>);
gets you an array. So as an inefficient (don't try on huge files) example:
perl -e 'local$/;print join(",",map{sprintf("%d",$_)}unpack("V*",<>))' thebinaryfile
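If the file is too big to slurp comfortably, the same unpack can be applied chunk by chunk. This is only a sketch (the script name dump_values.pl is made up); it assumes a regular file whose length is a multiple of 4, so the 4096-byte reads stay aligned on value boundaries:
#!/usr/bin/perl
use strict;
use warnings;
open my $in, '<:raw', $ARGV[0] or die "Cannot open $ARGV[0]: $!";
my $first = 1;
while (read($in, my $chunk, 4096)) {         # full chunks stay 4-byte aligned on a regular file
    for my $value (unpack("V*", $chunk)) {   # V = unsigned 32-bit little-endian
        print ',' unless $first;
        print $value;
        $first = 0;
    }
}
print "\n";
close $in;
Run it as perl dump_values.pl thebinaryfile.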
The answer is dependent on what you consider an ASCII character. Anything below 128 is technically an ASCII character, but I am assuming you mean characters you normally find in a text file. In that case, try this:
#!/usr/bin/perl
use strict;
use warnings;
use bytes;
$/ = \1024; # read 1k at a time
while (<>) {
    for my $char (split //) {
        my $ord = ord $char;
        if (($ord > 31 and $ord < 127) or $char =~ /[\r\n\t]/) {
            print "$ord,";
        }
    }
}
od -t d4 -v <filename>