I just destroyed libc.so on my machine. What can I do now? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
I was SSHed into a remote box as root when I ran the following command:
ln -sf /nonexistent /.../libc.so
Immediately my prompt started throwing errors:
basename: could not find shared library
I can't even run anything:
root@toastbox# ls
ls: could not find shared library
How can I fix this? I have two SSH sessions open with Bash, but no other processes accessible. I have a cross-compiler for the target on my local machine, but no way to SCP files to the remote end anymore.
EDIT: There are no other copies of libc on this box; I overwrote the real libc file. Some things still work: I can echo, and I can use tab-completion to emulate ls. But normal programs (mv, rm, etc.) are MIA.
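Shell globbing should also still work, since Bash expands it without exec'ing a binary, so echo can double as a crude ls:
echo /usr/bin/*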

I discovered that I could still write to files by using echo and redirection (thanks Iwillnotexist Idonotexist!). Further, echo -ne lets me write arbitrary bytes to a file. I can therefore truncate a file with echo -ne '' > file, then repeatedly write to it with
echo -ne '\001' >> /file
Using this approach, I can overwrite any executable present on the system (since I'm still root).
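As a sanity check of the encoding itself (run on a healthy machine; the file names here are illustrative), bash's octal escapes round-trip arbitrary bytes exactly:
printf 'hi\x00there' > original          # sample data with an embedded NUL
echo -ne 'hi\0there' > rebuilt           # \0<octal digits> is the escape form the encoder emits
cmp original rebuilt && echo byte-identical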
I compiled a simple program to rename a file:
/* mv.c: minimal rename() wrapper; rename() is declared in <stdio.h> */
#include <stdio.h>
int main(int argc, char **argv) { return rename(argv[1], argv[2]); }
using cross-gcc -static mv.c -o mv (eliminating the libc.so dependency). Then, I wrote a script to encode any binary file as a series of echo commands (limited by the length that readline will allow me to enter):
# Encode a file as a series of echo statements.
# settings
maxlen = 1020          # longest command line we trust readline to accept
infile = '/tmp/mv'
outfile = '/usr/bin/mv'

print "echo -ne '' > %s" % outfile
template = "echo -ne '%%s' >> %s" % outfile
maxchunk = maxlen - len(template % '')

pos = 0
data = open(infile, 'rb').read()

# Map every byte to a shell-safe representation: letters pass through,
# everything else becomes an octal escape.
transtable = {}
for i in xrange(256):
    c = chr(i)
    if i == 0:
        transtable[c] = r'\0'
    elif c.isalpha():
        transtable[c] = c
    else:
        transtable[c] = r'\0%o' % i

while pos < len(data):
    # Greedily pack encoded bytes into one echo command, up to maxchunk.
    chunk = []
    chunklen = 0
    while pos < len(data):
        bit = transtable[data[pos]]
        if chunklen + len(bit) < maxchunk:
            chunk.append(bit)
            chunklen += len(bit)
            pos += 1
        else:
            break
    print template % ''.join(chunk)
I used my echo encoder to generate a series of echo commands which I mass-pasted into the ssh session. These look like
echo -ne '' > /usr/bin/mv
echo -ne '\0177ELF\01\01\01\0\0\0\0\0\0\0\0\0\02\0\050\0\01\0\0\0\0360\0200\0\0\064\0\0\0\030Q\05\0\0\0\0\05\064\0\040\0\05\0\050\0\034\0\033\0\01\0\0\0\0\0\0\0\0\0200\0\0\0\0200\0\0P\03\01\0P\03\01\0\05\0\0\0\0\020\0\0\01\0\0\0\0\017\01\0\0\0237\01\0\0\0237\01\0x\02\0\0X\046\0\0\06\0\0\0\0\020\0\0Q\0345td\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\06\0\0\0\0\0\0\0\01\0\0p\0244\0356\0\0\0244n\01\0\0244n\01\0\0350\010\0\0\0350\010\0\0\04\0\0\0\04\0\0\0R\0345td\0\017\01\0\0\0237\01\0\0\0237\01\0\0\01\0\0\0\01\0\0\06\0\0\0\040\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\020\0265\04\034\0\040\0\0360\053\0371\040\034\016\0360r\0375\0134\0300\0237\0345\0H\055\0351X\060\0237\0345\04\0260\0215\0342\020\0320M\0342\014\0300\0217\0340\03\060\0234\0347\024\060\013\0345D\060\0237\0345\04\0\0213\0342\03\060\0234\0347\020\060\013\0345\070\060\0237\0345\0\020\0240\0343\03\060\0234\0347\014\060\013\0345\054\060\0237\0345\03\060\0234\0347\010\060\013\0345\044\060\0237\0345\03\040\0234\0347\024\060K\0342\0223\072\0\0353\04' >> /usr/bin/mv
echo -ne '\0320K\0342\0\0210\0275\0350\0350\036\01\0\0174\0377\0377\0377\0200\0377\0377\0377\0204\0377\0377\0377\0210\0377\0377\0377\0214\0377\0377\0377\0H\055\0351\04\0260\0215\0342\010\0320M\0342\010\0\013\0345\014\020\013\0345\014\060\033\0345\04\060\0203\0342\0\040\0223\0345\014\060\033\0345\010\060\0203\0342\0\060\0223\0345\02\0\0240\0341\03\020\0240\0341\06\0\0\0353\0\060\0240\0341\03\0\0240\0341\04\0320K\0342\0\0210\0275\0350\0\0\0\0\0\0\0\0\0\0\0\0\0220\0\055\0351\046p\0240\0343\0\0\0\0357\0220\0\0275\0350\0\0\0260\0341\036\0377\057Qr\072\0\0352\0\0\0240\0341\020\0265\04\034\0\0360\014\0370\04\0140\01\040\0100B\020\0275\020\0265\03\034\0377\063\02\0333\0100B\0377\0367\0361\0377\020\0275\020\0265\02K\0230G\010\060\020\0275\0300F\0340\017\0377\0377\0360\0265\031N\0203\0260\034\034\0176D\07\034\01\0222\0\0360\0253\0371\045h\0\0340\0230G\04\065\053h\0\053\0372\0321\0345h\0\0340\0230G\04\065\053h\0\053\0372\0321eh\0\0340\0230G\04\065\053h\0\053\0372\0321\075\034\0200\0315y\034\0210\0' >> /usr/bin/mv
...
I tested the replacement mv a few times to make sure it worked (using Bash tab-completion as a substitute for ls), and then used the echo encoder to write a replacement libc.so to a temporary directory. Finally, I moved the replacement libc.so into the right place using the static mv I pushed.
And success! It might've taken about an hour, but my box is back up and running, with no casualties save for one clobbered /usr/bin/mv :)

Related

Perl how to detect if running from the command line or called from browser

Is there a good way to detect whether a Perl script is called from a terminal/DOS prompt or from a web server?
I currently have this code:
sub cli {
    # Return 1 if called from the command line, 0 if called from a browser.
    if (defined $ENV{GATEWAY_INTERFACE} || exists $ENV{HTTP_HOST}) {
        return 0;
    }
    return 1;
}
but I have read that these ENV variables are not set by all servers.
The reason for this: if the script is run from a terminal, I will print text/plain formatted messages; if run from a browser, I will print HTML-formatted messages.
I usually do:
if (-t STDIN) {
    # This is running from a terminal
}
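A quick demonstration of what -t STDIN actually tests; note that it is false for any piped input, not only CGI (illustrative transcript):
$ perl -e 'print( (-t STDIN) ? "terminal\n" : "no terminal\n" )'
terminal
$ echo x | perl -e 'print( (-t STDIN) ? "terminal\n" : "no terminal\n" )'
no terminal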
Edit after reading comments: I found another question that details a different solution: How do I check if a Perl script is running in a terminal?
The following is a quick example and may not be the most accurate, but serves to demonstrate that you could check the process's parent command.
my $parent_process = `ps -o ppid= -p $$ | xargs ps -o command= -p`;
if ($parent_process =~ /httpd/) {
    say q{CGI};
} else {
    say q{Terminal};
}
Basically, we look up the parent PID of the current process, then pipe it into a second ps lookup to get the command the parent was started with. From the terminal it should be the shell name (e.g., -bash), but if it was run by the server, it should be the server daemon httpd (depending on the web server).
It would be much easier to have the server set an environment variable and check for the existence of that variable. This all depends on where you're running the command and who you're running the command as.
This could further be condensed into something like:
my $is_server = `ps -o ppid= -p $$ | xargs ps -o command= -p | grep httpd -q && echo "1" || echo "0"`; chomp $is_server;
if ($is_server) { ... } else { ... }
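For reference, this is the sort of output the pipeline produces in each context (illustrative; the exact command names vary by shell and web server):
$ ps -o ppid= -p $$ | xargs ps -o command= -p
-bash
Under Apache, the same lookup would typically print something like /usr/sbin/httpd -DFOREGROUND instead.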

Merge the files with the same file name in Perl [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I currently have a problem merging files in Perl.
There are two directories/folders, which contain files with the same names and extensions, in pairs.
For example, in folder 1, I have the files 1.fastq, 2.fastq, ..., 10.fastq.
In folder 2, I have exactly the same file names 1.fastq, 2.fastq, ..., 10.fastq, but they contain different information.
I want to merge the files that share a name. In the beginning I tried the cat command
$ cat 1.fastq 1.fastq > 1.fastq
However, if there are many files, for example 1000+, I would need to do this 1000+ times.
How can I do it automatically with Perl?
Thank you in advance.
A Perl-based solution would look like the below.
#!/usr/bin/perl
use strict;
use warnings;

my $source_dir = "./source";
my $dest_dir   = "./dest";

opendir(my $source, $source_dir) or die "Cannot open $source_dir: $!";
my @source_files = readdir $source;
closedir $source;

foreach my $each_file (@source_files) {
    next if $each_file =~ /^(\.|\.\.)$/;
    open my $file_h, '<', "$source_dir/$each_file" or die "Cannot read $source_dir/$each_file: $!";
    my @contents = <$file_h>;
    close $file_h;
    open my $dest_file, '>>', "$dest_dir/$each_file" or die "Cannot append to $dest_dir/$each_file: $!";
    print $dest_file @contents;
    close $dest_file;
}
You can also do this with a shell script. A typical one would look like this:
#!/usr/bin/sh
source='./source'
dest='./dest'
for file in `ls $source`
do
    if [ -e $dest/$file ]
    then
        cat $source/$file $dest/$file >> $dest/$file."unique_name"
        rm $dest/$file
        mv $dest/$file."unique_name" $dest/$file
    else
        cp $source/$file $dest/$file
    fi
done
You cannot use an input file as the output file with cat:
$ cat 1.fastq 1.fastq > 1.fastq
This will fail with an error saying "input file is output file".
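For completeness, a minimal shell sketch of the merge itself, writing to a third directory so no input file is clobbered (folder1, folder2, and merged are illustrative names):
mkdir -p merged
for f in folder1/*.fastq; do
    name=$(basename "$f")
    cat "$f" "folder2/$name" > "merged/$name"
done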

Optimize Duplicate Detection

Background
This is an optimization problem. Oracle Forms XML files have elements such as:
<Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />
Where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:
sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql
I wrote a script to generate a list of exact duplicates using a brute force approach.
Problem
There are 37,497 files to compare against each other; it takes 8 minutes to compare one file against all the others. Logically, if A = B and A = C, then there is no need to check if B = C. So the problem is: how do you eliminate the redundant comparisons?
The script will complete in approximately 208 days.
Script Source Code
The comparison script is as follows:
#!/bin/bash
echo Loading directory ...
for i in $(find sql/ -type f -name \*.sql); do
    echo Comparing $i ...
    for j in $(find sql/ -type f -name \*.sql); do
        if [ "$i" = "$j" ]; then
            continue;
        fi
        # Case insensitive compare, ignore spaces
        diff -IEbwBaq $i $j > /dev/null
        # 0 = no difference (i.e., duplicate code)
        if [ $? = 0 ]; then
            echo $i :: $j >> clones.txt
        fi
    done
done
Question
How would you optimize the script so that checking for cloned code is a few orders of magnitude faster?
Idea #1
Remove the matching files into another directory so that they don't need to be examined twice.
System Constraints
Using a quad-core CPU with an SSD; trying to avoid using cloud services if possible. The system is a Windows-based machine with Cygwin installed -- algorithms or solutions in other languages are welcome.
Thank you!
Your solution, and sputnick's solution, both take O(n²) time. This can be done in O(n log n) time by sorting the files and using a list merge. It can be sped up further by comparing MD5 digests (or those of any other cryptographically strong hash function) of the files, instead of the files themselves.
Assuming you're in the sql directory:
md5sum * | sort > ../md5sums
perl -lane 'print if $F[0] eq $lastMd5; $last = $_; $lastMd5 = $F[0]' < ../md5sums
Using the above code will report only exact byte-for-byte duplicates. If you want to consider two non-identical files to be equivalent for the purposes of this comparison (e.g. if you don't care about case), first create a canonicalised copy of each file (e.g. by converting every character to lower case with tr A-Z a-z < infile > outfile).
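A sketch of that canonicalise-then-hash pipeline, assuming you want to ignore case and runs of whitespace (the directory names are illustrative):
mkdir -p ../canon
for f in *.sql; do
    # lower-case everything and squeeze whitespace, then hash the result
    tr 'A-Z' 'a-z' < "$f" | tr -s ' \t' ' ' > "../canon/$f"
done
md5sum ../canon/* | sort > ../md5sums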
The best way to do this is to hash each file (e.g. with SHA-1) and then use a set. I'm not sure bash can do this, but Python can. If you want the best performance, C++ is the way to go.
To optimize the comparison of your files:
#!/bin/bash
for i; do
    for j; do
        [[ "$i" != "$j" ]] &&
            if diff -IEbwBaq "$i" "$j" > /dev/null; then
                echo "$i & $j are the same"
            else
                echo "$i & $j are different"
            fi
    done
done
USAGE
./script /dir/*

Perl ambiguous command line options, and security implications of eval with -i?

I know this is incorrect. I just want to know how perl parses this.
So, I'm playing around with perl. What I wanted was perl -ne; what I typed was perl -ie. The behavior was kind of interesting, and I'd like to know what happened.
$ echo 1 | perl -ie'next unless /g/i'
So perl Aborted (core dumped) on that. Reading perl --help I see -i takes an extension for backups.
-i[extension] edit <> files in place (makes backup if extension supplied)
For those that don't know, -e is just eval. So I'm thinking one of three things could have happened; either it was parsed as
perl -i -e'next unless /g/i'      (i gets undef, the rest goes as the argument to e)
perl -ie 'next unless /g/i'       (i gets the argument e, the rest is left hanging like a file name)
perl -i"-e'next unless /g/i'"     (the whole thing as an argument to i)
When I run
$ echo 1 | perl -i -e'next unless /g/i'
The program doesn't abort. This leads me to believe that 'next unless /g/i' is not being parsed as a literal argument to -e. Unambiguously the above would be parsed that way and it has a different result.
So what is it? Well, playing around with it a little more, I got
$ echo 1 | perl -ie'foo bar'
Unrecognized switch: -bar (-h will show valid options).
$ echo 1 | perl -ie'foo w w w'
... works fine; I guess it reads it as `perl -ie'foo' -w -w -w`
Playing around with the above, I try this...
$ echo 1 | perl -ie'foo e eval q[warn "bar"]'
bar at (eval 1) line 1.
Now I'm really confused. So how is Perl parsing this? Lastly, it seems you can actually get a Perl eval command from within just -i. Does this have security implications?
$ perl -i'foo e eval "warn q[bar]" '
Quick answer
Shell quote-processing is collapsing and concatenating what it thinks is all one argument. Your invocation is equivalent to
$ perl '-ienext unless /g/i'
It aborts immediately because perl parses this argument as containing -u, which triggers a core dump where execution of your code would begin. This is an old feature that was once used for creating pseudo-executables, but it is vestigial in nature these days.
What appears to be a call to eval is the misparse of -e 'ss /g/i'.
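The abort is easy to reproduce in isolation; the behavior is platform-dependent (a Linux build typically dumps core, while the Cygwin build shown below reports that dump is unsupported):
$ perl -u -e 'print "never reached\n"'
Aborted (core dumped)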
First clue
B::Deparse can be your friend, provided you happen to be running on a system without dump support.
$ echo 1 | perl -MO=Deparse,-p -ie'next unless /g/i'
dump is not supported.
BEGIN { $^I = "enext"; }
BEGIN { $/ = "\n"; $\ = "\n"; }
LINE: while (defined(($_ = <ARGV>))) {
    chomp($_);
    (('ss' / 'g') / 'i');
}
So why does unle disappear? If you’re running Linux, you may not have even gotten as far as I did. The output above is from Perl on Cygwin, and the error about dump being unsupported is a clue.
Next clue
Of note from the perlrun documentation:
-u
This switch causes Perl to dump core after compiling your program. You can then in theory take this core dump and turn it into an executable file by using the undump program (not supplied). This speeds startup at the expense of some disk space (which you can minimize by stripping the executable). (Still, a "hello world" executable comes out to about 200K on my machine.) If you want to execute a portion of your program before dumping, use the dump operator instead. Note: availability of undump is platform specific and may not be available for a specific port of Perl.
Working hypothesis and confirmation
Perl’s argument processing sees the entire chunk as a single cluster of options because it begins with a dash. The -i option consumes the next word (enext), as we can see in the implementation for -i processing.
case 'i':
    Safefree(PL_inplace);
    [Cygwin-specific code elided -geb]
    {
        const char * const start = ++s;
        while (*s && !isSPACE(*s))
            ++s;
        PL_inplace = savepvn(start, s - start);
    }
    if (*s) {
        ++s;
        if (*s == '-')      /* Additional switches on #! line. */
            s++;
    }
    return s;
For the backup file’s extension, the code above from perl.c consumes up to the first whitespace character or end-of-string, whichever is first. If characters remain, the first must be whitespace, then skip it, and if the next is a dash then skip it also. In Perl, you might write this logic as
if ($$s =~ s/i(\S+)(?:\s-)//) {
    my $extension = $1;
    return $extension;
}
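You can observe this capture from the outside, since whatever -i consumed lands in $^I (illustrative transcript):
$ perl -ieFOO -e 'print "backup extension: $^I\n"'
backup extension: eFOO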
Then, all of -u, -n, -l, and -e are valid Perl options, so argument processing eats them and leaves the nonsensical
ss /g/i
as the argument to -e, which perl parses as a series of divisions. But before execution can even begin, the archaic -u causes perl to dump core.
Unintended behavior
An even stranger bit is if you put two spaces between next and unless
$ perl -ie'next  unless /g/i'
the program attempts to run. Back in the main option-processing loop we see
case '*':
case ' ':
    while( *s == ' ' )
        ++s;
    if (s[0] == '-')    /* Additional switches on #! line. */
        return s+1;
    break;
The extra space terminates option parsing for that argument. Witness:
$ perl -ie'next  nonsense -garbage --foo' -e die
Died at -e line 1.
but without the extra space we see
$ perl -ie'next nonsense -garbage --foo' -e die
Unrecognized switch: -onsense -garbage --foo (-h will show valid options).
With an extra space and dash, however,
$ perl -ie'next  -unless /g/i'
dump is not supported.
Design motivation
As the comments indicate, the logic is there for the sake of harsh shebang (#!) line constraints, which perl does its best to work around.
Interpreter scripts
An interpreter script is a text file that has execute permission enabled and whose first line is of the form:
#! interpreter [optional-arg]
The interpreter must be a valid pathname for an executable which is not itself a script. If the filename argument of execve specifies an interpreter script, then interpreter will be invoked with the following arguments:
interpreter [optional-arg] filename arg...
where arg... is the series of words pointed to by the argv argument of execve.
For portable use, optional-arg should either be absent, or be specified as a single word (i.e., it should not contain white space) …
Three things to know:
'-x y' means -xy to Perl (for some arbitrary options "x" and "y").
-xy, as common for unix tools, is a "bundle" representing -x -y.
-i, like -e, absorbs the rest of the argument. Unlike -e, it considers a space to be the end of the argument (as per #1 above).
That means
-ie'next unless /g/i'
which is just a fancy way of writing
'-ienext unless /g/i'
unbundles to
-ienext -u -n -l '-ess /g/i'
  ^^^^^             ^^^^^^^
  val for -i        val for -e
perlrun documents -u as quoted earlier: it causes Perl to dump core after compiling your program, for use with the (unsupplied) undump program.

Substituting environment variables in a file: awk or sed?

I have a file of environment variables that I source in shell scripts, for example:
# This is a comment
ONE=1
TWO=2
THREE=THREE
# End
In my scripts, I source this file (assume it's called './vars') into the current environment, and change (some of) the variables based on user input. For example:
#!/bin/sh
# Read variables
source ./vars
# Change a variable
THREE=3
# Write variables back to the file??
awk 'BEGIN{FS="="}{print $1=$$1}' <./vars >./vars
As you can see, I've been experimenting with awk (and sed too) for writing the variables back, without success: the last line of the script fails. Is there a way to do this with awk or sed (preferably preserving comments, even comments containing the '=' character)? Or should I combine 'read' with string cutting in a while loop or some other magic? If possible, I'd like to avoid perl/python and just use the tools available in Busybox. Many thanks.
Edit: perhaps a use case might make clear what my problem is. I keep a configuration file consisting of shell environment variable declarations:
# File: network.config
NETWORK_TYPE=wired
NETWORK_ADDRESS_RESOLUTION=dhcp
NETWORK_ADDRESS=
NETWORK_ADDRESS_MASK=
I also have a script called 'setup-network.sh':
#!/bin/sh
# File: setup-network.sh
# Read configuration
source network.config
# Setup network
NETWORK_DEVICE=none
if [ "$NETWORK_TYPE" == "wired" ]; then
NETWORK_DEVICE=eth0
fi
if [ "$NETWORK_TYPE" == "wireless" ]; then
NETWORK_DEVICE=wlan0
fi
ifconfig -i $NETWORK_DEVICE ...etc
I also have a script called 'configure-network.sh':
#!/bin/sh
# File: configure-network.sh
# Read configuration
source network.config
echo "Enter the network connection type:"
echo " 1. Wired network"
echo " 2. Wireless network"
read -p "Type:" -n1 TYPE
if [ "$TYPE" == "1" ]; then
# Update environment variable
NETWORK_TYPE=wired
elif [ "$TYPE" == "2" ]; then
# Update environment variable
NETWORK_TYPE=wireless
fi
# Rewrite configuration file, substituting the updated value
# of NETWORK_TYPE (and any other updated variables already existing
# in the network.config file), so that later invocations of
# 'setup-network.sh' read the updated configuration.
# TODO
How do I rewrite the configuration file, updating only the variables already existing in the configuration file, preferably leaving comments and empty lines intact? Hope this clears things up a little. Thanks again.
You can't have awk read from and write to the same file (that is part of your problem).
I prefer to rename the file before I rewrite it (but you can save to a tmp file and then rename, too).
/bin/mv file file.tmp
awk '.... code ...' file.tmp > file
If your env file gets bigger, you'll see that it is getting truncated at the buffer size of your OS.
Also, don't forget that gawk (the standard on most Linux installations) has a built-in array, ENVIRON. You can create what you want from that:
awk 'END {
    for (key in ENVIRON) {
        print key "=" ENVIRON[key]
    }
}' /dev/null
Of course you get everything in your environment, so maybe more than you want. But probably a better place to start with what you are trying to accomplish.
Edit
Most specifically
awk -F"=" '{
if ($1 in ENVIRON) {
printf("%s=%s\n", $1, ENVIRON[$1])
}
# else line not printed or add code to meet your situation
}' file > file.tmp
/bin/mv file.tmp file
Edit 2
I think your var=value lines might need to be export-ed so they are visible to the awk ENVIRON array.
AND
echo PATH=xxx| awk -F= '{print ENVIRON[$1]}'
prints the existing value of PATH.
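A quick illustration of the export requirement (an unexported variable is invisible to child processes, and hence to ENVIRON; the transcript is illustrative):
$ THREE=3
$ echo 'THREE=x' | awk -F= '{print "[" ENVIRON[$1] "]"}'
[]
$ export THREE
$ echo 'THREE=x' | awk -F= '{print "[" ENVIRON[$1] "]"}'
[3]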
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
I don't exactly know what you are trying to do, but if you are trying to change the value of the variable THREE:
awk -F"=" -vt="$THREE" '$1=="THREE" {$2=t}{print $0>FILENAME}' OFS="=" vars
You can do this with just bash:
rewrite_config() {
    local filename="$1"
    local tmp=$(mktemp)
    # if you want the header
    echo "# File: $filename" >> "$tmp"
    while IFS='=' read var value; do
        # skip blank lines and comments, which carry no variable to dump
        [[ -z "$var" || "$var" == \#* ]] && continue
        declare -p $var | cut -d ' ' -f 3-
    done < "$filename" >> "$tmp"
    mv "$tmp" "$filename"
}
Use it like
source network.config
# manipulate the variables
rewrite_config network.config
I use a temp file to maintain the existence of the config file for as long as possible.
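The non-obvious step is declare -p piped through cut; this is what it emits (bash transcript, illustrative):
$ THREE=3
$ declare -p THREE
declare -- THREE="3"
$ declare -p THREE | cut -d ' ' -f 3-
THREE="3"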