how to retrieve a perl file using wget and execute it using a one-liner? - perl

I'm looking to use wget to retrieve a perl file and execute it in one line. Does anyone know if this is possible/how I would go about doing this?

In order to use wget for this purpose, you would use the -O flag and give it the '-' character as an argument. From the manpage:
-O file
--output-document=file
Giving '-' as the "file" argument to -O tells it to send its output to stdout, which can then be piped into the perl command.
You can provide the -q flag as well to turn off wget's own warning and message output:
-q
--quiet
Turn off Wget's output.
This will make things look cleaner in the shell.
So you would end up with something like:
wget -qO - http://127.0.0.1/myscript.pl | perl -
For more information on I/O redirection take a look at this:
http://www.tldp.org/LDP/abs/html/io-redirection.html
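If the script itself takes command-line arguments, anything placed after the - is handed to it in @ARGV, for example (the URL and arguments here are placeholders):
# hypothetical URL and arguments; perl reads the program from stdin ("-")
# and passes the remaining words to the script as @ARGV
wget -qO - http://127.0.0.1/myscript.pl | perl - arg1 arg2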

Just download and pipe to perl
curl -L http://your_location.pl | perl -
You'll sometimes see code like this used for installing modules, for example with cpanm.
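For instance, the cpanm installer is commonly bootstrapped with a pipe like this (the URL is the one published by the cpanm project; verify it yourself before piping anything into perl):
# downloads the cpanm bootstrap script and runs it to install App::cpanminus
curl -L https://cpanmin.us | perl - App::cpanminus
Keep in mind that piping a downloaded script straight into an interpreter runs it sight unseen, so only do this with sources you trust.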

Related

Running perl files from a text file

There are multiple perl scripts that are run from the Cygwin terminal. An example is:
$ perl IdGeneratorTool.pl JSmith -i userInfo.adb -o JSmith.txt
The above is an example where, based on the input parameter JSmith, it reads a db file, generates an ID and outputs it to a text file.
Now the list of these perl invocations run on Cygwin keeps growing, and they are added to a text file as shown below:
$ perl IdGeneratorTool.pl JSmith -i userInfo.adb -o JSmith.txt
$ perl IdGeneratorTool.pl PTesk -i userInfo.adb -o PTesk.txt
$ perl IdGeneratorTool.pl CMorris -i userInfo.adb -o CMorris.txt
$ perl IdGeneratorTool.pl JLawrence -i userInfo.adb -o JLawrence.txt
$ perl IdGeneratorTool.pl TCruise -i userInfo.adb -o TCruise.txt
...
And the list keeps growing.
I would like to know whether there's a way to execute all these perl scripts which are in a text file in one go.
I'm new to perl and don't have much of an idea what the options are.
An ideal scenario might be a tool where I can open this text file and click an execute button, and it then executes all the scripts and outputs multiple *.txt files into the same directory.
Or maybe a simple perl script that can do it.
Put them into a file makeall (or whatever you want to call it).
Put #!/bin/bash as the first line of the file.
In Cygwin enter chmod +x makeall
In Cygwin enter ./makeall
With this you've created a bash script that will do all your calls of the perl script.
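In other words, makeall would simply contain the same commands already listed in the text file, minus the $ prompt, for example:
#!/bin/bash
perl IdGeneratorTool.pl JSmith -i userInfo.adb -o JSmith.txt
perl IdGeneratorTool.pl PTesk -i userInfo.adb -o PTesk.txt
perl IdGeneratorTool.pl CMorris -i userInfo.adb -o CMorris.txt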
Another option would be to just put all the user information into a CSV file and read that in order to call your script.
WAIT! Even easier!
Put into the makeall script this:
#!/bin/bash
for user in \
    JSmith \
    PTesk \
    CMorris \
    JLawrence \
    TCruise \
; do
    perl IdGeneratorTool.pl "$user" -i userInfo.adb -o "$user".txt
done
Now you just need to add any additional user the same way I did for your examples.
Without seeing the source for IdGeneratorTool.pl it's hard to give any specific advice; but it is generally not hard to turn something like
do_stuff($ARGV[0], $opt_i, $opt_o);
into
while (<>) {
    chomp;
    my ($user, $adb, $outputfile) = split /\t/;
    do_stuff($user, $adb, $outputfile);
}
to read the input from a tab-delimited file instead of from command-line arguments.
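With that change the tool could be driven from a data file instead of one command per user; a rough sketch of how it might be invoked (the file name users.tsv and its layout of user, adb file and output file per tab-separated line are assumptions):
# build a hypothetical tab-separated input file and feed it to the modified tool
printf 'JSmith\tuserInfo.adb\tJSmith.txt\n' > users.tsv
printf 'PTesk\tuserInfo.adb\tPTesk.txt\n' >> users.tsv
perl IdGeneratorTool.pl users.tsv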
You can create a text file with a list of users (one per line), for example user_list.txt:
JSmith
PTesk
CMorris
JLawrence
TCruise
Then create a bash script process_list.sh with the following content in the same directory:
#!/bin/bash
for user in `cat user_list.txt`
do
    perl IdGeneratorTool.pl $user -i userInfo.adb -o ${user}.txt
done
Now make the bash script executable with chmod +x process_list.sh and it is ready for execution.
When you need to add a new user, edit user_list.txt and add one more line to the file.
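Adding a user and re-running then looks something like this (NewUser is a placeholder):
echo "NewUser" >> user_list.txt
./process_list.sh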

Searching through many pcap files with tcpdump

I have a bunch of pcap files that I got with tcpdump. I need to search through all of them for specific keywords and record which files contain these strings. Is there a way to automate the search for these keywords using a tcpdump command perhaps?
Probably the most generic solution using tshark would be to run something like:
tshark -r file.pcap -Y "frame contains foo"
... where foo is the string you're searching for. Refer to the wireshark-filter man page for more information on filtering using the contains and other operators, such as the matches operator which supports Perl compatible regular expressions.
Using that command, the output you'll see will be a 1-line summary of each packet matching the filter. You could tailor the output using a number of methods, but for example, suppose you only wanted to know the frame number of the matching packet, you could run:
tshark -r file.pcap -Y "frame contains foo" -T fields -e frame.number
Refer to the tshark man page for more information on the -T and -e options, as well as other options which may be of use to you.
There is a more powerful version of tcpdump, tshark (the command-line tool from the Wireshark package). You can use tshark -T fields|pdml|ps|psml|text to dump packets in whichever format you like, and just grep the output. tshark can read tcpdump capture files.
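Either answer can be wrapped in a small shell loop to cover a whole directory of captures and record which files matched, e.g. (a sketch; the search string foo and the matches.txt file name are placeholders):
# list every pcap in the current directory whose packets contain the string "foo"
for f in *.pcap; do
    matches=$(tshark -r "$f" -Y "frame contains foo" 2>/dev/null)
    [ -n "$matches" ] && echo "$f" >> matches.txt
done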

run a prolog code with swipl in a command line

I am searching for the swipl equivalent of the perl -e feature.
In particular, I want to run prolog code in this fashion:
swipl --wanted-flag "fact(a). message:-writeln('hello')." -g "message" -t halt
This is possible to do with
swipl -f file -g "message" -t halt
where the prolog clauses are written in file
I am running swipl on the server side that takes user input as prolog clauses, therefore writing a file on the server is not a good idea.
One thing you can do is to use load_files/2 with the option stream, and load from standard input, not from an argument (you can still pass the entry point as an argument, I guess):
Say in a file fromstdin.pl you have:
main :-
    load_files(stdin, [stream(user_input)]),
    current_prolog_flag(argv, [Goal|_]),
    call(Goal),
    halt.
main :- halt(1).
and with this you can do:
$ echo 'message :- format("hello~n").' | swipl -q -t main fromstdin.pl -- message
|: hello
The comments by #false to this answer and the question will tell you what this |: is, if you are wondering, but if it annoys you, just do:
$ echo 'message :- format("hello~n").' \
| swipl -q -t main fromstdin.pl -- message \
| cat
hello
instead.
This will let you read any Prolog from standard input and call an arbitrary predicate from it. Whether this is a clever thing to do, I don't know. I would also not be surprised if there is a much easier way to achieve the same.

how to use the -O flag in wget with -i?

I understand that the -i flag takes a file (which may contain a list of URLs), and I know that -O followed by a name can be used to rename an item being downloaded with wget.
example:
wget -i list_of_urls.txt
wget -O my_custom_name.mp3 http://example.com/some_file.mp3
I have a file that looks like this:
file name: list_of_urls.txt
http://example.com/some_file.mp3
http://example.com/another_file.mp3
http://example.com/yet_another_file.mp3
I want to use wget to download these files with the -i flag but also save each file as 1.mp3, 2.mp3 and so on.
Can this be done?
You can use any scripting language (PHP or Python) to generate a batch file. In this batch file, each line would run wget with a url and the -O option.
Or you can try writing a loop in a bash script.
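A minimal bash loop along those lines, assuming the list_of_urls.txt from the question and the sequential names 1.mp3, 2.mp3, ...:
#!/bin/bash
# read URLs one per line and save them as 1.mp3, 2.mp3, ...
i=1
while read -r url; do
    wget -O "${i}.mp3" "$url"
    i=$((i + 1))
done < list_of_urls.txt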
I ran a web search again and found https://superuser.com/questions/336669/downloading-multiple-files-and-specifying-output-filenames-with-wget
Wget can't seem to do it, but curl can with the -K flag; the file supplied can contain url and output entries. See http://curl.haxx.se/docs/manpage.html#-K
If you are willing to use some shell scripting then https://unix.stackexchange.com/questions/61132/how-do-i-use-wget-with-a-list-of-urls-and-their-corresponding-output-files has the answer.
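For the curl -K approach mentioned above, the config file pairs each url with an output entry; a sketch based on the files from the question (the file name download.cfg is an assumption):
# download.cfg -- run with: curl -K download.cfg
url = "http://example.com/some_file.mp3"
output = "1.mp3"
url = "http://example.com/another_file.mp3"
output = "2.mp3"
url = "http://example.com/yet_another_file.mp3"
output = "3.mp3"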

Why does wget still print to stderr when there are no errors and -nv is used?

I have a question illustrated by the following command line interaction:
$ wget www.google.com -nv >> out.log
2014-10-28 21:41:43 URL:http://www.google.com/ [17700] -> "index.html.1" [1]
So I ran wget www.google.com with -nv (non-verbose, but still printing error information) and redirected all the output to out.log, so nothing should print on stdout, but information still gets printed to the terminal, which I can only assume is coming from stderr. Does anyone know why wget does that? How would I go about turning it off while still preserving error logging when there are actual errors?
Like the manual says, the option you are looking for is -q. "Non-verbose" merely turns off verbose status reporting.
The somewhat weird design decisions in wget are one reason to prefer curl.
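If you do go with -q, wget prints nothing at all, so error detection has to come from the exit status instead, for example:
# -q suppresses all of wget's output; check the exit status to spot failures
wget -q http://www.google.com || echo "wget failed" >&2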
Use cURL instead:
$ curl -Ss http://www.stackoverflow.com -o /dev/null
(no output)
$ curl -Ss http://www.stackoverflow.invalid -o /dev/null
curl: (6) Couldn't resolve host 'www.stackoverflow.invalid'
If for whatever reason you really need to use wget, you can capture the output and only show it on failure:
errors=$(2>&1 wget -nv http://www.stackoverflow.com) || echo "$errors" >&2