Passing Perl Data Structures as Serialized GET Strings to a Perl CGI program - perl

I want to pass a serialized Perl data structure as a GET variable to a CGI application. I tried Data::Serializer as my first option. Unfortunately the serialized string is too long for my comfort, in addition to containing options joined by '^' (a caret).
Is there a way I can create short encoded strings from perl data structures so that I can safely pass them as GET variables to a perl CGI application?
I would also appreciate being told that this (serialized, encoded strings) is a bad way to pass complex data structures to web applications, along with suggestions on how I could accomplish this instead.

If you need to send URLs to your users that contain a few key datapoints, and you want to ensure they can't be forged, you can do this with a digest (such as from Digest::SHA) and a shared secret. This lets you put the data out there in your messages without needing to keep a local database to track it all. My example doesn't include a time element, but that would be easy enough to add in if you want.
use Digest::SHA qw(sha1_base64);
my $base_url = 'http://example.com/thing.cgi';
my $email = 'guy@somewhere.com';
my $msg_id = '123411';
my $secret = 'mysecret';
my $data = join(":", $email, $msg_id, $secret);
my $digest = sha1_base64($data);
my $url = $base_url . '?email=' . $email . '&msg_id=' . $msg_id . '&sign=' . $digest;
Then send it along.
In your "thing.cgi" script you just need to extract the parameters and see if the digest submitted in the script matches the one you locally regenerate (using $email and $msg_id, and of course your $secret). If they don't match, don't authorize them, if they do then you have a legitimately authorized request.
Footnote:
I wrote the "raw" methods into Data::Serializer to make translating between serializers much easier and that in fact does help with going between languages (to a point). But that of course is a separate discussion as you really shouldn't ever use a serializer for exchanging data on a web form.

One of the drawbacks of the approach — using a perl-specific serializer, that is — is that if you ever want to communicate between the client and server using something other than perl, it will probably be more work than something like JSON or even XML would be. You've already run into the size limitations of GET requests, but that's problematic for any encoding scheme.
It's more likely to be a problem for the next guy down the road who maintains this code than it is for you. I have a situation now where a developer who worked on a large system before I did decided to store several important bits of data as perl Storable objects. Not a horrible decision in and of itself, but it's making it more difficult than it should be to access the data with tools that aren't written in perl.

Passing serialized encoded strings is a bad way to pass complex data structures to web applications.
If you are trying to pass state from page to page, you can use server side sessions which would only require you to pass around a session key.
If you need to email a link to someone, you can still create a server-side session with a reasonable expiry time (you'll also need to decide if additional authentication is necessary) and then send the session id in the link. You can/should expire the session immediately once the requested action is taken.
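As a sketch of what the session approach can look like (CGI::Session is one of several CPAN options; the driver, directory, and expiry here are illustrative, not prescribed):

use CGI;
use CGI::Session;

my $q       = CGI->new;
my $session = CGI::Session->new("driver:File", $q, { Directory => '/tmp/sessions' });

# Stash the complex data structure server-side...
$session->param('state', { step => 2, items => [ 1, 2, 3 ] });
$session->expire('+1h');    # reasonable expiry for an emailed link

# ...and pass only the session id around in the URL.
my $link = 'http://example.com/thing.cgi?sid=' . $session->id;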

Related

Decode base64 data found in mongodb change stream to human readable format

I am developing a small application to test the change stream functionality in MongoDB.
I have found that if one uses a client session, that information is included in the change stream output (change event).
For instance, here is the output when I insert a document:
{"txnNumber"=>1, "lsid"=>{"id"=><BSON::Binary:0x70310118878160 type=uuid data=0x05d30a0fa4db4f24...>, "uid"=><BSON::Binary:0x70310118878040 type=generic data=0x08e97261f57b1617...>}, "_id"=>{"_data"=>"8262D407C4000000022B022C0100296E5A100483BECD0AF46146E4A271EDAC0922356946645F6964006462D407C48C187B092534BD050004"}, "operationType"=>"insert", "clusterTime"=>#<BSON::Timestamp:0x00007fe4b351d4f8 #seconds=1658062788, #increment=2>, "fullDocument"=>{"_id"=>BSON::ObjectId('62d407c48c187b092534bd05'), "one"=>"one"}, "ns"=>{"db"=>"change_stream_testing", "coll"=>"testing"}, "documentKey"=>{"_id"=>BSON::ObjectId('62d407c48c187b092534bd05')}}
The "lsid"-field contains information about the session from which the write originated. After taking a closer look at this i found that it contains base64 encoded data (just doing a json.parse() on the id and uid fields)
ID IS
{"$binary":{"base64":"BdMKD6TbTySICrHNHE6GBA==","subType":"04"}}
UID IS
{"$binary":{"base64":"COlyYfV7FhdDV8hhDrSY7+10/NVCs/fLwkGrKMztex4=","subType":"00"}}
Now, the problem/question is that I can't decode that base64 string to something readable. Using an online decoder I get
У ¤ЫO$€ ±НN†
and
éraõ{CWÈa´˜ïítüÕB³÷ËÂA«(Ìí{
respectively, when using the "auto detect" feature or UTF-8, which MongoDB uses internally (according to a quick Google search).
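Those fields are raw binary rather than text: subType "04" in particular marks a binary UUID, which is why text decoders only produce gibberish. A minimal Perl sketch (decoding the "id" value shown above) renders the bytes as UUID-style hex instead:

use MIME::Base64 qw(decode_base64);

my $bytes = decode_base64('BdMKD6TbTySICrHNHE6GBA==');   # the "id" field above
print join('-', unpack 'H8 H4 H4 H4 H12', $bytes), "\n";
# 05d30a0f-a4db-4f24-... (matches the data=0x05d30a0fa4db4f24... hex in the change event)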
The reason I ask is that I have a use case where, in some cases, I would like to be able to identify where an event in the change stream originated, i.e. which client issued the write. The only way I've been able to more or less accomplish that, short of using the mutateFields operator to actually change the documents themselves and add some kind of marker I could inspect in the change stream code (which I ideally don't want to do), is to use explicit client sessions, which at least lets me know that whatever wrote the document was using an explicit session. But I would like to go further and actually decipher this session information to get some kind of unique identifier, if possible.

Sending a response without calling render() from a Mojolicious::Lite application

I am writing a "partial proxy" in Mojolicious::Lite. Certain requests (depending on the query path, and on the values of the parameters) generate a request to another server, while others are handled locally.
There is a nice proxy example, but it totally overrides the request/response handling and thus is not suitable for my needs.
Currently, I am marshalling the response via
$self->render(data => $res->body, code => $res->code);
Unfortunately, this does not take into account different content types. Using Mojolicious::Types does not help, because I need a reverse mapping from the content type in $res to the format in render(); besides, the number of possible render formats is significantly smaller than the number of possible content types. So ideally, instead of the $self->render() call above, I need a way to say "here, I got a response in $res; please return it to the client as is".
Any ideas? Thanks!
OK, the trick was to replace the render() call with
$self->tx->res($res);
$self->rendered($res->code);
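In context, a minimal Mojolicious::Lite sketch of that pass-through (the upstream URL and the routing condition are made up for illustration):

use Mojolicious::Lite;

get '/thing' => sub {
    my $c = shift;
    if ($c->param('remote')) {
        # Forward to the other server and relay its response verbatim,
        # content type, status code, and body included.
        my $res = $c->ua->get('http://backend.example.com/thing')->result;
        $c->tx->res($res);
        $c->rendered($res->code);
    }
    else {
        $c->render(text => 'handled locally');
    }
};

app->start;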

security code permutations; security methodology

I'm writing a Perl email subscription management app, based on a url containing two keycode parameters. At the time of subscription, a script will create two keycodes for each subscriber that are unique in the database (see below for script sample).
The codes will be created using Digest::SHA qw(sha256_hex). My understanding of it is that one way to ensure that codes are not duplicated in the database is to create a unique prefix in the raw data to be encoded. (see below, also).
Once a person is subscribed, I then have a database record for that person with two "code" fields, each containing a value that is unique in the database. Each value is a 64-character string of lowercase (only?) alphanumeric characters a-z and 0-9, e.g.:
code1: ae7518b42b0514d69ae4e87d7d9f888ad268f4a398e7b88cbaf1dc2542858ba3
code2: 71723cf0aecd27c6bbf73ec5edfdc6ac912f648683470bd31debb1a4fbe429e8
These codes are sent in newsletter emails as parameters in a subscription management URL. Thus, the person doesn't have to log in to manage their subscription, but can simply click the URL.
My question is:
If a subscriber tried to guess the values of the pair of codes for another person, how many possible combinations would there be to not only guess code1 correctly, but also guess code2? I suppose, like the lottery, a person could get lucky and just guess both; but I want to understand the odds against that, and its impact on security.
If the combo is guessed, the person would gain access to the database; thus, I'm trying to determine the level of security this method provides, compared to a more normal method of a username and 8 character password (which generically speaking could be considered two key codes themselves, but much shorter than the 64 characters above.)
I also welcome any feedback about the overall security of this method. I've noticed that many, many email newsletters seem to use similar keycodes, and don't require logging in to unsubscribe, etc. To me, the primary issue (besides ease of use) is that a person should not be able to unsubscribe someone else.
Thanks!
Peter (see below for the code generation snippet)
Note that each ID and email would be unique.
The password is a 'system' password, and would be the same for each person.
#
#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA qw(sha256_hex);

print `clear`;
srand;  # seed the RNG (recent perls do this automatically)

# Each subscriber's ID and email are unique; the password is a fixed
# 'system' secret shared by all records.
my $id       = 1;
my $email    = 'someone@domain.com';
my $tag      = ':!:';
my $password = 'z9.4!l3tv+qe.p9#';

# A random number up to 10^15 - 1, plus a time component, to make the
# raw data (and hence the digest) unique per subscriber.
my $rand_num = int(rand('9' x 15));
my $time     = time() * $id;

my $key_data   = $id . $tag . $password . $rand_num . $time;
my $key_code   = sha256_hex($key_data);
my $email_data = $email . $tag . $password . $time . $rand_num;
my $email_code = sha256_hex($email_data);

print qq~
ID: $id
EMAIL: $email
KEY_DATA: $key_data
KEY_CODE: $key_code
EMAIL_DATA: $email_data
EMAIL_CODE: $email_code
~;
exit;
#
This seems like a lot of complexity to guard against a 3rd party unsubscribing someone. Why not generate a random code for each user, and store it in the database alongside the username? The method you are using creates a long string of digits, but there isn't actually much randomness in it. SHA is a deterministic algorithm that thoroughly scrambles bits, but it doesn't add entropy.
For an N bit truly random number, an attacker will only have a 1/(2^N) chance of guessing it right each time. Even with a small amount of entropy, say, 64 bits, your server should be throttling unsubscribe requests from the attacking IP address long before the attacker gets significant odds of succeeding. They'd have better luck guessing the user's email password, or intercepting the unencrypted email in transit.
That is why the unsubscribe codes are usually short. There's no need for a long code, and a long URL is more likely to be truncated or mistyped.
If you're asking how difficult it would be to "guess" two 256-bit "numbers", getting the one specific person you want to hack, that'd be 2^512:1 against. If there are, say, 1000 users in the database, and the attacker doesn't care which one s/he gets, that's 2^512:1000 against - not a significant change in likelihood.
However, it's much simpler than that if your attacker is either in control of (hacked in is close enough) one of the mail servers from your machine to the user's machine, or in control of any of the routers along the way, since your email goes out in plain text. A well-timed hacker who saw the email packet go through would be able to see the URL you've embedded no matter how many bits it is.
As with many security issues, it's a matter of how much effort to put in vs the payoff. Passwords are nice in that users expect them, so it's not a significant barrier to send out URLs that then need a password to enter. If your URL were even just one SHA key combined with the password challenge, this would nearly eliminate a man-in-the-middle attack on your emails. Up to you whether that's worth it. Cheap, convenient, secure. Pick one. :-)
More effort would be to gpg-encrypt your email with the client's public key (not your private one). The obvious downside is that gpg (or pgp) is apparently so little used that average users are unlikely to have it set up. Again, this would entirely eliminate MITM attacks, and wouldn't need a password, as it basically uses the client-side gpg private key password.
You've essentially got 1e15 different possible hashes generated for a given user email id (once combined with other information that could be guessed). You might as well just supply a hex-encoded random number of the same length and require the 'unsubscribe' link to include the email address or user id to be unsubscribed.
I doubt anyone would go to the lengths required to guess a number from 1 to 1e15, especially if you rate limit unsubscribe requests, send a 'thanks, someone unsubscribed you' email if anyone is unsubscribed, and put a new subscription link into that.
A quick way to generate the random string is:
my $hex = join '', map { unpack 'H*', chr(rand(256)) } 1..8;
print $hex, "\n";
b4d4bfb26fddf220
(This gives you 2^64, or about 2*10^19 combinations. Or 'plenty' if you rate limit.)

Parse and display MIME multipart email on website

I have a raw email, (MIME multipart), and I want to display this on a website (e.g. in an iframe, with tabs for the HTML part and the plain text part, etc.). Are there any CPAN modules or Template::Toolkit plugins that I can use to help me achieve this?
At the moment, it's looking like I'll have to parse the message with Email::MIME, then iterate over all the parts, and write a handler for all the different mime types.
It's a long shot, but I'm wondering if anyone has done all this already? It's going to be a long and error prone process writing handlers if I attempt it myself.
Thanks for any help.
I actually just dealt with this problem a few months ago. I added an email feature to the product I work on, both sending and receiving. The first part was sending reminders to users, but we didn't want to manage the bounce-backs for our customer admins, so we decided to have a message inbox where the admins could see bounces and replies without us, and deal with adjusting email addresses if they needed to.
Because of this, we accept all email that is sent to an inbox we watch. We use VERP to associate an email with a user, and store the entire email as is in the database. Then, when the admin requests to see the email, we have to parse the email.
My first attempt was very similar to an earlier answer: if one of the parts is html, show it; if it's text, show it; otherwise, show the original, raw email. This broke down real fast with a few emails not generated by sendmail. Outlook, Exchange, and a few other email systems don't do that; they use multiparts to send the email. After a lot of digging and cussing, I discovered that the problem doesn't appear to be well documented. With the help of looking through MHonArc and reading the RFCs (RFC 2045 and RFC 2046), I settled on the solution below. I decided against using MHonArc, since I couldn't easily reuse its parsing and display functionality. I wouldn't say this is perfect, but it's been good enough that we used it.
First, take the message and use Email::MIME to parse it. Then call a function called get_part with the array of parts Email::MIME gives you with ->parts().
get_part, for each part it was passed, decodes the content type, looks it up in a hash, and if a handler exists, calls the function associated with that content type. If the decoder was able to give us something, it goes onto a result array.
The last piece of the puzzle is this decoder hash. Basically, it defines the content types I can deal with:
text/html
text/plain
message/delivery-status, which is actually also plain text
multipart/mixed
multipart/related
multipart/alternative
The non-multipart sections I return as is. With mixed, related and alternative, I merely call get_parts on that MIME node and return the results. Because alternative is special, it has some extra code after calling get_parts: it will return only the html part if it has an html part, or only the text part if it has a text part. If it has neither, it won't return anything valid.
The advantage of the hash of valid content types is that I can easily add logic for more parts as needed. And by the time get_parts is done, you should have an array of all the content you care about.
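A condensed sketch of that dispatch structure (the hash layout and helper names are reconstructed from the description above, not the author's actual code):

use Email::MIME;

my %decoders;
%decoders = (
    'text/plain'              => sub { $_[0]->body },
    'text/html'               => sub { $_[0]->body },
    'message/delivery-status' => sub { $_[0]->body },   # plain text in practice
    'multipart/mixed'         => sub { get_parts($_[0]->subparts) },
    'multipart/related'       => sub { get_parts($_[0]->subparts) },
    'multipart/alternative'   => sub { get_alternative($_[0]->subparts) },
);

sub get_parts {
    my @results;
    for my $part (@_) {
        # Drop parameters such as "; charset=utf-8" before the lookup.
        (my $type = lc($part->content_type || 'text/plain')) =~ s/;.*//;
        my $handler = $decoders{$type} or next;   # skip types we can't render
        push @results, $handler->($part);
    }
    return @results;
}

sub get_alternative {
    # Prefer the HTML part; fall back to plain text; otherwise return nothing.
    for my $want ('text/html', 'text/plain') {
        my @match = grep { ($_->content_type || '') =~ m{^\Q$want\E} } @_;
        return get_parts(@match) if @match;
    }
    return;
}

# Entry point: $raw_message is assumed to hold the stored raw email.
my @content = get_parts(Email::MIME->new($raw_message));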
One more item I should mention. As a part of this, we created a separate domain that actually serves these messages. The main domain that an admin works on will refuse to serve the message and redirect the browser to our user content domain. This second domain will only serve user content. This is to help the browser properly sandbox the content away from our main domain. See same origin policy (http://en.wikipedia.org/wiki/Same_origin_policy)
It doesn't sound like a difficult job to me:
use Email::MIME;
my $parsed = Email::MIME->new($message);
my @parts = $parsed->parts; # These will be Email::MIME objects, too.
print <<EOF;
<html><head><title>!</title></head><body>
EOF
for my $part (@parts) {
    # Ask each part (not the parent message) for its content type, and
    # strip parameters such as "; charset=utf-8" before comparing.
    (my $content_type = $part->content_type) =~ s/;.*//;
    if ($content_type eq "text/plain") {
        print "<pre>", $part->body(), "</pre>\n";
    }
    elsif ($content_type eq "text/html") {
        print $part->body();
    }
    # Handle some more cases here
}
print <<EOF;
</body></html>
EOF
Reuse existing complete software. The MHonArc mail-to-HTML converter has excellent MIME support.

Safe non-tamperable URL component in Perl using symmetric encryption?

OK, I'm probably just having a bad Monday, but I have the following need and I'm seeing lots of partial solutions but I'm sure I'm not the first person to need this, so I'm wondering if I'm missing the obvious.
$client has 50 to 500 bytes worth of binary data that must be inserted into the middle of a URL and roundtrip to their customer's browser. Since it's part of the URL, we're up against the 1K "theoretical" limit of a GET URL. Also, $client doesn't want their customer decoding the data, or tampering with it without detection. $client would also prefer not to store anything server-side, so this must be completely standalone. Must be Perl code, and fast, in both encoding and decoding.
I think the last step can be base64. But what are the steps for encryption and hashing that make the most sense?
I have some code in a Catalyst app that uses Crypt::Util to encode/decode a user's email address for an email verification link.
I set up a Crypt::Util model using Catalyst::Model::Adaptor with a secret key. Then in my Controller I have the following logic on the sending side:
my $cu = $c->model('CryptUtil');
my $token = $cu->encode_string_uri_base64( $cu->encode_string( $user->email ) );
my $url = $c->uri_for( $self->action_for('verify'), $token );
I send this link to the $user->email and when it is clicked on I use the following.
my $cu = $c->model('CryptUtil');
if ( my $id = $cu->decode_string( $cu->decode_string_uri_base64($token) ) ) {
# handle valid link
} else {
# invalid link
}
This is basically what edanite just suggested in another answer. You'll just need to make sure that, whatever data you form the token from, the final $url doesn't exceed your arbitrary limit.
Create a secret key and store it on the server. If there are multiple servers and requests aren't guaranteed to come back to the same server; you'll need to use the same key on every server. This key should be rotated periodically.
If you encrypt the data in CBC (Cipher Block Chaining) mode (See the Crypt::CBC module), the overhead of encryption is at most two blocks (one for the IV and one for padding). 128 bit (i.e. 16 byte) blocks are common, but not universal. I recommend using AES (aka Rijndael) as the block cipher.
You need to authenticate the data to ensure it hasn't been modified. Depending on the security of the application, just hashing the message and including the hash in the plaintext that you encrypt may be good enough. This depends on attackers being unable to change the hash to match the message without knowing the symmetric encryption key. If you're using 128-bit keys for the cipher, use a 256-bit hash like SHA-256 (you can use the Digest module for this). You may also want to include some other things like a timestamp in the data to prevent the request from being repeated multiple times.
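As a concrete sketch of that hash-and-timestamp idea (the key variable, field layout, and separator are illustrative assumptions, not a fixed format):

use Crypt::CBC;
use Digest::SHA qw(sha256);

my $key    = $server_secret_key;    # stored and rotated server-side, as above
my $cipher = Crypt::CBC->new(-key => $key, -cipher => 'Rijndael');

# Prepend a timestamp and a 32-byte SHA-256 digest so tampering (and
# replay beyond some window) can be detected after decryption.
my $payload = join "\0", time(), $data;
my $token   = $cipher->encrypt(sha256($payload) . $payload);

# On the way back in:
my $plain  = $cipher->decrypt($token);
my $digest = substr($plain, 0, 32);
my $rest   = substr($plain, 32);
die "data was tampered with\n" unless $digest eq sha256($rest);
my ($timestamp, $original) = split /\0/, $rest, 2;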
I see three steps here. First, try compressing the data. With so little data, bzip2 might save you maybe 5-20%. I'd throw in a guard to make sure it doesn't make the data larger. This step may not be worthwhile.
use Compress::Bzip2 qw(:utilities);
$data = memBzip $data;
You could also try reducing the length of any keys and values in the data manually. For example, first_name could be reduced to fname.
Second, encrypt it. Pick your favorite cipher and use Crypt::CBC. Here I use Rijndael because it's good enough for the NSA. You'll want to do benchmarking to find the best balance between performance and security.
use Crypt::CBC;
my $key = "SUPER SEKRET";
my $cipher = Crypt::CBC->new($key, 'Rijndael');
my $encrypted_data = $cipher->encrypt($data);
You'll have to store the key on the server. Putting it in a protected file should be sufficient, securing that file is left as an exercise. When you say you can't store anything on the server I presume this doesn't include the key.
Finally, Base 64 encode it. I would use the modified URL-safe base 64 which uses - and _ instead of + and / saving you from having to spend space URL encoding these characters in the base 64 string. MIME::Base64::URLSafe covers that.
use MIME::Base64::URLSafe;
my $safe_data = urlsafe_b64encode($encrypted_data);
Then stick it onto the URL however you want. Reverse the process for reading it in.
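Reading it back in is the same pipeline reversed (this sketch assumes the $cipher object from above, and that the optional bzip2 step was applied on the way out):

use MIME::Base64::URLSafe;
use Compress::Bzip2 qw(:utilities);

my $encrypted_data = urlsafe_b64decode($safe_data);
my $data = $cipher->decrypt($encrypted_data);
$data = memBunzip $data;    # only if it was compressed before encryption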
You should be safe on size. Encrypting will increase the size of the data, but probably by less than 25%. Base 64 will increase the size of the data by a third (encoding as 2^6 instead of 2^8). This should leave encoding 500 bytes comfortably inside 1K.
How secure does it need to be? Could you just xor the data with a long random string then add an MD5 hash of the whole lot with another secret salt to detect tampering?
I wouldn't use that for banking data, but it'd probably be fine for most web things...