My friend challenged me to figure out what encoding he used, and the only hint he gave me was "MD5" (it's not just a plain MD5 hash, and I'm still only learning about encryption). It's a challenge I eventually lost, but he still won't tell me what he used.
He has code that authenticates his encoded values as genuine each time, so I was hoping someone here could tell me what encoding was used and how worthwhile it would be for me to learn it and use it in some of my future applications.
Here are a few samples of the strings he is authenticating against:
pkG5llUrdxRcfe2wYLFawQ==
Zy6mwGZ9cNk3UyHvVOqpWg==
SdG3KvuLp/902XTiSlM+NQ==
mutGi6BcdPvv/z6j1B3/KQ==
pkG5llUrdxRcfe2wYLFawQ==
FJy+7HiRVlbsjntIign+PQ==
tYa63Ko4SS9WaGMUTdSkFg==
tgVrNGfztjS8Tony9swqUw==
XrhfcKqnNlPBtH8hXswEDA==
v4H+13J+Eh1ERpsGSIHYdQ==
+AV1IoAyU8DkTv6Ry+GH+g==
If someone could tell me what these are, I would really appreciate it.
Also, as a bonus, because this clearly isn't driving me crazy enough: why are there two equal signs at the end of each encrypted string? Is that part of some format, or are they just there to distract me?
EDIT: One point I should make is that I know the original message; what I don't understand is how the one message can produce a different hash every time.
These look like raw MD5 results, Base64-encoded... but really they could be anything; there is no way to tell, just by looking at 16 bytes, what produced them or whether they even mean anything.
If these are all the result of the same 74-byte message, then that most likely means HMAC-MD5 with a different key each time. Either way, it is not reversible.
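On the "==" question: that is just Base64 padding. A 16-byte digest encodes to 22 Base64 characters plus two padding characters, 24 in total, which is exactly the shape of the samples above. Here is a small Java sketch (the message and keys are made up) showing that, and showing how HMAC-MD5 with different keys turns the same message into completely different strings:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class Md5Demo {
    public static void main(String[] args) throws Exception {
        byte[] message = "any fixed message goes here".getBytes(StandardCharsets.UTF_8);

        // Plain MD5: 16 bytes -> 24 Base64 characters, the last two being '=' padding.
        byte[] md5 = MessageDigest.getInstance("MD5").digest(message);
        System.out.println(Base64.getEncoder().encodeToString(md5));

        // HMAC-MD5: same message, different keys -> completely different 16-byte digests.
        for (String key : new String[] {"key-one", "key-two"}) {
            Mac mac = Mac.getInstance("HmacMD5");
            mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacMD5"));
            System.out.println(Base64.getEncoder().encodeToString(mac.doFinal(message)));
        }
    }
}
```

Every line of output is 24 characters ending in ==, so the padding tells you nothing about what produced the bytes.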
I am trying to recover a Word document that was saved in the 97-2003 (.doc) format. I do not remember why I did this, but I saved off the Compound File streams into a folder, and it is definitely something I would like to have back as the original document. Is there any solution for this?
Thanks!
Sure, you should theoretically be able to restore it.
You should be able to create a root compound document using
StgCreateDocfile() // or StgCreateStorageEx()
You will need the IStorage* pointer from this if it succeeds.
If there are sub-folders inside your folder, you will have to create a sub-storage for each one, and do it recursively. The API is IStorage::CreateStorage(); look it up.
For each file in a folder, a stream needs to be created in the storage that corresponds to that folder. To create a stream, use IStorage::CreateStream(); look up its arguments.
Looking at your screenshot, it has some streams named something like [1]CompObj or [5]SummaryDocumentInformation. The [n] part is probably the equivalent of _T('\00n') where n is 1 or 5 or whatever, i.e. a control character; I've seen that in compound files. If you want to investigate, create a Word 97-2003 document, save it as a .doc file, and examine its structure.
So, something like [1]CompObj is really _T("\001CompObj")
The stuff above about [n] in your file names is an educated/experienced guess.
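If writing COM code is not an option, the same recursive walk can be sketched with Apache POI's POIFS classes instead. This is an alternative to the Win32 approach described above, not a drop-in for it; the dump folder name and the handling of the [n] prefix below are assumptions:

```java
import org.apache.poi.poifs.filesystem.DirectoryEntry;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RebuildDoc {
    private static final Pattern PREFIX = Pattern.compile("^\\[(\\d+)\\](.*)");

    public static void main(String[] args) throws Exception {
        POIFSFileSystem fs = new POIFSFileSystem();          // new, empty compound file
        copyInto(new File("dumped-streams"), fs.getRoot());  // assumed name of your dump folder
        try (FileOutputStream out = new FileOutputStream("recovered.doc")) {
            fs.writeFilesystem(out);
        }
    }

    // Mirror the on-disk folder tree: sub-folder -> sub-storage, file -> stream.
    private static void copyInto(File dir, DirectoryEntry storage) throws Exception {
        for (File child : dir.listFiles()) {
            String name = child.getName();
            Matcher m = PREFIX.matcher(name);
            if (m.matches()) {
                // Assumption: "[1]CompObj" in the dump stands for "\001" + "CompObj".
                name = (char) Integer.parseInt(m.group(1)) + m.group(2);
            }
            if (child.isDirectory()) {
                copyInto(child, storage.createDirectory(name));
            } else {
                try (FileInputStream in = new FileInputStream(child)) {
                    storage.createDocument(name, in);
                }
            }
        }
    }
}
```

Whether Word will open the result depends on every stream coming back byte-identical, so work on a copy of the dump while experimenting.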
At my work I wrote a little parser for C-like expressions in one of our in-house Perl tools. I initially chose Parse::RecDescent because of its extreme ease of use and straightforward grammar syntax, but I'm finding that it's excessively slow (which is corroborated by general opinion found on the web). It's safe to assume that the grammar of the expressions is no more complicated than that of C.
What are the fastest lexer/parser modules (but still with a straightforward, uncumbersome grammar format) for the use case of thousands of simple expressions (I'd guesstimate the median length is 1 token, the mean is 2 or so, and the max is 30)? Additionally, thanks to unsavory IT choices, it must work in Perl 5.8.8, and it and any non-core dependencies must be pure Perl.
Parse::Eyapp looks like it satisfies the 5.8.8, pure-Perl, and dependency requirements. As for speed, it generates LALR parsers, which should be faster than recursive descent. A grammar for expressions is given in the documentation. Hope it helps.
Could you please recommend Java libraries for text preprocessing and clean-up? The library should perform tasks such as:
convert all verbs to infinitive
convert all nouns to singular form
remove useless (for the sense of a text) words
Converting words to canonical forms (verbs to infinitives and nouns to singular, for example) is called lemmatization. One Java-based lemmatizer is Stanford CoreNLP.
For "useless words" you probably want "stop words". There is no standard list, but there are plenty floating around the Internet that work in more or less the same way, the only difference being how many words they include (typically between 100 and 1000). I've known people to use this list before. When removing stop words, remember to ignore case when looking for matches.
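A minimal sketch of both steps with Stanford CoreNLP; the stop-word set here is a tiny made-up sample (swap in whichever list you choose), and the annotator setup follows the standard CoreNLP pipeline pattern:

```java
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class Preprocess {
    public static void main(String[] args) {
        // Tiny illustrative stop-word list; real lists have 100-1000 entries.
        Set<String> stopWords = new HashSet<>(Arrays.asList("the", "a", "an", "be", "of"));

        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation("The cats are chasing the mice");
        pipeline.annotate(doc);

        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                String lemma = token.get(CoreAnnotations.LemmaAnnotation.class); // "cats" -> "cat", "are" -> "be"
                if (!stopWords.contains(lemma.toLowerCase())) {                  // case-insensitive stop-word check
                    System.out.println(lemma);
                }
            }
        }
    }
}
```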
Not sure if this does everything you need, but check out mrsqg.
http://code.google.com/p/mrsqg/
A Google search gave me methods for validating form fields, but I can already construct those. My question is: which characters are safe to allow in a form field, and which should be excluded? Specifically, for a username and a password.
A brief explanation would be nice too.
Thanks.
You need to exclude all characters that will never appear in your data.
Would it make any sense to allow special characters if your usernames/passwords may only contain alphanumeric characters?
Look at some regular expressions for Java or for PHP.
There is a regexp reference table which could be useful too.
If you give us more information about the language you are using, we may be able to help you more.
Have a good day!
[UPDATE]
There is the security reference, which is very good, and the OWASP website, which is a real reference for any web-security topic; look at the OWASP Cheat Sheets.
**Cross-Site Scripting vulnerabilities?**
For any programming language, the characters you should reject or handle properly are: > < ( ) [ ] ' " ; : / |
For PHP, tools to handle with care: strip_tags(), utf8_decode(), htmlspecialchars(), strtr()
Do positive/negative filtering.
Check the encoding.
**SQL injection?**
etc...
[/UPDATE]
If you properly sanitize your input and output, there's nothing you need to be afraid of.
Note: I'm assuming you're using PHP as your server side language.
First, use PDO (or MySQLi) with prepared statements, to eliminate the risk of SQL Injection.
Second, anything that will be displayed on your site should be sanitized against XSS attacks (so that users don't register a username of <script>doSomeEvilStuff()</script>).
That's basically it. If you're really paranoid, you should use a whitelist (only allow certain characters) rather than a blacklist (only disallow certain characters), since someone will always find a way around a blacklist, but no one can bypass a whitelist.
For usernames, I don't see the need for anything more than /[a-zA-Z0-9_.\s!$%^&*\-+=]/. You may think otherwise. In any case, don't allow /[`<>(){}[]]/.
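To make the whitelist idea concrete, here is a small sketch (written in Java purely for illustration, since the question never fixes a language; in PHP the same check is a single preg_match() call). The allowed set is just the one suggested above:

```java
import java.util.regex.Pattern;

public class UsernameCheck {
    // Whitelist: the username may contain ONLY these characters (set taken from the answer above).
    private static final Pattern ALLOWED =
            Pattern.compile("^[a-zA-Z0-9_.\\s!$%^&*\\-+=]+$");

    static boolean isValidUsername(String name) {
        // Reject anything outside the whitelist instead of trying to enumerate "bad" characters.
        return name != null && !name.isEmpty() && ALLOWED.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice_99"));                            // true
        System.out.println(isValidUsername("<script>doSomeEvilStuff()</script>")); // false
    }
}
```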
Is there any PGP key dumping program like http://www.pgpdump.net/ that also shows the MPI values along with the other information? The linked site's program prints ... for the long MPIs, which is perfectly logical, but I want to see the values: my program is for some reason getting all but one part right (reading an ElGamal public key), and that is messing up everything that comes afterwards. I want to see where I'm off by a few bits.
gpg --list-packets --debug-all should show MPI values.
pgpdump.net links to the source code of pgpdump. Perhaps you could find the part where ... is written and change it in a local copy of the program.
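For reference while debugging: in the OpenPGP format (RFC 4880, section 3.2) an MPI is a two-octet big-endian bit count followed by exactly ceil(bits/8) magnitude octets. A small Java sketch of reading one from a packet body (the surrounding packet parsing is assumed to be your own):

```java
import java.math.BigInteger;

public class MpiReader {
    /**
     * Reads one OpenPGP MPI starting at 'offset' in 'packet'.
     * Layout (RFC 4880 sec. 3.2): 2-octet big-endian bit length, then ceil(bits/8) octets.
     */
    static BigInteger readMpi(byte[] packet, int offset) {
        int bitLength = ((packet[offset] & 0xFF) << 8) | (packet[offset + 1] & 0xFF);
        int byteLength = (bitLength + 7) / 8;            // forgetting to round up is a classic bug here
        byte[] magnitude = new byte[byteLength];
        System.arraycopy(packet, offset + 2, magnitude, 0, byteLength);
        return new BigInteger(1, magnitude);             // 1 = treat the bytes as a positive value
    }

    /** The next MPI (or field) starts right after this one. */
    static int mpiEnd(byte[] packet, int offset) {
        int bitLength = ((packet[offset] & 0xFF) << 8) | (packet[offset + 1] & 0xFF);
        return offset + 2 + (bitLength + 7) / 8;
    }
}
```

Reading the bit count with the wrong endianness, or computing the byte length without rounding up, shifts every field that follows, which would match the symptom of one part being wrong and everything after it breaking.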