I am fairly new to hashing. I have generated hashes for values using SHA-256 and a self-made method of generating salts, which involves deterministically finding the seed for each input to the random salt generator. Now I would like to verify the integrity of the hash values and also check the effectiveness of the salt added. However, I do not know how to go about doing this. Can anyone help with this?
Each operating system has a random source and offers methods to get random bytes from it. Most applications cannot generate cryptographically secure salts on their own, which is why they read from the random source. Testing this source is done elsewhere; there is no need to test such a salt generator in your application, just call the method the operating system provides.
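On the JVM, for example, that boils down to a few lines. A minimal sketch (the 16-byte length and the class name are my choices, not something mandated):

import java.security.SecureRandom;
import java.util.Base64;

public class SaltExample {
    // SecureRandom reads from the platform's random source
    // (e.g. /dev/urandom on Linux).
    private static final SecureRandom RANDOM = new SecureRandom();

    public static byte[] newSalt() {
        byte[] salt = new byte[16]; // 128 bits is a common salt size
        RANDOM.nextBytes(salt);
        return salt;
    }

    public static void main(String[] args) {
        System.out.println(Base64.getEncoder().encodeToString(newSalt()));
    }
}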
Verifying the correctness of your code is a job for unit testing:
...a software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures are tested to determine if they are fit for use. Intuitively, one can view a unit as the smallest testable part of an application.
Unit testing is aided by testing frameworks, which can examine your code base, find anything marked as a test and run it on demand (i.e. when you tell it to) and/or automatically as part of a build or deploy process. The goal is to gain some level of guarantee that changes you've made to the code under test won't break your application. The test should cover combinations of inputs and their expected outputs, thus enforcing behavior.
Popular testing frameworks include JUnit for Java, NUnit for .NET languages, and many others.
Anyway, to test the integrity of your hashing scheme, you would hopefully have a single method that encapsulates all the behavior. Your unit test would cover this one method[1]. Written in Java JUnit syntax (with Hamcrest matchers, which you can see more about here), it might look something like this:
import static org.hamcrest.CoreMatchers.equalTo;
import static org.junit.Assert.assertThat;

import org.junit.*;

public class PasswordSha256HasherTest {

    // Known inputs and outputs:
    // hashes are precomputed with an external SHA-256 calculator and verified.
    public static final String password = "thisismypassword";
    public static final String unsaltedHash = "1da9133ab9dbd11d2937ec8d312e1e2569857059e73cc72df92e670928983ab5";
    public static final String salt = "a19044b439748e23ccbc82c5e2549ece5ceae026340ac6cb1f616549e64020e3";

    // This is what we want: SHA256(salt + password)
    public static final String saltedHash = "6c89a97409d598650e458a2040e900abbeca8e697137353fdaae97e382378433";

    // If your hasher has to be instantiated, do it like this
    private static PasswordHasher hasher;

    @BeforeClass
    public static void beforeClass() throws Exception {
        hasher = new PasswordHasher();
    }

    @Test
    public void testCorrectPasswordHash() throws Exception {
        String hash = hasher.getPasswordHash(password, salt);
        // If PasswordHasher is a static class, you can skip the instantiation
        // and call PasswordHasher.getPasswordHash(password, salt) instead.
        assertThat("Hash should match SHA256(salt + password)",
                hash, equalTo(saltedHash));
        // You could also test that the hash doesn't match the password or the salt
    }

    @Test(expected = IllegalArgumentException.class)
    public void testSaltIsRequired() throws Exception {
        // If the exception for an empty salt is NOT thrown, this test fails.
        hasher.getPasswordHash(password, "");
    }
}
This is just a start, but this would accomplish the goal of your question - verify the integrity of your hashing utility. There are many other tests you could surely write (verify password complexity requirements, disallow dictionary words, salt length requirements, etc.). Some advocates of testing maintain that writing useful and meaningful tests requires as much thought and dedication as designing the code itself. There is even the philosophy of Test Driven Development, which states you should write tests first (which describe and encapsulate all the requirements of the code), and stop writing code as soon as all the tests pass. Both of these have their merits.
I also encourage you to check out the Google testing blog if you're interested in learning from (probably) some of the best testers on the planet.
Finally, on a related note, I want to drive home my point that salts should be non-deterministic and globally unique. The easiest way to accomplish this is a sufficiently large pseudo-random value that is completely independent of everything to do with your system and your users, such as a GUID. This question tackles the exact topic, as does this question on Security.SE. As the common adage on the internet goes, if you're doing your own thing with cryptography, you are almost certainly wrong.
[1]: There is some debate over the proper scope of a unit test. Some might say that the lowest meaningful unit of work for a login system is the creation and authorization of a user; specifically, that the mechanism of securing the password is only one part of the larger authorization scheme and should therefore be tested together with it. To them I say that testing each part separately makes the process of finding and correcting specific bugs easier and more straightforward.
As Martin says, you're best off using an OS-provided pseudo-random value as a salt, from a function that's documented to be appropriate for cryptographic use.
Separately, if you want to assess your own salt routine, the criterion for quality is that it should produce a vast range of possible outputs, with the levels of grouping, scatter and repetition that you'd get from random numbers, even if the inputs to the salt calculation differ in only a bit or two. That is pretty much the same criterion as for a hash function, though you probably don't need it to be secure against deriving the original inputs from the salt value. A rough check is sketched below.
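As a crude smoke test, you could feed your routine many near-identical inputs and look at the uniqueness and spread of the outputs. A sketch only; saltFor is a placeholder for your own salt routine, and here it just returns random bytes so the harness runs and shows what a well-spread result looks like:

import java.security.SecureRandom;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SaltSpreadCheck {
    public static void main(String[] args) {
        int n = 100_000;
        Set<String> distinct = new HashSet<>();
        int[] firstByteBuckets = new int[256];

        for (int i = 0; i < n; i++) {
            byte[] salt = saltFor("user" + i); // inputs differing only slightly
            distinct.add(Arrays.toString(salt));
            firstByteBuckets[salt[0] & 0xFF]++;
        }

        // Expect close to n distinct salts and roughly n/256 hits per bucket;
        // large spikes, empty buckets, or heavy repetition suggest clustering.
        System.out.println("distinct salts: " + distinct.size() + " / " + n);
        int min = Integer.MAX_VALUE, max = 0;
        for (int count : firstByteBuckets) {
            min = Math.min(min, count);
            max = Math.max(max, count);
        }
        System.out.println("first-byte bucket min/max: " + min + " / " + max);
    }

    // Placeholder: swap in your deterministic salt routine here.
    static byte[] saltFor(String input) {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}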
Separately, if you're regenerating the salt from, say, a user's address and they change it, you'd need their current password to recalculate the hash with the new salt, or you'd have to save the old salt or address somewhere for ongoing use... otherwise their next login attempt is doomed.
Related
For reference, I only know Python, so that's what I've been working with.
I've recently been looking into making my own hashing algorithm to further my understanding of how they work. I'm not looking to create the most cryptographically secure hashing algorithm, just something a bit more secure than validating passwords in plain text (in other words, I don't care if this algorithm has copious amounts of collisions).
From what I understand about hash functions, they use ???? to obfuscate the input password. Where I'm getting caught up is how the function takes a user input, like "password1", and translates that into numbers the system can work with, and then what exact methods it uses to obfuscate them.
Apologies if this is a stupid question, but I can't find any material on this that isn't way beyond my understanding or so basic that it glosses over what happens inside the hash algorithm.
I am using multiple TestCaseSource attributes. Is it possible to determine which value is coming from which source?
Code example:
[TestCaseSource(nameof(CountryListA))]
[TestCaseSource(nameof(CountryListB))]
public void SomeTest(Country country)
{
    // ...tests on country...
    // access the source (either CountryListA or CountryListB)
}
There are multiple reasons to access the source, for example I need to check that country is unique in that list, or if something went wrong I want to log for which country in which source the test failed.
Short answer: No. NUnit is, in fact, designed to make this impossible. Tests are basically not supposed to know where their arguments came from or who supplied them. This is important in some kinds of advanced test generation scenarios as well as for [Theory] tests.
The general approach to this problem is to take steps that ensure your tests are uniquely named. Any duplicates, whether within the same list or between two lists, make it impossible to determine exactly which source is the problem.
The key question is whether the apparently duplicate tests are true duplicates. You won't know that unless the Type of each returned argument overrides ToString() in a way that allows you to determine the exact case. For example, if Country were a class without a ToString override, every instance of the test would be named something like SomeTest(<Country>). OTOH, this would not be a problem if Country is an enum or if its ToString() is overridden in a unique way.
In your case, there is a relatively simple way to give your tests a unique full name even if the display names are the same.
Put all the tests in an abstract base class.
Derive two different fixtures from that base.
Use CountryListA in one of those fixtures and CountryListB in the other.
That said, it seems to me that the better approach is to keep the current structure and make the test cases more identifiable.
I am kinda playing with the SHA-1 algorithm. I want to find out differences and variations in the results if I change a few values in the SHA-1 algorithm, for a college report. I have found a piece of Java code to generate the hash of a text. It does so by using the
java.security.MessageDigest
class. However, I want to change the h0-h4 values and edit them, but I don't know where to find them. I had a look inside the MessageDigest class but couldn't find them there. Please help me out!
Thanks in advance.
I don't believe you can do that. Java doesn't provide any API on its MessageDigest class that would allow you to change those values.
However, there are some workarounds (none of which I've ever tried). Take a look at this answer to the question "How to edit Java Platform Package (Built-in API) source code?"
If you're playing around with tweaks to an algorithm, you shouldn't be using a built-in class implementing that algorithm. The class you mention is designed to implement standard algorithms for people who just want to use them in production; if you're using SHA-1 (or any cryptographic algorithm) instead of playing around and tweaking it, it's never a good idea to change the algorithm yourself (e.g. by changing the initial hash value), so the class does not support modifying those constants.
Just implement the algorithm yourself; from Wikipedia's pseudocode, it doesn't look like it's all that complicated. I know that "don't implement your own crypto, use a standard and well-tested implementation" is a common mantra here, but that only applies to production-type code -- if you're playing around with an algorithm to see what effect tweaking it has, you should implement it yourself, so you have more flexibility in modifying it and seeing the effect of the modifications.
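To illustrate, here is a minimal SHA-1 written directly from that public pseudocode, with h0-h4 as plain fields you can edit. The class name is mine, and this is a sketch for experiments only, not a production implementation; each instance is one-shot because the h fields are updated in place:

public class TweakableSha1 {

    // The five standard initial hash values; edit these for your report.
    private int h0 = 0x67452301, h1 = 0xEFCDAB89, h2 = 0x98BADCFE,
                h3 = 0x10325476, h4 = 0xC3D2E1F0;

    public byte[] digest(byte[] message) {
        long bitLen = (long) message.length * 8;
        // Pad: append 0x80, zeros, then the 64-bit big-endian message length.
        int paddedLen = ((message.length + 8) / 64 + 1) * 64;
        byte[] data = new byte[paddedLen];
        System.arraycopy(message, 0, data, 0, message.length);
        data[message.length] = (byte) 0x80;
        for (int i = 0; i < 8; i++) {
            data[paddedLen - 1 - i] = (byte) (bitLen >>> (8 * i));
        }

        int[] w = new int[80];
        for (int off = 0; off < paddedLen; off += 64) {
            // Expand each 64-byte block into the 80-word message schedule.
            for (int t = 0; t < 16; t++) {
                w[t] = ((data[off + 4 * t] & 0xFF) << 24)
                     | ((data[off + 4 * t + 1] & 0xFF) << 16)
                     | ((data[off + 4 * t + 2] & 0xFF) << 8)
                     |  (data[off + 4 * t + 3] & 0xFF);
            }
            for (int t = 16; t < 80; t++) {
                w[t] = Integer.rotateLeft(w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16], 1);
            }

            int a = h0, b = h1, c = h2, d = h3, e = h4;
            for (int t = 0; t < 80; t++) {
                int f, k;
                if (t < 20)      { f = (b & c) | (~b & d);          k = 0x5A827999; }
                else if (t < 40) { f = b ^ c ^ d;                   k = 0x6ED9EBA1; }
                else if (t < 60) { f = (b & c) | (b & d) | (c & d); k = 0x8F1BBCDC; }
                else             { f = b ^ c ^ d;                   k = 0xCA62C1D6; }
                int tmp = Integer.rotateLeft(a, 5) + f + e + k + w[t];
                e = d; d = c; c = Integer.rotateLeft(b, 30); b = a; a = tmp;
            }
            h0 += a; h1 += b; h2 += c; h3 += d; h4 += e;
        }

        // Serialize h0..h4 big-endian into the 20-byte digest.
        byte[] out = new byte[20];
        int[] hs = { h0, h1, h2, h3, h4 };
        for (int i = 0; i < 5; i++) {
            for (int j = 0; j < 4; j++) {
                out[4 * i + j] = (byte) (hs[i] >>> (24 - 8 * j));
            }
        }
        return out;
    }
}

With the constants untouched, new TweakableSha1().digest("abc".getBytes()) should produce the well-known test vector a9993e364706816aba3e25717850c26c9cd0d89d; change any of h0-h4 and every output bit scatters, which is exactly the kind of variation a report can measure.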
Basically adding to @Rahil's answer, but too much for comments:
Even without API access, if MessageDigest contained the implementation you could use reflection to get at those values. But it doesn't.
Most of the java standard library is just commonly-useful classes in the usual way, e.g. java.util.ArrayList contains the implementation of ArrayList (or ArrayList<?> since 6), java.io.FileInputStream contains the implementation of FileInputStream (although it may use other classes in that implementation), etc. Java Cryptography uses a more complicated scheme where the implementations are not in the API classes but instead in "providers" that are mostly in their own jars (in JRE/lib and JRE/lib/ext) not rt.jar and mostly(?) don't have source in src.zip.
Thus the java.security.MessageDigest class does not have the code to implement SHA1, or SHA256, or MD5, etc etc. Instead it has code to search the JVM's current list of crypto providers to find an implementation of whatever algorithm is asked for, and instantiate and use that. Normally the list of providers used is set to (the list of) those included in the JRE distribution, although an admin or program can change it.
With the normal JRE7 providers, SHA1 is implemented by sun.security.provider.SHA.
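You can observe this lookup from your own code. A small sketch (class name mine):

import java.security.MessageDigest;
import java.security.Provider;
import java.security.Security;

public class ProviderLookupDemo {
    public static void main(String[] args) throws Exception {
        // The facade searches the registered providers for an implementation.
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        System.out.println("SHA-1 supplied by provider: " + md.getProvider().getName());

        // The providers registered in this JVM, in search order.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + ": " + p.getInfo());
        }
    }
}

On a stock Oracle/OpenJDK JRE the first line typically reports the SUN provider, which is where sun.security.provider.SHA lives.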
In effect, API classes like MessageDigest, Signature, Cipher, KeyGenerator, etc. function more like interfaces or facades, presenting the behavior that is common to possibly multiple underlying implementations, although in Java code terms they are actual classes and not interfaces.
This was designed back in 1990 or so to cope with legal restrictions on crypto in effect then, especially on export from the US. It allowed the base Java platform to be distributed easily because by itself it did no crypto. To use it -- and even if you don't do "real" crypto on user data in Java you still need things like verification of signed code -- you need to add some providers; you might have one set of providers, with complete and strong algorithms, used in US installations, and a different set, with fewer and weaker algorithms, used elsewhere. This capability is now much less needed since the US officially relaxed and in practice basically dropped enforcement about 2000, although there are periodically calls to bring it back. There is still one residual bit, however: JCE (in Oracle JREs) contains a policy that does not allow symmetric keys over 128 bits; to enable that you must download from the Oracle website and install an additional (tiny) file "JCE Unlimited Strength Policy".
TL;DR: don't try to alter the JCE implementation. As @cpast says, in this case, where you want to play with something different from the standard algorithm, do write your own code.
Since SHA-3 seems to be an already-known function (Keccak, the winner of the NIST hash function competition), I have several questions related to this topic:
NIST site says that NIST is closed due to a lapse in government funding. Is there any chance that SHA-3 will ever be finally accepted?
The BouncyCastle library has an implementation of SHA-3 whose digest results match the examples posted in the Wikipedia article (I tested this). Since the final standard is not approved, can this be trusted? Wikipedia says it is likely to change, but how can it change if the final algorithm does not seem to be subject to change (or else it would be another algorithm)?
Here someone noted that usage of PBKDF2 with SHA-3 for key strengthening and password hashing should be avoided. But I cannot understand why. How can it give an attacker an advantage if the algorithm is not fast?
I could not find test vectors anywhere to test my implementation of PBKDF2-HMAC-SHA3 in Scala, based on the BouncyCastle Java API. I can post my test spec with some results, but first, can anybody post any spec or test vectors?
Here is my implementation in Scala:
package my.crypto

import org.bouncycastle.crypto.digests.SHA3Digest
import org.bouncycastle.crypto.generators.PKCS5S2ParametersGenerator
import org.bouncycastle.crypto.PBEParametersGenerator
import org.bouncycastle.crypto.params.KeyParameter

object PBKDF2WithHmacSHA3 {
  def apply(password: String, salt: Array[Byte], iterations: Int = 65536, keyLen: Int = 256): Array[Byte] = {
    val generator = new PKCS5S2ParametersGenerator(new SHA3Digest(256))
    generator.init(
      PBEParametersGenerator.PKCS5PasswordToUTF8Bytes(password.toCharArray),
      salt,
      iterations
    )
    val key = generator.generateDerivedMacParameters(keyLen).asInstanceOf[KeyParameter]
    key.getKey
  }
}
One questionable thing for me is new SHA3Digest(256): should the 256-bit length in the constructor be the same as the requested key length, or some fixed value as I did? I decided to use a fixed length because only certain fixed values can be used, while an API user can pass any value as the key-length parameter, and most uncommon values would result in an exception thrown from inside the SHA3Digest constructor. Also, the default value seems to be 288 (when no length is provided), which looks strange.
Thanks in advance!
The shutdown is temporary. SHA-3 will most likely be standardized at some point in 2014.
No, those values are probably for Final Round Keccak, not for SHA-3. There is no SHA-3 spec yet and it's quite likely that SHA-3 will be tweaked before standardization.
=> it's impossible to implement SHA-3 now, you can only implement Keccak.
Password hashes should be as expensive as possible for the attacker. The attacker uses different hardware from the defender, at minimum a GPU, but possibly even custom chips.
The defender has a limited time budget for a hash (e.g. 100 ms) and wants a function that's as expensive as possible for the attacker given that constraint. This means custom hardware shouldn't gain a big advantage over a standard computer, so it's preferable to use a software-friendly hash; Keccak, however, is relatively hardware-friendly.
SHA-1 and SHA-2 are decent in hardware as well, so in practice the difference is small compared to the advantage other password hashes have over PBKDF2-HMAC-SHA-x. If you care about security instead of standard conformance, I recommend scrypt.
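For example, BouncyCastle ships org.bouncycastle.crypto.generators.SCrypt. A minimal sketch (the N/r/p values below are common interactive-login choices, not recommendations from this answer; tune N to your own time budget):

import org.bouncycastle.crypto.generators.SCrypt;

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class ScryptExample {
    public static void main(String[] args) {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // N = CPU/memory cost (power of two), r = block size, p = parallelism.
        byte[] key = SCrypt.generate(
                "correct horse battery staple".getBytes(StandardCharsets.UTF_8),
                salt,
                16384, 8, 1,   // N, r, p
                32);           // derived key length in bytes

        System.out.println("derived " + key.length + "-byte key");
    }
}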
All,
Wanted to get a few thoughts on this. Lately I am becoming more and more of a subscriber of "purist" DI/IOC principles when designing/developing. Part of this (a big part) involves making sure there is little coupling between my classes, and that their dependencies are resolved via the constructor (there are certainly other ways of managing this, but you get the idea).
My basic premise is that extension methods violate the principles of DI/IOC.
I created the following extension method that I use to ensure that the strings inserted into database tables are truncated to the right size:
public static class StringExtensions
{
    public static string TruncateToSize(this string input, int maxLength)
    {
        int lengthToUse = maxLength;
        if (input.Length < maxLength)
        {
            lengthToUse = input.Length;
        }
        return input.Substring(0, lengthToUse);
    }
}
I can then call this on a string from within another class like so:
string myString = "myValue.TruncateThisPartPlease.";
myString.TruncateToSize(8);
A fair translation of this without using an extension method would be:
string myString = "myValue.TruncateThisPartPlease.";
StaticStringUtil.TruncateToSize(myString, 8);
Any class that uses either of the above examples could not be tested independently of the class that contains the TruncateToSize method (TypeMock aside). If I were not using an extension method, and I did not want to create a static dependency, it would look more like:
string myString = "myValue.TruncateThisPartPlease.";
_stringUtil.TruncateToSize(myString, 8);
In the last example, the _stringUtil dependency would be resolved via the constructor and the class could be tested with no dependency on the actual TruncateToSize method's class (it could be easily mocked).
From my perspective, the first two examples rely on static dependencies (one explicit, one hidden), while the third inverts the dependency and provides reduced coupling and better testability.
So does the use of extension methods conflict with DI/IOC principles? If you're a subscriber of IOC methodology, do you avoid using extension methods?
I think it's fine - because it's not like TruncateToSize is a realistically replaceable component. It's a method which will only ever need to do a single thing.
You don't need to be able to mock out everything - just services which either disrupt unit testing (file access etc) or ones which you want to test in terms of genuine dependencies. If you were using it to perform authentication or something like that, it would be a very different matter... but just doing a straight string operation which has absolutely no configurability, different implementation options etc - there's no point in viewing that as a dependency in the normal sense.
To put it another way: if TruncateToSize were a genuine member of String, would you even think twice about using it? Do you try to mock out integer arithmetic as well, introducing IInt32Adder etc? Of course not. This is just the same, it's only that you happen to be supplying the implementation. Unit test the heck out of TruncateToSize and don't worry about it.
I see where you are coming from; however, if you are trying to mock out the functionality of an extension method, I believe you are using them incorrectly. Extension methods should be used to perform a task that would simply be inconvenient syntactically without them. Your TruncateToSize is a good example.
Testing TruncateToSize would not involve mocking it out; it would simply involve the creation of a few strings and testing that the method actually returned the proper value.
On the other hand, if you have code in your data layer contained in extension methods that is accessing your data store, then yes, you have a problem and testing is going to become an issue.
I typically only use extension methods in order to provide syntactic sugar for small, simple operations.
Extension methods, partial classes and dynamic objects: I really like them; however, you must tread carefully, there be monsters here.
I would take a look at dynamic languages and see how they cope with these sorts of problems on a day-to-day basis; it's really enlightening, especially since they have nothing to stop them from doing stupid things apart from good design and discipline. Everything is dynamic at run time; the only thing to stop them is the computer throwing a major runtime error. "Duck typing" is the maddest thing I have ever seen, but good code comes down to good program design, respect for others on your team, and trust that every member, although they have the ability to do some wacky things, chooses not to, because good design leads to better results.
As for your test scenario with mock objects/IoC/DI: would you really put some heavy-duty work in an extension method, or just some simple static stuff that operates in a functional way? I tend to use them in a functional programming style: input goes in, results come out, with no magic in the middle, just straight-up framework classes that you know the guys at MS have designed and tested :P and that you can rely on.
If you are doing some heavy-lifting stuff using extension methods, I would look at your program design again: check your CRC designs, class models, use cases, DFDs, action diagrams or whatever you like to use, and figure out where in this design you planned to put this stuff in an extension method instead of a proper class.
At the end of the day, you can only test against your system design and not code outside of your scope. If you're going to use extension classes, my advice would be to look at object composition models instead and use inheritance only when there is a very good reason.
Object composition always wins out with me, as it produces solid code. You can plug components in, take them out and do what you like with them. Mind you, this all depends on whether you use interfaces as part of your design. Also, if you use composition classes, the class hierarchy tree gets flattened into discrete classes and there are fewer places where your extension method will be picked up through inherited classes.
If you must use a class that acts upon another class, as is the case with extension methods, look at the visitor pattern first and decide if it's a better route.
It's a pain because they are hard to mock. I usually use one of these strategies:
Yep, scrap the extension; it's a PITA to mock out.
Use the extension and just test that it did the right thing, i.e. pass data into the truncate and check it got truncated.
If it's not some trivial thing, and I HAVE to mock it, I'll make my extension class have a setter for the service it uses, and set that in the test code.
i.e.
static class TruncateExtensions
{
    public static ITruncateService Service { private get; set; }

    public static string TruncateToSize(this string s, int size)
    {
        // Lazily fall back to the default implementation if no service was injected.
        return (Service ?? (Service = new MyDefaultTruncateServiceImpl())).TruncateToSize(s, size);
    }
}
This is a bit scary because someone might set the service when they shouldn't, but I'm a little cavalier sometimes, and if it was really important, I could do something clever with #if TEST flags, or the ServiceLocator pattern to avoid the setter being used in production.