As per the JWK specification (RFC 7517), 'kid' is defined as follows:
The "kid" (key ID) parameter is used to match a specific key. This
is used, for instance, to choose among a set of keys within a JWK Set
during key rollover. The structure of the "kid" value is
unspecified. When "kid" values are used within a JWK Set, different
keys within the JWK Set SHOULD use distinct "kid" values. (One
example in which different keys might use the same "kid" value is if
they have different "kty" (key type) values but are considered to be
equivalent alternatives by the application using them.) The "kid"
value is a case-sensitive string. Use of this member is OPTIONAL.
When used with JWS or JWE, the "kid" value is used to match a JWS or
JWE "kid" Header Parameter value.
Can we use the certificate thumbprint as the 'kid' value here since it directly identifies the key used to sign the JWT? What are the drawbacks of using the certificate thumbprint as the 'kid' instead of a random string?
The fingerprint (thumbprint) is by definition stable and always derivable from the certificate, so using it frees you from storing the mapping between the key and a random string.
I see no drawbacks in using the fingerprint.
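In fact, JOSE already standardizes this idea: RFC 7515 defines the "x5t" (X.509 certificate SHA-1 thumbprint) and "x5t#S256" (SHA-256) header parameters, and nothing stops you from putting the same thumbprint into "kid". A minimal sketch in Python (standard library only; the certificate file path is an assumption, and the thumbprint is simply the hash of the DER-encoded certificate):

import base64
import hashlib

# An X.509 thumbprint is the hash of the DER-encoded certificate.
cert_der = open("signing_cert.der", "rb").read()  # hypothetical path
thumbprint = hashlib.sha256(cert_der).digest()

# base64url-encode without padding, as JOSE does for x5t#S256
kid = base64.urlsafe_b64encode(thumbprint).rstrip(b"=").decode()

jws_header = {"alg": "RS256", "kid": kid}

The verifier can then recompute the thumbprint of each candidate certificate and pick the one matching the "kid" in the header.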
I'm having trouble understanding what the Hash Function does and doesn't do, as well as what exactly a Bucket is.
From my understanding:
A HashTable is a data structure that maps keys to values using a Hash Function.
A HashFunction is meant to map data from an array of arbitrary/unknown size to a data array of fixed size.
There can be duplicate Values in the original data array, but this is irrelevant.
Each Value will have a unique Key. Thus, each Key has exactly 1 Value.
The HashFunction will generate a HashCode for each (Value, Key) pair. However, Collisions can occur in which multiple (Value, Key) pairs map to the same HashCode.
This can be remedied by using either Chaining/Open Addressing methods.
The HashCode is the index value indicating the position of a particular entry from the original data array within the Bucket array.
The Bucket array is the fixed data array constructed that will contain the entries from the original array.
My questions:
How are the Keys generated for each value? Is the HashFunction meant to generate both Key and HashCode values for each entry? Does each Bucket thus contain only one entry (assuming a Chaining implementation to remedy Collision)?
How are the Keys generated for each value?
The key is not generated; it is provided by you and serves as input to the hash function, which in turn converts that key into an index into the hash table. Simply speaking:
H(key)=index
so the value you are looking for is:
hash_table[index] = value
Is the HashFunction meant to generate HashCode values for each entry?
It all depends on the implementation of the hash function and the hash table. Some hash functions generate a hash code from the provided key and then take it modulo size (where size is the size of the hash table) in order to get the index. Others convert the key directly into an index. In either case, the ultimate goal of the hash function is to find the location of the searched data within the hash table in constant time.
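For example, a minimal sketch in Python (the key and table size are arbitrary, and hash is Python's built-in):

size = 16                         # number of buckets in the table
index = hash("some key") % size   # always a valid index in [0, size)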
Does each Bucket thus contain only one entry (assuming a Chaining implementation to remedy Collision)?
Ideally each key would be mapped to a unique index, but mostly that's not the case, since the number of buckets (i.e. indices) is far smaller than the number of keys. So the average length of a chain per bucket (i.e. the number of collisions per bucket, also known as the load factor) is no. of keys / no. of indices.
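To make the chaining case concrete, here is a minimal sketch in Python (a toy table of 4 buckets; names are illustrative):

buckets = [[] for _ in range(4)]    # each bucket holds a chain of (key, value) pairs

def put(key, value):
    chain = buckets[hash(key) % len(buckets)]
    for i, (k, _) in enumerate(chain):
        if k == key:                # same key: overwrite the old value
            chain[i] = (key, value)
            return
    chain.append((key, value))      # different key, same bucket: chain it

With 4 buckets and, say, 100 keys, the average chain length is 100/4 = 25 entries per bucket.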
decl_storage! is a "procedural macro" used for storing data to make it available in subsequent blocks.
It says if the user is able to set the first key pair in the double_map, then we cannot trust that key pair, and so we must use a cryptographic hasher such as blake2_256 to prevent "other values of all storage items being compromised".
Then it goes on to say that if the user is able to set the second key pair in the double_map, then we cannot trust that key pair, and so we must use a cryptographic hasher such as blake2_256 to prevent "other items in storage with the same first key being compromised".
With regard to the first key pair, why does it say that it's just to prevent "other values of all storage items being compromised"? Isn't blake2_256 also used to prevent the first key pair itself from being compromised (rather than just "other values")?
Let's say:
the hash of module1.someValue is 0x12345678
the hash of module2.doubleMapValue.firstKey(value1) is 0x1234
the hash of module2.doubleMapValue.secondKey(value2) is 0x5678
Since a double map's full storage key is the concatenation of the two key hashes (0x1234 ++ 0x5678 = 0x12345678), module2.doubleMapValue.fullKey(value1, value2) and module1.someValue have the same hash, i.e. the values are stored in the same place.
If a user is able to control both keys of module2.doubleMapValue and figure out value1 and value2, then they will be able to overwrite the value of module1.someValue and cause security issues.
That's why the hash function for key1 of a double map needs to be a cryptographic hasher if the value is controlled by a user. Otherwise a user may be able to craft value1 such that it collides with the storage of other modules, and hence compromise them.
Even if a user does not control key2, a double map provides a "clear all keys with the hash(key1) prefix" feature, which could be hijacked to cause trouble as well.
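To illustrate, a toy sketch in Python (not actual Substrate code; the "weak" hasher is deliberately trivial to invert):

# A storage location is hash(prefix) ++ hash(key1) ++ hash(key2) for a
# double map. With a non-cryptographic, invertible hasher the attacker
# can choose the keys so the concatenation lands on someone else's location.

def weak_hash(data: bytes) -> bytes:
    return data                              # stand-in for a trivially invertible hasher

target = bytes.fromhex("12345678")           # location of module1.someValue

value1 = bytes.fromhex("1234")               # attacker-chosen first key
value2 = bytes.fromhex("5678")               # attacker-chosen second key
full_key = weak_hash(value1) + weak_hash(value2)

assert full_key == target                    # collision: someValue gets overwritten

With blake2_256 in place of weak_hash, crafting such a value1/value2 pair would require finding a preimage of the target hash, which is computationally infeasible.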
I'm using something like this:
OPEN SYMMETRIC KEY SSNKey
DECRYPTION BY CERTIFICATE SSNCert;

UPDATE Customers
SET SSNEncrypted = EncryptByKey(Key_GUID('SSNKey'), 'DecryptedSSN');
Where SSNEncrypted is a varbinary column. I noticed the values come out different each time. Why is this? And what can I do to get consistent encrypted values, so I can compare them in different tables?
This is "by design".
The function EncryptByKey is nondeterministic.
But if you decrypt the different values, you always get the original plaintext back.
Have a look at this blog on MSDN.
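If you need equality comparisons across tables, one common workaround (a sketch; the column and table names are illustrative, and you should consider a salted or keyed hash, since SSNs have low entropy) is to store a deterministic hash of the plaintext alongside the nondeterministic ciphertext:

ALTER TABLE Customers ADD SSNHash varbinary(32);

UPDATE Customers
SET SSNHash = HASHBYTES('SHA2_256', 'DecryptedSSN');

-- Compare the deterministic hashes instead of the ciphertexts
SELECT c.CustomerId
FROM Customers AS c
JOIN OtherTable AS o
  ON o.SSNHash = c.SSNHash;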
This question is not specific to any programming language, I am more interested in a generic logic.
Generally, associative maps take a key and map it to a value. As far as I know, implementations require the keys to be unique; otherwise values get overwritten. Alright.
So let us assume that the above is done by some hash implementation.
What if two DIFFERENT keys get the same hash value? I am thinking of this in the form of an underlying array whose indices are the result of hashing the keys. It could be possible that more than one unique key gets mapped to the same index, yes? If so, how does such an implementation handle this?
How is handling the same hash different from handling the same key? The same key results in overwriting, while the same hash HAS to retain both values.
I understand hashing with collision, so I know chaining and probing. Do implementations iterate over the current values which are hashed to a particular index and determine if the key is the same?
While I was searching for the answer I came across these links:
1. What happens when a duplicate key is put into a HashMap?
2. HashMap with multiple values under the same key
They don't answer my question however. How do we distinguish between same hash vs same key?
By comparing the keys. If you look at object-oriented implementations of hash maps, you'll find that they usually require two methods to be implemented on the key type:
bool equal(Key key1, Key key2);
int hash(Key key);
If only the hash function can be given and no equality function, that restricts the hash map to be based on the language's default equality. This is not always desirable as sometimes keys need to be compared with a different equality function. For example, if the keys are strings, an application may need to do a case-insensitive comparison, and then it would pass a hash function that converts to lowercase before hashing, and an equal function that ignores case.
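For instance, a minimal sketch of such a pair in Python (names are illustrative):

def ci_hash(key: str) -> int:
    return hash(key.lower())        # hash must agree with the equality below

def ci_equal(k1: str, k2: str) -> bool:
    return k1.lower() == k2.lower()

Note the invariant: whenever equal(k1, k2) holds, hash(k1) must equal hash(k2); otherwise equal keys could land in different buckets.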
The hash map stores the key alongside each corresponding value. (Usually, that's a pointer to the key object that was originally stored.) Any lookup into the hash map has to make a key comparison after finding a matching hash, to verify that the key actually matches.
For example, for a very simple hash map that stores a list in each bucket, the list would be a list of (key, value) pairs, and any lookup compares the keys of each list entry until it finds a match. In pseudocode:
Array<List<Pair<Key, Value>>> buckets;

Value lookup(Key k_sought) {
    // reduce the hash to a valid bucket index
    int h = hash(k_sought) % buckets.size;
    List<Pair<Key, Value>> bucket = buckets[h];
    for (kv in bucket) {
        Key k_found = kv.0;
        Value v_found = kv.1;
        // same bucket only means same hash; compare the keys themselves
        if (equal(k_sought, k_found)) {
            return v_found;
        }
    }
    throw Not_found;
}
You cannot tell what a key is from the index, so no, you cannot iterate over the values to find any information about the keys. You either have to guarantee zero collisions or store the information that was hashed to produce the index.
If you only have values stored in your structure, there is no way to tell whether two of them have the same key or just the same hash. You need to store the key along with the value to know.
Redis HMSET command documentation describes it as:
"Sets the specified fields to their respective values in the hash stored at key. This command overwrites any existing fields in the hash. If key does not exist, a new key holding a hash is created."
What does the word 'hash' mean in this case? Does it mean a hash table? Or a hash code computed for the given field/value pairs? I would like to think it means the former, i.e., a hash table, but I would still like to clarify, as the documentation is not explicit.
Hash refers to the Redis Hash Data-Type:
Redis Hashes are maps between string fields and string values, so they
are the perfect data type to represent objects (e.g. A User with a
number of fields like name, surname, age, and so forth)
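For example, an illustrative redis-cli session (the key name user:1000 is arbitrary):

redis> HMSET user:1000 name "John" surname "Smith" age "30"
OK
redis> HGETALL user:1000
1) "name"
2) "John"
3) "surname"
4) "Smith"
5) "age"
6) "30"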