Will the Directory API store my password as plain text if I don't specify a hashFunction value in the request body? Kindly tell me.
If we check the docs for Manage users, we can see how Google wants us to use the User.insert method. As far as the password goes, there are a few things we can add to the body of the request:
The password itself and the hashFunction that was used to create the password
"password": "new user password",
"hashFunction": "SHA-1",
"changePasswordAtNextLogin": false,
If we read further down in the docs we will find this line.
A password is required for new user accounts. If a hashFunction is specified, the password must be a valid hash key. For more information, see the API Reference.
So you must send a password, in clear text. Then Google will hash it on their end using whatever hashing method they choose. If you then do a user.get on the new user created, the hashFunction will be filled in and Google will tell you which type of hash function they used.
You do have another option: you can decide yourself which hash function you want to use, by sending hashFunction as part of your request. However, you must then hash the password yourself; if a hashFunction is specified, the password must be a valid hash key. Some may consider this option more secure, as you are not sending a clear-text password to Google for hashing.
So if you want to send it in clear text, just let Google hash it on their end. However, if you have your own favorite hash function (don't we all), then feel free to use that, as long as it's one of the supported ones; apparently Google only supports MD5, SHA-1, and crypt.
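If you go the self-hash route, a minimal sketch in Go might look like the following. It uses only the standard library; the request-body field names match the snippet above, and the assumption that the SHA-1 hash is supplied hex-encoded is mine, so verify that detail against the API reference.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

func main() {
	// Hash the clear-text password locally so it never leaves your machine in the clear.
	clearText := "new user password"
	sum := sha1.Sum([]byte(clearText))
	hashed := hex.EncodeToString(sum[:]) // assumption: the API expects the hash hex-encoded

	// The password-related part of the users.insert request body.
	body := map[string]interface{}{
		"password":                  hashed,
		"hashFunction":              "SHA-1",
		"changePasswordAtNextLogin": false,
	}
	out, _ := json.MarshalIndent(body, "", "  ")
	fmt.Println(string(out))
}
```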
Resource: User
I have added an issue request 237333031 to have some clarifications added to the documentation.
I am using Azure Key Vault for creating and storing my Secp256k1 keys, and I am using the sign API to get my input string signed. I am working on a Secp256k1 blockchain network. These are the steps I follow to get the signature in Golang (a rough code sketch follows the list):
1. Converting my hex string into a []byte.
2. SHA-256 of this []byte.
3. RawURL encoding of this digest: b64.RawURLEncoding.EncodeToString(sha)
4. Sending this to Key Vault for signature.
5. Decoding the response using RawURLEncoding: b64.RawURLEncoding.DecodeString(*keyOpsResp.Result)
6. Hex-encoding the []byte returned from step 5.
7. Sending the signature to the blockchain.
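Here is the rough code sketch of those steps. Everything except the Key Vault call is the standard library; signWithKeyVault is a placeholder for whichever Key Vault client/SDK call you actually use, not a real function.

```go
package main

import (
	"crypto/sha256"
	b64 "encoding/base64"
	"encoding/hex"
	"fmt"
)

// signWithKeyVault is a placeholder for the real Key Vault sign call (step 4);
// it should return the base64 RawURL-encoded signature string that Key Vault
// puts in the response.
func signWithKeyVault(digestB64 string) (string, error) {
	// ... call the Key Vault sign API here ...
	return digestB64, nil // placeholder so the sketch compiles
}

func main() {
	hexInput := "deadbeef" // the hex string to be signed

	raw, err := hex.DecodeString(hexInput) // step 1: hex string -> []byte
	if err != nil {
		panic(err)
	}

	sha := sha256.Sum256(raw) // step 2: SHA-256 of the bytes

	digestB64 := b64.RawURLEncoding.EncodeToString(sha[:]) // step 3

	sigB64, err := signWithKeyVault(digestB64) // step 4
	if err != nil {
		panic(err)
	}

	sig, err := b64.RawURLEncoding.DecodeString(sigB64) // step 5
	if err != nil {
		panic(err)
	}

	fmt.Println(hex.EncodeToString(sig)) // step 6: hex of the signature bytes; step 7: send to the chain
}
```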
The problem I am facing is that the signature is invalid sometimes: roughly 2 out of 5 times it works, and the other times signature verification fails.
I am thinking there is some special character or padding issue that I am missing.
How can I resolve this?
PS: Azure uses non-deterministic signatures, whereas chains usually use deterministic signatures. I did some reading and found out that for verification it does not matter; both can be verified successfully. Let me know if I am wrong.
• Since you are using base64 RawURL encoding, you can check whether the following parts are included in the token used for the Key Vault signature validation (a small decoding sketch follows the links below). They are as follows:
aud (audience): The resource of the token. Notice that this is https://vault.azure.net. This token will NOT work for any resource that does not explicitly match this value, such as graph.
iat (issued at): The number of ticks since the start of the epoch when the token was issued.
nbf (not before): The number of ticks since the start of the epoch when this token becomes valid.
exp (expiration): The number of ticks since the start of the epoch when this token expires.
appid (application ID): The GUID for the application ID making this request.
tid (tenant ID): The GUID for the tenant ID of the principal making this request. It is important that all the values be properly identified in the token for the request to work.
• Also, please check the block size, which depends on the target key and the algorithm used for signature validation. In particular, check the ‘decryptParameters’, ‘algorithm’ and ‘ciphertext’ parameters in the result returned after the decrypt operation during signature validation.
Please find the below link for more details:
https://learn.microsoft.com/en-us/java/api/com.azure.security.keyvault.keys.cryptography.cryptographyasyncclient.decrypt?view=azure-java-stable
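If it helps, you can inspect those claims by decoding the middle segment of the bearer token locally. A minimal sketch, standard library only; the struct simply mirrors the claims listed above, and the token string is a placeholder.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// claims mirrors the token fields listed above.
type claims struct {
	Aud   string `json:"aud"`
	Iat   int64  `json:"iat"`
	Nbf   int64  `json:"nbf"`
	Exp   int64  `json:"exp"`
	AppID string `json:"appid"`
	Tid   string `json:"tid"`
}

func main() {
	token := "<your bearer token here>" // placeholder

	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		panic("not a compact JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		panic(err)
	}
	var c claims
	if err := json.Unmarshal(payload, &c); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", c)
}
```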
Could someone kindly explain why the private key is included (by default) in the JWT token generated by "all" the C++-based JWT libraries found on GitHub, and how to remove it?
The only thing that comes close is the signature of the JWT (the last part, in blue, in the example at https://jwt.io/).
You need the private key to calculate the signature, but it is definitely not included in the JWT!
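A small sketch may make this concrete. It signs a token with a freshly generated ECDSA P-256 key (ES256) using only the Go standard library; the private key is used to compute the third segment but never appears anywhere in the token itself. The header and claim values are made up for illustration.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func b64(b []byte) string { return base64.RawURLEncoding.EncodeToString(b) }

func main() {
	// Generate a P-256 key pair; the private key stays on our side.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	header := b64([]byte(`{"alg":"ES256","typ":"JWT"}`))
	payload := b64([]byte(`{"sub":"alice"}`))
	signingInput := header + "." + payload

	// ES256: sign SHA-256(signingInput) and encode r||s as 64 bytes.
	digest := sha256.Sum256([]byte(signingInput))
	r, s, _ := ecdsa.Sign(rand.Reader, key, digest[:])
	sig := make([]byte, 64)
	r.FillBytes(sig[:32])
	s.FillBytes(sig[32:])

	// The JWT is only header.payload.signature; the private key is never part of it.
	jwt := signingInput + "." + b64(sig)
	fmt.Println(jwt)
}
```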
I am trying to use SoapUI to generate a SOAP request that uses a WS-Security configuration.
The request requires me to sign the following:
- Body
- Timestamp
- Binary Security Token
I'm able to do the Body and Timestamp parts, but if I specify the Binary Security Token as the "Name" in the Parts section when generating the Signature, it gives me an error.
Has anybody ever encountered this issue in SoapUI?
In your outgoing WS-Security configuration you have to add three entries:
Timestamp
Signature: in the first signature, choose Binary Security Token for "Key Identifier Type".
Signature: in the second signature, choose Issuer Name and Serial Number for "Key Identifier Type".
Then you have to add the parts to sign in the Parts section.
For each element to sign, you have to note the order of signing. In my case it was Timestamp, Body and then BinarySecurityToken.
Fill in the Name field with the above elements (Timestamp, Body, ...), fill in the namespace, and choose Element in the Encode section (example values are sketched after these steps).
Then you have to apply the outgoing WS-Security configuration to your request.
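As an illustration only (not taken from your project), the Parts section could end up with entries along these lines. The namespace URIs below are the standard WS-Security 1.0 and SOAP 1.1 ones; replace them with whatever your WSDL actually uses.
- Name: Timestamp, Namespace: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd, Encode: Element
- Name: Body, Namespace: http://schemas.xmlsoap.org/soap/envelope/, Encode: Element
- Name: BinarySecurityToken, Namespace: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd, Encode: Element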
I hope this helps.
I don't understand why JWS unprotected headers exist.
For some context: a JWS unprotected header contains parameters that are not integrity protected and can only be used per-signature with JSON Serialization.
If they could be used as a top-level header, I could see why someone could want to include a mutable parameter (that wouldn't change the signature). However, this is not the case.
Can anyone think of a use-case or know why they are included in the spec?
Thanks!
JWS Spec
The answer by Florent leaves me unsatisfied.
Regarding the example of using a JWT to sign a hash of a document... the assertion is that the algorithm and keyID would be "sensitive data" that needs to be "protected". By which I suppose he means "signed". But there's no need to sign the algorithm and keyID.
Example
Suppose Bob creates a signed JWT that contains an unprotected header asserting alg=HS256 and keyid=XXXX1. This JWT is intended for transmission to Alice.
Case 1
Suppose Mallory intercepts the signed JWT sent by Bob. Mallory then creates a new unprotected header, asserting alg=None.
The receiver (Alice) is now responsible for verifying the signature on the payload. Alice must not be satisfied with "no signature"; in fact Alice must not rely on a client (sender) assertion to determine which signing algorithm is acceptable for her. Therefore Alice rejects the JWT with the contrived "no signature" header.
Case 2
Suppose Mallory contrives a header with alg=RS256 and keyId=XXXX1. Now Alice tries to validate the signature and finds either:
the algorithm is not compliant
the key specified for that algorithm does not exist
Therefore Alice rejects the JWT.
Case 3
Suppose Mallory contrives a header with alg=HS256 and keyId=ZZ3. Now Alice tries to validate the signature and finds the key is unknown, and rejects the JWT.
In no case does the algorithm need to be part of the signed material. There is no scenario under which an unprotected header leads to a vulnerability or violation of integrity.
Getting Back to the Original Question
The original question was: What is the purpose of an unprotected JWT header?
Succinctly, the purpose of an unprotected JWS header is to allow transport of some metadata that can be used as hints to the receiver, like alg (Algorithm) and kid (Key ID). Florent suggests that stuffing data into an unprotected header could lead to efficiency gains. This isn't a good reason. Here is the key point: the claims in the unprotected header are hints, not to be relied upon or trusted.
A more interesting question is: What is the purpose of a protected JWS header? Why have a provision that signs both the "header" and the "payload"? In the case of a JWS Protected Header, the header and payload are concatenated and the result is signed. Assuming the header is JSON and the payload is JSON, at this point there is no semantic distinction between the header and payload. So why have the provision to sign the header at all?
One could just rely on JWS with unprotected headers. If there is a need for integrity-protected claims, put them in the payload. If there is a need for hints, put them in the unprotected header. Sign the payload and not the header. Simple.
This works, and is valid. But it presumes that the payload is JSON. This is true with JWT, but not true with all JWS. RFC 7515, which defines JWS, does not require the signed payload to be JSON. Imagine the payload is a digital image of a medical scan. It's not JSON. One cannot simply "attach claims" to that. Therefore JWS allows a protected header, such that the (non JSON) payload AND arbitrary claims can be signed and integrity checked.
In the case where the payload is non-JSON and the header is protected, there is no facility to include "extra non signed headers" into the JWS. If there is a need for sending some data that needs to be integrity checked and some that are simply "hints", there really is only one container: the protected header. And the hints get signed along with the real claims.
One could avoid the need for this protected-header trick, by just wrapping a JSON hash around the data-to-be-signed. For example:
{
"image" : "qw93u9839839...base64-encoded image data..."
}
And after doing so, one could add claims to this JSON wrapper.
{
"image" : "qw93u9839839...base64-encoded image data..."
"author" : "Whatever"
}
And those claims would then be signed and integrity-protected.
But in the case of binary data, encoding it to a string to allow encapsulation into a JSON may inflate the data significantly. A JWS with a non-JSON payload avoids this.
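As a rough sketch of that idea, here is an HS256-signed JWS over a non-JSON (binary) payload, built with the Go standard library; the kid, the extra author claim, and the stand-in image bytes are all made up for illustration. The signing input follows RFC 7515: BASE64URL(protected header) || '.' || BASE64URL(payload).

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func b64(b []byte) string { return base64.RawURLEncoding.EncodeToString(b) }

func main() {
	secret := []byte("shared-secret")

	// A non-JSON payload: raw image bytes (here just a few stand-in bytes).
	imageBytes := []byte{0x89, 0x50, 0x4e, 0x47}

	// Protected header: a real claim ("author") rides along with alg/kid
	// and is covered by the signature.
	protected := []byte(`{"alg":"HS256","kid":"scanner-7","author":"Whatever"}`)

	// RFC 7515 signing input: BASE64URL(header) "." BASE64URL(payload)
	signingInput := b64(protected) + "." + b64(imageBytes)

	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(signingInput))
	signature := b64(mac.Sum(nil))

	// Compact serialization: the binary payload never had to be wrapped in JSON.
	fmt.Println(signingInput + "." + signature)
}
```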
HTH
The RFC gives us examples of unprotected headers as follows:
A.6.2. JWS Per-Signature Unprotected Headers
Key ID values are supplied for both keys using per-signature Header Parameters. The two JWS Unprotected Header values used to represent these key IDs are:
{"kid":"2010-12-29"}
and
{"kid":"e9bc097a-ce51-4036-9562-d2ade882db0d"}
https://datatracker.ietf.org/doc/html/rfc7515#appendix-A.6.2
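The general JSON Serialization in that example, one payload with two signatures, each carrying its own unprotected kid, can be sketched with plain structs in Go. The field names follow RFC 7515; the payload, protected headers, and signature values here are placeholders, not a working token.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// General JSON Serialization shapes per RFC 7515, Section 7.2.1.
type jwsSignature struct {
	Protected string            `json:"protected"`        // BASE64URL(protected header)
	Header    map[string]string `json:"header,omitempty"` // unprotected, e.g. {"kid": ...}
	Signature string            `json:"signature"`
}

type jwsJSON struct {
	Payload    string         `json:"payload"`
	Signatures []jwsSignature `json:"signatures"`
}

func main() {
	msg := jwsJSON{
		Payload: "eyJpc3MiOiJqb2UifQ", // BASE64URL of the payload (placeholder)
		Signatures: []jwsSignature{
			{
				Protected: "eyJhbGciOiJSUzI1NiJ9", // {"alg":"RS256"}
				Header:    map[string]string{"kid": "2010-12-29"},
				Signature: "...signature-1...",
			},
			{
				Protected: "eyJhbGciOiJFUzI1NiJ9", // {"alg":"ES256"}
				Header:    map[string]string{"kid": "e9bc097a-ce51-4036-9562-d2ade882db0d"},
				Signature: "...signature-2...",
			},
		},
	}
	out, _ := json.MarshalIndent(msg, "", "  ")
	fmt.Println(string(out))
}
```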
The use of kid in the example is likely not a coincidence. Because JWS allows multiple signatures per payload, a cleartext hint system could be useful. For example, if a key is not available to the verifier (server), then you can skip decoding the protected header. The term "hint" is actually used in the kid definition:
4.1.4. "kid" (Key ID) Header Parameter
The "kid" (key ID) Header Parameter is a hint indicating which key was used to secure the JWS. This parameter allows originators to explicitly signal a change of key to recipients. The structure of the "kid" value is unspecified. Its value MUST be a case-sensitive string. Use of this Header Parameter is OPTIONAL.
https://datatracker.ietf.org/doc/html/rfc7515#section-4.1.4
If we look at Key Identification, it mentions when a kid does not have to be integrity protected (i.e., when it can be part of the unprotected headers) (emphasis mine):
6. Key Identification
It is necessary for the recipient of a JWS to be able to determine the key that was employed for the digital signature or MAC operation. The key employed can be identified using the Header Parameter methods described in Section 4.1 or can be identified using methods that are outside the scope of this specification. Specifically, the Header Parameters "jku", "jwk", "kid", "x5u", "x5c", "x5t", and "x5t#S256" can be used to identify the key used. These Header Parameters MUST be integrity protected if the information that they convey is to be utilized in a trust decision; however, if the only information used in the trust decision is a key, these parameters need not be integrity protected, since changing them in a way that causes a different key to be used will cause the validation to fail.
The producer SHOULD include sufficient information in the Header Parameters to identify the key used, unless the application uses another means or convention to determine the key used. Validation of the signature or MAC fails when the algorithm used requires a key (which is true of all algorithms except for "none") and the key used cannot be determined.
The means of exchanging any shared symmetric keys used is outside the scope of this specification.
https://datatracker.ietf.org/doc/html/rfc7515#section-6
Simplified: if somebody modifies the kid so that it refers to another key, then the signature itself will not match; therefore you don't have to include the kid in the protected header. A good example of the first part, where the information they convey is to be utilized in a trust decision, is ACME (aka the Let's Encrypt protocol). When creating an account and storing the key data, you want to trust the kid. We want to store the kid, so we need to make sure it's valid. After the server has stored the kid and can use it to get a key, we can push messages and reference the kid in an unprotected header (not done by ACME, but possible). Since we're only going to verify the signature, the kid is used as a hint or reference to which key was used for the account. If that field is tampered with, it'll point to a nonexistent or completely different key and the signature check will fail. That means the kid itself is "the only information used in the trust decision".
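A minimal sketch of that trust decision, assuming HS256 and a made-up key table: the unprotected kid is only used to select a key, so tampering with it selects a different (or unknown) key and the signature check fails on its own.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// Keys known to the verifier, indexed by kid (made-up values).
var keys = map[string][]byte{
	"acct-key-1": []byte("secret-one"),
	"acct-key-2": []byte("secret-two"),
}

// sign computes an HS256 signature over protected.payload with the key for kid.
func sign(kid, protected, payload string) string {
	mac := hmac.New(sha256.New, keys[kid])
	mac.Write([]byte(protected + "." + payload))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

// verify re-computes the signature using the key that the (unprotected) kid hint points at.
func verify(kid, protected, payload, signature string) bool {
	key, ok := keys[kid]
	if !ok {
		return false // unknown kid: nothing to trust
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(protected + "." + payload))
	expected := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return hmac.Equal([]byte(expected), []byte(signature))
}

func main() {
	protected := "eyJhbGciOiJIUzI1NiJ9" // {"alg":"HS256"}
	payload := "cGF5bG9hZA"             // "payload"
	sig := sign("acct-key-1", protected, payload)

	fmt.Println(verify("acct-key-1", protected, payload, sig)) // true
	// A tampered kid just points at a different (or unknown) key:
	fmt.Println(verify("acct-key-2", protected, payload, sig)) // false
	fmt.Println(verify("missing", protected, payload, sig))    // false
}
```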
There are also more theoretical scenarios that, once you know how it works, you can come up with.
For example: the idea of having multiple signatures that you can pass on (exchange). A signing authority can include one signature for an intermediary (server) and another for another recipient (end-user client). These are differentiated by the kid, and the server doesn't need to verify or even decode the protected header or signature. Or perhaps the intermediary doesn't have the client's secret in order to verify a signature.
For example, a multi-recipient message (e.g. a chat room) could be processed by a relay/proxy which, using the kid in the unprotected header, passes along a reconstructed compact JWS (${protected}.${payload}.${signature}) to each recipient based on the kid (or any other custom unprotected header field, like userId or endpoint).
Another example would be a server with access to many different keys, where a cleartext kid would be faster than iterating over and decoding each protected header to find which key was used.
From one perspective, all you're doing is skipping base64url-decoding the protected header for performance; but if you're going to proxy/relay the data, then you're also not polluting the protected header, which is meant for another recipient.