I'm reading the PKCS#11 documentation, and I cannot clearly understand what the CKA_SENSITIVE attribute of a key means.
More generally: where can I find descriptions of the attributes?
Quote from the PKCS#11 spec v2.20:
If the CKA_SENSITIVE attribute is CK_TRUE, or if the CKA_EXTRACTABLE attribute is CK_FALSE, then certain attributes of the secret key cannot be revealed in plaintext outside the token. Which attributes these are is specified for each type of secret key in the attribute table in the section describing that type of key.
In general this means that the actual value of the secret key is not exposed. Which attributes make up that value depends on the key type: for secret keys it is generally CKA_VALUE; for private RSA keys it is CKA_PRIVATE_EXPONENT plus the Chinese Remainder Theorem parameters, if those are part of the key.
I found out that if CKA_SENSITIVE = CK_FALSE, the clear value of the key (for secret keys) can be retrieved with the C_GetAttributeValue function, while you cannot retrieve the value at all if CKA_SENSITIVE = CK_TRUE.
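To make this concrete, here is a minimal sketch using the PyKCS11 wrapper against SoftHSM2 (both my choice, not from the question; the module path, PIN, and key length are placeholders). It generates an AES key with CKA_SENSITIVE = CK_FALSE and reads CKA_VALUE back via C_GetAttributeValue:

import PyKCS11

lib = PyKCS11.PyKCS11Lib()
lib.load("/usr/lib/softhsm/libsofthsm2.so")  # module path is an assumption

slot = lib.getSlotList(tokenPresent=True)[0]
session = lib.openSession(slot, PyKCS11.CKF_SERIAL_SESSION | PyKCS11.CKF_RW_SESSION)
session.login("1234")  # placeholder user PIN

# Generate an AES session key that is NOT sensitive and IS extractable.
template = [
    (PyKCS11.CKA_CLASS, PyKCS11.CKO_SECRET_KEY),
    (PyKCS11.CKA_KEY_TYPE, PyKCS11.CKK_AES),
    (PyKCS11.CKA_VALUE_LEN, 32),
    (PyKCS11.CKA_SENSITIVE, PyKCS11.CK_FALSE),
    (PyKCS11.CKA_EXTRACTABLE, PyKCS11.CK_TRUE),
]
key = session.generateKey(template)

# Because CKA_SENSITIVE is CK_FALSE, C_GetAttributeValue may reveal CKA_VALUE.
# With CKA_SENSITIVE = CK_TRUE the same call fails with CKR_ATTRIBUTE_SENSITIVE.
value = session.getAttributeValue(key, [PyKCS11.CKA_VALUE])[0]
print(bytes(value).hex())

session.logout()
session.closeSession()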
The transit secrets engine returns encrypted data with a prefix:
% vault write transit/encrypt/my-key plaintext=$(base64 <<< "my secret data")
Key           Value
ciphertext    vault:v1:C7BqsulaJTww6+zyO+0TnjFUUdDVTQWIatlbxOtEkZbF5govTZAp8S6gjQ==
Is there any way to customize this, so that we can change the vault:v1: prefix to something like CompanyName:app:? For example,
vault:v1:VHTTBb2EyyNYHsa3XiXsvXOQSLKulH+NqS4eRZdtc2TwQCxqJ7PUipvqQ==
would become:
CompanyName:app:v1:VHTTBb2EyyNYHsa3XiXsvXOQSLKulH+NqS4eRZdtc2TwQCxqJ7PUipvqQ==
Vault has a default version template that evaluates to vault:v{{version}}. There is code that supports a custom version template, but the version_template parameter is ignored when you create the key.
So as of today, this option does not exist, sorry.
This metadata is not encrypted (nor signed). I suggest you either add a prefix to it:
CompanyName:app:vault:v1:VHTTBb2EyyNYHsa3XiXsvXOQSLKulH+NqS4eRZdtc2TwQCxqJ7PUipvqQ==
Or replace it:
CompanyName:app:v1:VHTTBb2EyyNYHsa3XiXsvXOQSLKulH+NqS4eRZdtc2TwQCxqJ7PUipvqQ==
To be future proof (so that you can remove your custom code and use version_template one day), I suggest that you keep a link between my-key (the name of the key) and the prefix. As the code stands today, it is unlikely that Vault will support multiple prefixes for a single key name.
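If it helps, here is a minimal sketch in Python of that replace-the-prefix idea. The PREFIXES map and both helper functions are hypothetical (not part of Vault's API); the map is also where you would keep the key-name-to-prefix link suggested above:

# Hypothetical helpers: rewrite Vault transit ciphertext prefixes.
# PREFIXES keeps the key-name -> prefix link, one prefix per key name.
PREFIXES = {"my-key": "CompanyName:app"}

def brand(key_name: str, ciphertext: str) -> str:
    """vault:v1:... -> CompanyName:app:v1:..."""
    assert ciphertext.startswith("vault:")
    return PREFIXES[key_name] + ciphertext[len("vault"):]

def unbrand(key_name: str, branded: str) -> str:
    """Restore the vault:v1:... form expected by transit/decrypt."""
    prefix = PREFIXES[key_name]
    assert branded.startswith(prefix + ":")
    return "vault" + branded[len(prefix):]

ct = "vault:v1:VHTTBb2EyyNYHsa3XiXsvXOQSLKulH+NqS4eRZdtc2TwQCxqJ7PUipvqQ=="
branded = brand("my-key", ct)  # CompanyName:app:v1:VHTT...
assert unbrand("my-key", branded) == ct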
As you may know, Thales has introduced two attributes named CKA_EXPORT and CKA_EXPORTABLE to make key backup procedures more secure.
Based on the documentation, only the Security Officer (SO) can enable the CKA_EXPORT attribute of a key, and as far as I know, the SO has access to non-private keys only.
So to enable CKA_EXPORT on a key, I think I need to follow this procedure:
1. Log in as "User" and create a key with the PRIVATE attribute equal to CK_FALSE.
2. Log in as "SO" and modify the attributes to make EXPORT = CK_TRUE and PRIVATE = CK_TRUE.
Well, I can enable the EXPORT attribute, but when I try to modify the PRIVATE attribute, I receive an error saying:
C:\Users\admin>ctkmu m -s1 -n "myKeyName" -aP
ProtectToolkit C Key Management Utility 5.2.0
Copyright (c) Safenet, Inc. 2009-2016
Enter user PIN for slot 1:
ctkmu: Modify operation failed 0x10 - attribute read only
The question is: How can I make the key PRIVATE after enabling its EXPORT attribute?
How do I create a valid key for Azurite in Storage Explorer? When I give some random alphanumeric value, it fails, saying it is not a valid base64 value.
Can you provide some more details on the scenario you are trying to address?
The storage emulator defaults to the standard dev account key; see:
https://github.com/Azure/Azurite#storage-accounts
If you want to use a different key, you need to replace the account key with a valid base64 string in the constants.ts files (there is one per API).
You can see how it is done in the code:
export const EMULATOR_ACCOUNT_KEY_STR =
"Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
export const EMULATOR_ACCOUNT_KEY = Buffer.from(
EMULATOR_ACCOUNT_KEY_STR,
"base64"
);
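If you only need a syntactically valid key of your own, note that the emulator decodes it with Buffer.from(..., "base64"), and the built-in key above decodes to 64 random bytes. Assuming any cleanly decodable base64 string of that shape works, a quick Python sketch to produce one:

import base64
import os

# 64 random bytes, base64-encoded: same shape as the built-in key above.
print(base64.b64encode(os.urandom(64)).decode("ascii"))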
In my form it is showing my policy and x-amz-credential, x-amz-algorithm, x-amz-signature, my bucket, etc. Is it safe for these values to be visible?
data-form-data = "{"key":"/uploads/temporary/<some random numbers/letters>/${filename}",
"success_action_status":"201",
"acl":"public-read",
"Content-Type":"image/jpeg",
"policy":"<bunch of random numbers/letters",
"x-amz-credential":"<your-access-key-id>/<date>/<aws-region>/<aws-service>/aws4_request",
"x-amz-algorithm":"<some random numbers/lettering>",
"x-amz-date":"<some random numbers/letters>",
"x-amz-signature":"<some random numbers/letters>"}"
data-url="https://<bucket-name>.s3.amazonaws.com"
data-hose="<bucket-name>.s3.amazonaws.com
Yes, that's fine. The mechanism is designed not to expose sensitive data, and this data isn't sensitive.
Your AWS secret access key is the only value that is secret and must not be revealed. (There's also a sensitive intermediate value called the signing key, generated from the secret, which you won't see unless you wrote your own V4 request-signing code.) The signature is derived from the signing key and other request parameters. The signing key is service- and region-specific: it is derived from the secret, used to sign, then discarded. Both the signing key and the signature are produced by a one-way process, which makes it computationally infeasible to reverse-engineer the secret from them.
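For reference, that derivation is the documented Signature Version 4 scheme: a chain of HMAC-SHA256 operations over the credential scope, each output keying the next. A sketch in Python (the secret below is the example key from the AWS documentation):

import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    # Each step is one-way: the secret cannot be recovered from the output.
    k_date = hmac_sha256(("AWS4" + secret).encode("utf-8"), date)  # date = YYYYMMDD
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

# x-amz-signature is then HMAC-SHA256(signing_key, string_to_sign), hex-encoded.
sk = signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "20130806", "us-east-1", "s3")
print(sk.hex())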
Can you let me know the best way to set the AWS access key and AWS secret key while inside spark-shell? I tried setting them using
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", MY_ACCESS_KEY)
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", MY_SECRET_KEY)
and got
java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively)
I am able to get it to work by passing the credentials as part of the URL:
s3n://MY_ACCESS_KEY:MY_SECRET_KEY@BUCKET_NAME/KEYNAME
after replacing the slashes in my secret key with %2F, but I wanted to know if there is an alternative to embedding my access key and secret key in the URL.
In addition to Holden's answer, here's a more specific example:
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.{JobConf, TextInputFormat}
val jobConf = new JobConf(sparkContext.hadoopConfiguration)
jobConf.set("fs.s3n.awsAccessKeyId", MY_ACCESS_KEY)
jobConf.set("fs.s3n.awsSecretAccessKey", MY_SECRET_KEY)
jobConf.set("mapred.input.dir", "s3n://BUCKET_NAME/KEYNAME") // hadoopRDD reads its input path from the JobConf
val rdd = sparkContext.hadoopRDD(jobConf, classOf[TextInputFormat], classOf[LongWritable], classOf[Text])
You can use the hadoopRDD function and specify the JobConf object directly with the required properties.