We are getting the error below in WildFly/JBoss when trying to encrypt the DB password using the vault. Can you provide a solution for this?
Caused by: org.jboss.security.vault.SecurityVaultException: PB00027: Vault Mismatch:Shared Key does not match for vault block:bea_interface and attributeName:password
There are three possible causes:
1). There is just a mismatch between the passwords. Check what you used when setting up the vault.
2). The encrypted password files are missing:
Besides the keystore, do not forget the two other files that vault.sh generates:
vault.keystore
ENC.dat
Shared.dat
You need to copy all three files to the desired location, for example to the "standalone/configuration/" directory.
In the vault definition, these are the two parameters that tell JBoss where to find them (a fuller vault definition sketch follows this list):
<vault-option name="KEYSTORE_URL" value="${jboss.server.config.dir}/vault.keystore"/>
<vault-option name="ENC_FILE_DIR" value="${jboss.server.config.dir}/"/>
3). You are using a keystore alias name longer than 10 characters.
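For reference, the vault definition in standalone.xml usually carries a few more options than the two shown above. This is only a sketch with placeholder values; the masked password, alias, salt and iteration count must be the ones vault.sh printed when you set up the vault:

<vault>
    <vault-option name="KEYSTORE_URL" value="${jboss.server.config.dir}/vault.keystore"/>
    <vault-option name="KEYSTORE_PASSWORD" value="MASK-..."/>
    <vault-option name="KEYSTORE_ALIAS" value="vault"/>
    <vault-option name="SALT" value="12345678"/>
    <vault-option name="ITERATION_COUNT" value="50"/>
    <vault-option name="ENC_FILE_DIR" value="${jboss.server.config.dir}/"/>
</vault>

The configuration then references the stored attribute as ${VAULT::bea_interface::password::1}; the block and attribute names are the ones from the error above, and the trailing token must match what vault.sh reported when the value was stored.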
The answer on many forums is simply to "create the key". However, that also requires further maintenance, because the flow.xml.gz file can then no longer be easily migrated between deployments. I did not experience this on previous versions (v1.12.0 and v1.13.0).
Any thoughts on mitigating this issue?
Error:
ERROR [main] o.a.nifi.properties.NiFiPropertiesLoader Clustered Configuration Found: Shared Sensitive Properties Key [nifi.sensitive.props.key] required for cluster nodes
ERROR [main] org.apache.nifi.NiFi Failure to launch NiFi due to java.lang.IllegalArgumentException: There was an issue decrypting protected properties
Similar question: Docker - Nifi : 1.14.0 - Startup failure - Caused by: org.apache.nifi.properties.SensitivePropertyProtectionException
Create the key.
Don't let NiFi generate a random one; supply it from your deployment code along with all the other settings that go into nifi.properties. If you have the same key, you can copy/migrate the flow.xml.gz and share it within clusters.
This also works with an encrypted key if you provide the decryption hex key in bootstrap.conf when deploying.
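As a minimal sketch of what that looks like in the deployed configuration, assuming the usual property names (nifi.sensitive.props.key in nifi.properties, nifi.bootstrap.sensitive.key in bootstrap.conf) and with placeholder values:

# nifi.properties - must be the same value on every node in the cluster
nifi.sensitive.props.key=changeMeSharedKey123

# bootstrap.conf - only needed if the protected properties are stored encrypted
nifi.bootstrap.sensitive.key=0123456789abcdef0123456789abcdef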
The latest NiFi version has support for HashiCorp Vault. That might allow you to obtain the correct keys at runtime and share them among cluster nodes.
If you want to work without a key, you will need to use NiFi 1.13.2 or older. From the admin guide:
Starting with version 1.14.0, NiFi requires a value for 'nifi.sensitive.props.key' in nifi.properties.
The following command can be used to read an existing flow.xml.gz configuration and set a new sensitive properties key in nifi.properties:
$ ./bin/nifi.sh set-sensitive-properties-key [sensitivePropertiesKey]
The minimum required length for a new sensitive properties key is 12 characters.
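For example, with a placeholder key of at least 12 characters:

$ ./bin/nifi.sh set-sensitive-properties-key changeMeSharedKey123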
Ignore this error.
Create a new deployment, either import the flow file via the GUI or copy and paste the XML flow file, then restart the deployment.
In my testing, I have not seen any evidence that the sensitive key property is required.
I installed vault locally and started, unsealed, and initialized the vault and added some secrets. After rebooting, I am unable to use the keys to unseal the vault. The first two unseal keys are accepted without issue, but after submitting the third key, I get an error response:
Error unsealing: Error making API request.
URL: PUT https://127.0.0.1:28200/v1/sys/unseal
Code: 500. Errors:
* failed to decrypt encrypted stored keys: cipher: message authentication failed
Any ideas what is going on? I am running vault version 1.4.2. The command I am using is vault operator unseal. The server config is:
vault_server.hcl
listener "tcp" {
address = "127.0.0.1:28200"
tls_cert_file = "/etc/vault/certs/vault_cert.crt"
tls_key_file = "/etc/vault/certs/vault_cert.key"
}
storage "file" {
path = "/etc/vault/mnt/data"
}
api_addr = "https://127.0.0.1:28200" # my $VAULT_ADDR is https://127.0.0.1:28200
disable_mlock = true
The relevant log output:
Jun 12 21:26:24 lambda vault[1147]: 2020-06-12T21:26:24.537-0500 [DEBUG] core: unseal key supplied
Jun 12 21:26:24 lambda vault[1147]: 2020-06-12T21:26:24.537-0500 [DEBUG] core: cannot unseal, not enough keys: keys=1 threshold=3 nonce=920f7d80-fdcc-3bc3-149e-8b069ef23acb
Jun 12 21:26:38 lambda vault[1147]: 2020-06-12T21:26:38.069-0500 [DEBUG] core: unseal key supplied
Jun 12 21:26:38 lambda vault[1147]: 2020-06-12T21:26:38.069-0500 [DEBUG] core: cannot unseal, not enough keys: keys=2 threshold=3 nonce=920f7d80-fdcc-3bc3-149e-8b069ef23acb
Jun 12 21:26:51 lambda vault[1147]: 2020-06-12T21:26:51.984-0500 [DEBUG] core: unseal key supplied
The most relevant issues I can find in web searches are for people who inadvertently corrupted their storage:
https://github.com/hashicorp/vault/issues/5498
https://groups.google.com/forum/#!msg/vault-tool/N9fc_dUejJw/OfovdNNHBwAJ
https://discuss.hashicorp.com/t/move-vault-installation-between-servers/6990/2
I'm not sure that applies here. I'm using filesystem storage, vault is the owner of everything in /etc/vault, and I can't tell that any data has been lost or corrupted.
I had the same issue with freshly installed vault 1.4.2 in HA mode on GKE using their official vault-k8s helm chart. I deployed it on 2 environments. The first one was OK, but the second one was failing exactly the same way as you described when I tried to join the 2nd vault instance to the HA cluster. I simply deleted and re-installed it a few times and eventually it worked.
TL;DR: Vault will always accept keys until it reaches the threshold count, so that it can attempt to assemble and use the resulting unseal key. Accepting a key is not an indicator of its validity.
The keys distributed by the vault server are actually "shards" or "shares" (the exact terminology varies between documentation sources) that are generated by splitting the master key using Shamir's Secret Sharing. Because the master key cannot be recovered without the minimum number of shards (the threshold defaults to 3, but can be configured to a different value), the vault server has no way of determining whether a provided shard is valid until that minimum has been supplied, at which point it can attempt to 1. reassemble the unseal key from the shards, and 2. use the resulting key against the master key.
Hashicorp provides a decent overview of the process here:
https://www.vaultproject.io/docs/concepts/seal
More information on shamir and the math behind it here:
https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing
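The share count and threshold are fixed when the vault is first initialized, and the current unseal progress can be checked without supplying a key. For illustration (the values shown are just the defaults):

$ vault operator init -key-shares=5 -key-threshold=3
$ vault status    # "Unseal Progress 2/3" only counts accepted shards; it says nothing about their validity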
Now the bad news:
The error implies you are using an incorrect set of keys to unlock your vault: the resulting unseal key is wrong. Either the keys have been changed or they are for a different vault (maybe a bad cut-and-paste injected a newline?). While some have suggested a reinstall, I don't think that will solve anything. If you are desperate, you could try using a different version of vault in case there is an unseal bug in one of your distros, but that's... a reach.
I had the exact same issue. It turns out that I was not using the correct keys (I was using old keys copied elsewhere). After using the correct keys, I was able to unseal the vault without any issues.
https://discuss.hashicorp.com/t/not-able-to-unseal-vault-after-container-pod-restart/16797
I am trying to import an RSA Key.
I open a cmd prompt in Admin mode, go to C:\Windows\Microsoft.NET\Framework64\v4.0.30319, and my command is as follows: aspnet_regiis -pi "Key" "S:\RSAKeys\Key.xml" -pku
This is the exact same command that my coworker used and it worked perfectly for him. When I try it though, I get "Importing RSA Keys from file..Unable to find the specified file. Failed!"
What could be different between our machines?
I have also tried different things (removing the -pku, trying it not as admin, etc.) but in the end it doesn't fully work.
Trying it not as admin with -pku will say succeeded (but then when I try to use the service, it errors with "The RSA key container could not be opened"). Trying it not as admin without -pku will error with "Access is denied."
Edit 1: It looks like this may be a read-permissions issue between the S drive and the C drive. Putting the file on the C drive allowed the import to succeed, but the service that uses the imported key still errors with "The RSA key container could not be opened".
Final Edit: After some research, I discovered that I needed to change permissions. I used these documents to help: https://serverfault.com/questions/293416/the-rsa-key-container-could-not-be-opened-windows-server-2008-r2, http://austrianalex.com/rsaprotectedconfigurationprovider-not-recommended-for-children-under-5.html, and the question "The RSA key container could not be opened".
Unfortunately, none of them fixed the problem. Somehow, the RSA key was imported in such a way that even the Administrators group didn't have the permissions needed to change permissions. So I went and found the RSA key under the C:\Users\All Users\Microsoft\Crypto\RSA\MachineKeys folder. I had originally tried giving the Administrators group (which was only me anyway) full permissions, but received a Safe Handle error and had to remove that.
Finally, I added myself (not the Administrators group) with full permissions and it worked. Thanks @Thymine for pointing me in the right direction!
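In case it helps anyone else, the permission grant can also be scripted instead of being done through the file properties dialog. This is only a sketch with placeholder account and file names; either grant access to the container itself, or fix the ACL on the key file under MachineKeys:

rem grant an account access to the imported container (run as admin from the Framework64\v4.0.30319 folder)
aspnet_regiis -pa "Key" "DOMAIN\ServiceAccount"

rem or give the account full control on the key file directly (the file name is a placeholder)
icacls "C:\Users\All Users\Microsoft\Crypto\RSA\MachineKeys\<key-container-file>" /grant "DOMAIN\ServiceAccount":F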
I am building a solution to store keys and encrypt/decrypt data using an HSM. I am using a network HSM manufactured by Thales. The thing I have noticed is that a key generated on client machine 1 is inaccessible on client machine 2; the key can only be used to encrypt/decrypt data on client machine 1. Is there anything that needs to be changed in my implementation, or is there something to be changed in the network HSM configuration to enable this? I am using the Pkcs11Interop library for all the key management operations.
I am using token-based OCS protection.
I suppose your client machine 1 has a new file in its kmdata/local directory associated with the newly generated key, but client machine 2 does not have this file in its kmdata/local directory.
You have to find a way to share the kmdata/local directory between the clients, for instance using NFS; a rough sketch follows.
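A rough sketch of both options, assuming a standard nShield client install where the Security World data lives under /opt/nfast/kmdata/local (host names and file names are placeholders):

# on client machine 1: the most recently created key_pkcs11_* file belongs to the new key
ls -lt /opt/nfast/kmdata/local/

# option 1: copy the key file (and, if machine 2 was never enrolled, the world/module files too)
scp /opt/nfast/kmdata/local/key_pkcs11_<ident> client2:/opt/nfast/kmdata/local/

# option 2: mount a shared kmdata/local on both clients, e.g. over NFS
mount -t nfs fileserver:/export/kmdata/local /opt/nfast/kmdata/local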
I have changed the carbon.xml file and axis2.xml to point to my own keystore. But when I start the wso2-am, the log says:
WARN - ValidationResultPrinter The default keystore (wso2carbon.jks) is currently being used. To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile.
Example from axis2.xml:
<KeyStore>
    <Location>/data/wso2/certs/ibridge.jks</Location>
    <Type>JKS</Type>
    <Password>****</Password>
    <KeyPassword>****</KeyPassword>
</KeyStore>
There are two main reasons to change the default keystore password, which is "wso2carbon":
When moving to production environments, the keystore should be altered from the default wso2carbon one.
When changing the default keystore.
You can learn how to do this by following this blog post.
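For the password change itself, keytool can do it in place. A sketch using the keystore path from the question (it prompts for the old and new passwords; the alias is a placeholder):

keytool -storepasswd -keystore /data/wso2/certs/ibridge.jks
keytool -keypasswd -alias <key-alias> -keystore /data/wso2/certs/ibridge.jks

Remember to put the new passwords into carbon.xml and axis2.xml afterwards.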
Did you point to your jks file in the secret-conf.properties file (AM_HOME\repository\conf\security)?
Did you specify wso2carbon as the alias when creating the keystore? When the server starts up, it searches for keystores with the wso2carbon alias, and if it finds one, it assumes the default keystore is being used. If this is the case, try giving a different alias.
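If the alias turns out to be the culprit, the keystore can be regenerated with a different one. A rough sketch with placeholder values:

keytool -genkeypair -alias ibridge -keyalg RSA -keysize 2048 -validity 3650 \
    -dname "CN=example.org" -keystore /data/wso2/certs/ibridge.jks

Then point carbon.xml and axis2.xml at the new file, as in the axis2.xml snippet above.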