The answer on many forums is to simply "create the key". However, this also requires further maintenance, because the flow.xml.gz file then cannot be easily migrated between deployments. I did not experience this on previous versions (1.12.0 and 1.13.0).
Any thoughts on mitigating this issue?
Error:
ERROR [main] o.a.nifi.properties.NiFiPropertiesLoader Clustered Configuration Found: Shared Sensitive Properties Key [nifi.sensitive.props.key] required for cluster nodes
ERROR [main] org.apache.nifi.NiFi Failure to launch NiFi due to java.lang.IllegalArgumentException: There was an issue decrypting protected properties
Similar question: Docker - Nifi : 1.14.0 - Startup failure - Caused by: org.apache.nifi.properties.SensitivePropertyProtectionException
Create the key.
Don't let NiFi generate a random one; supply it from your deployment code along with all the other settings that go into nifi.properties. If you have the same key, you can copy/migrate the flow.xml.gz and share it within clusters.
This also works with an encrypted key if you provide the decryption hex key in bootstrap.conf when deploying.
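For illustration, a minimal sketch of the two entries a deployment would template out (the key and hex value below are placeholders, not real secrets):
# nifi.properties (same value on every node of the cluster)
nifi.sensitive.props.key=myLongSharedKey12345
# bootstrap.conf (only needed when the key in nifi.properties is stored in protected/encrypted form)
nifi.bootstrap.sensitive.key=0123456789abcdef0123456789abcdef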
The latest NiFi version has support for HashiCorp Vault. That might allow you to obtain the correct keys at runtime and share them among cluster nodes.
If you want to work without a key, you will need to use NiFi 1.13.2 or older. From the admin guide:
Starting with version 1.14.0, NiFi requires a value for 'nifi.sensitive.props.key' in nifi.properties.
The following command can be used to read an existing flow.xml.gz configuration and set a new sensitive properties key in nifi.properties:
$ ./bin/nifi.sh set-sensitive-properties-key [sensitivePropertiesKey]
The minimum required length for a new sensitive properties key is 12 characters.
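For example (the key here is only an illustration; any value of at least 12 characters works):
$ ./bin/nifi.sh set-sensitive-properties-key 'exampleSensitiveKey1'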
Ignore this error.
Create a new deployment and either import the flow file via the GUI or copy and paste the XML flow file, then restart the deployment.
In my testing, I have not seen any evidence that the sensitive key property is required.
Related
I have this error when I deploy my app:
My properties are these:
quarkus.vault.secret-config-kv-path=kv2/dev/test/test/getting-started-v1
quarkus.vault.kv-secret-engine-version=2
quarkus.vault.authentication.kubernetes.role=getting-started-v1
My policy in HashiCorp Vault uses the same path, and the role is attached to this policy.
When I disable the quarkus.vault.secret-config-kv-path property, the app runs but does not load any secrets; this is the console message:
I have followed this documentation.
Any help or ideas on how to get the values from HashiCorp Vault, perhaps through another, programmatic method?
This could be the typical KV v1 vs. v2 issue. V2 uses a different path, which you need to account for in your settings and policy; check the documentation: https://developer.hashicorp.com/vault/tutorials/secrets-management/versioned-kv#compare-kv-v1-and-kv-v2
In most cases you need to add data after the mount point in the path: kv2/data/dev/test/test...
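A hedged sketch of what that means for the policy, assuming the kv2 mount and secret path from the question (the capabilities are illustrative):
# Policy attached to the getting-started-v1 Kubernetes role
path "kv2/data/dev/test/test/getting-started-v1" {
  capabilities = ["read"]
}
# Quick check from the CLI; on a v2 engine the CLI adds data/ for you:
# vault kv get kv2/dev/test/test/getting-started-v1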
We use Kafka, Kafka Connect, and Schema Registry in our stack. The version is 2.8.1 (Confluent 6.2.1).
We use Kafka Connect's converter configs (key.converter and value.converter) with the value io.confluent.connect.avro.AvroConverter.
It registers a new schema for topics automatically. But there is an issue: AvroConverter doesn't specify subject-level compatibility for a new schema,
and an error appears when we try to get the config for the subject via the REST API /config endpoint: Subject 'schema-value' does not have subject-level compatibility configured
If we specify the defaultToGlobal request parameter, the global compatibility is returned. But that doesn't work for us because we cannot add it to the request; we are using a third-party UI: AKHQ.
How can I specify subject-level compatibility when registering a new schema via AvroConverter?
Last I checked, the only properties that can be provided to any of the Avro serializer configs that affect the Registry HTTP client are the url, whether to auto-register, and whether to use the latest schema version.
There's no property (or even method call) that sets either the subject-level or global compatibility config during schema registration.
You're welcome to check out the source code to verify this
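For context, a hedged sketch of the registry-related knobs that can be set on the converter in a Connect worker or connector config (property names follow the Confluent Avro serializer; the URL and values are illustrative). Note that nothing here touches compatibility:
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://schema-registry:8081
value.converter.auto.register.schemas=true
value.converter.use.latest.version=false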
But it doesn't work for us because we cannot specify it in the request. We are using 3rd party UI: AKHQ
Doesn't sound like a Connect problem. Create a PR for the AKHQ project to fix the request.
As of 2021-10-26, using the AKHQ 0.18.0 jar with Confluent 6.2.0, the Schema Registry integration in AKHQ is working fine.
Note: I also tried Confluent 6.2.1 and saw exactly the same error, so you may want to switch back to 6.2.0 and give it a try.
P.S.: I am using all of this only for my local dev environment (VirtualBox, Ubuntu).
@OneCricketeer is correct.
Unfortunately, there is no way to specify subject-level compatibility in AvroConverter.
I see only two solutions:
Override AvroConverter to add a property and the functionality to send an additional request to the /config/{subject} API after registering the schema (sketched below).
Contribute to AKHQ to support the defaultToGlobal parameter. But in this case we would also need to backport the schema-registry RestClient. GitHub issue
The second solution is preferable until the converter lets the user specify the compatibility level in its settings. Without this setting in the native AvroConverter, we would have to use a custom converter for every client that writes a schema, which takes a lot of effort.
To me it seems strange that the client cannot set the compatibility at the moment of registering the schema and has to use a separate request for it.
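For the first option, the extra call is a single request to the Schema Registry compatibility endpoint after the schema is registered. A minimal sketch; the registry URL and the BACKWARD level are placeholders, while the subject name is taken from the error above:
curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"compatibility": "BACKWARD"}' \
  http://schema-registry:8081/config/schema-value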
I installed Vault locally, started it, initialized and unsealed the vault, and added some secrets. After rebooting, I am unable to use the keys to unseal the vault. The first two unseal keys are accepted without issue, but after submitting the third key, I get an error response:
Error unsealing: Error making API request.
URL: PUT https://127.0.0.1:28200/v1/sys/unseal
Code: 500. Errors:
* failed to decrypt encrypted stored keys: cipher: message authentication failed
Any ideas what is going on? I am running Vault version 1.4.2, and the command I am using is vault operator unseal. The server config is:
vault_server.hcl
listener "tcp" {
address = "127.0.0.1:28200"
tls_cert_file = "/etc/vault/certs/vault_cert.crt"
tls_key_file = "/etc/vault/certs/vault_cert.key"
}
storage "file" {
path = "/etc/vault/mnt/data"
}
api_addr = "https://127.0.0.1:28200" # my $VAULT_ADDR is https://127.0.0.1:28200
disable_mlock = true
The relevant log output:
Jun 12 21:26:24 lambda vault[1147]: 2020-06-12T21:26:24.537-0500 [DEBUG] core: unseal key supplied
Jun 12 21:26:24 lambda vault[1147]: 2020-06-12T21:26:24.537-0500 [DEBUG] core: cannot unseal, not enough keys: keys=1 threshold=3 nonce=920f7d80-fdcc-3bc3-149e-8b069ef23acb
Jun 12 21:26:38 lambda vault[1147]: 2020-06-12T21:26:38.069-0500 [DEBUG] core: unseal key supplied
Jun 12 21:26:38 lambda vault[1147]: 2020-06-12T21:26:38.069-0500 [DEBUG] core: cannot unseal, not enough keys: keys=2 threshold=3 nonce=920f7d80-fdcc-3bc3-149e-8b069ef23acb
Jun 12 21:26:51 lambda vault[1147]: 2020-06-12T21:26:51.984-0500 [DEBUG] core: unseal key supplied
The most relevant issues I can find in web searches are for people who inadvertently corrupted their storage:
https://github.com/hashicorp/vault/issues/5498
https://groups.google.com/forum/#!msg/vault-tool/N9fc_dUejJw/OfovdNNHBwAJ
https://discuss.hashicorp.com/t/move-vault-installation-between-servers/6990/2
I'm not sure that applies here. I'm using filesystem storage, vault is the owner of everything in /etc/vault, and I can't tell that any data has been lost or corrupted.
I had the same issue with a freshly installed Vault 1.4.2 in HA mode on GKE, using the official vault-k8s Helm chart. I deployed it to two environments. The first one was OK, but the second one failed exactly the same way as you described when I tried to join the 2nd Vault instance to the HA cluster. I simply deleted and re-installed it a few times and eventually it worked.
TL;DR: Vault will always accept key shards until it hits the threshold count, so that it can attempt to assemble and use the resulting unseal key. Accepting a key is not an indicator of its validity.
The keys distributed by the Vault server are actually "shards" or "shares" (the exact terminology varies between documentation sources) that are generated by splitting/sealing the master key using Shamir's Secret Sharing. Because the master key cannot be decrypted without the minimum number of shards (defaults to 3, but can be configured to a different value), the Vault server has no way of determining whether a provided shard is valid until that minimum is supplied, at which point it can attempt to (1) reconstruct an unseal key from the shards, and (2) use the resulting key against the master key.
Hashicorp provides a decent overview of the process here:
https://www.vaultproject.io/docs/concepts/seal
More information on shamir and the math behind it here:
https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing
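To make that concrete, an abridged and illustrative unseal session (output trimmed): the progress counter climbs as shards are supplied, and only at the threshold does Vault test the reconstructed key, which is where your error shows up.
$ vault operator unseal      # 1st shard: accepted, nothing verified yet
Unseal Progress    1/3
$ vault operator unseal      # 2nd shard: accepted
Unseal Progress    2/3
$ vault operator unseal      # 3rd shard: threshold reached, combined key is tested
Error unsealing: ... failed to decrypt encrypted stored keys: cipher: message authentication failed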
Now the bad news:
The error implies you are using an incorrect set of keys to unlock your vault: the resulting unseal key is incorrect. Either they have been changed or they are for a different vault (maybe a bad cut-and-paste injected a newline?). While some have suggested a reinstall, I don't think this will solve anything. If you are desperate, you could try using a different version of Vault in case there is an unseal bug in one of your distros, but that's ... a reach.
I had the exact same issue. It turns out that I was not using the correct keys (I was using old keys copied elsewhere). After using the correct keys, I was able to unseal the vault without any issues.
https://discuss.hashicorp.com/t/not-able-to-unseal-vault-after-container-pod-restart/16797
We are getting the below error in WildFly/JBoss when we try to encrypt the DB password using the vault. Can you provide a solution for this?
Caused by: org.jboss.security.vault.SecurityVaultException: PB00027: Vault Mismatch:Shared Key does not match for vault block:bea_interface and attributeName:password
There are three possible causes:
1) There is just a mismatch between the passwords. Check what you used when setting up the vault.
2) The encrypted password files are missing:
Aside from the keystore, you should not forget to also copy the two other files that vault.sh generates:
vault.keystore
ENC.dat
Shared.dat
You need to copy all three files to the desired location, for example to the "standalone/configuration/" directory.
In the vault definition, these are the two parameters that will tell JBoss where to find them (a fuller sketch of the whole vault definition follows this list):
<vault-option name="KEYSTORE_URL" value="${jboss.server.config.dir}/vault.keystore"/>
<vault-option name="ENC_FILE_DIR" value="${jboss.server.config.dir}/"/>
3) You are using a keystore alias name longer than 10 characters.
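For reference, a hedged sketch of a complete vault definition as it would appear in the server configuration (the masked keystore password, alias, salt, and iteration count are placeholders; vault.sh prints the real values for your own setup):
<vault>
    <vault-option name="KEYSTORE_URL" value="${jboss.server.config.dir}/vault.keystore"/>
    <vault-option name="KEYSTORE_PASSWORD" value="MASK-AbCdEf123"/>
    <vault-option name="KEYSTORE_ALIAS" value="vault"/>
    <vault-option name="SALT" value="12345678"/>
    <vault-option name="ITERATION_COUNT" value="50"/>
    <vault-option name="ENC_FILE_DIR" value="${jboss.server.config.dir}/"/>
</vault>
The vault block and attribute from the error would then be referenced in the configuration with the expression that vault.sh printed when the password was stored, e.g. ${VAULT::bea_interface::password::1}.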
This is regarding a WSO2 API Manager worker cluster configuration with an external Postgres DB. I have used two databases: wso2_carbon for the registry and user management, and wso2_am for storing APIs. The respective XMLs have been configured, and the Postgres scripts have been run to create the database tables. When wso2server.sh is run, the console log shows clustering enabled and the members of the domain. However, on the https://: when I try to create APIs, it throws an error in the design phase itself.
ERROR - add:jag org.wso2.carbon.apimgt.api.APIManagementException: Error while checking whether context exists
[2016-12-13 04:32:37,737] ERROR - ApiMgtDAO Error while locating API: admin-hello-v.1.2.3 from the database
java.sql.SQLException: org.postgres.Driver cannot be found by jdbc-pool_7.0.34.wso2v2
As per the error message, the driver class name you have given is org.postgres.Driver, which is not correct. It should be org.postgresql.Driver. Double-check the master-datasources.xml config.
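For reference, a minimal sketch of the relevant fragment in repository/conf/datasources/master-datasources.xml (the datasource name, URL, and credentials are placeholders):
<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://localhost:5432/wso2_am</url>
            <username>wso2user</username>
            <password>changeme</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>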