Password masking only works for JDBC Connectors - apache-kafka

We have set up our Kafka Connect workers to read credentials from a file instead of giving them directly in the connector config. This is what the login part of a connector config looks like:
"connection.user": "${file:/kafka/pass.properties:username}",
"connection.password":
"${file:/kafka/pass.properties:password}",
We also added these two lines to the "connect-distributed.properties" file:
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
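For context, the pass.properties file referenced by those placeholders is a plain Java properties file; a hypothetical example matching the keys used above:
username=myuser
password=my-secret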
Note that it works perfectly for JDBC connectors, so there is no problem with the pass.properties file itself. But for other connectors, such as Couchbase, RabbitMQ, S3 etc., it causes problems. All of these connectors work fine when we give the credentials directly, but when we try to make Connect read them from the file we get errors. What could be the reason? I don't see any JDBC-specific configuration here.
EDIT:
An error for the Couchbase connector in connect.log:
[2021-12-02 11:50:19,580] ERROR [com.couchbase.io][SaslAuthenticationFailedEvent][20ms] Authentication Failure - Potential causes: invalid credentials or if LDAP is enabled ensure PLAIN SASL mechanism is exclusively used on the PasswordAuthenticator (insecure) or TLS is used (recommended) {"circuitBreaker":"DISABLED","coreId":"0xbf785c7500000001","remote":"10.30.142.109:11210","status":"UNKNOWN","type":"KV","xerror":{"ref":"ae3ce600-7097-4077-9231-8ced290cd399"}} (com.couchbase.io:533)
[2021-12-02 11:50:19,580] WARN [com.couchbase.endpoint][EndpointConnectionFailedEvent][23ms] Connect attempt 9 failed because of AuthenticationFailureException: Authentication Failure - Potential causes: invalid credentials or if LDAP is enabled ensure PLAIN SASL mechanism is exclusively used on the PasswordAuthenticator (insecure) or TLS is used (recommended) {"circuitBreaker":"DISABLED","coreId":"0xbf785c7500000001","remote":"10.30.142.109:11210","type":"KV"} (com.couchbase.endpoint:523)
com.couchbase.client.core.error.AuthenticationFailureException: Authentication Failure - Potential causes: invalid credentials or if LDAP is enabled ensure PLAIN SASL mechanism is exclusively used on the PasswordAuthenticator (insecure) or TLS is used (recommended) {"circuitBreaker":"DISABLED","coreId":"0xbf785c7500000001","remote":"10.30.142.109:11210","status":"UNKNOWN","type":"KV","xerror":{"ref":"ae3ce600-7097-4077-9231-8ced290cd399"}}
at com.couchbase.client.core.io.netty.kv.SaslAuthenticationHandler.failConnect(SaslAuthenticationHandler.java:488)
at com.couchbase.client.core.io.netty.kv.SaslAuthenticationHandler.maybeFailConnect(SaslAuthenticationHandler.java:293)
at com.couchbase.client.core.io.netty.kv.SaslAuthenticationHandler.channelRead(SaslAuthenticationHandler.java:250)
at com.couchbase.client.core.io.netty.kv.MemcacheProtocolVerificationHandler.channelRead(MemcacheProtocolVerificationHandler.java:84)
at java.lang.Thread.run(Thread.java:748)
It says something about authentication, but the connector works fine when the credentials are given directly. If the masking is not working correctly, why does it work for JDBC connectors?

It looks like the problem was quote marks in the pass.properties file. The interesting thing is that JDBC connectors work well whether the credentials are typed with or without quote marks; maybe that is because it is the first line in the file, but that is only a small possibility.
So, do NOT use quote marks in your password files, even if some connectors happen to work that way.
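A likely explanation: FileConfigProvider loads the file with java.util.Properties, which does not strip quote marks, so a file like this (hypothetical values)
username="myuser"
password="my-secret"
resolves to values that literally include the quote characters. The JDBC driver apparently tolerated that, while Couchbase, RabbitMQ, S3 etc. treated the quotes as part of the credentials and failed to authenticate.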

Related

Subject does not have subject-level compatibility configured

We use Kafka, Kafka Connect and Schema Registry in our stack. The version is 2.8.1 (Confluent 6.2.1).
We set Kafka Connect's key.converter and value.converter configs to io.confluent.connect.avro.AvroConverter.
It registers a new schema for topics automatically. But there's an issue: AvroConverter doesn't specify subject-level compatibility for a new schema,
and an error appears when we try to get the config for the subject via the REST API /config endpoint: Subject 'schema-value' does not have subject-level compatibility configured
If we specify the request parameter defaultToGlobal, the global compatibility is returned. But that doesn't work for us because we cannot add it to the request; we are using a third-party UI, AKHQ.
How can I specify subject-level compatibility when registering a new schema via AvroConverter?
Last I checked, the only properties that can be provided to any of the Avro serializer configs that affect the Registry HTTP client are the URL, whether to auto-register, and whether to use the latest schema version.
There is no property (or even method call) that sets either the subject-level or global config during schema registration.
You're welcome to check out the source code to verify this.
But it doesn't work for us because we cannot specify it in the request. We are using 3rd party UI: AKHQ
That doesn't sound like a Connect problem. Create a PR for the AKHQ project to fix the request.
As of 2021-10-26, with the akhq 0.18.0 jar and confluent-6.2.0, the schema registry in AKHQ is working fine for me.
Note: I also tried confluent-6.2.1 and saw exactly the same error, so you may want to switch back to 6.2.0 and give it a try.
P.S.: I am using all of this only for my local dev environment (VirtualBox, Ubuntu).
@OneCricketeer is correct.
Unfortunately, there is no way to specify subject-level compatibility in AvroConverter.
I see only two solutions:
Override AvroConverter to add a property and the functionality to send an additional request to the /config/{subject} API after registering the schema (see the sketch after this list).
Contribute to AKHQ to support the defaultToGlobal parameter. But in this case we would also need to backport the schema-registry RestClient. GitHub issue
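For reference, the additional request in the first option is just a PUT to the Schema Registry compatibility endpoint; a minimal sketch with a hypothetical registry URL and subject name:
curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"compatibility": "BACKWARD"}' \
  http://localhost:8081/config/my-topic-value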
The second solution is preferable until the compatibility level can be specified in the converter's settings. Without such a setting in the native AvroConverter, we would have to use a custom converter for every client that writes a schema, which is a lot of effort.
To me it seems strange that the client cannot set the compatibility at the moment of registering the schema and instead has to use a separate request for it.

Add Relying Party Trust is failing in ADFS SAML

I've spent quite a few hours fighting with these issues, so I thought a quick recap might be helpful for somebody else too.
First, when trying to import an RP from a metadata URL, I was getting this error:
An error occurred during an attempt to read the federation metadata. Verify that the specified URL or hostname is a valid federation metadata endpoint.
...
Error message: The underlying connection was closed: An unexpected error occurred on a send.
The problem turned out to be caused by the fact that Windows Server, at least up to 2016, uses TLS 1.0 for the .NET Framework (in which the ADFS configuration wizard is implemented), while my service hosting the metadata document only allowed TLS 1.2 as the minimum version.
Dropping the minimum version to TLS 1.0 is a no-go from a security point of view, so the proper fix would be to enable TLS 1.2 as the default version on the ADFS server.
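For anyone taking that route, enabling TLS 1.2 as the default for .NET Framework 4.x applications (which includes the ADFS wizard) usually comes down to the strong-crypto registry values below, followed by a reboot; treat this as a sketch and verify the keys against Microsoft's documentation for your Windows version:
reg add "HKLM\SOFTWARE\Microsoft\.NETFramework\v4.0.30319" /v SchUseStrongCrypto /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\.NETFramework\v4.0.30319" /v SystemDefaultTlsVersions /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319" /v SchUseStrongCrypto /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319" /v SystemDefaultTlsVersions /t REG_DWORD /d 1 /f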
That would solve the issue (which I confirmed with a test), but then some of the other RPs that only support TLS 1.0 would stop working, so I had to give up on importing the metadata directly from a URL and used the file import option instead.
In this case another error popped up, which happened to be:
An error occurred during an attempt to read the federation metadata. Verify that the specified URL or hostname is a valid federation metadata endpoint.
...
Error message: Entity descriptor '...'. ID6018: Digest verification failed for reference '...'.
This one turned out to be caused by me: I had formatted the XML in the metadata file with line breaks and tabs to improve readability (it all sits on a single line originally). ADFS won't allow that, because reformatting invalidates the digest over the signed content, so the document must be exactly as it came out of the metadata endpoint.
The same issue might result in different error messages and codes, depending on the Windows and ADFS versions. For example, this one is possibly caused by a failed metadata integrity check as well:
An error occurred during an attempt to read the federation metadata. Verify that the specified URL or hostname is a valid federation metadata endpoint.
...
Error message: Entity descriptor '...'. ID6013: The signature verification failed.
After successfully importing the raw metadata file and adding a suitable Claim Issuance Policy, I finally got it working.

Understanding OPC-UA Security using Eclipse Milo

I am new to this OPC-UA world and Eclipse Milo.
I do not understand how security works here.
Looking at the client-examples provided by eclipse-milo,
I see a few security-related properties being used to connect to the OPC UA server:
SecurityPolicy,
MessageSecurityMode,
clientCertificate,
clientKeyPair,
setIdentityProvider,
How are the above configurations linked with each other?
I was trying to run client-examples -> BrowseNodeExample.
This example internally runs the ExampleServer.
The ExampleServer is configured to run with the Anonymous and UsernamePassword identity providers. It is also bound to accept SecurityPolicy.None, Basic128Rsa15, Basic256 and Basic256Sha256, with MessageSecurityMode SignAndEncrypt, except for SecurityPolicy.None, where the MessageSecurityMode is None too.
The problem is that with AnonymousProvider I could connect to the server with every SecurityPolicy and MessageSecurityMode pair mentioned above (without providing client certificates).
But I could not do the same with UsernameProvider: only the None/None SecurityPolicy and MessageSecurityMode pair works.
All other pairs throw a security-checks-failed exception (when a certificate is provided) or user-access-denied (when no client certificate is provided). How can I make this work?
Lastly, it would be really nice if someone could point me to proper user documentation for Eclipse Milo, since I could not find any documentation apart from the example code, which is itself undocumented.
SecurityPolicy and MessageSecurityMode go hand-in-hand. The security policy dictates the set of algorithms that will be used for signatures and encryption, if any. The message security mode determines whether the messages will be signed, signed and encrypted, or neither in the case where no security is used.
clientCertificate and clientKeyPair must be configured if you plan to use security. You can't use encryption or signatures if you don't have a certificate and private key, after all.
The IdentityProvider is used to provide the credentials that identify the user of the session, if any.
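Putting those three pieces together, a secured client setup looks roughly like the sketch below. It is based on the Milo 0.6.x client SDK, so exact package names and builder methods may differ in other versions, and the endpoint URL and credentials are hypothetical:

import java.security.KeyPair;
import java.security.cert.X509Certificate;
import java.util.List;

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.config.OpcUaClientConfig;
import org.eclipse.milo.opcua.sdk.client.api.identity.UsernameProvider;
import org.eclipse.milo.opcua.stack.client.DiscoveryClient;
import org.eclipse.milo.opcua.stack.core.security.SecurityPolicy;
import org.eclipse.milo.opcua.stack.core.types.structured.EndpointDescription;
import org.eclipse.milo.opcua.stack.core.util.SelfSignedCertificateBuilder;
import org.eclipse.milo.opcua.stack.core.util.SelfSignedCertificateGenerator;

public class SecureClientSketch {
    public static void main(String[] args) throws Exception {
        String endpointUrl = "opc.tcp://localhost:12686/milo"; // hypothetical

        // 1. Pick an endpoint advertising the SecurityPolicy we want;
        //    the endpoint description also carries the MessageSecurityMode.
        List<EndpointDescription> endpoints = DiscoveryClient.getEndpoints(endpointUrl).get();
        EndpointDescription endpoint = endpoints.stream()
            .filter(e -> SecurityPolicy.Basic256Sha256.getUri().equals(e.getSecurityPolicyUri()))
            .findFirst()
            .orElseThrow(() -> new IllegalStateException("no matching endpoint"));

        // 2. A certificate and key pair are required as soon as security is not None.
        //    A throwaway self-signed certificate here; load one from a keystore in real code.
        KeyPair keyPair = SelfSignedCertificateGenerator.generateRsaKeyPair(2048);
        X509Certificate certificate = new SelfSignedCertificateBuilder(keyPair)
            .setCommonName("Example Client")
            .setApplicationUri("urn:example:client")
            .build();

        // 3. The IdentityProvider identifies the session user, independently of the
        //    certificate used for Sign / SignAndEncrypt.
        OpcUaClientConfig config = OpcUaClientConfig.builder()
            .setEndpoint(endpoint)
            .setCertificate(certificate)
            .setKeyPair(keyPair)
            .setIdentityProvider(new UsernameProvider("user", "password"))
            .build();

        OpcUaClient client = OpcUaClient.create(config);
        client.connect().get();
    }
}

With a setup like this, the first connect attempt will typically still fail with Bad_SecurityChecksFailed until the server trusts the client certificate, which is what the next paragraph describes.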
When the ExampleServer starts up, it logs that it's using a temporary security directory, something like this: security temp dir: /var/folders/z5/n2r_tpbn5wd_2kf6jh5kn9_40000gn/T/security. When a client connects using any kind of security, its certificate is not initially trusted by the server, resulting in the Bad_SecurityChecksFailed errors you're seeing. Inside this directory you'll find a folder called rejected, where rejected client certificates are stored. If you move the certificate(s) to the trusted folder, the client should then be able to connect using security.

Kafka on Cloudera - test=TOPIC_AUTHORIZATION_FAILED

We just upgraded from CDH 5.3.6 to 5.10.0 and started getting errors when trying to write to Kafka topics. We have the default settings on everything, with no SSL or Kerberos authentication enabled. When I use the console producer to write to one of my topics, I get this error:
/usr/bin/kafka-console-producer --broker-list=myhost1.dev.com:9092,myhost2.dev.com:9092 --topic test
17/03/06 21:00:57 INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
17/03/06 21:00:57 INFO utils.AppInfoParser: Kafka commitId : unknown
x
17/03/06 21:00:59 WARN clients.NetworkClient: Error while fetching metadata with correlation id 0 : {test=TOPIC_AUTHORIZATION_FAILED}
Looking at /var/log/kafka/, I see a bunch of these exceptions:
2017-03-06 21:00:26,964 WARN org.apache.sentry.provider.common.HadoopGroupMappingService: Unable to obtain groups for ANONYMOUS
java.io.IOException: No groups found for user ANONYMOUS
at org.apache.hadoop.security.Groups.noGroupsForUser(Groups.java:190)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:210)
at org.apache.sentry.provider.common.HadoopGroupMappingService.getGroups(HadoopGroupMappingService.java:60)
at org.apache.sentry.provider.common.ResourceAuthorizationProvider.getGroups(ResourceAuthorizationProvider.java:167)
at org.apache.sentry.provider.common.ResourceAuthorizationProvider.doHasAccess(ResourceAuthorizationProvider.java:97)
at org.apache.sentry.provider.common.ResourceAuthorizationProvider.hasAccess(ResourceAuthorizationProvider.java:91)
at org.apache.sentry.kafka.binding.KafkaAuthBinding.authorize(KafkaAuthBinding.java:212)
at org.apache.sentry.kafka.authorizer.SentryKafkaAuthorizer.authorize(SentryKafkaAuthorizer.java:63)
at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$authorize$2.apply(KafkaApis.scala:321)
at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$authorize$2.apply(KafkaApis.scala:321)
at scala.Option.map(Option.scala:146)
at kafka.server.KafkaApis.kafka$server$KafkaApis$$authorize(KafkaApis.scala:321)
at kafka.server.KafkaApis$$anonfun$30.apply(KafkaApis.scala:702)
at kafka.server.KafkaApis$$anonfun$30.apply(KafkaApis.scala:702)
at scala.collection.TraversableLike$$anonfun$partition$1.apply(TraversableLike.scala:314)
at scala.collection.TraversableLike$$anonfun$partition$1.apply(TraversableLike.scala:314)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
at scala.collection.TraversableLike$class.partition(TraversableLike.scala:314)
at scala.collection.AbstractTraversable.partition(Traversable.scala:104)
at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:702)
at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
I've been looking for a solution to this, but have come up empty so far. Do I need to assign the ANONYMOUS user to some groups somewhere? I was able to write messages to my topics in CDH 5.3.6, but it appears something has gone wrong in the upgrade.
Just trying to get the helloWorld/Quickstart example to work again on our DEV Kafka after upgrading to CDH 5.10.0.
--- Temporary workaround solution ---
In Cloudera Manager 5.10 there is a super.users property in the Kafka configuration. Adding ANONYMOUS to that list allowed me to produce and consume from my topics.
I had already tried doing this in /opt/cloudera/parcels/KAFKA-2.1.0-1.2.1.0.p0.115/etc/kafka/conf.dist/server.properties, which had no effect. So Cloudera must be managing these values elsewhere.
Kafka strictly distinguishes between authentication and authorization - even if you have authentication via Kerberos or SSL turned off, it is still possible to turn authorization on via the following parameter:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
This will make Kafka check ACLs for every access - since authentication is turned off in your case, every request will be evaluated as the ANONYMOUS user and denied if there are no ACLs set for that user.
You can delete that setting from your config, which should make Kafka return to its old, trusting self. I am not sure where you'd do this in Cloudera Manager though, so an alternative would be to add ANONYMOUS to the list of super users, which is available in CM. Or of course just define an ACL to allow access for ANONYMOUS, for example:
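(A sketch for this Kafka 0.10-era CLI, assuming the same /usr/bin wrapper layout as the console producer above and a hypothetical ZooKeeper address.)
/usr/bin/kafka-acls --authorizer-properties zookeeper.connect=myhost1.dev.com:2181 \
  --add --allow-principal User:ANONYMOUS --producer --topic test
The super-user alternative amounts to super.users=User:ANONYMOUS in the broker configuration, which is what the Cloudera Manager setting mentioned above manages.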
For production use later on you should probably set up either SSL or Kerberos and define proper ACLs if there is any chance of the cluster being accessed from the outside.

Using multiple keytabs in Kerberos JGSS without using JAAS .conf file

I want to use multiple keytabs in multiple server threads. I don't want to use a JAAS conf file, so I implemented my own login configuration in a LoginConfiguration class. The getGSSCredentials() function in the KerberosLogin class is used to get the credentials, taking the keytab location as a parameter.
KerberosLogin -> http://ideone.com/vaip3H
LoginConfiguration -> http://ideone.com/jDqlN0
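For reference, a programmatic JAAS configuration along these lines (a minimal sketch assuming com.sun.security.auth.module.Krb5LoginModule; not the poster's exact LoginConfiguration class) looks roughly like this:

import java.util.HashMap;
import java.util.Map;

import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Hands out a keytab-based Krb5LoginModule entry, so each server thread can
// log in with its own keytab and service principal without a jaas.conf file.
public class KeytabLoginConfiguration extends Configuration {

    private final String keytabPath;
    private final String principal;

    public KeytabLoginConfiguration(String keytabPath, String principal) {
        this.keytabPath = keytabPath;
        this.principal = principal;
    }

    @Override
    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        Map<String, String> options = new HashMap<>();
        options.put("useKeyTab", "true");
        options.put("keyTab", keytabPath);
        options.put("principal", principal);
        options.put("storeKey", "true");
        options.put("doNotPrompt", "true");
        // Without this option a second login in the same JVM may reuse stale
        // krb5 state, which matches the behaviour described below.
        options.put("refreshKrb5Config", "true");

        return new AppConfigurationEntry[] {
            new AppConfigurationEntry(
                "com.sun.security.auth.module.Krb5LoginModule",
                AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                options)
        };
    }
}

Such a configuration would be passed to a LoginContext (for example new LoginContext("KerberosLogin", null, null, new KeytabLoginConfiguration(keytab, principal))) and the resulting Subject used with GSSManager to obtain the GSSCredentials, which appears to be what the KerberosLogin class does.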
When I ran only two server threads, the first one was able to get the credentials from its keytab (the two server threads use different service principals) while the second one failed. Somehow, using parms.put("refreshKrb5Config","true"); in LoginConfiguration solved the problem.
I am not able to understand why it does not work without refreshing the configuration, and whether it will be safe to use in cases where there are several such server threads. Is there a better way to use multiple keytabs?
It was due to the way Java 6 handles the login config; it has been fixed in Java 7.