I'm not able to find an option to control topic-level permissions in the cloud-based free trial of Confluent Platform. Can you please suggest how to configure this?
Confluent Cloud role-based access control (RBAC) provides a method to control access to an organization, environment, or cluster configuration using predefined roles. RBAC enables enterprises to protect their production environment by isolating user and service account access and allowing for the delegation of authorization to the appropriate business units and teams.
To control access to specific resources within a cluster, such as Kafka topics or ksqlDB applications, continue to use ACLs.
RBAC does not provide granular support for Kafka resources (like topics), nor does it provide granular access control for individual connectors, ksqlDB applications, and schema subjects.
ACLs are not managed by Control Center.
https://docs.confluent.io/cloud/current/access-management/cloud-rbac.htm
ACLs are managed using the Confluent Cloud CLI. For a complete list of Kafka ACLs, see Authorization using ACLs.
https://docs.confluent.io/ccloud-cli/current/command-reference/kafka/acl/index.html#ccloud-kafka-acl
ccloud kafka acl create --allow --service-account sa-55555 --operation READ --operation DESCRIBE --consumer-group java_example_group_1
ccloud kafka acl create --allow --service-account sa-55555 --operation READ --operation DESCRIBE --topic '*'
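If you need topic-level scoping rather than the wildcard above, the CLI also supports prefixed ACLs and listing the bindings per service account; a sketch, reusing the service account from the examples above and an illustrative topic prefix:
ccloud kafka acl create --allow --service-account sa-55555 --operation WRITE --topic 'orders-' --prefix
ccloud kafka acl list --service-account sa-55555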
I am trying to create a MongoDB sink connector with ksqlDB in Confluent Cloud (Kafka). The problem is that I have the data source and credentials in AWS Secrets Manager.
Is there a way to obtain the secrets with ksqlDB to set the connector properties?
Kafka Connect supports externalized configs for secrets. Whether such an implementation exists for AWS, I am not sure, but if not, you'll need to write your own ConfigProvider for it.
Alternatively, there may be other solutions, such as running ksqlDB or just Connect itself in MSK Connect, ECS, EC2, or EKS, where you write processes that expose Secrets Manager data as files or environment variables that Connect's default config providers can consume. You can then set up ksqlDB externally to point at those Connect instances, or simply process the topics they output.
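For the file-based route, here is a minimal sketch using Connect's built-in FileConfigProvider, assuming the secret has already been materialized into a local properties file (the path and key names below are illustrative, not from the question):
# Worker configuration (e.g. connect-distributed.properties): enable the built-in FileConfigProvider
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
# Connector configuration: resolve the secret from the local file at startup instead of hard-coding it
connection.uri=${file:/opt/secrets/mongo.properties:connection.uri}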
For Kafka cluster hosted in Confluent Cloud, there is an Audit Log cluster that gets created. It seems to be possible to hook a Sink connector to this cluster and drain the events out from "confluent-audit-log-events" topic.
However, I am running into the below error when I run the connector to do the same.
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [connect-offsets]
In my connect-distributed.properties file, I have the settings as :
offset.storage.topic=connect-offsets
offset.storage.replication.factor=3
offset.storage.partitions=3
What extra permissions need to be granted so that the connector can create the required topics in the cluster? The key/secret used in the connect-distributed.properties file is a valid key/secret associated with the service account for this cluster.
Also, when I run a console consumer using the same key (as above), I am able to read the audit log events just fine.
It's confirmed that this feature (hooking up a connector to the Audit Log cluster) is not supported at the moment in Confluent Cloud. This feature may be available later this year at some point.
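For reference, on a regular Confluent Cloud cluster (not the audit-log cluster) the TopicAuthorizationException above is what you would see when the worker's service account lacks ACLs on Connect's internal topics and consumer group; something along these lines would normally be needed (the service-account ID, topic prefix, and group name are placeholders that must match your worker configuration):
ccloud kafka acl create --allow --service-account sa-XXXXX --operation CREATE --operation READ --operation WRITE --topic 'connect-' --prefix
ccloud kafka acl create --allow --service-account sa-XXXXX --operation READ --consumer-group connect-cluster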
I have a Kafka cluster running with ZooKeeper, Confluent Schema Registry, and Kafka Security Manager (KSM). KSM, https://github.com/conduktor/kafka-security-manager, is software that makes it easy to manage Kafka ACLs with a CSV file instead of the command line tool.
The Confluent Schema Registry lets us store Avro schemas for Kafka. It is currently open and I need to secure it. I want to give every user the READ or GET permission only. I am currently using Kubernetes to deploy all the tools.
How can I do that with KSM? Where can I find examples?
Thank you
Kafka ACLs don't apply to the Schema Registry; they would apply to the underlying _schemas topic, which you'd set up in the Registry's configuration.
The API itself can be secured using TLS and HTTP Authentication
https://docs.confluent.io/platform/current/schema-registry/security/index.html
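As a sketch, HTTP Basic authentication on the Schema Registry REST API can be enabled with properties along these lines in schema-registry.properties (the realm and role names are illustrative), backed by a JAAS login configuration passed to the process via -Djava.security.auth.login.config:
# Require HTTP Basic auth on the REST API; the realm must match an entry in the JAAS file
authentication.method=BASIC
authentication.realm=SchemaRegistry-Props
authentication.roles=admin,developer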
Regarding "give every user the READ or GET permission only": I don't think you can lock down HTTP-method-level access for specific users; you'll likely need a proxy in front of the Registry for that. But note that without POST there would also be no way to register new schemas...
Is there a way for an operations team to restrict application teams from creating Kafka Streams intermediate topics on the Kafka cluster?
Kafka provides authorisation mechanisms and more precisely, a pluggable Authorizer.
You can either use the simple Authorizer implementation provided by Kafka, by including the following configuration in server.properties:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
or you can create your own class that implements the Authorizer interface. Again, you'd need to provide authorizer.class.name in the server.properties broker configuration file.
When an authorizer is configured, access to resources is limited to super users; if a resource has no associated ACLs, access to it is restricted to these super users only. To define super users, you simply need to include them in the server.properties configuration:
super.users=User:Bob;User:Alice
This is the default behaviour, and it can be changed by including the following configuration in the server.properties file:
allow.everyone.if.no.acl.found=true
which essentially grants access to every user for resources that have no ACLs configured.
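So, to actually restrict topic creation to an operations team, leave allow.everyone.if.no.acl.found at its default (false) and grant the Create operation on the cluster resource only to that team's principal, for example (the principal name is illustrative, and this presupposes an authentication mechanism so the principal can be identified):
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:ops-team --operation Create --cluster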
I have a 3-node unsecured Kafka (v0.10.2.1) cluster with topic auto-creation disabled and topic deletion enabled, via the following in server.properties:
auto.create.topics.enable=false
delete.topic.enable=true
Topics are then created/altered on the cluster using bin/kafka-topics.sh. However, it looks like anyone can create topics on the cluster once they know the endpoints.
Is there a way to lock down topic creation/alteration to specific hosts to prevent abuses?
Edit 1:
Since ACLs were suggested, I tried to restrict topic creation to select hosts using kafka-acls.sh.
I restarted the brokers after adding the following to server.properties:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
I tried the following to restrict topic creation to localhost:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --cluster --operation Create --allow-host 127.0.0.1
However, I was still able to create topics from another host using kafka-topics.sh with the right endpoints. Is it the case that ACLs can't be used without authentication?
You need to use access control lists (ACLs) to restrict such operations, and that implies knowing who the caller is, so you need Kafka to be secured by an authentication mechanism in the first place.
ACLs: http://kafka.apache.org/documentation.html#security_authz
Authentication can be done using SSL or SASL or by plugging in a custom provider, see the preceding sections of the same document.
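For example, a minimal SASL/PLAIN setup in server.properties might look like the sketch below (listener addresses are illustrative, and PLAIN without TLS sends credentials in clear text, so SASL_SSL is preferable in practice); the broker and client credentials themselves go in JAAS files passed via -Djava.security.auth.login.config:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://broker1.example.com:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer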
Disabling auto-creation is not an access control mechanism, it only means that trying to produce to or consume from a topic will not create it automatically.