Bluemix message hub ACLs - ibm-cloud

I am using the Bluemix Message Hub service. I need to provide read-only access to specific topics for specific users. The question is: how can I define ACLs in Bluemix Message Hub? Apache Kafka does provide the means (http://kafka.apache.org/documentation.html#security_authz), but that involves interacting with ZooKeeper, and I haven't been able to find details on connecting to the ZooKeeper ensemble behind the Bluemix Message Hub service. Appreciate the responses.

ACLs are currently not supported with Message Hub. As you've correctly noted, we don't give users access to ZooKeeper, so at the moment there is no way to grant read-only or write-only access to a topic.
That said, we isolate each service instance. So if you provision 2 Message Hub instances in different Bluemix spaces, they will be fully isolated and won't be able to see each other's topics. That way you have a guarantee that users of instance1 won't be able to read or write from/to topics of instance2. Not what you asked for, but it might help.

Related

What are the different ways to get Kafka Cluster Audit log to GCP Logging?

Can anyone share more information on how I can achieve this?
Thank you!
Assuming you have access to the necessary topic (from what I understand, the audit topic is not stored on your own cluster), to get data out of Kafka you need a consumer. This could be written in any language.
To get data into Cloud Logging, you need to use its API.
That being said, you could use any compatible pair of Kafka client and Cloud Logging client that you're comfortable with.
For example, you could write or find a Kafka Connect Sink connector that wraps the Java Cloud Logging client.
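As a minimal sketch of the consumer-plus-Cloud-Logging pairing described above, the snippet below separates the record-to-log-entry conversion (pure, testable) from the broker-connected loop. It assumes the kafka-python and google-cloud-logging packages; the topic name and record shape are illustrative assumptions, not the actual audit-log format.

```python
# Hedged sketch: forward Kafka audit records to Google Cloud Logging.
# Assumes kafka-python and google-cloud-logging; the topic name
# "confluent-audit-log-events" and the JSON record shape are assumptions.
import json


def audit_record_to_entry(raw_value: bytes) -> dict:
    """Turn a raw Kafka record value into a Cloud Logging payload."""
    event = json.loads(raw_value.decode("utf-8"))
    return {"severity": "INFO", "json_payload": event}


def forward_audit_log(bootstrap_servers: str) -> None:
    # Imported lazily so the helper above stays usable without a broker.
    from kafka import KafkaConsumer           # pip install kafka-python
    from google.cloud import logging as gcl   # pip install google-cloud-logging

    logger = gcl.Client().logger("kafka-audit")
    consumer = KafkaConsumer("confluent-audit-log-events",
                             bootstrap_servers=bootstrap_servers)
    for record in consumer:
        entry = audit_record_to_entry(record.value)
        logger.log_struct(entry["json_payload"], severity=entry["severity"])
```

A Kafka Connect sink wrapping the Java Cloud Logging client, as suggested above, would achieve the same without hand-rolled consumer code.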

Ingress for kafka

We are exploring implementing multi-tenancy in Kafka for each of our dev teams, hosted in AWS EKS.
The initial thought is to do multi-tenancy at the topic level:
NLB-Nginx-Ingress: an ingress host-route for each team, with all the brokers that hold that team's topic-partition leaders added as the backend.
Access restriction through ACLs at the broker level, based on a principal such as a user.
Sample flow:
Ingress book-keeping challenges:
When someone from the foobar team creates a new topic and its leader lands on a new broker, we need to add that broker to the backend of the respective ingress.
If a broker goes down, the ingress again needs to be updated.
Brokers need to be pruned when a partition leader goes away due to topic deletion.
What I'm Looking for:
Apart from writing an operator or app to do the above tasks, is there any better way to achieve this? I'm open to completely new suggestions as well, since this is just at the POC stage.
PS: I'm new to Kafka; if this site is not suitable for this question, please suggest the right one to post on. Thanks!
First of all, ACL restrictions are cluster-level, not broker-level.
Secondly, for the bootstrapping process the client needs access to at least one active broker in the cluster. That broker sends back metadata describing where the partition leaders are, and from then on the client connects to the relevant brokers directly.
There is no need to put a load balancer in front of Kafka bootstrapping. The suggestion is to list two or more brokers, comma-separated; the client will connect to the first available one and fetch the metadata. For further connections, the client needs to be able to reach every broker in the cluster.
You can use ACLs to restrict principals' (users') access to topics in the cluster based on their needs.
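The comma-separated bootstrap list described above can be sketched as a plain client config. The broker hostnames here are hypothetical; any two or more reachable brokers from the cluster will do.

```python
# Sketch of the comma-separated bootstrap list described above.
# Broker hostnames are hypothetical placeholders.
def build_consumer_config(brokers: list[str], group_id: str) -> dict:
    """Client config: the client tries each bootstrap broker in turn,
    fetches cluster metadata, then talks to partition leaders directly."""
    return {
        "bootstrap.servers": ",".join(brokers),
        "group.id": group_id,
    }


conf = build_consumer_config(
    ["broker1.internal:9092", "broker2.internal:9092"], "team-foobar")
```

Because metadata routing is built into the protocol, the config above is all the "load balancing" a client needs for bootstrapping.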

Using Kafka Security Manager for ACL for Schema Registry

I have a Kafka cluster running with ZooKeeper, the Confluent Schema Registry, and Kafka Security Manager (KSM). KSM, https://github.com/conduktor/kafka-security-manager, is software that makes it easy to manage Kafka ACLs with a CSV file instead of the command-line tool.
The Confluent Schema Registry lets us store Avro schemas for Kafka. It is currently open and I need to secure it. I want to give every user the READ or GET permission only. I am currently using Kubernetes to deploy all the tools.
How can I do that with KSM? Where can I find examples?
Thank you
Kafka ACLs don't apply to the Schema Registry itself; they would apply to the underlying _schemas topic, which you set up in the Registry's configuration.
The API itself can be secured using TLS and HTTP Authentication
https://docs.confluent.io/platform/current/schema-registry/security/index.html
give every user the READ or GET permission only.
I don't think you can lock down HTTP-method-level access for specific users; you'll likely need a proxy for this. But also, without POST, there's no way to register schemas...
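To make the _schemas point concrete, here is a hedged sketch of a KSM-style ACL CSV granting the Registry's own principal access to its backing topic. The column names follow the kafka-security-manager README as I recall them, and the principal name is hypothetical; verify both against the project's documentation.

```python
# Hedged sketch of a KSM-style ACL CSV for the _schemas topic.
# Column names and the "User:schema-registry" principal are assumptions
# to be checked against the kafka-security-manager docs.
import csv
import io

ROWS = [
    # principal, resource type, pattern type, resource, operation, permission, host
    ("User:schema-registry", "Topic", "LITERAL", "_schemas", "Read",  "Allow", "*"),
    ("User:schema-registry", "Topic", "LITERAL", "_schemas", "Write", "Allow", "*"),
]


def render_acl_csv(rows) -> str:
    """Render ACL rows as the CSV text KSM would watch for changes."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["KafkaPrincipal", "ResourceType", "PatternType",
                     "ResourceName", "Operation", "PermissionType", "Host"])
    writer.writerows(rows)
    return buf.getvalue()
```

Note this only protects the topic; per-user GET-only access to the Registry's HTTP API still needs TLS/HTTP auth or a proxy, as described above.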

How to connect to someone else's public Kafka Topic

Apologies if this is a very basic question.
I'm just starting to get to grips with Kafka and have been given a Kafka endpoint and topic to push messages to, but I'm not sure where to specify that endpoint when writing the consumer. So far I've only written a consumer and producer for a broker running locally on my machine, which worked by setting the bootstrap server to localhost and its port.
I have an inkling that it may be something to do with the advertised listeners settings but I am unsure how it works.
Again, sorry if this seems like a very basic question, but I couldn't find the answer.
Thank you!
Advertised listeners are a broker setting. If someone else set up Kafka, then all you need to do is change the bootstrap address.
If it's "public" over the internet, then chances are you might also need to configure certificates & authentication
Connecting to a public cluster is the same as connecting to a local deployment.
I'm assuming that you are provided with an FQDN for the cluster and the topic name.
You need to add the FQDN to the bootstrap.servers property of your consumer and subscribe to the topics using subscribe().
You might want to look into the client.dns.lookup property if you want to change the discovery strategy.
Additionally, you might have to configure a keystore and a truststore, depending on the security configuration of the cluster.
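The two answers above boil down to swapping the bootstrap address and, if needed, layering on TLS settings. A minimal sketch, assuming librdkafka/confluent-kafka-style property names; the FQDN and the CA path are placeholders:

```python
# Sketch: consumer config for a remote cluster. "kafka.example.com:9093"
# and the CA path are placeholders; the TLS block is only needed when the
# cluster requires encryption/authentication.
def remote_consumer_config(fqdn: str, use_tls: bool) -> dict:
    conf = {
        "bootstrap.servers": fqdn,          # remote FQDN instead of localhost
        "group.id": "my-consumer-group",
        "auto.offset.reset": "earliest",
    }
    if use_tls:
        conf.update({
            "security.protocol": "SSL",
            "ssl.ca.location": "/path/to/ca.pem",  # placeholder path
        })
    return conf
```

Java clients express the same thing with keystore/truststore properties instead of PEM paths.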

Kafka internal topic : Where are the internal topics created - source or target broker?

We are doing a stateful operation. Our cluster is managed. Every time internal topics need to be created, we have to ask the admins to unlock things so that the Kafka Streams app can create them. We have control over the target cluster, not the source cluster.
So I wanted to understand: on which cluster - source or target - are internal topics created?
AFAIK, there is only one cluster that the Kafka Streams app connects to, and all topics - source, target, and internal - are created there.
Kafka Streams applications currently support connecting to only one cluster, as defined by BOOTSTRAP_SERVERS_CONFIG in the Streams configuration.
As answered above, all source topics reside on those brokers, and all internal topics (changelog/repartition topics) are created in the same cluster. The Streams app will create the target topic in the same cluster as well.
It will be worth looking into the server logs to understand and analyze the actual root cause.
As the other answers suggest, there should be only one cluster that the Kafka Streams application connects to. Internal topics are created by the Kafka Streams application and will only be used by the application that created them. However, there could be some security configuration on the broker side preventing the streaming application from creating these topics:
If security is enabled on the Kafka brokers, you must grant the underlying clients admin permissions so that they can create internal topics. For more information, see Streams Security.
Quoted from here
Another point to keep in mind is that internal topics are created automatically by the Streams application; there is no explicit configuration for auto-creation of internal topics.
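When asking admins to pre-create or grant ACLs on internal topics, it helps to predict their names. Kafka Streams derives them from the application.id plus the store or operator name, with a "-changelog" or "-repartition" suffix. A small sketch of that naming convention (the store names here are illustrative):

```python
# Sketch: predict Kafka Streams internal topic names so admins can
# pre-create them or grant ACLs. Store/operator names are illustrative.
def internal_topic_names(application_id: str, store_names: list[str]) -> list[str]:
    """Internal topics follow "<application.id>-<name>-changelog";
    repartition topics use a "-repartition" suffix instead."""
    return [f"{application_id}-{store}-changelog" for store in store_names]
```

Granting the app's principal Create/Read/Write on a prefix matching the application.id is a common way to cover all of these at once.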