Burrow integration with MSK Kafka - apache-kafka

I am trying to connect Burrow to AWS MSK Kafka, but I keep receiving the message below. I am able to connect to MSK from the same EC2 instance by following the documented steps; however, Burrow is not able to connect. We need to specify the truststore, which I am not able to set in Burrow. Any help would be appreciated.
client has run out of available brokers

An AWS support ticket helped me solve the issue. My client-to-broker connection was TLS, while the steps in the AWS documentation refer to PLAINTEXT. Here is what you need to do to make it work.
Run the following command to COPY the cacerts file to the current location:
-> cp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.amzn2.0.1.x86_64/jre/lib/security/cacerts .
Note: The JVM path might be different for your instance.
Please note the path of this newly created cacerts file by running the pwd command. This path (say P1) will be used in the next steps.
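If you are not sure which JVM your instance uses, a generic way (not from the original answer) to locate its cacerts file is:
-> find /usr/lib/jvm -name cacerts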
Add additional TLS configuration to the file /home/ec2-user/go/src/github.com/linkedin/Burrow/config/burrow.toml by adding the following details:
===========
[client-profile.test]
client-id="burrow-test"
kafka-version="0.10.0"
tls="mytlsprofile"
[tls.mytlsprofile]
cafile="P1/cacerts"
noverify=true
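For the TLS profile to take effect, the cluster and consumer sections of burrow.toml also have to reference the client profile defined above. A minimal sketch, assuming an MSK TLS bootstrap broker on port 9094; the broker hostname and section names are placeholders, not values from the original answer:
# Cluster definition referencing the client profile above
[cluster.msk]
class-name="kafka"
servers=[ "b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9094" ]
client-profile="test"

# Consumer offset fetcher for the same cluster
[consumer.msk_consumers]
class-name="kafka"
cluster="msk"
servers=[ "b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9094" ]
client-profile="test"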

Related

How to connect Ksql with ibm-cloud event-stream?

We created a project with IBM Functions and Event Streams in IBM Cloud.
Now I am trying to connect KSQL with IBM Cloud Event Streams, and I am following along with the document to get a basic idea of the integration.
Following the instructions, I created a file called ksql-server.properties and modified bootstrap.servers, username, and password according to my credentials. Then I ran ksql http://localhost:8088 --config-file ksql-server.properties with the KSQL local CLI. I assume everything has run correctly so far, since the ksql> prompt shows at the front of every new line...
Then I decided to check whether KSQL was connected to my IBM Cloud instance by running SHOW topics;
It returned the following error lines:
`Error issuing POST to KSQL server. path:ksql'`
`Caused by: com.fasterxml.jackson.databind.JsonMappingException: Failed to set 'ssl.protocol' to 'TLSv1.2' (through reference chain: io.confluent.ksql.rest.entity.KsqlRequest["streamsProperties"])`
`Caused by: Failed to set 'ssl.protocol' to 'TLSv1.2' (through reference chain: io.confluent.ksql.rest.entity.KsqlRequest["streamsProperties"])`
`Caused by: Failed to set 'ssl.protocol' to 'TLSv1.2'`
`Caused by: Cannot override property 'ssl.protocol'`
Also, I am quite lost at step 4, where it tells me to:
`Then start DataGen twice as follows:
i. With bootstrap-server=HOSTNAME:PORTNUMBER quickstart=users format=json topic=users maxInterval=10000 to start creating users events.
ii. With bootstrap-server=HOSTNAME:PORTNUMBER quickstart=pageviews format=delimited topic=pageviews maxInterval=10000 to start creating pageviews events.`
Has anyone done this before, or would anyone be willing to help me out? Thank you very much!
The IBM document is very out of date. KSQL runs as a client/server. The server needs to be run with the details of the broker, and then you can connect to it with a client, including the CLI, REST API, or web interface provided by Confluent Control Center.
So you need to run the KSQL server using your properties file:
./bin/ksql-server-start ksql-server.properties
and then connect to it with the CLI (for example):
./bin/ksql http://localhost:8088
See https://docs.confluent.io/current/ksql/docs/installation/installing.html for more information.
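For reference, the ksql-server.properties handed to ksql-server-start for a SASL-secured cluster such as IBM Event Streams would look roughly like the sketch below; the broker hosts and credentials are placeholders to be taken from your Event Streams service credentials, not values from the post:
# Brokers from the Event Streams credentials (kafka_brokers_sasl)
bootstrap.servers=broker-0.example.eventstreams.cloud.ibm.com:9093,broker-1.example.eventstreams.cloud.ibm.com:9093
# Where the KSQL server itself listens; the CLI connects here
listeners=http://localhost:8088
# SASL/TLS settings for talking to the brokers
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="USERNAME" password="PASSWORD";
ssl.protocol=TLSv1.2
The "Cannot override property 'ssl.protocol'" error in the question came from passing these broker settings to the CLI via --config-file; they belong to the server process started with ksql-server-start, while the CLI only needs the server URL.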

Unable to configure KafkaChannel or KafkaSource in Flume for Kerberos-enabled cluster - LoginException

I am trying to set up a KafkaChannel (or KafkaSource) in Flume, and I constantly receive the following exception:
Caused by: javax.security.auth.login.LoginException: Could not login:
the client is being asked for a password, but the Kafka client code
does not currently support obtaining a password from the user.
Make sure -Djava.security.auth.login.config property passed to JVM
and the client is configured to use a ticket cache
(using the JAAS configuration setting 'useTicketCache=true)'.
Make sure you are using FQDN of the Kafka broker you are trying to
connect to. not available to garner authentication information from the user
My jaas.conf is the following:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
serviceName="kafka"
keyTab="flume-kafka.keytab"
principal="flume/kafka#MYDOMAIN.COM";
};
I have provided this configuration to Flume via
JAVA_OPTS="$JAVA_OPTS -Djava.security.auth.login.config=/path/to/jaas.conf "
And finally I have specified
agent.channels.myChannel.kafka.consumer.security.protocol = SASL_PLAINTEXT
Does anyone have any ideas why Flume does not use keyTab? Let me know if more details are needed.
According to the Kafka documentation, SASL/PLAIN was added in version 0.10:
SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0
SASL/PLAIN - starting at version 0.10.0.0
SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0
But Flume 1.8 still uses kafka_client_2.11_0.9.1.jar, so I think it may be a bug in Flume.
You can rewrite flume-kafka-sink.jar to fix the bug.
Kafka Document
Flume Kafka sink
The KafkaClient config you pasted can be used.
The above is fine, but the things to be cautious about are the following:
1. The principal name for a non-Cloudera user must be exactly as it shows in klist.
2. Very basic, but use a full path to the keytab file.
Next, JAVA_OPTS is a must: pass it on the command line if using kafka-console-consumer, or in the configuration parameters if using Cloudera Manager.
If using kafka-console-consumer, remember to use the --consumer.config switch pointing to the client.properties file.
After all the above is done, you may still receive the same error.
That is due to ACLs.
Use the Kafka documentation and set the permissions on the Kafka topics for the user used above (including kafka) to allow access. Unless you do that, you do not have any access to the topics and will see the same error.
Best wishes.
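To make the kafka-console-consumer point concrete, here is a rough sketch; the broker host, topic, group, and file paths are placeholders, and depending on the distribution the environment variable may be KAFKA_OPTS rather than JAVA_OPTS:
# JAAS config with the keytab and principal, passed to the JVM
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/jaas.conf"

# client.properties used by the console consumer contains:
#   security.protocol=SASL_PLAINTEXT
#   sasl.kerberos.service.name=kafka

kafka-console-consumer --bootstrap-server broker1.mydomain.com:9092 \
    --topic test-topic --from-beginning \
    --consumer.config /etc/kafka/client.properties

# If ACLs are enabled, grant the principal consumer access first, e.g.:
kafka-acls --authorizer-properties zookeeper.connect=zk1.mydomain.com:2181 \
    --add --allow-principal User:flume --consumer --topic test-topic --group flume-group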
Thanks to this post (original), I noticed that the KafkaClient config specified in the Flume 1.6 documentation provided by Cloudera was missing some options. Then I took a look at the official Apache Flume 1.7 documentation and noticed that I was missing the following properties:
a1.channels.channel1.kafka.consumer.sasl.mechanism = GSSAPI
a1.channels.channel1.kafka.consumer.sasl.kerberos.service.name = kafka
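Putting it together, a Kerberos-enabled KafkaChannel along the lines of the Apache Flume 1.7 documentation might look roughly like this; the agent/channel names, broker host, and topic are placeholders:
a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.channel1.kafka.bootstrap.servers = broker1.mydomain.com:9092
a1.channels.channel1.kafka.topic = flume-channel
a1.channels.channel1.kafka.consumer.group.id = flume-consumer
# SASL/Kerberos settings for both the producer and consumer side of the channel
a1.channels.channel1.kafka.producer.security.protocol = SASL_PLAINTEXT
a1.channels.channel1.kafka.producer.sasl.mechanism = GSSAPI
a1.channels.channel1.kafka.producer.sasl.kerberos.service.name = kafka
a1.channels.channel1.kafka.consumer.security.protocol = SASL_PLAINTEXT
a1.channels.channel1.kafka.consumer.sasl.mechanism = GSSAPI
a1.channels.channel1.kafka.consumer.sasl.kerberos.service.name = kafka
The jaas.conf with the keytab is still passed via -Djava.security.auth.login.config as in the question.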

Configuring SSL in zookeeper

Can somebody help me sort out the SSL connection to ZooKeeper? My question is how to configure CLIENT_JVMFLAGS in the zkCli.cmd file on Windows.
Ref: https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide
Set CLIENT_JVMFLAGS in the terminal by exporting it exactly as described on that page (before starting zkCli). (Don't get confused by zkCli, which is the ZooKeeper command line, not zkClient.)
Client VM Args:
-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
-Dzookeeper.client.secure=true
-Dzookeeper.ssl.keyStore.location=C:/OpenSSL/bin/1KeyStore.jks
-Dzookeeper.ssl.keyStore.password=yourpass
-Dzookeeper.ssl.trustStore.location=C:/OpenSSL/bin/1truststore.jks
-Dzookeeper.ssl.trustStore.password=yourpass"
For more, follow these steps to make the connection and set up everything you need:
https://issues.apache.org/jira/browse/ZOOKEEPER-2125
Add this to the zkEnv.sh
export SERVER_JVMFLAGS="
-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
-Dzookeeper.ssl.keyStore.location=/root/zookeeper/ssl/testKeyStore.jks
-Dzookeeper.ssl.keyStore.password=testpass
-Dzookeeper.ssl.trustStore.location=/root/zookeeper/ssl/testTrustStore.jks
-Dzookeeper.ssl.trustStore.password=testpass"
Also, you might need to allocate more heap.
Follow this page:
ZooKeeper SSL User Guide
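If you still need to create the JKS files referenced above, a typical keytool sequence is shown below; the alias, hostname, validity, and passwords are placeholders, not values from the answer:
# Generate a key pair in the server keystore
keytool -genkeypair -alias zk -keyalg RSA -keysize 2048 -validity 365 \
    -dname "CN=zk1.example.com" \
    -keystore testKeyStore.jks -storepass testpass -keypass testpass
# Export the certificate and import it into the truststore used by clients
keytool -exportcert -alias zk -keystore testKeyStore.jks -storepass testpass -file zk.cer
keytool -importcert -alias zk -file zk.cer -keystore testTrustStore.jks -storepass testpass -noprompt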

Connection to a S3 instance using a service-connector

I'm trying to create a service-connector to my s3 instance like this:
cf service-connector 13001 mybucketname.ds31s3.swisscom.com:443
But I get the following error:
Server-Error 403: Check of security groups failed (no access)
I have created my service key according to this documentation.
Connecting to my MongoDB works perfectly using a service connector.
You can access Swisscom's S3 directly without the service connector.
The error message suggests that your current org and space do not have access to the S3 service. This is usually the case if there is no app binding for that service in the current space. Please check whether you created your service key in the right org and space.
There was a misconfiguration due to security changes. We fixed the issue, so connecting to s3 with the service-connector should now work.
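If you take the direct-access route from the first answer, the credentials come from the service key rather than from a tunnel. A rough sketch (service instance, key name, and endpoint are assumptions based on the hostname in the question):
# Read the access key / secret key from the service key
cf service-key my-s3-service my-key

# Export them for the AWS CLI and talk to the S3 endpoint directly
export AWS_ACCESS_KEY_ID=<accessKey from the service key>
export AWS_SECRET_ACCESS_KEY=<secretKey from the service key>
aws s3 ls s3://mybucketname --endpoint-url https://ds31s3.swisscom.com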

installing kubernetes on coreos with rkt and automated script

I'm trying to install Kubernetes with rkt on my real (not virtual) CoreOS servers at home using the scripts at https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic, and I have some questions.
My etcd2 is using TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just missing from the documents?
thanks!
update
Rob, thank you so much for your response. I wasn't clear enough regarding etcd2. I already have etcd2 with TLS installed and properly configured on my CoreOS servers, so I configured my etcd servers in the controller-install.sh file:
export ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
but when I run the controller-install.sh script, it repeatedly prints the following output:
Waiting for etcd...
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
...
so I was guessing it's because I didn't define the etcd-related TLS certificates in the controller script, and that's why it gets stuck in that phase.
On my MacBook Pro laptop I have the following alias configured:
alias myetcdctl="~/apps/etcd-v3.0.8-darwin-amd64/etcdctl --endpoint=https://coreos-2.tux-in.com:2379 --ca-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/ca.pem --cert-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1.pem --key-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1-key.pem --timeout=10s"
so when I run myetcdctl member list I get:
8832ce6a269a7dac: name=ccff826d5f564c67abf35467306f80a0 peerURLs=https://coreos-3.tux-in.com:2380 clientURLs=https://coreos-3.tux-in.com:2379 isLeader=true
a2c0ac9708ef90fc: name=dc38bc8f20e64940b260d3f7b260430d peerURLs=https://coreos-2.tux-in.com:2380 clientURLs=https://coreos-2.tux-in.com:2379 isLeader=false
so I'm guessing that I don't really have a problem there.
any ideas?
thanks!
My etcd2 is using TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
These scripts don't start an etcd server. You will need to set one up manually, and you will be able to use TLS and as many nodes as you would like. This isn't clear in the current form of the document; I will attempt a PR to fix it.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
Only CONTROLLER_ENDPOINT can be a domain name.
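In other words, something along these lines would be valid in the script's configuration (values are illustrative only):
# ADVERTISE_IP has to be an IP address reachable by the other nodes
export ADVERTISE_IP=192.168.1.10
# CONTROLLER_ENDPOINT may be a DNS name pointing at the controller (or its load balancer)
export CONTROLLER_ENDPOINT=https://kube-controller.tux-in.com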
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it's needed here; does that mean I don't need it if I use these scripts, or is it just missing from the documents?
These scripts include/start the rkt API service. As you can see below, it also has a Restart parameter set (source):
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service