I wrote a program in Eclipse to test the remote connection to HBase. It is modified from the HBase Example API Usage. It does very basic things: connect to HBase, drop a table if it exists, then create a new one. Here's the code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TableTest {

    private static Configuration conf = null;

    static {
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.12.120");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        //conf.set("hbase.master", "192.168.12.120:12000"); // did not work
    }

    public static void main(String[] args) {
        String tableName = "hbase_table";
        try {
            Connection connection = ConnectionFactory.createConnection(conf);
            Admin admin = connection.getAdmin();

            HTableDescriptor table = new HTableDescriptor(TableName.valueOf(tableName));
            table.addFamily(new HColumnDescriptor("info"));

            // drop the table if it already exists, then create it fresh
            if (admin.tableExists(table.getTableName())) {
                admin.disableTable(table.getTableName());
                admin.deleteTable(table.getTableName());
            }
            admin.createTable(table);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
However, I had problems making the connection work. The output is below. On one hand, the program looked for the HRegionServer at 192.168.12.120 on port 16020; on the other hand, it looked for the HMaster at localhost 127.0.0.1 on port 12000. This broke the connection, because both the HRegionServer and the HMaster are running at 192.168.12.120.
1294 DEBUG [2015-08-21 11:08:49] Use SIMPLE authentication for service ClientService, sasl=false
1311 DEBUG [2015-08-21 11:08:49] Connecting to /192.168.12.120:16020
1357 DEBUG [2015-08-21 11:08:49] Reading reply sessionid:0x14f39c8a9f20060, packet:: clientPath:null serverPath:null finished:false header:: 5,3 replyHeader:: 5,1210,0 request:: '/hbase,F response:: s{4,4,1439021727692,1439021727692,0,116,0,0,0,16,1191}
1360 DEBUG [2015-08-21 11:08:49] Reading reply sessionid:0x14f39c8a9f20060, packet:: clientPath:null serverPath:null finished:false header:: 6,4 replyHeader:: 6,1210,0 request:: '/hbase/master,F response:: #ffffffff000146d61737465723a3132303030ffffffc4ffffffbbffffffbcffffffd251ffffffb2ffffffe0ffffffb950425546a15a96c6f63616c686f737410ffffffe05d18ffffff89ffffffc5ffffffa1fffffff1fffffff42910018ffffff8a7d,s{1170,1170,1440125319355,1440125319355,0,0,0,94357651205521495,57,0,1170}
1369 DEBUG [2015-08-21 11:08:49] Use SIMPLE authentication for service MasterService, sasl=false
1369 DEBUG [2015-08-21 11:08:49] Connecting to localhost/127.0.0.1:12000
So how do I set the HBase master host to 192.168.12.120? Can somebody help?
Here are two docs for the configuration:
Apache HBase ™ Reference Guide: 7. Default Configuration
Github: apache/hbase/.../hbase-default.xml
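A quick way to confirm where the bad address comes from is to read the master znode directly. The hex payload in the /hbase/master reply above already contains 6c6f63616c686f7374, the ASCII bytes for "localhost", so the client is only repeating whatever address the HMaster registered in ZooKeeper. A minimal sketch (hypothetical class name; assumes only the ZooKeeper client jar on the classpath):

import org.apache.zookeeper.ZooKeeper;

public class MasterZnodeDump {
    public static void main(String[] args) throws Exception {
        // connect to the same quorum the HBase client uses
        ZooKeeper zk = new ZooKeeper("192.168.12.120:2181", 30000, event -> { });
        // the payload is protobuf-framed, but the registered hostname is readable
        byte[] data = zk.getData("/hbase/master", false, null);
        System.out.println(new String(data, "UTF-8"));
        zk.close();
    }
}

If "localhost" shows up in that dump, the fix most likely belongs on the server side (hostname resolution on the HMaster machine), not in the client Configuration.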
I'm trying to connect to Kafka with JMS. I followed this guide to use the Payara Kafka Connector. This worked on WildFly, but I can't get it to work on OpenLiberty.
The server.xml:
<resourceAdapter id="kafkajmsra" location="${shared.resource.dir}kafka-rar-0.5.0.rar"/>
<jmsTopicConnectionFactory jndiName="JMSTopicFactory">
    <properties.kafkajmsra bootstrapServerConfig="kafka:9092"/>
</jmsTopicConnectionFactory>
<jmsTopic id="kafkaTopic" jndiName="JmsTopic">
    <properties.kafkajmsra topicName="demoTopic"/>
</jmsTopic>
With those configurations I get a NullPointerException when I try to inject those components. The JNDI names can be found, but not with these parameters.
@Resource(lookup = "JMSTopicFactory")
private TopicConnectionFactory jmsTopicFactory;
@Resource(lookup = "JMSTopic")
private Topic jmsTopic;
Am I missing something in the server.xml?
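One way to narrow down such an injection NPE (a sketch independent of the Payara connector, with a hypothetical helper class) is to bypass injection and look the names up programmatically: a NamingException means the entries from server.xml were never bound, while a successful lookup points at the injection itself. Note also that server.xml above binds JmsTopic while the injection looks up JMSTopic, so it is worth probing both spellings:

import javax.naming.InitialContext;
import javax.naming.NamingException;

// Hypothetical diagnostic, meant to run inside the application (for example
// from a startup bean) where the server's JNDI context is available.
public class JndiProbe {
    public static void probe() {
        for (String name : new String[] { "JMSTopicFactory", "JMSTopic", "JmsTopic" }) {
            try {
                Object o = new InitialContext().lookup(name);
                System.out.println(name + " -> " + o.getClass().getName());
            } catch (NamingException e) {
                System.out.println(name + " is not bound: " + e.getMessage());
            }
        }
    }
}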
I tried using the default JMS connector. It does connect to Kafka, but the connection gets refused, and on the Kafka side it tells me this:
[2020-05-31 20:05:27,134] WARN [SocketServer brokerId=1] Unexpected error from /172.20.0.4; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = -1091633152)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:103)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:448)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:398)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at kafka.network.Processor.poll(SocketServer.scala:893)
at kafka.network.Processor.run(SocketServer.scala:792)
at java.lang.Thread.run(Thread.java:748)
EDIT:
I changed the server.xml to look like this now:
<resourceAdapter id="kafkajmsra" location="${shared.resource.dir}/kafka-rar-0.4.0.rar"/>
<connectionFactory jndi="java:app/KafkaConnectionFactory"
                   interfaceName="fish.payara.cloud.connectors.kafka.api.KafkaConnectionFactory"
                   resourceAdapter="liberty/wlp/usr/shared/resources/kafka-rar-0.4.0.rar">
</connectionFactory>
and the java code looks like this:
import javax.annotation.Resource;
import javax.enterprise.context.ApplicationScoped;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import fish.payara.cloud.connectors.kafka.api.KafkaConnection;
import fish.payara.cloud.connectors.kafka.api.KafkaConnectionFactory;

@ApplicationScoped
public class TopicProducer {

    private static final Logger LOG = LoggerFactory.getLogger(TopicProducer.class);

    public TopicProducer() throws Exception {
        LOG.info("Starting TopicProducer");
    }

    @Resource(lookup = "java:app/KafkaConnectionFactory")
    KafkaConnectionFactory kafkaConnectionFactory;

    public void send(final String msg) {
        try (KafkaConnection connection = kafkaConnectionFactory.createConnection()) {
            LOG.info("Send message: {}", msg);
            connection.send(new ProducerRecord("demoTopic", msg));
        } catch (Exception e) {
            LOG.error(e.getMessage(), e);
        }
    }
}
But now I get a NullPointerException on the @Resource. My guess is that the resource adapter cannot be found.
I'm trying to connect to IBM Message Hub from Apache Spark 2.2.1 Structured Streaming.
The connection code is quite basic:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder.appName("StreamingRetailTransactions").getOrCreate()
import spark.implicits._
val df = spark.readStream.
format("kafka").
option("kafka.bootstrap.servers", "kafka03-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka04-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka02-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093").
option("subscribe", "transactions_load").
option("security.protocol", "SASL_SSL").
option("sasl.mechanism", "PLAIN").
option("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"*****\" password=\"*****\";").
option("ssl.protocol", "TLSv1.2").
option("ssl.enabled.protocols", "TLSv1.2").
option("ssl.endpoint.identification.algorithm", "HTTPS").
option("auto.offset.reset","earliest").
option("group.id", System.currentTimeMillis).
load()
val query = df.writeStream.format("console").start()
I'm starting the spark shell with:
~/spark-2.2.1-bin-hadoop2.7$ ./bin/spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0
However, I'm getting a disconnected error:
scala> 18/01/09 08:34:17 WARN NetworkClient: Bootstrap broker kafka02-prod02.messagehub.services.eu-gb.bluemix.net:9093 disconnected
18/01/09 08:34:17 WARN NetworkClient: Bootstrap broker kafka03-prod02.messagehub.services.eu-gb.bluemix.net:9093 disconnected
18/01/09 08:34:17 WARN NetworkClient: Bootstrap broker kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093 disconnected
<<repeats forever>>
I've increased debug output with sc.setLogLevel("DEBUG") and I get:
<<log output omitted for brevity>>
18/01/09 08:38:28 DEBUG SessionState: SessionState user: null
18/01/09 08:38:28 DEBUG SessionState: HDFS root scratch dir: /tmp/hive with schema null, permission: rwx-wx-wx
18/01/09 08:38:28 INFO SessionState: Created local directory: /tmp/2b4557ce-dd17-46d1-9ab0-f9a36fd750f9_resources
18/01/09 08:38:28 INFO SessionState: Created HDFS directory: /tmp/hive/snowch/2b4557ce-dd17-46d1-9ab0-f9a36fd750f9
18/01/09 08:38:28 INFO SessionState: Created local directory: /tmp/snowch/2b4557ce-dd17-46d1-9ab0-f9a36fd750f9
18/01/09 08:38:28 INFO SessionState: Created HDFS directory: /tmp/hive/snowch/2b4557ce-dd17-46d1-9ab0-f9a36fd750f9/_tmp_space.db
18/01/09 08:38:28 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/snowch/spark-2.2.1-bin-hadoop2.7/spark-warehouse
18/01/09 08:38:28 DEBUG SessionState: Session is using authorization class class org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider
18/01/09 08:38:28 DEBUG StateStoreCoordinatorRef: Retrieved existing StateStoreCoordinator endpoint
18/01/09 08:38:28 DEBUG StreamExecution: Starting Trigger Calculation
18/01/09 08:38:28 INFO StreamExecution: Starting new streaming query.
18/01/09 08:38:28 DEBUG UserGroupInformation: PrivilegedAction as:snowch (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
18/01/09 08:38:28 DEBUG KafkaSource$$anon$1: Unable to find batch /tmp/temporary-fb3e6e1a-fbbe-4098-991c-0b29f63ecade/sources/0/0
18/01/09 08:38:28 DEBUG AbstractCoordinator: Sending coordinator request for group spark-kafka-source-9a14bb54-8f1b-47db-8497-19c083128496--998588290-driver-0 to broker kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093 (id: -5 rack: null)
18/01/09 08:38:28 DEBUG NetworkClient: Initiating connection to node -5 at kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093.
18/01/09 08:38:28 DEBUG NetworkClient: Initialize connection to node -3 for sending metadata request
18/01/09 08:38:28 DEBUG NetworkClient: Initiating connection to node -3 at kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093.
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--5.bytes-sent
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--5.bytes-received
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--5.latency
18/01/09 08:38:28 DEBUG NetworkClient: Completed connection to node -5
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--3.bytes-sent
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--3.bytes-received
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--3.latency
18/01/09 08:38:28 DEBUG NetworkClient: Completed connection to node -3
18/01/09 08:38:28 DEBUG NetworkClient: Sending metadata request {topics=[transactions_load]} to node -5
18/01/09 08:38:28 DEBUG Selector: Connection with kafka05-prod02.messagehub.services.eu-gb.bluemix.net/159.8.179.153 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:323)
at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:179)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:974)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$fetchLatestOffsets$1$$anonfun$apply$9.apply(KafkaOffsetReader.scala:174)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$fetchLatestOffsets$1$$anonfun$apply$9.apply(KafkaOffsetReader.scala:172)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$org$apache$spark$sql$kafka010$KafkaOffsetReader$$withRetriesWithoutInterrupt$1.apply$mcV$sp(KafkaOffsetReader.scala:263)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$org$apache$spark$sql$kafka010$KafkaOffsetReader$$withRetriesWithoutInterrupt$1.apply(KafkaOffsetReader.scala:262)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$org$apache$spark$sql$kafka010$KafkaOffsetReader$$withRetriesWithoutInterrupt$1.apply(KafkaOffsetReader.scala:262)
at org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:85)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.org$apache$spark$sql$kafka010$KafkaOffsetReader$$withRetriesWithoutInterrupt(KafkaOffsetReader.scala:261)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$fetchLatestOffsets$1.apply(KafkaOffsetReader.scala:172)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$fetchLatestOffsets$1.apply(KafkaOffsetReader.scala:172)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.runUninterruptibly(KafkaOffsetReader.scala:230)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.fetchLatestOffsets(KafkaOffsetReader.scala:171)
at org.apache.spark.sql.kafka010.KafkaSource$$anonfun$initialPartitionOffsets$1.apply(KafkaSource.scala:132)
at org.apache.spark.sql.kafka010.KafkaSource$$anonfun$initialPartitionOffsets$1.apply(KafkaSource.scala:129)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.kafka010.KafkaSource.initialPartitionOffsets$lzycompute(KafkaSource.scala:129)
at org.apache.spark.sql.kafka010.KafkaSource.initialPartitionOffsets(KafkaSource.scala:97)
at org.apache.spark.sql.kafka010.KafkaSource.getOffset(KafkaSource.scala:163)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$10$$anonfun$apply$6.apply(StreamExecution.scala:521)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$10$$anonfun$apply$6.apply(StreamExecution.scala:521)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:279)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$10.apply(StreamExecution.scala:520)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$10.apply(StreamExecution.scala:518)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$constructNextBatch(StreamExecution.scala:518)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$populateStartOffsets(StreamExecution.scala:492)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(StreamExecution.scala:297)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply(StreamExecution.scala:294)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply(StreamExecution.scala:294)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:279)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1.apply$mcZ$sp(StreamExecution.scala:294)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:290)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:206)
18/01/09 08:38:28 DEBUG NetworkClient: Node -5 disconnected.
18/01/09 08:38:28 WARN NetworkClient: Bootstrap broker kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093 disconnected
18/01/09 08:38:28 DEBUG ConsumerNetworkClient: Cancelled GROUP_COORDINATOR request ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@4eacac4e, request=RequestSend(header={api_key=10,api_version=0,correlation_id=0,client_id=consumer-1}, body={group_id=spark-kafka-source-9a14bb54-8f1b-47db-8497-19c083128496--998588290-driver-0}), createdTimeMs=1515487108479, sendTimeMs=1515487108598) with correlation id 0 due to node -5 being disconnected
18/01/09 08:38:28 DEBUG NetworkClient: Sending metadata request {topics=[transactions_load]} to node -3
18/01/09 08:38:28 DEBUG Selector: Connection with kafka01-prod02.messagehub.services.eu-gb.bluemix.net/159.8.179.149 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
<<repeated>>
I have seen the following similar questions:
Kafka Error in I/O java.io.EOFException: null - however, this question is for a much older version of Kafka
I understand that some of the output is just 'noise'; however, my Spark streaming application does not appear to be receiving any data. I have connected with a console client and I am able to see data.
Update 1 - I've tried configuring JAAS, but I'm still getting the same error. The issue may be that the JAAS code needs to run on each worker node but isn't being run on them.
sc.setLogLevel("DEBUG")

def jaasClientConfig(username: String, password: String): Unit = {
  import javax.security.auth.login.AppConfigurationEntry
  import javax.security.auth.login.Configuration
  import javax.security.auth.login.LoginException
  import scala.collection.JavaConversions._

  System.setProperty("java.security.auth.login.config", "")
  Configuration.setConfiguration(new Configuration() {
    def getAppConfigurationEntry(name: String): Array[AppConfigurationEntry] = {
      val idMap = Map(
        "serviceName" -> "kafka",
        "username" -> username,
        "password" -> password
      )
      val ace = new AppConfigurationEntry(
        "org.apache.kafka.common.security.plain.PlainLoginModule",
        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
        idMap
      )
      return Array(ace)
    }
  })
}
def startSparkStreaming(): Unit = {
  import org.apache.spark.sql.functions._
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder.appName("StreamingRetailTransactions").getOrCreate()
  import spark.implicits._

  val df = spark.readStream.
    format("kafka").
    option("kafka.bootstrap.servers", "kafka03-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka04-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka02-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093").
    option("subscribe", "transactions_load").
    option("security.protocol", "SASL_SSL").
    option("sasl.mechanism", "PLAIN").
    option("ssl.protocol", "TLSv1.2").
    option("ssl.enabled.protocols", "TLSv1.2").
    option("ssl.endpoint.identification.algorithm", "HTTPS").
    option("auto.offset.reset", "earliest").
    option("group.id", System.currentTimeMillis).
    load()

  val query = df.writeStream.format("console").start()
}
jaasClientConfig("****","****")
startSparkStreaming()
Update 2
I've also tried with a jaas.conf:
~/spark-2.2.1-bin-hadoop2.7$ ./bin/spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.conf=jaas.conf" --files "jaas.conf"
and ...
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="*****"
password="*****";
}
Still the same problem ...
First I needed to run spark-shell with --conf options that point the executor and driver at my jaas.conf:
./bin/spark-shell --master local[1] \
--jars external/kafka-0-10-sql/target/spark-sql-kafka-0-10_2.11-2.2.2-SNAPSHOT.jar,external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.2-SNAPSHOT.jar \
--conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=/path/to/jaas.conf" \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=/path/to/jaas.conf" \
--num-executors 1 --executor-cores 1
Next I had to add some kafka options:
val df = spark.readStream.
format("kafka").
option("kafka.bootstrap.servers", "kafka03-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka04-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka02-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093").
option("subscribe", "transactions_load").
option("kafka.security.protocol", "SASL_SSL").
option("kafka.sasl.mechanism", "PLAIN").
option("kafka.ssl.protocol", "TLSv1.2").
option("kafka.ssl.enabled.protocols", "TLSv1.2").
load()
Note that the Kafka options need to be prefixed with kafka., for example:
security.protocol => kafka.security.protocol
(Options without the kafka. prefix are not forwarded to the underlying Kafka consumer.)
These changes solved the connectivity issue for me.
This appears to be failed authentication. With the Kafka version that Message Hub currently runs, the server just closes the connection when authentication fails.
The "sasl.jaas.config" setting is supported by the Kafka client from version 0.10.2, but the Kafka client used by Spark 2.2.1 is 0.10.0, so authentication fails as suspected.
You can use the java.security.auth.login.config system property to specify a JAAS file. Alternatively, you can programmatically set the credentials for the client with a snippet like this:
import java.util.HashMap;

import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public static void jaasClientConfig(final String username, final String password) throws Exception {
    System.setProperty("java.security.auth.login.config", "");
    Configuration.setConfiguration(new Configuration() {
        public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
            HashMap<String, String> idMap = new HashMap<>();
            idMap.put("serviceName", "kafka"); // seems to be optional
            idMap.put("username", username);
            idMap.put("password", password);
            AppConfigurationEntry ace = new AppConfigurationEntry(
                    "org.apache.kafka.common.security.plain.PlainLoginModule",
                    AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, idMap);
            return new AppConfigurationEntry[] { ace };
        }
    });
}
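(Update 1 above shows the same approach from Scala: jaasClientConfig(...) is called before startSparkStreaming(), since the JAAS Configuration has to be installed before the first Kafka client is constructed.)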
I implemented the Curator leader election example which is given on this site.
Instead of having a number of Curator clients, I added only one Curator client, as follows:
public void selectLeader() {
    CuratorFramework client = null;
    try {
        client = CuratorFrameworkFactory.newClient("localhost:2181", new ExponentialBackoffRetry(1000, 3));
        LeaderSelectorService service = new LeaderSelectorService(client, "/leaderSelections", "LeaderElector");
        client.start();
        Thread.sleep(10000);
        service.start();
    } catch (Exception e) {
        System.out.println("error" + e);
    } finally {
        System.out.println("Shutting down...");
        // CloseableUtils.closeQuietly(client);
    }
}
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.leader.LeaderSelector;
import org.apache.curator.framework.recipes.leader.LeaderSelectorListenerAdapter;

public class LeaderSelectorService extends LeaderSelectorListenerAdapter implements Closeable {

    private final String name;
    private final LeaderSelector leaderSelector;

    public LeaderSelectorService(CuratorFramework client, String path, String name) {
        this.name = name;
        // create a leader selector using the given path for management;
        // all participants in a given leader selection must use the same path
        // (ExampleClient here is also a LeaderSelectorListener but this isn't required)
        leaderSelector = new LeaderSelector(client, path, this);
        // for most cases you will want your instance to requeue when it relinquishes leadership
        leaderSelector.autoRequeue();
    }

    public void start() throws IOException {
        // the selection for this instance doesn't start until the leader selector is started;
        // leader selection is done in the background so this call to leaderSelector.start() returns immediately
        leaderSelector.start();
    }

    @Override
    public void takeLeadership(CuratorFramework arg0) throws Exception {
        // we are now the leader; this method should not return until we want to relinquish leadership
        final int waitSeconds = (int) (5 * Math.random()) + 1;
        System.out.println(name + " is now the leader. Waiting " + waitSeconds + " seconds...");
        //System.out.println(name + " has been leader " + leaderCount.getAndIncrement() + " time(s) before.");
        try {
            Thread.sleep(TimeUnit.SECONDS.toMillis(waitSeconds));
        } catch (InterruptedException e) {
            System.err.println(name + " was interrupted.");
            Thread.currentThread().interrupt();
        } finally {
            System.out.println(name + " relinquishing leadership.\n");
        }
    }

    @Override
    public void close() throws IOException {
        leaderSelector.close();
    }
}
I have only one ZooKeeper instance, and I am using ZooKeeper 3.4.6, curator-framework 4.0.0 and curator-recipes 4.0.0.
When I start the client, it connects to ZooKeeper, and in the log I can see the "State change: connected" message.
Then I wait 10 seconds and start the leader election, which gives me the error below repeatedly.
2017-09-06 09:34:22.727 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Unable to read additional data from server sessionid 0x15e555a719d0000, likely server has closed socket, closing socket connection and attempting reconnect
2017-09-06 09:34:22.830 INFO 1228 --- [c-1-EventThread] o.a.c.f.state.ConnectionStateManager : State change: SUSPENDED
2017-09-06 09:34:23.302 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-06 09:34:23.303 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established, initiating session, client: /127.0.0.1:49594, server: localhost/127.0.0.1:2181
2017-09-06 09:34:23.305 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15e555a719d0000, negotiated timeout = 120000
2017-09-06 09:34:23.305 INFO 1228 --- [c-1-EventThread] o.a.c.f.state.ConnectionStateManager : State change: RECONNECTED
2017-09-06 09:34:23.310 WARN 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x15e555a719d0000 for server localhost/127.0.0.1:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_131]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.8.0_131]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_131]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_131]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_131]
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:75) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:363) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
After some time it started to give me the error message below.
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:831) [curator-framework-4.0.0.jar:4.0.0]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:623) [curator-framework-4.0.0.jar:4.0.0]
at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152) [curator-framework-4.0.0.jar:4.0.0]
at org.apache.curator.framework.imps.GetConfigBuilderImpl$2.processResult(GetConfigBuilderImpl.java:222) [curator-framework-4.0.0.jar:4.0.0]
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:590) [zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:499) [zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
2017-09-06 09:34:31.897 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-06 09:34:31.898 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established, initiating session, client: /127.0.0.1:49611, server: localhost/127.0.0.1:2181
2017-09-06 09:34:31.899 INFO 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15e555a719d0000, negotiated timeout = 120000
2017-09-06 09:34:31.899 INFO 1228 --- [c-1-EventThread] o.a.c.f.state.ConnectionStateManager : State change: RECONNECTED
2017-09-06 09:34:31.907 WARN 1228 --- [localhost:2181)] org.apache.zookeeper.ClientCnxn : Session 0x15e555a719d0000 for server localhost/127.0.0.1:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Xid out of order. Got Xid 41 with err -6 expected Xid 40 for a packet with details: clientPath:/leaderSelections serverPath:/leaderSelections finished:false header:: 40,12 replyHeader:: 0,0,-4 request:: '/leaderSelections,F response:: v{}
at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:892) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:101) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:363) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1214) ~[zookeeper-3.5.3-beta.jar:3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60]
I tried several solutions from the internet but none succeeded. Does anybody know the root cause of this issue?
I have fixed this issue. There was a version mismatch between the ZooKeeper and Curator versions: I used Curator 4.0.0 with ZooKeeper 3.4.6, but according to the Apache Curator site, Curator 4.0.0 is compatible with ZooKeeper 3.5.x. (The stack traces above confirm this: they show zookeeper-3.5.3-beta.jar on the client classpath while the server is 3.4.6.) I changed my Curator version (both curator-framework and curator-recipes) to 2.8.0.
I have just started with Kafka. I am able to produce and consume data through the command prompt, and also to produce data through Java code, even from a remote server.
But when I try this simple consumer Java code, it is not working:
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class Simpleconsumer {

    private final ConsumerConnector consumer;
    private final String topic;

    public Simpleconsumer(String topic) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "127.0.0.1:2181");
        props.put("group.id", "topic1");
        props.put("auto.offset.reset", "smallest");
        consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        this.topic = topic;
    }

    public void testConsumer() {
        try {
            Map<String, Integer> topicCount = new HashMap<String, Integer>();
            topicCount.put(topic, 1);
            Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams = consumer.createMessageStreams(topicCount);
            List<KafkaStream<byte[], byte[]>> streams = consumerStreams.get(topic);
            System.out.println("start.......");
            for (final KafkaStream stream : streams) {
                ConsumerIterator<byte[], byte[]> it = stream.iterator();
                System.out.println("iterate.......");
                while (it.hasNext()) {
                    System.out.println("Message from Single Topic: " + new String(it.next().message()));
                }
            }
            System.out.println("end.......");
            if (consumer != null) {
                consumer.shutdown();
            }
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public static void main(String[] args) {
        // String topic = args[0];
        Simpleconsumer simpleHLConsumer = new Simpleconsumer("topic1");
        simpleHLConsumer.testConsumer();
    }
}
The output is:
log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.
start.......
iterate.......
There is no error; the program doesn't terminate or produce any further output.
The ZooKeeper log:
2016-02-18 17:31:31,790 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:33338
2016-02-18 17:31:31,793 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /127.0.0.1:33338
2016-02-18 17:31:31,821 [myid:] - INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x152f4265b0b0009 with negotiated timeout 6000 for client /127.0.0.1:33338
2016-02-18 17:31:31,891 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x152f4265b0b0009 type:create cxid:0x1 zxid:0x718 txntype:-1 reqpath:n/a Error Path:/consumers Error:KeeperErrorCode = NodeExists for /consumers
2016-02-18 17:31:31,892 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x152f4265b0b0009 type:create cxid:0x2 zxid:0x719 txntype:-1 reqpath:n/a Error Path:/consumers/artinew Error:KeeperErrorCode = NodeExists for /consumers/artinew
2016-02-18 17:31:31,892 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@645] - Got user-level KeeperException when processing sessionid:0x152f4265b0b0009 type:create cxid:0x3 zxid:0x71a txntype:-1 reqpath:n/a Error Path:/consumers/artinew/ids Error:KeeperErrorCode = NodeExists for /consumers/artinew/ids
2016-02-18 17:31:32,000 [myid:] - INFO [SessionTracker:ZooKeeperServer@347] - Expiring session 0x152f4265b0b0008, timeout of 6000ms exceeded
2016-02-18 17:31:32,000 [myid:] - INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x152f4265b0b0008
2016-02-18 17:31:32,002 [myid:] - INFO [SyncThread:0:NIOServerCnxn@1007] - Closed socket connection for client /127.0.0.1:33337 which had sessionid 0x152f4265b0b0
I am getting this in the Kafka console in an infinite loop. Please explain:
[2016-02-17 20:50:08,594] INFO Closing socket connection to /xx.xx.xx.xx. (kafka.network.Processor)
[2016-02-17 20:50:08,174] INFO Closing socket connection to /xx.xx.xx.xx. (kafka.network.Processor)
[2016-02-17 20:50:08,385] INFO Closing socket connection to /xx.xx.xx.xx. (kafka.network.Processor)
[2016-02-17 20:50:08,760] INFO Closing socket connection to /xx.xx.xx.xx. (kafka.network.Processor)
I created the topic in the following manner:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 5 --topic topic1
I am able to consume it from the command prompt using:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic topic1
I am not able to understand what the issue is.
Try localhost instead of 127.0.0.1 in the code to make sure local name resolution is working.
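In other words, only the zookeeper.connect line in the constructor above changes, for example:

props.put("zookeeper.connect", "localhost:2181"); // instead of "127.0.0.1:2181"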
I'm trying to send an email, invoked from code.
@Stateless
public class MailBean {

    private static Logger LOGGER = Logger.getLogger(MailBean.class);

    private String EMAIL_REGEX = "^(([a-zA-Z0-9_\\-\\.]+)@([a-zA-Z0-9_\\-\\.]+)\\.([a-zA-Z]{2,5}){1,25})+([;.](([a-zA-Z0-9_\\-\\.]+)@([a-zA-Z0-9_\\-\\.]+)\\.([a-zA-Z]{2,5}){1,25})+)*$";

    @Resource(name = "java:jboss/mail/Default")
    private Session mailSession;

    @Asynchronous
    public void send(String addresses, String topic, String textMessage) {
        try {
            MimeMessage message = new MimeMessage(mailSession);
            message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(addresses));
            message.setSubject(topic);
            message.setText(textMessage);
            Transport transport = mailSession.getTransport();
            transport.connect();
            transport.send(message);
        } catch (MessagingException ex) {
            LOGGER.error("Cannot send mail to " + addresses + ". Error: " + ex.getMessage(), ex);
        }
    }

    public boolean isValidEmailAddress(String email) {
        if (email == null)
            return false;
        else
            return email.matches(EMAIL_REGEX);
    }
}
My Wildfly 8.1 Server is configured as follows:
<subsystem xmlns="urn:jboss:domain:mail:2.0">
<mail-session name="mail-session-default" jndi-name="java:jboss/mail/Default">
<smtp-server outbound-socket-binding-ref="mail-smtp"
ssl="false"
username="john@doe.com"
password="****"/>
</mail-session>
</subsystem>
The outbound socket binding looks like this:
<outbound-socket-binding name="mail-smtp">
    <remote-destination host="mail.doe.com" port="25"/>
</outbound-socket-binding>
The reported error is:
(EJB default - 2) L:38 Cannot send mail to jane@doe.com. Error: 550 5.7.1 Command rejected
: com.sun.mail.smtp.SMTPSendFailedException: 550 5.7.1 Command rejected
As in the example, I am trying to send an email from my account john@doe.com to jane@doe.com, not to another domain.
On startup, WildFly does not report any errors with this configuration.
[org.jboss.as.mail.extension] (MSC service thread 1-5) L:136 JBAS015400: Bound mail session [java:jboss/mail/Default]
Any clue why this fails? In general, I wonder why JavaMail does not behave like a regular mail client.
Nothing is wrong with your configuration or JavaMail implementation.
The mail server is simply rejecting commands from you/your server/...
There are many reasons why a mail server would do this, but in most cases they are related to preventing spam.
See the related threads on SO about this as well; they are all related to mail server configuration.
For more, see:
https://serverfault.com/questions/453638/plesk-postfix-smtp-550-5-7-1-command-rejected-for-one-external-sender
https://serverfault.com/questions/540159/remote-host-said-550-5-7-1-message-content-rejected
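As an aside on the closing question (why JavaMail does not behave like a regular mail client): Transport.send(...) is a static method that opens its own connection, so the explicit transport.connect() in the bean above is effectively bypassed. A sketch of sending over the explicitly connected instance instead, using only objects already present in the bean:

Transport transport = mailSession.getTransport();
try {
    transport.connect();
    message.saveChanges(); // fills in headers such as Message-ID
    transport.sendMessage(message, message.getAllRecipients()); // reuses this connection
} finally {
    transport.close();
}

Either way, the 550 itself is a server-side policy decision, as the answer above says.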