Why am I getting a FailedToSendMessageException when sending a message for the first time? - apache-kafka

When sending a message for the first time, I get the exception, but the second message sends properly.
Here is my producer configuration:
public void Init()
{
_logger.info("Initializing KAFKA server context...");
Properties ProducerProperties = new Properties();
ProducerProperties.put("metadata.broker.list", "localhost:9092");
ProducerProperties.put("serializer.class", "kafka.serializer.StringEncoder");
ProducerProperties.put("producer.type", "async");
_kafkaProducerConfig = new ProducerConfig(ProducerProperties);
_logger.info("KAFKA configuration done ...!!");
}
I am sending the message using this method:
public Event Send(WSSession ws, JSONObject obj) throws JSONException
{
String roomId = obj.getString("RoomId");
Room room = _roomList.get(roomId);
String message = "";
if (_hmRooms.containsKey(room.getRoomId()))
{
String msg = new KafkaMessage(obj.getString("ReqId"), obj.getString("ReqType"), ws.getUser().getTaskName(), obj.getString("Message")).toString();
kafka.javaapi.producer.Producer<String, String> producer = new kafka.javaapi.producer.Producer<String, String>(_kafkaProducerConfig);
KeyedMessage<String, String> km = new KeyedMessage<String, String>(room.getRoomId(), msg);
producer.send(km);
producer.close();
message = "sent";
}
else
{
message = "failled";
}
EventSuccess evt = new EventSuccess(obj);
evt.setMessage(message);
return evt;
}
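For what it's worth, with the old 0.8-style producer, a FailedToSendMessageException on the very first send is often caused by the initial topic-metadata fetch not having completed yet, especially when the topic is auto-created by that first send. A minimal sketch of a mitigation, assuming that is the cause here (the _producer field is hypothetical and would replace the per-message producer in Send()):

public void Init()
{
    _logger.info("Initializing KAFKA server context...");
    Properties ProducerProperties = new Properties();
    ProducerProperties.put("metadata.broker.list", "localhost:9092");
    ProducerProperties.put("serializer.class", "kafka.serializer.StringEncoder");
    ProducerProperties.put("producer.type", "async");
    // Assumption: retry sends that fail while topic metadata is still being
    // fetched, e.g. when the topic is auto-created by the very first send.
    ProducerProperties.put("message.send.max.retries", "5");
    ProducerProperties.put("retry.backoff.ms", "500");
    _kafkaProducerConfig = new ProducerConfig(ProducerProperties);
    // Hypothetical field: create one producer here and reuse it in Send(),
    // instead of creating and closing a producer for every message.
    _producer = new kafka.javaapi.producer.Producer<String, String>(_kafkaProducerConfig);
    _logger.info("KAFKA configuration done ...!!");
}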

Related

Using kafka streams to segregate messages

I have a setup where each Kafka message will contain a "sender" field. All these messages are sent to a single topic.
Is there a way to segregate these messages on the consumer side? I would like a sender-specific consumer that reads all messages pertaining to that sender alone.
Should I be using Kafka Streams to achieve this? I am new to Kafka Streams; any advice or guidance will be helpful.
public class KafkaStreams3 {
public static void main(String[] args) throws JSONException {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafkastreams1");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
final Serde<String> stringSerde = Serdes.String();
Properties kafkaProperties = new Properties();
kafkaProperties.put("key.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
kafkaProperties.put("value.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
kafkaProperties.put("bootstrap.servers", "localhost:9092");
KafkaProducer<String, String> producer = new KafkaProducer<String, String>(kafkaProperties);
KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> source = builder.stream(stringSerde, stringSerde, "topic1");
KStream<String, String> s1 = source.map(new KeyValueMapper<String, String, KeyValue<String, String>>() {
@Override
public KeyValue<String, String> apply(String dummy, String record) {
JSONObject jsonObject;
try {
jsonObject = new JSONObject(record);
return new KeyValue<String,String>(jsonObject.get("sender").toString(), record);
} catch (JSONException e) {
e.printStackTrace();
return new KeyValue<>(record, record);
}
}
});
s1.print();
s1.foreach(new ForeachAction<String, String>() {
@Override
public void apply(String key, String value) {
ProducerRecord<String, String> data1 = new ProducerRecord<String, String>(key, key, value);
producer.send(data1);
}
});
KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
@Override
public void run() {
streams.close();
producer.close();
}
}));
}
}
I believe the simplest way to achieve this is to use your "sender" field as the key and have a single topic partitioned by "sender". This gives you locality and order per "sender", so you get a stronger ordering guarantee per "sender", and you can connect clients to consume from specific partitions.
Another possibility is to stream your messages from the initial topic to other topics, aggregating by key, so you end up with one topic per "sender".
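As a minimal sketch of the first option (the topic name "messages" and the payload are assumptions): the default partitioner hashes the record key, so every message keyed by the same "sender" lands on the same partition.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SenderKeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props)) {
            // Using the sender as the key keeps each sender's messages in
            // one partition, preserving their relative order.
            String json = "{\"sender\":\"sender-42\",\"text\":\"hello\"}";
            producer.send(new ProducerRecord<String, String>("messages", "sender-42", json));
        }
    }
}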
Here's a fragment of code for a producer, followed by streaming with JSON serializers and deserializers.
Producer:
private Properties kafkaClientProperties() {
Properties properties = new Properties();
final Serializer<JsonNode> jsonSerializer = new JsonSerializer();
properties.put("bootstrap.servers", config.getHost());
properties.put("client.id", clientId);
properties.put("key.serializer", StringSerializer.class);
properties.put("value.serializer", jsonSerializer.getClass());
return properties;
}
public Future<RecordMetadata> send(String topic, String key, Object instance) {
ObjectMapper objectMapper = new ObjectMapper();
JsonNode jsonNode = objectMapper.convertValue(instance, JsonNode.class);
return kafkaProducer.send(new ProducerRecord<>(topic, key, jsonNode));
}
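For completeness, a hypothetical caller of the send(...) helper above; the Quote class, topic name, and key are assumptions, not part of the original code:

// Hypothetical POJO: Jackson's convertValue(instance, JsonNode.class)
// turns its public fields into JSON like {"symbol":"ACME","price":42.0}.
public class Quote {
    public String symbol = "ACME";
    public double price = 42.0;
}

A call such as send("quotes", "ACME", new Quote()) then returns a Future<RecordMetadata>, and calling get() on that future blocks until the broker acknowledges the write.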
The stream:
log.info("loading kafka stream configuration");
final Serializer<JsonNode> jsonSerializer = new JsonSerializer();
final Deserializer<JsonNode> jsonDeserializer = new JsonDeserializer();
final Serde<JsonNode> jsonSerde = Serdes.serdeFrom(jsonSerializer, jsonDeserializer);
KStreamBuilder kStreamBuilder = new KStreamBuilder();
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, config.getStreamEnrichProduce().getId());
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, hosts);
//stream from topic...
KStream<String, JsonNode> stockQuoteRawStream = kStreamBuilder.stream(Serdes.String(), jsonSerde, config.getStockQuote().getTopic());
Map<String, Map> exchanges = stockExchangeMaps.getExchanges();
ObjectMapper objectMapper = new ObjectMapper();
kafkaProducer.configure(config.getStreamEnrichProduce().getTopic());
// - enrich stockquote with stockdetails before producing to new topic
stockQuoteRawStream.foreach((key, jsonNode) -> {
StockQuote stockQuote = null;
StockDetail stockDetail;
try {
stockQuote = objectMapper.treeToValue(jsonNode, StockQuote.class);
} catch (JsonProcessingException e) {
e.printStackTrace();
}
JsonNode exchangeNode = jsonNode.get("exchange");
// get stockDetail that matches current quote being processed
Map<String, StockDetail> stockDetailMap = exchanges.get(exchangeNode.toString().replace("\"", ""));
stockDetail = stockDetailMap.get(key);
stockQuote.setStockDetail(stockDetail);
kafkaProducer.send(config.getStreamEnrichProduce().getTopic(), null, stockQuote);
});
return new KafkaStreams(kStreamBuilder, props);

Understanding Kafka ZooKeeper auto reset

I still have doubts about Kafka's ZOOKEPER_AUTO_RESET. I have seen a lot of questions asked in this regard; kindly excuse me if this is a duplicate query.
I have a high-level Java consumer which keeps on consuming.
I have multiple topics, and all topics have a single partition.
My concern is the following.
I started consumerkafka.jar with the consumer group name “ncdev1” and ZOOKEPER_AUTO_RESET = smallest. I could observe that the init offset is set to -1. Then I stopped and started the jar after some time. At this point, it picked up the latest offset assigned to the consumer group (ncdev1), i.e. 36. I restarted again after some time, and the init offset was then set to 39, which is the latest value.
Then I changed the group name to ZOOKEPER_GROUP_ID = ncdev2 and restarted the jar file. This time the offset was again set to -1. On further restarts, it jumped to the latest value, i.e. 39.
Then I set the
ZOOKEPER_AUTO_RESET=largest and ZOOKEPER_GROUP_ID = ncdev3
Then I tried restarting the jar file with group name ncdev3. There is no difference in the way it picks up offsets on restart: it picks 39, the same as with the previous configuration.
Any idea why it is not picking up offsets from the beginning? Is there any other configuration to be done to make it read from the beginning? (My understanding of largest and smallest comes from What determines Kafka consumer offset?)
Thanks in advance.
Code added:
public class ConsumerForKafka {
private final ConsumerConnector consumer;
private final String topic;
private ExecutorService executor;
ServerSocket soketToWrite;
Socket s_Accept ;
OutputStream s1out ;
DataOutputStream dos;
static boolean logEnabled ;
static File fileName;
private static final Logger logger = Logger.getLogger(ConsumerForKafka.class);
public ConsumerForKafka(String a_zookeeper, String a_groupId, String a_topic,String session_timeout,String auto_reset,String a_commitEnable) {
consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
createConsumerConfig(a_zookeeper, a_groupId,session_timeout,auto_reset,a_commitEnable));
this.topic =a_topic;
}
public void run(int a_numThreads,String a_zookeeper, String a_topic) throws InterruptedException, IOException {
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put(topic, new Integer(a_numThreads));
Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
String socketURL = PropertyUtils.getProperty("SOCKET_CONNECT_HOST");
int socketPort = Integer.parseInt(PropertyUtils.getProperty("SOCKET_CONNECT_PORT"));
Socket socks = new Socket(socketURL,socketPort);
//****
String keeper = a_zookeeper;
String topic = a_topic;
long millis = new java.util.Date().getTime();
//****
PrintWriter outWriter = new PrintWriter(socks.getOutputStream(), true);
List<KafkaStream<byte[], byte[]>> streams = null;
// now create an object to consume the messages
//
int threadNumber = 0;
// System.out.println("going to forTopic value is "+topic);
boolean keepRunningThread = false;
boolean check = false;
logger.info("logged");
BufferedWriter bw = null;
FileWriter fw = null;
if(logEnabled){
fw = new FileWriter(fileName, true);
bw = new BufferedWriter(fw);
}
for (;;) {
streams = consumerMap.get(topic);
keepRunningThread =true;
for (final KafkaStream stream : streams) {
ConsumerIterator<byte[], byte[]> it = stream.iterator();
while(keepRunningThread)
{
try{
if (it.hasNext()){
if(logEnabled){
String data = new String(it.next().message())+""+"\n";
bw.write(data);
bw.flush();
outWriter.print(data);
outWriter.flush();
consumer.commitOffsets();
logger.info("Explicit commit ......");
}else{
outWriter.print(new String(it.next().message())+""+"\n");
outWriter.flush();
}
}
// logger.info("running");
} catch(ConsumerTimeoutException ex) {
keepRunningThread =false;
break;
}catch(NullPointerException npe ){
keepRunningThread =true;
npe.printStackTrace();
}catch(IllegalStateException ile){
keepRunningThread =true;
ile.printStackTrace();
}
}
}
}
}
private static ConsumerConfig createConsumerConfig(String a_zookeeper, String a_groupId,String session_timeout,String auto_reset,String commitEnable) {
Properties props = new Properties();
props.put("zookeeper.connect", a_zookeeper);
props.put("group.id", a_groupId);
props.put("zookeeper.session.timeout.ms", session_timeout);
props.put("zookeeper.sync.time.ms", "2000");
props.put("auto.offset.reset", auto_reset);
props.put("auto.commit.interval.ms", "60000");
props.put("consumer.timeout.ms", "30");
props.put("auto.commit.enable",commitEnable);
//props.put("rebalance.max.retries", "4");
return new ConsumerConfig(props);
}
public static void main(String[] args) throws InterruptedException {
String zooKeeper = PropertyUtils.getProperty("ZOOKEEPER_URL_PORT");
String groupId = PropertyUtils.getProperty("ZOOKEPER_GROUP_ID");
String session_timeout = PropertyUtils.getProperty("ZOOKEPER_SESSION_TIMOUT_MS"); //6400
String auto_reset = PropertyUtils.getProperty("ZOOKEPER_AUTO_RESET"); //smallest
String enableLogging = PropertyUtils.getProperty("ENABLE_LOG");
String directoryPath = PropertyUtils.getProperty("LOG_DIRECTORY");
String log4jpath = PropertyUtils.getProperty("LOG_DIR");
String commitEnable = PropertyUtils.getProperty("ZOOKEPER_COMMIT"); //false
PropertyConfigurator.configure(log4jpath);
String socketURL = PropertyUtils.getProperty("SOCKET_CONNECT_HOST");
int socketPort = Integer.parseInt(PropertyUtils.getProperty("SOCKET_CONNECT_PORT"));
try {
Socket socks = new Socket(socketURL,socketPort);
boolean connected = socks.isConnected() && !socks.isClosed();
if(connected){
//System.out.println("Able to connect ");
}else{
logger.info("Not able to conenct to socket ..Exiting...");
System.exit(0);
}
} catch (UnknownHostException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
} catch(java.net.ConnectException cne){
logger.info("Not able to conenct to socket ..Exitring...");
System.exit(0);
}
catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
// String zooKeeper = args[0];
// String groupId = args[1];
String topic = args[0];
int threads = 1;
logEnabled = Boolean.parseBoolean(enableLogging);
if(logEnabled)
createDirectory(topic,directoryPath);
ConsumerForKafka example = new ConsumerForKafka(zooKeeper, groupId, topic, session_timeout,auto_reset,commitEnable);
try {
example.run(threads,zooKeeper,topic);
} catch(java.net.ConnectException cne){
cne.printStackTrace();
System.exit(0);
}
catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private static void createDirectory(String topic,String d_Path) {
try{
File file = new File(d_Path);
if (!file.exists()) {
if (file.mkdir()) {
logger.info("Directory Created" +file.getPath());
} else {
logger.info("Directory Creation failed");
}
}
fileName = new File(d_Path + topic + ".log");
if (!fileName.exists()) {
fileName.createNewFile();
}
}catch(IOException IOE){
//logger.info("IOException occured during Directory or During File creation ");
}
}
}
After rereading your post carefully, I think what you ran into is expected behavior.
I started the consumerkafka.jar with consumer group name as “ncdev1” and ZOOKEPER_AUTO_RESET = smallest . Could observe that init offset is set as -1. Then I stop/started the jar after sometime. At this time, it picks the latest offset assigned to the consumer group (ncdev1) ie 36.
auto.offset.reset only applies when there is no initial offset or when an offset is out of range. Since you only have 36 messages in the log, the consumer group can read all of those records very quickly; that's why you see the consumer group pick up the latest offset every time it is restarted.
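In other words, smallest vs. largest only matters for a group that has no committed offset yet (or whose offset is out of range). A minimal sketch, assuming the goal is to force a re-read from the beginning with the old high-level consumer (the group-id scheme is hypothetical):

import java.util.Properties;
import kafka.consumer.ConsumerConfig;

public class FreshGroupConfig {
    // A group id that has never committed offsets has "no initial offset",
    // so auto.offset.reset=smallest takes effect and consumption starts
    // from the beginning of the log.
    public static ConsumerConfig fromBeginning(String zookeeper) {
        Properties props = new Properties();
        props.put("zookeeper.connect", zookeeper);
        props.put("group.id", "ncdev-fresh-" + System.currentTimeMillis()); // hypothetical fresh group
        props.put("auto.offset.reset", "smallest");
        return new ConsumerConfig(props);
    }
}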

Asynchronous email notification in groovy

I have the below code in a Groovy class. I want to call this method asynchronously from various other Groovy classes.
public void sendNotification(){
//async true
String from = ApplicationConfig.email_From;
String sendTo = ApplicationConfig.email_To;
String host = ApplicationConfig.email_Host;
String subject = ApplicationConfig.email_Subject;
String textToSend = ApplicationConfig.email_Text;
Properties properties = System.getProperties();
properties.setProperty("mail.smtp.host", host);
Session session = Session.getDefaultInstance(properties);
try{
MimeMessage message = new MimeMessage(session);
message.setFrom(new InternetAddress(from));
message.addRecipients(Message.RecipientType.TO, InternetAddress.parse(sendTo));
message.setSubject(subject);
message.setText(textToSend);
Transport.send(message);
}catch (MessagingException mex) {
mex.printStackTrace();
}
}
So far I couldn't find anything that fits my requirement. There are some plugins for Grails, but I'm not using Grails.
Just use an ExecutorService:
ExecutorService pool = Executors.newFixedThreadPool(2)
def sender = { ->
Properties properties = System.getProperties();
properties.setProperty("mail.smtp.host", ApplicationConfig.email_Host);
Session session = Session.getDefaultInstance(properties);
try{
MimeMessage message = new MimeMessage(session);
message.setFrom(new InternetAddress(ApplicationConfig.email_From));
message.addRecipients(Message.RecipientType.TO, InternetAddress.parse(ApplicationConfig.email_To));
message.setSubject(ApplicationConfig.email_Subject);
message.setText(ApplicationConfig.email_Text);
Transport.send(message);
}catch (MessagingException mex) {
mex.printStackTrace();
}
}
public void sendNotification() {
pool.submit(sender)
}
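One caveat, assuming the pool lives for the lifetime of the application: threads created by Executors.newFixedThreadPool are non-daemon, so call pool.shutdown() when the application is done sending, otherwise the idle workers can keep the JVM alive.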

Javamail Read Body of Message Until EOF

The following code is supposed to send and save messages sent via Yahoo Mail. The sending part works OK, but it does not save the sent message. I've been researching and found the following code:
/**
* Read the body of the message until EOF.
*/
public static String collect(BufferedReader in) throws IOException {
String line;
StringBuffer sb = new StringBuffer();
while ((line = in.readLine()) != null) {
sb.append(line);
sb.append("\n");
}
return sb.toString();
}
How do I incorporate it in the following code?
public void doSendYahooMail(){
from = txtFrom.getText();
password= new String(txtPassword.getPassword());
to = txtTo.getText();
cc = txtCC.getText();
bcc = txtBCC.getText();
subject = txtSubject.getText();
message_body = jtaMessage.getText();
//String imapHost = "imap.mail.yahoo.com";
Properties props = new Properties();
props.put("mail.smtp.starttls.enable", "true");
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.host", "smtp.mail.yahoo.com");
props.put("mail.smtp.port", "465");
props.put("mail.smtp.ssl.enable", "true");
props.put("mail.imap.host", "imap.mail.yahoo.com");
props.put("mail.imap.ssl.enable", "true");
props.put("mail.imap.port", "993");
Session session = Session.getInstance(props,null);
try {
// message definition
Message message = new MimeMessage(session);
message.setFrom(new InternetAddress(from));
message.setRecipients(Message.RecipientType.TO,InternetAddress.parse(to));
if(!cc.equals("")){
message.setRecipients(Message.RecipientType.CC, InternetAddress.parse(cc));
}
if(!bcc.equals("")){
message.setRecipients(Message.RecipientType.BCC, InternetAddress.parse(bcc));
}
message.setSubject(subject);
if(!filename.equals("")){// if a file has been attached...
BodyPart messageBodyPart = new MimeBodyPart();
messageBodyPart.setText(message_body);// actual message
Multipart multipart = new MimeMultipart();// create multipart message
// set the text message part
multipart.addBodyPart(messageBodyPart);//add the text message to the multipart
// attachment part
messageBodyPart = new MimeBodyPart();
String attachment = fileAbsolutePath;
DataSource source = new FileDataSource(attachment);
messageBodyPart.setDataHandler(new DataHandler(source));
messageBodyPart.setFileName(filename);
multipart.addBodyPart(messageBodyPart);//add the attachment to the multipart message
// combine text and attachment
message.setContent(multipart);
// send the complete message
Transport.send(message, from, password);
}
else{// if no file has been attached
message.setText(message_body);
Transport.send(message, from, password);
}
JOptionPane.showMessageDialog(this, "Message Sent!","Sent",JOptionPane.INFORMATION_MESSAGE);
filename = "";//reset filename to null after message is sent
fileAbsolutePath = "";//reset absolute path name to null after message is sent
// save sent message
Store store = session.getStore("imap");
store.connect(from, password);
Folder folder = store.getFolder("Sent");
if(!folder.exists()){
folder.create(Folder.HOLDS_MESSAGES);
Message[] msg = new Message[1];
msg[0] = message;
folder.appendMessages(msg);
}
store.close();
} catch (Exception e) {
JOptionPane.showMessageDialog(this, e.toString());
}
}
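As for the saving part: note that in the code above, folder.appendMessages(msg) sits inside the if(!folder.exists()) block, so the sent message is only appended when the "Sent" folder had to be created. A minimal sketch of the save step with the append moved outside that check (the folder name "Sent" is an assumption; Yahoo may expose it under a different name):

// save sent message, whether or not the folder already exists
Store store = session.getStore("imap");
store.connect(from, password);
Folder folder = store.getFolder("Sent");
if (!folder.exists()) {
    folder.create(Folder.HOLDS_MESSAGES);
}
folder.appendMessages(new Message[] { message });
store.close();

The collect(BufferedReader) helper, by contrast, is only needed when the message body is read from a Reader (a file or the console) rather than from a text component as here.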

Use a TemporaryQueue on client-side for synchronous Request/Reply JMS with a JBoss server bean

I have an MDB running on JBoss 7.1, and a simple Java application as a client on another machine. The goal is the following:
the client sends a request (ObjectMessage) to the server
the server processes the request and sends back a response to the client (ObjectMessage again)
I thought to use a TemporaryQueue on the client to listen for the response (because I don't know how to do it asynchronously), and the JMSReplyTo message property to reply back correctly, because I must support multiple independent clients.
This is the client:
public class MessagingService{
private static final String JBOSS_HOST = "localhost";
private static final int JBOSS_PORT = 5455;
private static Map connectionParams = new HashMap();
private Window window;
private Queue remoteQueue;
private TemporaryQueue localQueue;
private ConnectionFactory connectionFactory;
private Connection connection;
private Session session;
public MessagingService(Window myWindow){
this.window = myWindow;
MessagingService.connectionParams.put(TransportConstants.PORT_PROP_NAME, JBOSS_PORT);
MessagingService.connectionParams.put(TransportConstants.HOST_PROP_NAME, JBOSS_HOST);
TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName(), connectionParams);
this.connectionFactory = (ConnectionFactory) HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration);
}
public void sendRequest(ClientRequest request) {
try {
connection = connectionFactory.createConnection();
this.session = connection.createSession(false, QueueSession.AUTO_ACKNOWLEDGE);
this.remoteQueue = HornetQJMSClient.createQueue("testQueue");
this.localQueue = session.createTemporaryQueue();
MessageProducer producer = session.createProducer(remoteQueue);
MessageConsumer consumer = session.createConsumer(localQueue);
ObjectMessage message = session.createObjectMessage();
message.setObject(request);
message.setJMSReplyTo(localQueue);
producer.send(message);
connection.start(); // without start(), receive() will never deliver messages
ObjectMessage response = (ObjectMessage) consumer.receive();
ServerResponse serverResponse = (ServerResponse) response.getObject();
this.window.dispatchResponse(serverResponse);
this.session.close();
} catch (JMSException e) {
// TODO: split and differentiate exception handling
e.printStackTrace();
}
}
Now I'm having trouble writing the server side, as I cannot figure out how to establish a Connection to a TemporaryQueue...
public void onMessage(Message message) {
try {
if (message instanceof ObjectMessage) {
Destination replyDestination = message.getJMSReplyTo();
ObjectMessage objectMessage = (ObjectMessage) message;
ClientRequest request = (ClientRequest) objectMessage.getObject();
System.out.println("Queue: I received an ObjectMessage at " + new Date());
System.out.println("Client Request Details: ");
System.out.println(request.getDeparture());
System.out.println(request.getArrival());
System.out.println(request.getDate());
System.out.println("Replying...");
// no idea what to do here
Connection connection = ? ? ? ? ? ? ? ?
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer replyProducer = session.createProducer(replyDestination);
ServerResponse serverResponse = new ServerResponse("TEST RESPONSE");
ObjectMessage response = session.createObjectMessage();
response.setObject(serverResponse);
replyProducer.send(response);
} else {
System.out.println("Not a valid message for this Queue MDB");
}
} catch (JMSException e) {
e.printStackTrace();
}
}
I cannot figure out what I am missing.
You are asking the wrong question here. You should look at how to create a Connection inside any bean:
you need to get the ConnectionFactory and create the connection accordingly.
For more information, look at the Java EE examples in the HornetQ download;
specifically, look at javaee/mdb-tx-send/.
@MessageDriven(name = "MDBMessageSendTxExample",
activationConfig =
{
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/testQueue")
})
public class MDBMessageSendTxExample implements MessageListener
{
@Resource(mappedName = "java:/JmsXA")
ConnectionFactory connectionFactory;
public void onMessage(Message message)
{
Connection conn = null;
try
{
// your code here... e.g. grab the reply destination from the request:
Destination replyDestination = message.getJMSReplyTo();
//Step 11. we create a JMS connection
conn = connectionFactory.createConnection();
//Step 12. We create a JMS session
Session sess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
//Step 13. we create a producer for the reply queue
MessageProducer producer = sess.createProducer(replyDestination);
//Step 14. we create a message and send it
producer.send(sess.createTextMessage("this is a reply"));
}
catch (Exception e)
{
e.printStackTrace();
}
finally
{
if(conn != null)
{
try
{
conn.close();
}
catch (JMSException e)
{
}
}
}
}
}