Not getting results from ksqlDB streamQuery integrated with Java: client log shows "Not Completed"

I am using Confluent Platform version 6.0, downloaded from https://www.confluent.io/download/
I am referring to this article:
https://docs.ksqldb.io/en/latest/developer-guide/ksqldb-clients/java-client/
and this video: https://www.youtube.com/watch?v=85udigshlNI
With Java producer code I am able to send values to ksqlDB, but I am not able to retrieve them.
When I print the log for the streamQuery result, I get a "Not Completed" message.
I am using this Maven dependency:
<dependencies>
    <dependency>
        <groupId>io.confluent.ksql</groupId>
        <artifactId>ksqldb-api-client</artifactId>
        <version>${ksqldb.version}</version>
    </dependency>
</dependencies>
Java code:
public class ExampleApp {
    public static String KSQLDB_SERVER_HOST = "localhost";
    public static int KSQLDB_SERVER_HOST_PORT = 8088;

    public static void main(String[] args) throws Exception {
        ClientOptions options = ClientOptions.create()
                .setHost(KSQLDB_SERVER_HOST)
                .setPort(KSQLDB_SERVER_HOST_PORT);
        Client client = Client.create(options);
        StreamedQueryResult streamedQueryResult =
                client.streamQuery("SELECT * FROM MY_STREAM EMIT CHANGES;").get();
        for (int i = 0; i < 10; i++) {
            // Block until a new row is available
            Row row = streamedQueryResult.poll();
            if (row != null) {
                System.out.println("Received a row!");
                System.out.println("Row: " + row.values());
            } else {
                System.out.println("Query has ended.");
            }
        }
        client.close();
    }
}
Output:
get() waits for a long time, even after I add values to the topic, and finally throws a timeout exception.
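One thing worth checking (a hedged suggestion, not from the original post): a push query with EMIT CHANGES only returns rows produced after the query starts unless auto.offset.reset is set to earliest. The Java client's streamQuery overload accepts query properties, so a sketch like the following would replay records already on the topic; MY_STREAM is the stream name assumed from the code above.

import java.util.Collections;
import java.util.Map;

// a sketch, not verified against the poster's setup: ask ksqlDB to start
// from the beginning of the topic so pre-existing rows are also returned
Map<String, Object> properties = Collections.singletonMap("auto.offset.reset", "earliest");
StreamedQueryResult streamedQueryResult =
        client.streamQuery("SELECT * FROM MY_STREAM EMIT CHANGES;", properties).get();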

Related

Spring Integration TCP/IP client application: SSL not working and sending incomplete requests

I am new to the Spring framework. We have a requirement where our application acts as a client and needs to integrate with another application over TCP. We will be sending fixed-length requests and receiving responses for them. We have been asked to use the same TCP connection for each request. Over the same open connection, our application will also receive heartbeat messages from the server application, and we do not need to send any response to them.
The request messages we send are header + body, where the header carries the message type and length details.
We will be using SSL. When we test with SSL, there is no exception during getConnection, but we are not able to receive any heartbeat messages.
When we test without SSL, we are able to send requests and receive responses as well as heartbeat messages. But after the first request/response, the client sends partial request text to the server application for subsequent messages, which causes issues; connections are reset by peer due to the unexpected messages received at their end.
I have tried many things from the documentation available online but have not been able to implement the requirement successfully.
Please find the code below. Thanks in advance.
public class ClientConfig implements ApplicationEventPublisherAware {

    protected String port;
    protected String host;
    protected String connectionTimeout;
    protected String keyStorePath;
    protected String trustStorePath;
    protected String keyStorePassword;
    protected String trustStorePassword;
    protected String protocol;

    private ApplicationEventPublisher applicationEventPublisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
        this.applicationEventPublisher = applicationEventPublisher;
    }

    @Bean
    public DefaultTcpNioSSLConnectionSupport connectionSupport() {
        if ("SSL".equalsIgnoreCase(getProtocol())) {
            DefaultTcpSSLContextSupport sslContextSupport =
                    new DefaultTcpSSLContextSupport(getKeyStorePath(),
                            getTrustStorePath(), getKeyStorePassword(), getTrustStorePassword());
            sslContextSupport.setProtocol(getProtocol());
            return new DefaultTcpNioSSLConnectionSupport(sslContextSupport);
        }
        return null;
    }

    @Bean
    public AbstractClientConnectionFactory clientConnectionFactory() {
        if (StringUtils.isNullOrEmptyTrim(getHost()) || StringUtils.isNullOrEmptyTrim(getPort())) {
            return null;
        }
        TcpNioClientConnectionFactory tcpNioClientConnectionFactory =
                new TcpNioClientConnectionFactory(getHost(), Integer.valueOf(getPort()));
        tcpNioClientConnectionFactory.setApplicationEventPublisher(applicationEventPublisher);
        tcpNioClientConnectionFactory.setSoKeepAlive(true);
        tcpNioClientConnectionFactory.setDeserializer(new CustomSerializerDeserializer());
        tcpNioClientConnectionFactory.setSerializer(new CustomSerializerDeserializer());
        tcpNioClientConnectionFactory.setLeaveOpen(true);
        tcpNioClientConnectionFactory.setSingleUse(false);
        if ("SSL".equalsIgnoreCase(getProtocol())) {
            tcpNioClientConnectionFactory.setSslHandshakeTimeout(60);
            tcpNioClientConnectionFactory.setTcpNioConnectionSupport(connectionSupport());
        }
        return tcpNioClientConnectionFactory;
    }

    @Bean
    public MessageChannel outboundChannel() {
        return new DirectChannel();
    }

    @Bean
    public PollableChannel receiverChannel() {
        return new QueueChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "outboundChannel")
    public TcpSendingMessageHandler outboundClient(
            AbstractClientConnectionFactory clientConnectionFactory) {
        TcpSendingMessageHandler outbound = new TcpSendingMessageHandler();
        outbound.setConnectionFactory(clientConnectionFactory);
        if (!StringUtils.isNullOrEmpty(getConnectionTimeout())) {
            long timeout = Long.valueOf(getConnectionTimeout());
            outbound.setRetryInterval(TimeUnit.SECONDS.toMillis(timeout));
        }
        outbound.setClientMode(true);
        return outbound;
    }

    @Bean
    public TcpReceivingChannelAdapter inboundClient(TcpNioClientConnectionFactory connectionFactory) {
        TcpReceivingChannelAdapter inbound = new TcpReceivingChannelAdapter();
        inbound.setConnectionFactory(connectionFactory);
        if (!StringUtils.isNullOrEmpty(getConnectionTimeout())) {
            long timeout = Long.valueOf(getConnectionTimeout());
            inbound.setRetryInterval(TimeUnit.SECONDS.toMillis(timeout));
        }
        inbound.setOutputChannel(receiverChannel());
        inbound.setClientMode(true);
        return inbound;
    }
}
public class CustomSerializerDeserializer implements Serializer<String>, Deserializer<String> {

    @Override
    public String deserialize(InputStream inputStream) throws IOException {
        int i = 0;
        byte[] lenbuf = new byte[8];
        String message = null;
        // Note: InputStream.read may return fewer bytes than requested, so the
        // single reads below assume the full header/body arrives in one chunk.
        while ((i = inputStream.read(lenbuf)) != -1) {
            String messageType = new String(lenbuf);
            if (messageType.contains(APP_DATA_LEN)) {
                byte[] byteResp = new byte[RESP_MSG_LEN - 8];
                inputStream.read(byteResp, 0, RESP_MSG_LEN - 8);
                String readMsg = new String(byteResp);
                message = messageType + readMsg;
            } else {
                byte[] byteResp = new byte[HANDSHAKE_LEN - 8];
                inputStream.read(byteResp, 0, HANDSHAKE_LEN - 8);
                String readMsg = new String(byteResp);
                message = messageType + readMsg;
            }
        }
        return message;
    }

    @Override
    public void serialize(String object, OutputStream outputStream) throws IOException {
        outputStream.write(object.getBytes());
        outputStream.flush();
    }
}
@Override
public String sendMessage(String message) {
    Message<String> request = MessageBuilder.withPayload(message).build();
    DirectChannel outboundChannel = (DirectChannel) applicationContext.getBean(DirectChannel.class);
    outboundChannel.send(request);
    return message; // the original snippet omitted the return statement
}

// The code below is used to open the connection
TcpNioClientConnectionFactory cf = (TcpNioClientConnectionFactory) applicationContext.getBean(AbstractClientConnectionFactory.class);
if (cf != null) {
    TcpNioConnection conn = (TcpNioConnection) cf.getConnection();
}
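One detail that may relate to the partial-message symptom (a hedged observation, not from the original post): the deserializer above calls InputStream.read once per field, but read may deliver fewer bytes than requested, and with SSL/NIO the decrypted bytes often arrive fragmented. A minimal sketch of a helper that reads an exact number of bytes, under the assumption that RESP_MSG_LEN and HANDSHAKE_LEN frames are fixed-length as described:

import java.io.IOException;
import java.io.InputStream;

// a sketch, not the poster's code: read exactly `len` bytes or fail, because
// a single InputStream.read call may return a partial buffer
static void readFully(InputStream in, byte[] buf, int off, int len) throws IOException {
    int total = 0;
    while (total < len) {
        int n = in.read(buf, off + total, len - total);
        if (n == -1) {
            throw new IOException("Stream closed after " + total + " of " + len + " bytes");
        }
        total += n;
    }
}

In deserialize(...), this would replace both the 8-byte header read and the body reads such as inputStream.read(byteResp, 0, RESP_MSG_LEN - 8).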

Curator ServiceCacheListener is triggered three times when a service is added

I am learning ZooKeeper and trying out the Curator framework for service discovery. However, I am facing a weird issue that I am having difficulty figuring out. The problem is that when I register an instance via serviceDiscovery, the cacheChanged event of the serviceCache gets triggered three times. When I remove an instance, it is only triggered once, which is the expected behavior. Please see the code below:
public class DiscoveryExample {

    private static String PATH = "/base";
    static ServiceDiscovery<InstanceDetails> serviceDiscovery = null;

    public static void main(String[] args) throws Exception {
        CuratorFramework client = null;
        try {
            // this is the ip address of my VM
            client = CuratorFrameworkFactory.newClient("192.168.149.129:2181",
                    new ExponentialBackoffRetry(1000, 3));
            client.start();
            JsonInstanceSerializer<InstanceDetails> serializer =
                    new JsonInstanceSerializer<InstanceDetails>(InstanceDetails.class);
            serviceDiscovery = ServiceDiscoveryBuilder.builder(InstanceDetails.class)
                    .client(client)
                    .basePath(PATH)
                    .serializer(serializer)
                    .build();
            serviceDiscovery.start();
            ServiceCache<InstanceDetails> serviceCache = serviceDiscovery.serviceCacheBuilder()
                    .name("product")
                    .build();
            serviceCache.addListener(new ServiceCacheListener() {
                @Override
                public void stateChanged(CuratorFramework curator, ConnectionState state) {
                    System.out.println("State Changed to " + state.name());
                }

                // THIS IS THE PART THAT GETS TRIGGERED MULTIPLE TIMES
                @Override
                public void cacheChanged() {
                    System.out.println("Cache Changed");
                    List<ServiceInstance<InstanceDetails>> list = serviceCache.getInstances();
                    Iterator<ServiceInstance<InstanceDetails>> it = list.iterator();
                    while (it.hasNext()) {
                        System.out.println(it.next().getAddress());
                    }
                }
            });
            serviceCache.start();
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            System.out.print("> ");
            String line = in.readLine();
        } finally {
            CloseableUtils.closeQuietly(serviceDiscovery);
            CloseableUtils.closeQuietly(client);
        }
    }
}
AND
public class RegisterApplicationServer {

    final static String PATH = "/base";
    static ServiceDiscovery<InstanceDetails> serviceDiscovery = null;

    public static void main(String[] args) throws Exception {
        CuratorFramework client = null;
        try {
            client = CuratorFrameworkFactory.newClient("192.168.149.129:2181",
                    new ExponentialBackoffRetry(1000, 3));
            client.start();
            JsonInstanceSerializer<InstanceDetails> serializer =
                    new JsonInstanceSerializer<InstanceDetails>(InstanceDetails.class);
            serviceDiscovery = ServiceDiscoveryBuilder.builder(InstanceDetails.class).client(client)
                    .basePath(PATH).serializer(serializer).build();
            serviceDiscovery.start();
            // SOME OTHER CODE THAT TAKES CARE OF USER INPUT...
        } finally {
            CloseableUtils.closeQuietly(serviceDiscovery);
            CloseableUtils.closeQuietly(client);
        }
    }

    private static void addInstance(String[] args, CuratorFramework client, String command,
            ServiceDiscovery<InstanceDetails> serviceDiscovery) throws Exception {
        // simulate a new instance coming up
        // in a real application, this would be a separate process
        if (args.length < 2) {
            System.err.println("syntax error (expected add <name> <description>): " + command);
            return;
        }
        StringBuilder description = new StringBuilder();
        for (int i = 1; i < args.length; ++i) {
            if (i > 1) {
                description.append(' ');
            }
            description.append(args[i]);
        }
        String serviceName = args[0];
        ApplicationServer server = new ApplicationServer(client, PATH, serviceName, description.toString());
        server.start();
        serviceDiscovery.registerService(server.getThisInstance());
        System.out.println(serviceName + " added");
    }

    private static void deleteInstance(String[] args, String command,
            ServiceDiscovery<InstanceDetails> serviceDiscovery) throws Exception {
        // in a real application, this would occur due to normal operation, a
        // crash, maintenance, etc.
        if (args.length != 2) {
            System.err.println("syntax error (expected delete <name>): " + command);
            return;
        }
        final String serviceName = args[0];
        Collection<ServiceInstance<InstanceDetails>> set = serviceDiscovery.queryForInstances(serviceName);
        Iterator<ServiceInstance<InstanceDetails>> it = set.iterator();
        while (it.hasNext()) {
            ServiceInstance<InstanceDetails> si = it.next();
            if (si.getPayload().getDescription().indexOf(args[1]) != -1) {
                serviceDiscovery.unregisterService(si);
            }
        }
        System.out.println("Removed an instance of: " + serviceName);
    }
}
I would appreciate it if anyone could point out what I am doing wrong, and perhaps share some good materials/examples I can refer to. The official website and the examples on GitHub do not help a lot.
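While investigating the triple trigger, one way to make the listener idempotent (a hedged sketch, under the assumption that the redundant callbacks observe the same instance list) is to remember the last-seen addresses and only act when they actually change. It uses only java.util.concurrent.atomic.AtomicReference, java.util.stream.Collectors, and the Curator API already shown above:

// a sketch: ignore cacheChanged callbacks that observe an unchanged instance list
final AtomicReference<List<String>> lastSeen = new AtomicReference<>(Collections.emptyList());
serviceCache.addListener(new ServiceCacheListener() {
    @Override
    public void stateChanged(CuratorFramework curator, ConnectionState state) {
        System.out.println("State Changed to " + state.name());
    }

    @Override
    public void cacheChanged() {
        List<String> addresses = serviceCache.getInstances().stream()
                .map(ServiceInstance::getAddress)
                .sorted()
                .collect(Collectors.toList());
        // only act when the membership actually changed
        if (!addresses.equals(lastSeen.getAndSet(addresses))) {
            System.out.println("Instances changed: " + addresses);
        }
    }
});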

Why does the Kryo serialization framework in Apache Storm overwrite data when the bolt gets the values?

Most developers probably use Avro as the serialization framework with Kafka and Apache Storm schemes. But I need to handle more complex data, so I tried the Kryo serialization framework and successfully integrated it into our project, which runs in a Kafka and Apache Storm environment. However, when going further, I ran into strange behavior.
I sent 5 messages to Kafka, and the Storm job can read the 5 messages and deserialize them successfully. But the next bolt gets the wrong values: it prints the same value as the last message. I then added a print-out right after the deserialization code completes, and it actually prints 5 different messages. So why doesn't the next bolt get the values? See my code below:
KryoScheme.java
public abstract class KryoScheme<T> implements Scheme {

    private static final long serialVersionUID = 6923985190833960706L;
    private static final Logger logger = LoggerFactory.getLogger(KryoScheme.class);

    private Class<T> clazz;
    private Serializer<T> serializer;

    public KryoScheme(Class<T> clazz, Serializer<T> serializer) {
        this.clazz = clazz;
        this.serializer = serializer;
    }

    @Override
    public List<Object> deserialize(byte[] buffer) {
        Kryo kryo = new Kryo();
        kryo.register(clazz, serializer);
        T scheme = null;
        try {
            scheme = kryo.readObject(new Input(new ByteArrayInputStream(buffer)), this.clazz);
            logger.info("{}", scheme);
        } catch (Exception e) {
            String errMsg = String.format("Kryo Scheme failed to deserialize data from Kafka to %s. Raw: %s",
                    clazz.getName(),
                    new String(buffer));
            logger.error(errMsg, e);
            throw new FailedException(errMsg, e);
        }
        return new Values(scheme);
    }
}
PrintFunction.java
public class PrintFunction extends BaseFunction {

    private static final Logger logger = LoggerFactory.getLogger(PrintFunction.class);

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        List<Object> data = tuple.getValues();
        if (data != null) {
            logger.info("Scheme data size: {}", data.size());
            for (Object value : data) {
                PrintOut out = (PrintOut) value;
                logger.info("{}.{}--value: {}",
                        Thread.currentThread().getName(),
                        Thread.currentThread().getId(),
                        out.toString());
                collector.emit(new Values(out));
            }
        }
    }
}
StormLocalTopology.java
public class StormLocalTopology {
public static void main(String[] args) {
........
BrokerHosts zk = new ZkHosts("xxxxxx");
Config stormConf = new Config();
stormConf.put(Config.TOPOLOGY_DEBUG, false);
stormConf.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 1000 * 5);
stormConf.put(Config.TOPOLOGY_WORKERS, 1);
stormConf.put(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, 5);
stormConf.put(Config.TOPOLOGY_TASKS, 1);
TridentKafkaConfig actSpoutConf = new TridentKafkaConfig(zk, topic);
actSpoutConf.fetchSizeBytes = 5 * 1024 * 1024 ;
actSpoutConf.bufferSizeBytes = 5 * 1024 * 1024 ;
actSpoutConf.scheme = new SchemeAsMultiScheme(scheme);
actSpoutConf.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
TridentTopology topology = new TridentTopology();
TransactionalTridentKafkaSpout actSpout = new TransactionalTridentKafkaSpout(actSpoutConf);
topology.newStream(topic, actSpout).parallelismHint(4).shuffle()
.each(new Fields("act"), new PrintFunction(), new Fields());
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(topic+"Topology", stormConf, topology.build());
}}
There is also another problem: why can the Kryo scheme only read one message buffer? Is there another way to get multiple message buffers so that data can be batch-sent to the next bolt?
Also, if I send 1 message, the full flow seems to succeed.
Sending 2 messages goes wrong. The printed output looks like this:
56157 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.122+0800,T6mdfEW#N5pEtNBW
56160 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56160 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56161 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
I'm sorry, this was my mistake. I just found a bug in the Kryo deserializer class: it kept state in an instance-scope field, so the value could be overwritten in a multi-threaded environment. After removing the field so the state no longer lives in class scope, the code runs well.
See the reference code below:
public class KryoSerializer<T extends BasicEvent> extends Serializer<T> implements Serializable {

    private static final long serialVersionUID = -4684340809824908270L;

    // The bug: this instance-scope field was shared across threads, so
    // concurrent deserialization overwrote its value. It has been removed.
    // private T event;

    public KryoSerializer(T event) {
        // the event is no longer stored in a field (see the bug note above)
    }

    @Override
    public void write(Kryo kryo, Output output, T event) {
        event.write(output);
    }

    @Override
    public T read(Kryo kryo, Input input, Class<T> type) {
        try {
            // create a fresh instance per call; `new T()` from the original
            // snippet is not valid Java, so reflection is used here
            T event = type.newInstance();
            event.read(input);
            return event;
        } catch (InstantiationException | IllegalAccessException e) {
            throw new RuntimeException("Failed to instantiate " + type.getName(), e);
        }
    }
}
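Beyond removing the shared field, a related pattern worth noting (not from the original post): Kryo instances themselves are not thread-safe either, so when one must be reused across threads it is commonly held in a ThreadLocal. A minimal sketch, with the registration left as a placeholder:

import com.esotericsoftware.kryo.Kryo;

// a sketch: give each thread its own Kryo instance instead of sharing one,
// since Kryo objects are not safe for concurrent use
public final class KryoHolder {
    private static final ThreadLocal<Kryo> KRYO = ThreadLocal.withInitial(() -> {
        Kryo kryo = new Kryo();
        // register classes/serializers here, as in KryoScheme above
        return kryo;
    });

    public static Kryo get() {
        return KRYO.get();
    }
}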

Sending a message with an external call in Netty socket programming

I'm new to socket programming and the Netty framework. I was trying to modify the Echo Server example so that a message is not sent from the client as soon as a message is received, but instead a call from another thread triggers the client to send a message to the server.
The problem is that the server does not get the message unless the client sends it from channelRead or messageReceived or channelActive, which are where the server is provided as a parameter (ChannelHandlerContext). I couldn't manage to find a way to save the server channel and send a message later and repeatedly.
Here's my client handler code:
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;

public class EchoClientHandler extends ChannelHandlerAdapter {

    ChannelHandlerContext server;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        this.server = ctx;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // ctx.write(msg); //not
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        //ctx.flush();
    }

    public void externalcall(String msg) throws Exception {
        if (server != null) {
            server.writeAndFlush("[" + "] " + msg + '\n');
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Close the connection when an exception is raised.
        ctx.close();
    }
}
When the client creates the handler, it also creates a thread with a SourceGenerator object, which gets the handler as a parameter so that it can call the externalcall() method.
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

/**
 * Sends one message when a connection is open and echoes back any received
 * data to the server. Simply put, the echo client initiates the ping-pong
 * traffic between the echo client and server by sending the first message to
 * the server.
 */
public class EchoClient {

    private final String host;
    private final int port;

    public EchoClient(String host, int port, int firstMessageSize) {
        this.host = host;
        this.port = port;
    }

    public void run() throws Exception {
        // Configure the client.
        EventLoopGroup group = new NioEventLoopGroup();
        final EchoClientHandler x = new EchoClientHandler();
        SourceGenerator sg = new SourceGenerator(x);
        new Thread(sg).start();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
                    .channel(NioSocketChannel.class)
                    .option(ChannelOption.TCP_NODELAY, true)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(x);
                        }
                    });
            // Start the client.
            ChannelFuture f = b.connect(host, port).sync();
            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            // Shut down the event loop to terminate all threads.
            group.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        // Print usage if no argument is specified.
        if (args.length < 2 || args.length > 3) {
            System.err.println(
                    "Usage: " + EchoClient.class.getSimpleName() +
                    " <host> <port> [<first message size>]");
            return;
        }
        // Parse options.
        final String host = args[0];
        final int port = Integer.parseInt(args[1]);
        final int firstMessageSize;
        if (args.length == 3) {
            firstMessageSize = Integer.parseInt(args[2]);
        } else {
            firstMessageSize = 256;
        }
        new EchoClient(host, port, firstMessageSize).run();
    }
}
and the SourceGenerator class;
public class SourceGenerator implements Runnable {

    public String dat;
    public EchoClientHandler asd;

    public SourceGenerator(EchoClientHandler x) {
        asd = x;
        System.out.println("initialized source generator");
        dat = "";
    }

    @Override
    public void run() {
        try {
            while (true) {
                Thread.sleep(2000);
                dat += "a";
                asd.externalcall(dat);
                System.out.print("ha!");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Thanks in advance!
If you want to write a String, you need to have a StringEncoder in the ChannelPipeline. Otherwise you can only send ByteBuf instances.
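For reference, a minimal sketch of wiring that into the client's initChannel from the question (the StringDecoder is included as well, under the assumption that the handler also wants to read Strings back):

import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.CharsetUtil;

// a sketch: with these codecs in the pipeline, the handler can write and
// read String payloads instead of raw ByteBuf instances
ch.pipeline().addLast(new StringDecoder(CharsetUtil.UTF_8));
ch.pipeline().addLast(new StringEncoder(CharsetUtil.UTF_8));
ch.pipeline().addLast(x); // the EchoClientHandler from the question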

Implementing MDB Pool Listener in JBoss JMS

I have an application deployed in JBoss with multiple MDBs using the JBoss JMS implementation, each one with a different MDB pool size configuration. I am looking for some kind of mechanism, such as a listener on each MDB pool, that lets us check whether at any point all instances in the MDB pool are being utilized. This would help in analyzing and configuring the appropriate MDB pool size for each MDB.
We use Jamon to monitor instances of MDBs, like this:
@MessageDriven
@TransactionManagement(value = TransactionManagementType.CONTAINER)
@TransactionAttribute(value = TransactionAttributeType.REQUIRED)
@ResourceAdapter("wmq.jmsra.rar")
@AspectDomain("YourDomainName")
public class YourMessageDrivenBean implements MessageListener
{
    // jamon package constant
    protected static final String WB_ONMESSAGE = "wb.onMessage";

    // instance counter
    private static AtomicInteger counter = new AtomicInteger(0);
    private int instanceIdentifier = 0;

    @Resource
    MessageDrivenContext ctx;

    @Override
    public void onMessage(Message message)
    {
        final Monitor monall = MonitorFactory.start(WB_ONMESSAGE);
        final Monitor mon = MonitorFactory.start(WB_ONMESSAGE + "." + toString()
                + "; mdb instance identifier=" + instanceIdentifier);
        try {
            // process your message here
        } catch (final Exception x) {
            log.error("Error onMessage " + x.getMessage(), x);
            ctx.setRollbackOnly();
        } finally {
            monall.stop();
            mon.stop();
        }
    }

    @PostConstruct
    public void init()
    {
        instanceIdentifier = counter.incrementAndGet();
        log.debug("constructed instance #" + instanceIdentifier);
    }
}
You can then see every created instance of your MDB in the JAMon monitor.
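To check pool saturation programmatically with the counters above, JAMon's Monitor also exposes concurrency gauges. A hedged sketch, assuming the wb.onMessage label from the code and JAMon's default "ms." units for timer monitors (double-check against your JAMon version):

import com.jamonapi.Monitor;
import com.jamonapi.MonitorFactory;

// a sketch: read concurrency stats for the onMessage label defined above
Monitor mon = MonitorFactory.getMonitor("wb.onMessage", "ms.");
double active = mon.getActive();       // onMessage invocations currently in flight
double maxActive = mon.getMaxActive(); // peak concurrency seen so far

If maxActive regularly reaches the configured MDB pool size, the pool is likely the bottleneck and a larger size is worth testing.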