Flink: handling Kafka messages with parsing errors

I have Kafka messages of type InputIoTMessage coming in from Kafka, consumed through a FlinkKafkaConsumer as below. I want to add an error field to the InputIoTMessage class if there is a NoSuchFieldException. Also, is this the best practice for handling this type of scenario, or is there something more elegant in Java 8, e.g. using Optional or Future?
String inputTopic = "sensors";
String outputTopic = "sensors_out";
String consumerGroup = "baeldung";
String address = "kafka:9092";
StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
FlinkKafkaConsumer011<InputIoTMessage> flinkKafkaConsumer = createIoTConsumerForTopic(inputTopic, address, consumerGroup);
flinkKafkaConsumer.setStartFromEarliest();
DataStream<InputIoTMessage> stringInputStream = environment.addSource(flinkKafkaConsumer);
System.out.println("IoT Message received :: " );
stringInputStream
.filter((event) -> {
if(event.has("jsonParseError")) {
LOG.warn("JsonParseException was handled: " + event.get("jsonParseError").asText());
return false;
}
return true;
})
.print();
InputIoTMessage.java (has a method to check whether a field exists):
public boolean has(String fieldName) {
    boolean isExists;
    try {
        isExists = fieldName.equalsIgnoreCase(this.getClass().getField(fieldName).getName());
    } catch (NoSuchFieldException | SecurityException e) {
        isExists = false;
        Field[] fieldArr = this.getClass().getDeclaredFields();
        // Question: how to add a "jsonParseError" field to the object here?
    }
    return isExists;
}

The filter function cannot modify the input records. You could implement a flatMap function instead: modify the record there and emit it through out.collect:
stringInputStream.flatMap(new FlatMapFunction<InputIoTMessage, InputIoTMessage>() {
    @Override
    public void flatMap(InputIoTMessage input, Collector<InputIoTMessage> out) {
        if (!input.has("jsonParseError")) {
            InputIoTMessage output = xxxxx; // build the modified/enriched record here
            out.collect(output);
        }
    }
});
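An alternative is to tag the parse error where it actually happens, in the Kafka deserialization schema, so the downstream filter or flatMap only has to check a plain getter instead of using reflection. A minimal, untested sketch, assuming InputIoTMessage is a Jackson-mappable POJO with a no-arg constructor and a jsonParseError property with a setter (the setter name is hypothetical):
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;

// Sketch only: tag records that fail JSON parsing instead of failing the job.
// Assumes InputIoTMessage has a no-arg constructor and a setJsonParseError(String) setter.
public class InputIoTMessageDeserializationSchema extends AbstractDeserializationSchema<InputIoTMessage> {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public InputIoTMessage deserialize(byte[] message) {
        try {
            return MAPPER.readValue(message, InputIoTMessage.class);
        } catch (Exception e) {
            // Emit a marker record; a downstream filter can log and drop it.
            InputIoTMessage errorRecord = new InputIoTMessage();
            errorRecord.setJsonParseError(e.getMessage());
            return errorRecord;
        }
    }
}
With such a schema plugged into FlinkKafkaConsumer011, the filter from the question only needs to check the error property, and no reflection over declared fields is required.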

Related

HttpMessageConverter - AVRO to JSON to AVRO

I'm looking for an easy-to-use solution that allows sending all sorts of AVRO objects I read from a Kafka stream to synchronous recipients via REST. This can be single objects as well as collections or arrays of the same object type. The same applies to a binary format (between instances, e.g. for the interactive query feature, where the record could be on another node). And finally I need to support compression.
There are several sources discussing Spring solutions for HttpMessageConverter that simplify the handling of domain objects in microservices using Kafka and AVRO, e.g.:
Apache Avro Serialization with Spring MVC
Avro Converter. Serializing Apache Avro Objects via REST API and Other Transformations
The solutions proposed above work perfectly fine for scenarios where a single instance of an AVRO object needs to be sent or received. What is missing, however, is a solution that allows sending/receiving collections or arrays of the same AVRO objects.
For serializing a single AVRO object, the code may look like this:
public byte[] serialize(T data) throws SerializationException {
try {
byte[] result = null;
if (data != null) {
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
Encoder encoder = EncoderFactory.get().jsonEncoder(data.getSchema(), byteArrayOutputStream);
DatumWriter<T> datumWriter = new SpecificDatumWriter<>(data.getSchema());
datumWriter.write(data, encoder);
encoder.flush();
byteArrayOutputStream.close();
result = byteArrayOutputStream.toByteArray();
}
return result;
} catch (IOException e) {
throw new SerializationException("Can't serialize data='" + data + "'", e);
}
}
Similarly, the deserialization
public T deserialize(Class<? extends T> clazz, byte[] data) throws SerializationException {
try {
T result = null;
if (data != null) {
Class<? extends SpecificRecordBase> specificRecordClass =
(Class<? extends SpecificRecordBase>) clazz;
Schema schema = specificRecordClass.newInstance().getSchema();
DatumReader<T> datumReader =
new SpecificDatumReader<>(schema);
Decoder decoder = DecoderFactory.get().jsonDecoder(schema, new ByteArrayInputStream(data));
result = datumReader.read(null, decoder);
}
return result;
} catch (InstantiationException | IllegalAccessException | IOException e) {
throw new SerializationException("Can't deserialize data '" + Arrays.toString(data) + "'", e);
}
}
The examples focus on JSON, but the same principle also applies to a binary format. What is missing, however, is a solution that allows sending/receiving collections or arrays of the same AVRO objects. I therefore introduced two methods:
public byte[] serialize(final Iterator<T> iterator) throws SerializationException {
Encoder encoder = null;
DatumWriter<T> datumWriter = null;
try (final ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream()) {
while (iterator.hasNext()) {
T data = iterator.next();
if (encoder == null) {
// now that we have our first object we can get the schema
encoder = EncoderFactory.get().jsonEncoder(data.getSchema(), byteArrayOutputStream);
datumWriter = new SpecificDatumWriter<>(data.getSchema());
byteArrayOutputStream.write('[');
}
datumWriter.write(data, encoder);
if (iterator.hasNext()) {
encoder.flush();
byteArrayOutputStream.write(',');
}
}
if (encoder != null) {
encoder.flush();
byteArrayOutputStream.write(']');
return byteArrayOutputStream.toByteArray();
} else {
return null;
}
} catch (IOException e) {
throw new SerializationException("Can't serialize the data = '" + iterator + "'", e);
}
}
Deserialization is even more of a hack:
public Collection<T> deserializeCollection(final Class<? extends T> clazz, final byte[] data) throws SerializationException {
try {
if (data != null) {
final Schema schema = clazz.getDeclaredConstructor().newInstance().getSchema();
final SpecificDatumReader<T> datumReader = new SpecificDatumReader<>(schema);
final ArrayList<T> resultList = new ArrayList<>();
int i = 0;
int startRecord = 0;
int openCount = 0;
ParserStatus parserStatus = ParserStatus.NA;
while (i < data.length) {
if (parserStatus == ParserStatus.NA) {
if (data[i] == '[') {
parserStatus = ParserStatus.ARRAY;
}
} else if (parserStatus == ParserStatus.ARRAY) {
if (data[i] == '{') {
parserStatus = ParserStatus.RECORD;
openCount = 1;
startRecord = i;
// } else if (data[i] == ',') {
// ignore
} else if (data[i] == ']') {
parserStatus = ParserStatus.NA;
}
} else { // parserStatus == ParserStatus.RECORD
if (data[i] == '}') {
openCount--;
if (openCount == 0) {
// now carve out the part start - i+1 and use a datumReader to create avro object
try (final ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(data, startRecord, i - startRecord +1)) {
final Decoder decoder = DecoderFactory.get().jsonDecoder(schema, byteArrayInputStream);
final SpecificRecordBase avroRecord = datumReader.read(null, decoder);
resultList.add((T) avroRecord);
}
parserStatus = ParserStatus.ARRAY;
}
} else if (data[i] == '{') {
openCount++;
}
}
i++;
}
if (parserStatus != ParserStatus.NA) {
log.warn("Malformed json input '{}'", new String(data));
}
return resultList;
}
return null;
} catch (InstantiationException | InvocationTargetException | IllegalAccessException | NoSuchMethodException | IOException e) {
throw new SerializationException("Can't deserialize data '" + new String(data) + "'", e);
}
}
Doing the same with the binary format is far more straightforward. For serialization, one record after another can be serialized using datumWriter.write(data, encoder) with encoder = EncoderFactory.get().binaryEncoder(byteArrayOutputStream, null); no additional syntax is needed. Deserialization is similar: SpecificRecordBase avroRecord = datumReader.read(null, decoder), then add the avroRecord to the collection.
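For reference, a sketch of that binary variant (untested; it would live in the same generic serializer class as the methods above and additionally needs org.apache.avro.io.BinaryEncoder). Records are written back to back and no brackets or separators are required:
public byte[] serializeBinary(final Iterator<T> iterator) throws SerializationException {
    try (final ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream()) {
        BinaryEncoder encoder = null;
        DatumWriter<T> datumWriter = null;
        while (iterator.hasNext()) {
            final T data = iterator.next();
            if (encoder == null) {
                // as with the JSON variant, the schema is taken from the first object
                encoder = EncoderFactory.get().binaryEncoder(byteArrayOutputStream, null);
                datumWriter = new SpecificDatumWriter<>(data.getSchema());
            }
            datumWriter.write(data, encoder);
        }
        if (encoder == null) {
            return null;
        }
        encoder.flush();
        return byteArrayOutputStream.toByteArray();
    } catch (IOException e) {
        throw new SerializationException("Can't serialize the data", e);
    }
}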
For JSON, the additional syntax is needed because the recipient might use its own deserialization, e.g. to create plain POJOs, or, the other way round, might have created the input from a POJO and serialized it.
My current solution looks quite hacky to me. One option I thought of would be to create an intermediate AVRO object that only contains an array of the enclosed objects; I could then use this intermediate object for serialization and deserialization, and the out-of-the-box encoder and decoder would take over the extra logic. But introducing an extra AVRO object only for this purpose seems like unnecessary overhead.
As an alternative I've started looking into org.apache.avro.io.JsonDecoder, but I didn't see an easy way to extend it so that it could replace the home-grown solution above.
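For reference, one possible middle ground, shown here only as an untested sketch: encode the collection against an ad-hoc array schema built with Schema.createArray, so the stock JSON encoder and decoder handle the surrounding array and no dedicated wrapper record type is needed. The methods assume the same generic class as above plus org.apache.avro.generic.GenericDatumWriter and org.apache.avro.specific.SpecificData.
public byte[] serializeAsJsonArray(final Collection<T> records, final Schema elementSchema)
        throws SerializationException {
    final Schema arraySchema = Schema.createArray(elementSchema);
    try (final ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        final Encoder encoder = EncoderFactory.get().jsonEncoder(arraySchema, out);
        // GenericDatumWriter with the SpecificData model can write SpecificRecord elements
        final DatumWriter<Collection<T>> datumWriter =
                new GenericDatumWriter<>(arraySchema, SpecificData.get());
        datumWriter.write(records, encoder);
        encoder.flush();
        return out.toByteArray();
    } catch (IOException e) {
        throw new SerializationException("Can't serialize collection", e);
    }
}

public List<T> deserializeJsonArray(final Class<? extends T> clazz, final byte[] data)
        throws SerializationException {
    try {
        final Schema elementSchema = clazz.getDeclaredConstructor().newInstance().getSchema();
        final Schema arraySchema = Schema.createArray(elementSchema);
        final DatumReader<List<T>> datumReader = new SpecificDatumReader<>(arraySchema, arraySchema);
        final Decoder decoder = DecoderFactory.get().jsonDecoder(arraySchema, new ByteArrayInputStream(data));
        // returns a GenericData.Array whose elements are the generated specific record class
        return datumReader.read(null, decoder);
    } catch (ReflectiveOperationException | IOException e) {
        throw new SerializationException("Can't deserialize collection", e);
    }
}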
I'm currently extending the above to also support compression, using a decorator for compression and decompression (Deflater, GZIP and LZ4).
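To illustrate the decorator idea, here is a minimal GZIP-only sketch (the class name is made up; the Deflater and LZ4 variants would follow the same pattern). It simply wraps the byte[] produced by the serializers above:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch: compresses/decompresses the byte[] produced by the Avro serializer.
public final class GzipCodec {

    public byte[] compress(final byte[] plain) throws IOException {
        final ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
            gzip.write(plain);
        }
        return bos.toByteArray();
    }

    public byte[] decompress(final byte[] compressed) throws IOException {
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed));
             ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
            final byte[] buffer = new byte[8192];
            int read;
            while ((read = gzip.read(buffer)) != -1) {
                bos.write(buffer, 0, read);
            }
            return bos.toByteArray();
        }
    }
}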
Any help or better solution is appreciated.

Stop processing Kafka messages if something goes wrong during processing

In my Processor API implementation I store the messages in a key-value store, and every 100 messages I make a POST request. If something fails while trying to send the messages (the API is not responding, etc.), I want to stop processing messages until there is evidence that the API calls work.
Here is my code:
public class BulkProcessor implements Processor<byte[], UserEvent> {
private KeyValueStore<Integer, ArrayList<UserEvent>> keyValueStore;
private BulkAPIClient bulkClient;
private String storeName;
private ProcessorContext context;
private int count;
@Autowired
public BulkProcessor(String storeName, BulkAPIClient bulkClient) {
this.storeName = storeName;
this.bulkClient = bulkClient;
}
@Override
public void init(ProcessorContext context) {
this.context = context;
keyValueStore = (KeyValueStore<Integer, ArrayList<UserEvent>>) context.getStateStore(storeName);
count = 0;
// to check every 15 minutes if there are any remainders in the store that are not sent yet
this.context.schedule(Duration.ofMinutes(15), PunctuationType.WALL_CLOCK_TIME, (timestamp) -> {
if (count > 0) {
sendEntriesFromStore();
}
});
}
@Override
public void process(byte[] key, UserEvent value) {
int userGroupId = Integer.valueOf(value.getUserGroupId());
ArrayList<UserEvent> userEventArrayList = keyValueStore.get(userGroupId);
if (userEventArrayList == null) {
userEventArrayList = new ArrayList<>();
}
userEventArrayList.add(value);
keyValueStore.put(userGroupId, userEventArrayList);
count++;
if (count == 100) {
sendEntriesFromStore();
}
}
private void sendEntriesFromStore() {
KeyValueIterator<Integer, ArrayList<UserEvent>> iterator = keyValueStore.all();
while (iterator.hasNext()) {
KeyValue<Integer, ArrayList<UserEvent>> entry = iterator.next();
BulkRequest bulkRequest = new BulkRequest(entry.key, entry.value);
if (bulkRequest.getLocation() != null) {
URI url = bulkClient.buildURIPath(bulkRequest);
try {
bulkClient.postRequestBulkApi(url, bulkRequest);
keyValueStore.delete(entry.key);
} catch (BulkApiException e) {
logger.warn(e.getMessage(), e.fillInStackTrace());
}
}
}
iterator.close();
count = 0;
}
@Override
public void close() {
}
}
Currently, if a call to the API fails, my code will iterate over the next 100 messages (and this keeps happening as long as the call fails) and add them to the keyValueStore. I don't want this to happen. Instead I would prefer to stop the stream and continue once the keyValueStore has been emptied. Is that possible?
Could I throw a StreamsException?
try {
bulkClient.postRequestBulkApi(url, bulkRequest);
keyValueStore.delete(entry.key);
} catch (BulkApiException e) {
throw new StreamsException(e);
}
Would that kill my streams app so that the process dies?
You should only delete the record from the state store after you have made sure the record was successfully processed by the API, so remove the first keyValueStore.delete(entry.key); and keep the second one. Otherwise you can potentially lose messages when keyValueStore.delete is committed to the underlying changelog topic but your messages have not been processed successfully yet, which gives you only an at-most-once guarantee.
Just wrap the API-calling code in an infinite loop and keep trying until the record is successfully processed; your processor will not consume new messages from the upstream processor node because it runs in the same StreamThread:
private void sendEntriesFromStore() {
KeyValueIterator<Integer, ArrayList<UserEvent>> iterator = keyValueStore.all();
while (iterator.hasNext()) {
KeyValue<Integer, ArrayList<UserEvent>> entry = iterator.next();
//remove this state store delete code : keyValueStore.delete(entry.key);
BulkRequest bulkRequest = new BulkRequest(entry.key, entry.value);
if (bulkRequest.getLocation() != null) {
URI url = bulkClient.buildURIPath(bulkRequest);
while (true) {
try {
bulkClient.postRequestBulkApi(url, bulkRequest);
keyValueStore.delete(entry.key);//only delete after successfully process the message to achieve at least one processing guarantee
break;
} catch (BulkApiException e) {
logger.warn(e.getMessage(), e.fillInStackTrace());
}
}
}
}
iterator.close();
count = 0;
}
Yes, you could throw a StreamsException; the StreamTask will be migrated to another StreamThread during rebalancing, possibly on the same application instance. If the API keeps throwing exceptions until all StreamThreads have died, your application will not exit automatically; you will just see the message below. You should add a custom handler to exit your app when all stream threads have died, either using KafkaStreams#setUncaughtExceptionHandler or by listening for the stream state change (to the ERROR state):
All stream threads have died. The instance will be in error state and should be closed.
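A minimal sketch of such a shutdown hook (class and variable names are illustrative, not from the original post): register a state listener so the JVM exits once the instance reaches the ERROR state, letting an orchestrator such as Kubernetes restart it.
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;

// Sketch: exit the process when all stream threads have died, so e.g. Kubernetes restarts the pod.
public final class StreamsRunner {

    public static KafkaStreams startWithErrorExit(final Topology topology, final Properties props) {
        final KafkaStreams streams = new KafkaStreams(topology, props);
        streams.setUncaughtExceptionHandler((thread, throwable) ->
                System.err.println("Stream thread " + thread.getName() + " died: " + throwable));
        streams.setStateListener((newState, oldState) -> {
            if (newState == KafkaStreams.State.ERROR) {
                // "All stream threads have died" -> shut down so the instance can be restarted.
                System.exit(1);
            }
        });
        streams.start();
        return streams;
    }
}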
In the end I used a plain KafkaConsumer instead of Kafka Streams, but the bottom line was that I changed BulkApiException to extend RuntimeException, which I throw again after I log it. So now it looks as follows:
} catch (BulkApiException bae) {
logger.error(bae.getMessage(), bae.fillInStackTrace());
throw new BulkApiException();
} finally {
consumer.close();
int exitCode = SpringApplication.exit(ctx, () -> 1);
System.exit(exitCode);
}
This way the application exits and Kubernetes restarts the pod. The reasoning is that if the API I'm trying to forward the requests to is down, there is no point in continuing to read messages. So until the other API is back up, Kubernetes will keep restarting the pod.

Why does the Kryo serialization framework in Apache Storm overwrite data when the bolt gets values?

Most developers probably use AVRO as the serialization framework for the Kafka and Apache Storm scheme. But I need to handle more complex data, so I chose the Kryo serialization framework and successfully integrated it into our project, which runs in a Kafka and Apache Storm environment. However, when taking it further, I ran into a strange situation.
I sent 5 messages to Kafka, and the Storm job can read the 5 messages and deserialize them successfully. But the next bolt gets the wrong data values: it prints the same value as the last message. I then added a print-out right after the deserialization code completes, and it actually prints 5 different messages. Why doesn't the next bolt get the correct values? See my code below:
KryoScheme.java
public abstract class KryoScheme<T> implements Scheme {
private static final long serialVersionUID = 6923985190833960706L;
private static final Logger logger = LoggerFactory.getLogger(KryoScheme.class);
private Class<T> clazz;
private Serializer<T> serializer;
public KryoScheme(Class<T> clazz, Serializer<T> serializer) {
this.clazz = clazz;
this.serializer = serializer;
}
@Override
public List<Object> deserialize(byte[] buffer) {
Kryo kryo = new Kryo();
kryo.register(clazz, serializer);
T scheme = null;
try {
scheme = kryo.readObject(new Input(new ByteArrayInputStream(buffer)), this.clazz);
logger.info("{}", scheme);
} catch (Exception e) {
String errMsg = String.format("Kryo Scheme failed to deserialize data from Kafka to %s. Raw: %s",
clazz.getName(),
new String(buffer));
logger.error(errMsg, e);
throw new FailedException(errMsg, e);
}
return new Values(scheme);
}}
PrintFunction.java
public class PrintFunction extends BaseFunction {
private static final Logger logger = LoggerFactory.getLogger(PrintFunction.class);
@Override
public void execute(TridentTuple tuple, TridentCollector collector) {
List<Object> data = tuple.getValues();
if (data != null) {
logger.info("Scheme data size: {}", data.size());
for (Object value : data) {
PrintOut out = (PrintOut) value;
logger.info("{}.{}--value: {}",
Thread.currentThread().getName(),
Thread.currentThread().getId(),
out.toString());
collector.emit(new Values(out));
}
}
}}
StormLocalTopology.java
public class StormLocalTopology {
public static void main(String[] args) {
........
BrokerHosts zk = new ZkHosts("xxxxxx");
Config stormConf = new Config();
stormConf.put(Config.TOPOLOGY_DEBUG, false);
stormConf.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 1000 * 5);
stormConf.put(Config.TOPOLOGY_WORKERS, 1);
stormConf.put(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, 5);
stormConf.put(Config.TOPOLOGY_TASKS, 1);
TridentKafkaConfig actSpoutConf = new TridentKafkaConfig(zk, topic);
actSpoutConf.fetchSizeBytes = 5 * 1024 * 1024 ;
actSpoutConf.bufferSizeBytes = 5 * 1024 * 1024 ;
actSpoutConf.scheme = new SchemeAsMultiScheme(scheme);
actSpoutConf.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
TridentTopology topology = new TridentTopology();
TransactionalTridentKafkaSpout actSpout = new TransactionalTridentKafkaSpout(actSpoutConf);
topology.newStream(topic, actSpout).parallelismHint(4).shuffle()
.each(new Fields("act"), new PrintFunction(), new Fields());
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(topic+"Topology", stormConf, topology.build());
}}
There is also another problem: the Kryo scheme can only read one message buffer at a time. Is there another way to get multiple message buffers so that data can be batched and sent to the next bolt?
Also, if I send 1 message, the full flow seems to succeed.
But sending 2 messages goes wrong; the printed output looks like this:
56157 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.122+0800,T6mdfEW#N5pEtNBW
56160 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56160 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56161 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
I'm sorry, this was my mistake. I just found a bug in my Kryo deserialization class: there was a field with instance scope, so it could be overwritten in a multi-threaded environment. After no longer keeping that state in instance scope, the code runs well.
See the reference code below:
public class KryoSerializer<T extends BasicEvent> extends Serializer<T> implements Serializable {
    private static final long serialVersionUID = -4684340809824908270L;

    // This was the bug: keeping the event in an instance field meant it was shared
    // (and overwritten) across threads.
    // private T event;
    // public KryoSerializer(T event) { this.event = event; }

    @Override
    public void write(Kryo kryo, Output output, T event) {
        event.write(output);
    }

    @Override
    public T read(Kryo kryo, Input input, Class<T> type) {
        // create a fresh instance per record instead of reusing shared state
        // (new T() does not compile in Java; let Kryo create the instance)
        T event = kryo.newInstance(type);
        event.read(input);
        return event;
    }
}
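For completeness, a hypothetical wiring of the fixed serializer into the scheme from the question, assuming PrintOut extends BasicEvent and has a no-arg constructor ("act" is the field name already used in StormLocalTopology):
// Hypothetical concrete scheme: KryoScheme is abstract because Scheme#getOutputFields is left to subclasses.
public class PrintOutScheme extends KryoScheme<PrintOut> {
    public PrintOutScheme() {
        super(PrintOut.class, new KryoSerializer<PrintOut>());
    }
    @Override
    public Fields getOutputFields() {
        // must match the field name the topology reads, e.g. new Fields("act")
        return new Fields("act");
    }
}
// usage in StormLocalTopology:
// actSpoutConf.scheme = new SchemeAsMultiScheme(new PrintOutScheme());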

CAS consumer not working as expected

I have a CAS consumer AE which is expected to iterate over the CAS objects in a pipeline, serialize them, and add the serialized CASes to an XML file.
public class DataWriter extends JCasConsumer_ImplBase {
private File outputDirectory;
public static final String PARAM_OUTPUT_DIRECTORY = "outputDir";
@ConfigurationParameter(name=PARAM_OUTPUT_DIRECTORY, defaultValue=".")
private String outputDir;
CasToInlineXml cas2xml;
public void initialize(UimaContext context) throws ResourceInitializationException {
super.initialize(context);
ConfigurationParameterInitializer.initialize(this, context);
outputDirectory = new File(outputDir);
if (!outputDirectory.exists()) {
outputDirectory.mkdirs();
}
}
@Override
public void process(JCas jCas) throws AnalysisEngineProcessException {
String file = fileCollectionReader.fileName;
File outFile = new File(outputDirectory, file + ".xmi");
FileOutputStream out = null;
try {
out = new FileOutputStream(outFile);
String xmlAnnotations = cas2xml.generateXML(jCas.getCas());
out.write(xmlAnnotations.getBytes("UTF-8"));
/* XmiCasSerializer ser = new XmiCasSerializer(jCas.getCas().getTypeSystem());
XMLSerializer xmlSer = new XMLSerializer(out, false);
ser.serialize(jCas.getCas(), xmlSer.getContentHandler());*/
if (out != null) {
out.close();
}
}
catch (IOException e) {
throw new AnalysisEngineProcessException(e);
}
catch (CASException e) {
throw new AnalysisEngineProcessException(e);
}
}
}
I am using it inside a pipeline after all my annotators, but it cannot read the CAS objects (I am getting a NullPointerException at jCas.getCas()). It looks like I don't understand the proper usage of a CAS consumer. I'd appreciate any suggestions.

GWT-RPC method returns empty list on success

I am creating a webpage with a CellTable. I need to feed this table with data from an HBase table.
I have written a method to retrieve data from the HBase table and tested it.
But when I call that method as a GWT asynchronous RPC method, the RPC call succeeds but returns nothing; in my case it returns an empty list. The alert box shows the list's size as 0.
Following is the related code.
Please help.
greetingService.getDeviceIDData(new AsyncCallback<List<DeviceDriverBean>>(){
public void onFailure(Throwable caught) {
// Show the RPC error message to the user
System.out.println("RPC Call failed");
Window.alert("Data : RPC call failed");
}
public void onSuccess(List<DeviceDriverBean> result) {
//on success do something
Window.alert("Data : RPC call successful");
//deviceDataList.addAll(result);
Window.alert("Result size: " +result.size());
// Add a text column to show the driver name.
TextColumn<DeviceDriverBean> nameColumn = new TextColumn<DeviceDriverBean>() {
@Override
public String getValue(DeviceDriverBean object) {
Window.alert(object.getName());
return object.getName();
}
};
table.addColumn(nameColumn, "Name");
// Add a text column to show the device id
TextColumn<DeviceDriverBean> deviceidColumn = new TextColumn<DeviceDriverBean>() {
@Override
public String getValue(DeviceDriverBean object) {
return object.getDeviceId();
}
};
table.addColumn(deviceidColumn, "Device ID");
table.setRowCount(result.size(), true);
// more code here to add columns in celltable
// Push the data into the widget.
table.setRowData(0, result);
SimplePager pager = new SimplePager();
pager.setDisplay(table);
VerticalPanel vp = new VerticalPanel();
vp.add(table);
vp.add(pager);
// Add it to the root panel.
RootPanel.get("datagridContainer").add(vp);
}
});
Code to retrieve data from hbase (server side code)
public List<DeviceDriverBean> getDeviceIDData()
throws IllegalArgumentException {
List<DeviceDriverBean> deviceidList = new ArrayList<DeviceDriverBean>();
// Escape data from the client to avoid cross-site script
// vulnerabilities.
/*
* input = escapeHtml(input); userAgent = escapeHtml(userAgent);
*
* return "Hello, " + input + "!<br><br>I am running " + serverInfo +
* ".<br><br>It looks like you are using:<br>" + userAgent;
*/
try {
Configuration config = HbaseConnectionSingleton.getInstance()
.HbaseConnect();
HTable testTable = new HTable(config, "driver_details");
byte[] family = Bytes.toBytes("details");
Scan scan = new Scan();
int cnt = 0;
ResultScanner rs = testTable.getScanner(scan);
for (Result r = rs.next(); r != null; r = rs.next()) {
DeviceDriverBean deviceDriverBean = new DeviceDriverBean();
byte[] rowid = r.getRow(); // Category, Date, Sentiment
NavigableMap<byte[], byte[]> map = r.getFamilyMap(family);
Iterator<Entry<byte[], byte[]>> itrt = map.entrySet()
.iterator();
deviceDriverBean.setDeviceId(Bytes.toString(rowid));
while (itrt.hasNext()) {
Entry<byte[], byte[]> entry = itrt.next();
//cnt++;
//System.out.println("Count : " + cnt);
byte[] qual = entry.getKey();
byte[] val = entry.getValue();
if (Bytes.toString(qual).equalsIgnoreCase("account_number")) {
deviceDriverBean.setAccountNo(Bytes.toString(val));
} else if (Bytes.toString(qual).equalsIgnoreCase("make")) {
deviceDriverBean.setMake(Bytes.toString(val));
} else if (Bytes.toString(qual).equalsIgnoreCase("model")) {
deviceDriverBean.setModel(Bytes.toString(val));
} else if (Bytes.toString(qual).equalsIgnoreCase("driver_name")) {
deviceDriverBean.setName(Bytes.toString(val));
} else if (Bytes.toString(qual).equalsIgnoreCase("premium")) {
deviceDriverBean.setPremium(Bytes.toString(val));
} else if (Bytes.toString(qual).equalsIgnoreCase("year")) {
deviceDriverBean.setYear(Bytes.toString(val));
} else {
System.out.println("No match found");
}
/*
* System.out.println(Bytes.toString(rowid) + " " +
* Bytes.toString(qual) + " " + Bytes.toString(val));
*/
}
deviceidList.add(deviceDriverBean);
}
}
catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
catch (Exception e) {
// System.out.println("Message: "+e.getMessage());
e.printStackTrace();
}
return deviceidList;
}
Could this be lazy fetching on the server side by HBase? That would mean that if you just return the list, HBase never gets a trigger to actually read it and you simply get an empty list. I don't know the correct solution; in the past I've seen a similar problem on GAE. It could be solved by simply asking for the size of the list just before returning it to the client.
I don't have the exact answer, but I have some advice: in a similar situation I add my own traces to check every step in the program.
On the server side, before the return, put: System.out.println("size of table=" + deviceidList.size());
You can also put a trace inside the loop that fills deviceidList.