EclipseLink + @Convert(JSON) + Postgres + list property

I'm using EclipseLink 2.6 as the persistence provider for Spring Data JPA, which, to my understanding, now allows you to serialize a subtree of an entity as JSON using the internal MOXy serializer.
So I'm trying to combine these to migrate from embedded element collections to serialized JSON stored in Postgres's json data type.
I have an entity named Product, and this entity has the following mapped property:
@Convert(Convert.JSON)
private List<MetadataIndex> indexes = new ArrayList<MetadataIndex>();
MetadataIndex is a simple class with a few String properties.
I would like to convert this list of objects into JSON and store it in a column of the json data type in Postgres.
I thought the code above should suffice, but it does not. The application crashes on boot (it can't create the EntityManagerFactory; there is an NPE somewhere inside EclipseLink).
If I change the converter to @Convert(Convert.SERIALIZED) it works: it creates a field named indexes of type bytea on the Products table and stores the serialized list in it.
Is this an EclipseLink bug, or am I missing something?
Thank you.

Well, I've used a custom EclipseLink converter to convert my classes into JSON objects, then stored them in the database directly through the Postgres driver. This is the converter:
import fr.gael.dhus.database.jpa.domain.MetadataIndex;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.type.TypeReference;
import org.eclipse.persistence.mappings.DatabaseMapping;
import org.eclipse.persistence.sessions.Session;
import org.postgresql.util.PGobject;

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import java.io.IOException;
import java.sql.SQLException;
import java.util.Collection;
import java.util.List;

/**
 * Created by fmarino on 20/03/2015.
 */
@Converter
public class JsonConverter implements org.eclipse.persistence.mappings.converters.Converter {

    private static ObjectMapper mapper = new ObjectMapper();

    @Override
    public Object convertObjectValueToDataValue(Object objectValue, Session session) {
        try {
            PGobject out = new PGobject();
            out.setType("jsonb");
            out.setValue(mapper.writerWithType(new TypeReference<Collection<MetadataIndex>>() {})
                    .writeValueAsString(objectValue));
            return out;
        } catch (IOException e) {
            throw new IllegalArgumentException("Unable to serialize to json field ", e);
        } catch (SQLException e) {
            throw new IllegalArgumentException("Unable to serialize to json field ", e);
        }
    }

    @Override
    public Object convertDataValueToObjectValue(Object dataValue, Session session) {
        try {
            if (dataValue instanceof PGobject && ((PGobject) dataValue).getType().equals("jsonb"))
                return mapper.reader(new TypeReference<Collection<MetadataIndex>>() {})
                        .readValue(((PGobject) dataValue).getValue());
            return "-";
        } catch (IOException e) {
            throw new IllegalArgumentException("Unable to deserialize to json field ", e);
        }
    }

    @Override
    public boolean isMutable() {
        return false;
    }

    @Override
    public void initialize(DatabaseMapping mapping, Session session) {
    }
}
As you can see, I use Jackson for serialization and declare the data type as Collection<MetadataIndex>. You can use whatever type you want here.
Inside my classes, I've mapped my field with this:
@Convert(converter = JsonConverter.class)
@Column(nullable = true, columnDefinition = "jsonb")
and I've also added this annotation to the class:
@Converter(converterClass = JsonConverter.class, name = "jsonConverter")
To make things work properly with Jackson, I've also added this annotation to the MetadataIndex class, on the class element:
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@class")
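Put together, the mapping might look roughly like the following. This is only a sketch assembling the snippets above; the Product entity, its id field, and the omitted accessors are illustrative, not taken from the original code.
import java.util.ArrayList;
import java.util.List;
import javax.persistence.Column;
import javax.persistence.Convert;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.Converter;

// Sketch only: assembles the annotations shown above on a Product entity.
// The id field and the rest of the table layout are illustrative assumptions.
@Entity
@Converter(converterClass = JsonConverter.class, name = "jsonConverter")
public class Product {

    @Id
    private Long id;

    @Convert(converter = JsonConverter.class)
    @Column(nullable = true, columnDefinition = "jsonb")
    private List<MetadataIndex> indexes = new ArrayList<MetadataIndex>();

    // getters and setters omitted
}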
I personally like using the Postgres driver directly to store these kinds of special data types. I didn't manage to achieve the same with Hibernate.
As for the converter, I would have preferred a more general solution, but Jackson forced me to state the object type I want to convert. If you find a better way to do it, let me know.
With a similar approach, I've also managed to use the hstore data type of Postgres; a sketch of that idea follows below.
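For reference, here is a hedged sketch of what that hstore variant could look like: a converter with the same shape as the JsonConverter above, writing a Map<String, String> into a PGobject of type "hstore". The quoting/escaping and the omitted read path are my assumptions, not the author's code.
import java.sql.SQLException;
import java.util.Map;
import java.util.stream.Collectors;
import org.eclipse.persistence.mappings.DatabaseMapping;
import org.eclipse.persistence.sessions.Session;
import org.postgresql.util.PGobject;

// Sketch only: same Converter shape as JsonConverter, targeting an hstore column.
public class HstoreConverter implements org.eclipse.persistence.mappings.converters.Converter {

    @Override
    public Object convertObjectValueToDataValue(Object objectValue, Session session) {
        try {
            @SuppressWarnings("unchecked")
            Map<String, String> map = (Map<String, String>) objectValue;
            PGobject out = new PGobject();
            out.setType("hstore");
            // hstore input literal: "key"=>"value" pairs separated by commas
            out.setValue(map.entrySet().stream()
                    .map(e -> quote(e.getKey()) + "=>" + quote(e.getValue()))
                    .collect(Collectors.joining(", ")));
            return out;
        } catch (SQLException e) {
            throw new IllegalArgumentException("Unable to serialize to hstore field", e);
        }
    }

    @Override
    public Object convertDataValueToObjectValue(Object dataValue, Session session) {
        // Parsing the hstore literal back into a Map is omitted in this sketch.
        return dataValue;
    }

    @Override
    public boolean isMutable() {
        return false;
    }

    @Override
    public void initialize(DatabaseMapping mapping, Session session) {
    }

    // Escape backslashes and quotes before wrapping in double quotes.
    private static String quote(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"") + "\"";
    }
}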

Related

How to properly save data to MongoDB through Spring Data in a non-blocking stack using CompletableFuture

The question can be summarized as: how do I properly save data to MongoDB through Spring Data in a non-blocking stack using CompletableFuture (i.e. Spring WebFlux + reactive.ReactiveCrudRepository + java.util.concurrent)?
I have struggled for the last three days, studying, searching around, and reading several tutorials, in order to find a recommended way, or at least a "north path", for persisting data when someone wants to use CompletableFuture for it. I got the code below working successfully, but I am not sure whether I am doing something weird.
Basically, I want to use CompletableFuture because I want to chain futures: say, save to MongoDB first, and if that goes well, "thenAcceptAsync", and finally "thenCombine" them.
Well, ReactiveCrudRepository.save returns a Mono<> and I must subscribe in order to effectively save. Additionally, Mono<>.subscribe() returns a Disposable, which I understand I can use to cancel it, for example if the thread takes too long because MongoDB is down, or on any other exception. So far so good.
What is unclear to me is whether I am messing up the idea by performing the save, which blocks, inside an asynchronous method. Since my purpose is to leave the resolution to the "future", am I blocking during the save method below and completely losing the benefits of saving in a different thread and getting a future result?
Here is the code, which saves properly to MongoDB, but it is not clear to me whether it is really a "non-blocking" approach. Note that completableFuture.get() is commented out since I don't need it in order to effectively save my data:
@Async("taskExecutor")
public void transferirDisposableReturnedSupplyAsync(Extrato e) throws InterruptedException, ExecutionException {
    CompletableFuture<Disposable> completableFuture = CompletableFuture
            .supplyAsync(() -> extratoRepository.save(e).subscribe());
    //completableFuture.get(); unnecessary since subscribe() above already saved it
}
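For what it's worth, one way to express the chaining described above without calling subscribe() yourself is Mono#toFuture(), which yields a CompletableFuture that completes when the save completes. This is only an illustrative sketch of that intent, not a statement of the recommended approach; the method name and the logging body are made up, while extratoRepository and Extrato come from the question:
// Sketch only: bridge the reactive save into a CompletableFuture via Mono#toFuture(),
// then chain on it. The body of thenApplyAsync is illustrative.
@Async("taskExecutor")
public CompletableFuture<Extrato> salvarAsync(Extrato e) {
    return extratoRepository.save(e)      // Mono<Extrato>
            .toFuture()                   // CompletableFuture<Extrato>
            .thenApplyAsync(saved -> {
                LOGGER.debug("Saved extrato {}", saved);   // LOGGER is assumed to exist
                return saved;
            });
}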
In case it is relevant:
Repository:
import org.springframework.data.mongodb.repository.Query;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;

import com.noblockingcase.demo.model.Extrato;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import org.springframework.data.domain.Pageable;

public interface ExtratoRepository extends ReactiveCrudRepository<Extrato, String> {

    @Query("{ id: { $exists: true }}")
    Flux<Extrato> retrieveAllExtratosPaged(final Pageable page);
}
AsyncConfiguration:
import java.util.concurrent.Executor;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

// The @EnableAsync annotation enables Spring’s ability to run @Async methods in a background thread pool.
// The bean taskExecutor helps to customize the thread executor such as configuring number of threads for an application, queue limit size and so on.
// Spring will specifically look for this bean when the server is started.
// If this bean is not defined, Spring will create SimpleAsyncTaskExecutor by default.
@Configuration
@EnableAsync
public class AsyncConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(AsyncConfiguration.class);

    @Bean(name = "taskExecutor")
    public Executor taskExecutor() {
        LOGGER.debug("Creating Async Task Executor");
        final ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(2);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("ExtratoThread-");
        executor.initialize();
        return executor;
    }
}
*** Added: the Angular client service consuming the server-sent events, in case it is relevant:
import { Injectable, NgZone } from '@angular/core';
import { Observable } from 'rxjs';
import { Extrato } from './extrato';

@Injectable({
  providedIn: "root"
})
export class SseService {
  extratos: Extrato[] = [];

  constructor(private _zone: NgZone) { }

  getServerSentEvent(url: string): Observable<any> {
    this.extratos = [];
    return Observable.create(observer => {
      const eventSource = this.getEventSource(url);
      eventSource.onmessage = event => {
        this._zone.run(() => {
          let json = JSON.parse(event.data);
          this.extratos.push(new Extrato(json['id'], json['description'], json['value'], json['status']));
          observer.next(this.extratos);
        });
      };
      eventSource.onerror = (error) => {
        if (eventSource.readyState === 0) {
          console.log('The stream has been closed by the server.');
          eventSource.close();
          observer.complete();
        } else {
          observer.error('EventSource error: ' + error);
        }
      };
    });
  }

  private getEventSource(url: string): EventSource {
    return new EventSource(url);
  }
}

Is it possible to deserialize an Avro message (consumed from Kafka) without giving the reader schema in ConfluentRegistryAvroDeserializationSchema?

I am using the Kafka connector in Apache Flink to access streams served by Confluent Kafka.
Apart from the schema registry URL, ConfluentRegistryAvroDeserializationSchema.forGeneric(...) expects a 'reader' schema.
Instead of providing a reader schema, I want to use the writer's schema (looked up in the registry) for reading the message too, because the consumer will not have the latest schema.
FlinkKafkaConsumer010<GenericRecord> myConsumer =
new FlinkKafkaConsumer010<>("topic-name", ConfluentRegistryAvroDeserializationSchema.forGeneric(<reader schema goes here>, "http://host:port"), properties);
myConsumer.setStartFromLatest();
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/connectors/kafka.html
"Using these deserialization schema record will be read with the schema that was retrieved from Schema Registry and transformed to a statically provided"
Since I do not want to keep the schema definition on the consumer side, how do I deserialize an Avro message from Kafka using the writer's schema?
Appreciate your help!
I don't think it is possible to use ConfluentRegistryAvroDeserializationSchema.forGeneric directly. It is intended to be used with a reader schema, and there are preconditions checking for this.
You have to implement your own. Two important things:
Set specific.avro.reader to false (otherwise you'll get specific records)
The KafkaAvroDeserializer has to be lazily initialized (because it isn't serializable itself, as it holds a reference to the schema registry client)
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;
import java.util.HashMap;
import java.util.Map;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema;

public class KafkaGenericAvroDeserializationSchema
        implements KeyedDeserializationSchema<GenericRecord> {

    private final String registryUrl;
    private transient KafkaAvroDeserializer inner;

    public KafkaGenericAvroDeserializationSchema(String registryUrl) {
        this.registryUrl = registryUrl;
    }

    @Override
    public GenericRecord deserialize(
            byte[] messageKey, byte[] message, String topic, int partition, long offset) {
        checkInitialized();
        return (GenericRecord) inner.deserialize(topic, message);
    }

    @Override
    public boolean isEndOfStream(GenericRecord nextElement) {
        return false;
    }

    @Override
    public TypeInformation<GenericRecord> getProducedType() {
        return TypeExtractor.getForClass(GenericRecord.class);
    }

    private void checkInitialized() {
        if (inner == null) {
            Map<String, Object> props = new HashMap<>();
            props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, registryUrl);
            props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, false);
            SchemaRegistryClient client =
                    new CachedSchemaRegistryClient(
                            registryUrl, AbstractKafkaAvroSerDeConfig.MAX_SCHEMAS_PER_SUBJECT_DEFAULT);
            inner = new KafkaAvroDeserializer(client, props);
        }
    }
}
env.addSource(
        new FlinkKafkaConsumer<>(
                topic,
                new KafkaGenericAvroDeserializationSchema(schemaRegistryUrl),
                kafkaProperties));

MongoDB Scala driver custom conversion to JSON

If I use the "native" JSON support from the official MongoDB Scala driver:
val jsonText = Document(...).toJson()
it produces JSON text with type wrappers for extended types:
{ "$oid" : "AABBb...." } - for ObjectId,
{ "$numberLong" : "123123" } - for Long, etc.
I want to avoid such type conversion and write just the plain values for each type. Is it possible to somehow override the encoding behavior for some types?
You can subclass JsonWriter and override the writeXXX methods. For example, to customize date serialization you can use:
import java.io.Writer;
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

import org.bson.json.JsonWriter;
import org.bson.json.JsonWriterSettings;

class CustomJsonWriter extends JsonWriter {

    public CustomJsonWriter(Writer writer) {
        super(writer);
    }

    public CustomJsonWriter(Writer writer, JsonWriterSettings settings) {
        super(writer, settings);
    }

    @Override
    protected void doWriteDateTime(long value) {
        // Write dates as ISO-8601 strings instead of the extended-JSON $date form
        doWriteString(DateTimeFormatter.ISO_DATE_TIME
                .withZone(ZoneId.of("Z"))
                .format(Instant.ofEpochMilli(value)));
    }
}
And then you can use the overridden version this way:
public static String toJson(Document doc) {
    CustomJsonWriter writer = new CustomJsonWriter(new StringWriter(), new JsonWriterSettings());
    DocumentCodec encoder = new DocumentCodec();
    encoder.encode(writer, doc, EncoderContext.builder().isEncodingCollectibleDocument(true).build());
    return writer.getWriter().toString();
}
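As a side note, and assuming a driver/bson version where relaxed extended JSON is available (org.bson.json.JsonMode.RELAXED, bson 3.5+ as far as I know), plain numeric types can also be rendered without wrappers by passing settings to toJson, though ObjectId and dates keep their $-prefixed forms. A hedged sketch:
import org.bson.Document;
import org.bson.json.JsonMode;
import org.bson.json.JsonWriterSettings;

public class RelaxedJsonExample {

    // Sketch: relaxed output mode renders longs/doubles as plain JSON numbers,
    // while ObjectId and dates keep their $-prefixed extended-JSON forms.
    public static String toRelaxedJson(Document doc) {
        JsonWriterSettings settings = JsonWriterSettings.builder()
                .outputMode(JsonMode.RELAXED)
                .build();
        return doc.toJson(settings);
    }
}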

Play 2.0! [Java] - Generating XML response from the REST API

I am going through the Play framework documents and trying to figure out whether there is anything available out of the box for generating an XML response from a given domain object, just like we have Json.toJson(Object).
The following code works fine for a JSON REST API in Play framework 2.1.2. Can anyone suggest how XML can be generated out of the box here instead of JSON?
package controllers;

import java.util.List;
import java.util.concurrent.Callable;

import play.Logger;
import play.libs.F.Function;
import play.libs.F.Promise;
import play.libs.Json;
import play.mvc.Controller;
import play.mvc.Result;

import com.amazonaws.services.simpledb.model.Item;

public class ShowItemsJson extends Controller {

    public static Result allItems() {
        // Now create the async process to lookup items in simpledb
        AllItems<List<Item>> callable = new AllItems<List<Item>>();
        Promise<List<Item>> promise = play.libs.Akka.future(callable);
        return async(promise.map(new Function<List<Item>, Result>() {
            public Result apply(List<Item> rm) throws Throwable {
                // Convert the result into json before sending.
                // TODO How to do same for XML?
                return ok(Json.toJson(rm));
            }
        }));
    }

    // One instance of this class should be used for each create request
    static class AllItems<V> implements Callable<V> {
        @SuppressWarnings("unchecked")
        public V call() throws Exception {
            try {
                return (V) Test.getAllItems();
            } catch (Error e) {
                // Error is handled here to log NoClassDefFoundError
                Logger.error("Error: ", e);
                throw e;
            }
        }
    }
}
There is no built-in support for generating XML from Java objects in Play 2 as far as I know, but there are loads of options in Java-land.
To name a few:
JAXB for doing it with reflection/annotations (see the sketch after this list) - reference implementation: http://jaxb.java.net
XOM - http://www.xom.nu
JDOM - http://www.jdom.org
dom4j - http://dom4j.sourceforge.net
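For example, a JAXB-based sketch of returning the item list as XML could look like the following. ItemsWrapper and toXml are hypothetical names (JAXB cannot marshal a raw List directly), and it assumes the Item class is a JAXB-compatible bean. In the controller's apply method you would then return something like ok(toXml(rm)).as("application/xml") instead of ok(Json.toJson(rm)).
import java.io.StringWriter;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

import com.amazonaws.services.simpledb.model.Item;

// Sketch only: a hypothetical wrapper so JAXB has a root element to marshal.
@XmlRootElement(name = "items")
class ItemsWrapper {

    private List<Item> items;

    public ItemsWrapper() { }                          // JAXB requires a no-arg constructor
    public ItemsWrapper(List<Item> items) { this.items = items; }

    @XmlElement(name = "item")
    public List<Item> getItems() { return items; }
    public void setItems(List<Item> items) { this.items = items; }
}

class XmlResponses {

    // Marshal the wrapped list to an XML string (assumes Item is a JAXB-compatible bean).
    static String toXml(List<Item> items) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(ItemsWrapper.class);
        Marshaller marshaller = ctx.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        StringWriter out = new StringWriter();
        marshaller.marshal(new ItemsWrapper(items), out);
        return out.toString();
    }
}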

Reading xls file in gwt

I am trying to read an xlsx file using GWT RPC. When I use code that executed fine in a plain Java program, it fails to load the file and gives me a NullPointerException.
Following is the code:
import com.arosys.readExcel.ReadXLSX;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
import org.Preview.client.GWTReadXL;
import java.io.InputStream;
import com.arosys.customexception.FileNotFoundException;
import com.arosys.logger.LoggerFactory;
import java.util.Iterator;
import org.apache.log4j.Logger;
import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

/**
 * @author Amandeep
 */
public class GWTReadXLImpl extends RemoteServiceServlet implements GWTReadXL {

    private String fileName;
    private String[] Header = null;
    private String[] RowData = null;
    private int sheetindex;
    private String sheetname;
    private XSSFWorkbook workbook;
    private XSSFSheet sheet;
    private static Logger logger = null;

    public void loadXlsxFile() throws Exception {
        logger.info("inside loadxlsxfile:::" + fileName);
        InputStream resourceAsStream = ClassLoader.getSystemClassLoader().getSystemResourceAsStream("c:\\test2.xlsx");
        logger.info("resourceAsStream-" + resourceAsStream);
        if (resourceAsStream == null)
            throw new FileNotFoundException("unable to locate given file");
        else {
            try {
                workbook = new XSSFWorkbook(resourceAsStream);
                sheet = workbook.getSheetAt(sheetindex);
            } catch (Exception ex) {
                logger.error(ex.getMessage());
            }
        }
    } // end loadXlsxFile

    public String getNumberOfColumns() throws Exception {
        int NO_OF_Column = 0;
        XSSFCell cell = null;
        loadXlsxFile();
        Iterator rowIter = sheet.rowIterator();
        XSSFRow firstRow = (XSSFRow) rowIter.next();
        Iterator cellIter = firstRow.cellIterator();
        while (cellIter.hasNext()) {
            cell = (XSSFCell) cellIter.next();
            NO_OF_Column++;
        }
        return NO_OF_Column + "";
    }
}
I am calling it in the client program with this code:
final AsyncCallback<String> callback1 = new AsyncCallback<String>() {
    public void onSuccess(String result) {
        RootPanel.get().add(new Label("In success"));
        if (result == null) {
            RootPanel.get().add(new Label("result is null"));
        }
        RootPanel.get().add(new Label("result is" + result));
    }

    public void onFailure(Throwable caught) {
        RootPanel.get().add(new Label("In Failure" + caught));
    }
};

try {
    getService().getNumberOfColumns(callback1);
} catch (Exception e) {
}
Please tell me how I can resolve this issue, as the code runs fine when run as a normal Java program.
Why are you using the system classloader rather than the normal one?
But if you still want to do it this way, then look at this.
Since you are running inside a web application, you need to use the ClassLoader which is obtained as follows:
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
This one has access to all the classpath paths tied to the web application in question, and you're no longer dependent on which parent classloader (a webapp has more than one!) has loaded your class.
Then, on this classloader, you just need to call getResourceAsStream() to get a classpath resource as a stream, not getSystemResourceAsStream(), which depends on how the web application is started. You don't want to depend on that either, since you have no control over it on external hosting:
InputStream input = classLoader.getResourceAsStream("filename.extension");
The file should be located on your classpath.
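Applied to the question's loadXlsxFile, a minimal sketch might look like this, assuming test2.xlsx has been placed on the webapp classpath (e.g. under WEB-INF/classes) rather than referenced by an absolute Windows path:
import java.io.InputStream;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class WorkbookLoader {

    // Sketch only: load the workbook from the classpath via the context classloader.
    public static XSSFSheet loadFirstSheet() throws Exception {
        ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
        InputStream input = classLoader.getResourceAsStream("test2.xlsx");
        if (input == null) {
            throw new IllegalStateException("test2.xlsx not found on the classpath");
        }
        XSSFWorkbook workbook = new XSSFWorkbook(input);
        return workbook.getSheetAt(0);
    }
}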