Error : org.apache.avro.AvroRuntimeException: Not an array: in nested objects - spring-cloud

I have this payload, which I map onto my Avro-generated classes:
{
  "tradeQuantity": 13,
  "tradeMarket": "sssss",
  "stockName": "teststock",
  "tradeType": "testtype",
  "price": 12.2,
  "amount": 12.5,
  "address": {
    "street": "aaa",
    "city": "bbb"
  }
}
The Avro classes are populated like this:
StockHistory stockHistory = new StockHistory();
stockHistory.setStockName(model.getStockName());
stockHistory.setTradeType(model.getTradeType());
stockHistory.setPrice(model.getPrice());
stockHistory.setAmount(model.getAmount());
stockHistory.setTradeId(new Random(1000).nextInt());
stockHistory.setTradeMarket(model.getTradeMarket());
stockHistory.setTradeQuantity(model.getTradeQuantity());
Address address = new Address();
address.setCity("sss");
address.setStreetaddress("aaaa");
stockHistory.setAddress(address);
When converting it to JSON with an ObjectMapper configured like this:
ObjectMapper mapper = new ObjectMapper();
mapper.addMixIn(StockHistory.class, IgnoreSchemaProperty.class);
mapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
It works when there is only a StockHistory without an Address, but with nested objects it throws the following exception:
org.apache.avro.AvroRuntimeException: Not an array: {"type":"record","name":"Address","namespace":"com.spring.cloud.streams.avro","fields":[{"name":"streetaddress","type":["null","string"],"default":null},{"name":"city","type":["null","string"],"default":null}]}
Converting the object to JSON with Google Gson works, but I am not sure whether that is the correct approach.
Can anyone suggest how to handle nested Avro classes/objects with ObjectMapper when converting to JSON?
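One workaround (a sketch, assuming IgnoreSchemaProperty is a Jackson mixin along the lines shown below) is to register the same mixin for every Avro-generated class in the object graph, so Jackson also ignores the Avro bookkeeping accessors on the nested Address:

// Assumed shape of the mixin: hide the Avro-specific accessor from Jackson.
abstract class IgnoreSchemaProperty {
    @JsonIgnore
    abstract org.apache.avro.Schema getSchema();
    // if your Avro version also generates getSpecificData(), ignore it here as well
}

ObjectMapper mapper = new ObjectMapper();
mapper.addMixIn(StockHistory.class, IgnoreSchemaProperty.class);
mapper.addMixIn(Address.class, IgnoreSchemaProperty.class); // also cover the nested type
mapper.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
String json = mapper.writeValueAsString(stockHistory);

The "Not an array" error most likely comes from Jackson walking into Address.getSchema() during serialization: the mixin was only registered for StockHistory, so every generated class that appears in the output needs it applied as well.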

Related

How to convert SinkRecord to JSON string?

Imagine myAPICreate requires a JSON string.
public void put(Collection<SinkRecord> collection) {
    for (SinkRecord record : collection) {
        JSONObject recordJson = toJSON(record.value());
        String recordJsonString = recordJson.toString();
        myAPICreate(recordJsonString);
    }
}
toJSON is a helper I have defined which just takes the record and returns a JSONObject.
JSONObject json = new JSONObject()
        .put("a", record.getString("a"))
        .put("b", record.getString("b"))
        .put("c", record.getString("c"));
I feel like I might be doing a lot of redundant work here. Is it necessary to have the code in put convert it to JSON, or is there a way to use the converters so that the record already comes in as JSON or a JSON string? Then I could just call myAPICreate(record.value().toString()) without having to do it manually.
When you create a SinkRecord, you have a key and value schema along with a key and value Object. Those objects should be Struct instances created with the matching Schema.
In the connector configuration, you would then use JsonConverter (or another converter) to get the serialized output.
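If the value has to stay a Connect Struct, here is a sketch of doing the conversion inside the task with Kafka Connect's built-in JsonConverter (the surrounding task class and myAPICreate are assumptions taken from the question):

import java.nio.charset.StandardCharsets;
import java.util.Collection;
import java.util.Collections;
import org.apache.kafka.connect.json.JsonConverter;
import org.apache.kafka.connect.sink.SinkRecord;

public class MySinkTask /* extends SinkTask */ {

    private final JsonConverter converter = new JsonConverter();

    public MySinkTask() {
        // schemas.enable=false emits plain JSON without the schema/payload envelope
        converter.configure(Collections.singletonMap("schemas.enable", "false"), false);
    }

    public void put(Collection<SinkRecord> collection) {
        for (SinkRecord record : collection) {
            // serialize the record's value together with its schema
            byte[] json = converter.fromConnectData(record.topic(), record.valueSchema(), record.value());
            myAPICreate(new String(json, StandardCharsets.UTF_8));
        }
    }

    private void myAPICreate(String json) { /* call the external API */ }
}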

Deserialize request body to specific class instead of JsonObject

Say we have this:
Router router = Router.router(vertx);
router.put("/products/:productID").handler(this::handleAddProduct);
and this:
private void handleAddProduct(RoutingContext ctx) {
    String productID = ctx.request().getParam("productID");
    HttpServerResponse response = ctx.response();
    JsonObject product = ctx.getBodyAsJson();
    products.put(productID, product);
    response.end();
}
My question is: how can we deserialize ctx.getBodyAsJson() to a specific Java class instead of the generic JsonObject class?
You can use JsonObject.mapTo(Class), e.g.:
JsonObject product = ctx.getBodyAsJson();
Product instance = product.mapTo(Product.class);
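For instance, the handler from the question could bind straight to a hypothetical Product POJO (which needs a no-args constructor plus getters/setters for Jackson, and assumes products stores Product instances); a sketch:

private void handleAddProduct(RoutingContext ctx) {
    String productID = ctx.request().getParam("productID");
    // bind the JSON body to the domain class instead of keeping a JsonObject
    Product product = ctx.getBodyAsJson().mapTo(Product.class);
    products.put(productID, product);
    ctx.response().end();
}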
UPDATE
You can customize the (de)serialization behavior by manipulating the ObjectMapper instance(s) associated with the Json class. Here are some examples:
// only serialize non-null values
Json.mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
// ignore values that don't map to a known field on the target type
Json.mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
Keep in mind that Json holds references to two different ObjectMappers: mapper and prettyMapper.
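So if you also rely on pretty-printed output (e.g. Json.encodePrettily), apply the same settings to the second mapper as well; for example:

// mirror the configuration on the pretty-printing mapper
Json.prettyMapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
Json.prettyMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);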

org.apache.solr.common.SolrException: TransactionLog doesn't know how to serialize class org.bson.types.ObjectId; try implementing ObjectResolver?

When performing a data import from MongoDB, Solr throws the following error:
org.apache.solr.common.SolrException: TransactionLog doesn't know how to serialize class org.bson.types.ObjectId; try implementing ObjectResolver?
at org.apache.solr.update.TransactionLog$1.resolve(TransactionLog.java:100)
at org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:234)
at org.apache.solr.common.util.JavaBinCodec.writeSolrInputDocument(JavaBinCodec.java:589)
at org.apache.solr.update.TransactionLog.write(TransactionLog.java:395)
at org.apache.solr.update.UpdateLog.add(UpdateLog.java:532)
at org.apache.solr.update.UpdateLog.add(UpdateLog.java:516)
at org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:320)
at org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:239)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:194)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:979)
at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1192)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:748)
at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:80)
at org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:254)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:526)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:415)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:474)
at org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:457)
at java.lang.Thread.run(Thread.java:748)
My Solr version is 6.6.0. What could be the reason for the error and how can it be resolved?
I came across this issue while trying to import data from multiple collections in MongoDB.
Assuming you are not using mongo-connector, I used the following to import data:
Solr-6.6.0
solr-dataimporthandler-6.6.0
mongo-java-driver-3.5.0
Solr Mongo importer
Since the returned '_id' is of type ObjectId, my workaround was to convert the '_id' to a String before indexing it into Solr, and, when querying by '_id', to convert it back to ObjectId before running the query.
Download the Solr mongo importer and make the following changes.
MongoMapperTransformer.java
public class MongoMapperTransformer extends Transformer {

    @Override
    public Object transformRow(Map<String, Object> row, Context context) {
        for (Map<String, String> map : context.getAllEntityFields()) {
            String mongoFieldName = map.get(MONGO_FIELD);
            String mongoId = map.get(MONGO_ID);
            if (mongoFieldName == null)
                continue;
            String columnFieldName = map.get(DataImporter.COLUMN);
            // If the field is ObjectId convert it into String
            if (mongoId != null && Boolean.parseBoolean(mongoId)) {
                Object srcId = row.get(columnFieldName);
                row.put(columnFieldName, srcId.toString());
            } else {
                row.put(columnFieldName, row.get(mongoFieldName));
            }
        }
        return row;
    }

    public static final String MONGO_FIELD = "mongoField";
    // To identify the _id field
    public static final String MONGO_ID = "objectIdToString";
}
Next, replace the function
public Iterator<Map<String, Object>> getData(String query) {...}
in MongoDataSource.java with the following:
@Override
public Iterator<Map<String, Object>> getData(String query) {
    DBObject queryObject = new BasicDBObject();
    /* If querying by _id, since the id is a string now,
     * it has to be converted back to type ObjectId() using the
     * constructor
     */
    if (query.contains("_id")) {
        @SuppressWarnings("unchecked")
        Map<String, String> queryWithId = (Map<String, String>) JSON.parse(query);
        String id = queryWithId.get("_id");
        queryObject = new BasicDBObject("_id", new ObjectId(id));
    } else {
        queryObject = (DBObject) JSON.parse(query);
    }
    LOG.debug("Executing MongoQuery: " + query);
    long start = System.currentTimeMillis();
    mongoCursor = this.mongoCollection.find(queryObject);
    LOG.trace("Time taken for mongo :"
            + (System.currentTimeMillis() - start));
    ResultSetIterator resultSet = new ResultSetIterator(mongoCursor);
    return resultSet.getIterator();
}
After these changes you can build the jar using Ant.
Copy the jars (the Solr mongo importer and the mongo-java-driver) into the lib directory. I copied them into ${solr-install-dir}/contrib/dataimport-handler/lib.
Add the lib directives in solrconfig.xml for the above jars:
<lib dir="${solr.install.dir:../../../..}/contrib/dataimporthandler/lib" regex=".*\.jar" />
Finally, here's an example of the mongo collections and data-config.xml
User collection
{
  "_id" : ObjectId("56e9c892e4b0355017b2fa0f"),
  "name" : "User1",
  "phone" : "123456789"
}
Address collection
{
  "_id" : ObjectId("56e9c892e4b0355017b2fa0f"),
  "address" : "#666, Maiden street"
}
data-config.xml
Do not forget to mention objectIdToString="true" for the _id field so that the MongoMapperTransformer can stringify the id.
<dataConfig>
    <dataSource name="MyMongo"
                type="MongoDataSource"
                database="test" />
    <document name="UserDetails">
        <!-- if query="" then it imports everything -->
        <entity name="users"
                processor="MongoEntityProcessor"
                query=""
                collection="user"
                datasource="MyMongo"
                transformer="MongoMapperTransformer">
            <field column="_id" name="id" mongoField="_id" objectIdToString="true" />
            <field column="phone" name="phone" mongoField="phone" />
            <entity name="address"
                    processor="MongoEntityProcessor"
                    query="{_id:'${users._id}'}"
                    collection="address"
                    datasource="MyMongo"
                    transformer="MongoMapperTransformer">
                <field column="address" name="adress" mongoField="address" />
            </entity>
        </entity>
    </document>
</dataConfig>
The managed-schema will have the id field as string.
Also, if you have nested objects in mongodb you will have to use script transformers to index them in solr.
Hope this helps,
Good luck!
According to the error message, you need to implement JavaBinCodec.ObjectResolver for the org.bson.types.ObjectId type, so Solr will know how to serialize instances of this class.
JavaBinCodec.ObjectResolver Documentation
public static interface JavaBinCodec.ObjectResolver: Allows extension of JavaBinCodec to support serialization of arbitrary data types. Implementors of this interface write a method to serialize a given object using an existing JavaBinCodec.
Once you write your JavaBinCodec.ObjectResolver implementation, you should register it with a JavaBinCodec instance.
JavaBinCodec Documentation
public class JavaBinCodec extends Object: Defines a space-efficient serialization/deserialization format for transferring data.
JavaBinCodec has built-in support for many commonly used types. This includes primitive types (boolean, byte, short, double, int, long, float), common Java containers/utilities (Date, Map, Collection, Iterator, String, Object[], byte[]), and frequently used Solr types (NamedList, SolrDocument, SolrDocumentList). Each of the above types has a pair of associated methods which read and write that type to a stream.
Classes that aren't supported natively can still be serialized/deserialized by providing a JavaBinCodec.ObjectResolver object that knows how to work with the unsupported class. This allows JavaBinCodec to be used to marshall/unmarshall arbitrary content.
NOTE: JavaBinCodec instances cannot be reused for more than one marshall or unmarshall operation.
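A minimal sketch of such a resolver, assuming the JavaBinCodec API described above, which writes each org.bson.types.ObjectId as its hex string:

import java.io.IOException;
import org.apache.solr.common.util.JavaBinCodec;
import org.bson.types.ObjectId;

public class ObjectIdResolver implements JavaBinCodec.ObjectResolver {

    @Override
    public Object resolve(Object o, JavaBinCodec codec) throws IOException {
        if (o instanceof ObjectId) {
            codec.writeStr(o.toString()); // serialize the ObjectId as its hex string
            return null;                  // null signals the value has been written
        }
        return o; // let the codec handle everything else natively
    }
}

// Registration (per operation, since instances are not reusable):
// JavaBinCodec codec = new JavaBinCodec(new ObjectIdResolver());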

Spring MVC REST using @RequestBody List<?> returns HTTP 400 syntactically incorrect

I am using Spring 4 + Jackson 2 and have written a fully functional POST method using @RequestBody on a custom class. This method has no trouble unmarshalling the object.
@ResponseBody
@RequestMapping(value="store", method = RequestMethod.POST)
public ServiceResponse store(@RequestBody CustomClass list) {
    ...
}
// Request: { code: "A", amount: 200 }
When I attempted to add another method to handle a collection of the same class instead, my POST requests were returning with the following error.
HTTP Status 400: The request sent by the client was syntactically incorrect.
I note that this error typically occurs when the JSON submitted does not match the entity class. However, all I am doing is submitting an array of the same object instead of the object itself, which has already proven to work.
@ResponseBody
@RequestMapping(value="store-bulk", method = RequestMethod.POST)
public ServiceResponse storeBulk(@RequestBody List<CustomClass> list) {
    ...
}
// Request: [{ code: "A", amount: 200 }, { code: "B", amount: 400 }]
Am I missing something here?
In Java, type information for generics is erased at runtime, so Spring sees your List<CustomClass> as a List<Object> and cannot work out how to parse the elements.
One way to solve it is to capture the type information by creating a wrapper class for your list, like this:
public class CustomClassList extends ArrayList<CustomClass> {
}
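With the wrapper in place, the controller method binds to it directly; a sketch (the service call is hypothetical):

@ResponseBody
@RequestMapping(value="store-bulk", method = RequestMethod.POST)
public ServiceResponse storeBulk(@RequestBody CustomClassList list) {
    // the element type survives erasure via the wrapper's generic superclass,
    // so Jackson can deserialize each entry as a CustomClass
    return service.storeAll(list); // hypothetical service call
}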
Sergey is right that the issue is due to type erasure. Your easiest way out is to bind to an array:
@ResponseBody
@RequestMapping(value="store-bulk", method = RequestMethod.POST)
public ServiceResponse storeBulk(@RequestBody CustomClass[] object) {
    ...
}
The answer is that Spring 4 doesn't actually get around type erasure, contrary to what some other solutions suggest. While experimenting with debugging via manual unmarshalling, I decided to just handle that step myself instead of relying on an implicit cast I have no control over. I do hope someone comes along and proves me wrong by demonstrating a more intuitive solution, though.
@ResponseBody
@RequestMapping(value="store-bulk", method = RequestMethod.POST)
public ServiceResponse storeBulk(@RequestBody String json) {
    try {
        List<CustomClass> list = new ObjectMapper().readValue(json, new TypeReference<List<CustomClass>>() { });
        ...
    } catch (Exception e) {
        ...
    }
}
Bonus: Right after I got this working, I bumped into this exception:
IllegalStateException: Already had POJO for id
If anyone gets this, it's because the objects in the list reference some object that another item in the list already references. I could work around it because that object was identical across my entire collection, so I removed the reference on the JSON side from all but the first object, then added the missing references back after the JSON was unmarshalled into the List.
Two-liner for the Java 8 users (the User object reference was the issue in my case):
User user = list.get(0).getUser();
list.stream().filter(c -> c.getUser() == null).forEach(t -> t.setUser(user));

An exception of using dozer to copy data from map to java bean

I want to copy data from a map (request.getParameterMap()) to a Java bean. For example:
Map<String, Object> map = new HashMap<>();
map.put("payment_code", "1420956468542a2");
//...

public class PaymentLogDTO {

    @Mapping("payment_code")
    private String paymentCode;
    //...
}
But when I execute the map method in a unit test,
DozerBeanMapper dozer = new DozerBeanMapper();
dozer.map(map, PaymentLogDTO.class);
it fails. The exception message is:
org.dozer.MappingException: No such field found java.util.HashMap.payment_code
at org.dozer.util.ReflectionUtils.getFieldFromBean(ReflectionUtils.java:322)
at org.dozer.util.ReflectionUtils.getFieldFromBean(ReflectionUtils.java:320)
at org.dozer.util.ReflectionUtils.getFieldFromBean(ReflectionUtils.java:320)
at org.dozer.util.ReflectionUtils.getFieldFromBean(ReflectionUtils.java:309)
at org.dozer.propertydescriptor.FieldPropertyDescriptor$ChainedPropertyDescriptor.<init>(FieldPropertyDescriptor.java:104)
at org.dozer.propertydescriptor.FieldPropertyDescriptor.<init>(FieldPropertyDescriptor.java:51)
at org.dozer.propertydescriptor.PropertyDescriptorFactory.getPropertyDescriptor(PropertyDescriptorFactory.java:64)
at org.dozer.fieldmap.FieldMap.getSrcPropertyDescriptor(FieldMap.java:385)
at org.dozer.fieldmap.FieldMap.getSrcFieldValue(FieldMap.java:86)
at org.dozer.MappingProcessor.mapField(MappingProcessor.java:294)
at org.dozer.MappingProcessor.map(MappingProcessor.java:267)
at org.dozer.MappingProcessor.mapToDestObject(MappingProcessor.java:216)
at org.dozer.MappingProcessor.createByCreationDirectiveAndMap(MappingProcessor.java:196)
at org.dozer.MappingProcessor.mapGeneral(MappingProcessor.java:170)
at org.dozer.MappingProcessor.map(MappingProcessor.java:104)
at org.dozer.MappingProcessor.map(MappingProcessor.java:99)
at org.dozer.DozerBeanMapper.map(DozerBeanMapper.java:120)
at org.springside.modules.mapper.BeanMapper.map(BeanMapper.java:36)
Is there any way to solve this problem, so that I don't have to create a Java bean whose property names exactly match the query parameter names?
The @Mapping annotation specifies the source/destination field name; it has nothing to do with the Map class.
So with @Mapping("payment_code"), Dozer looks for a field named "payment_code" on the source bean, not for an entry in your map.
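To illustrate, here is a sketch of the bean-to-bean mapping Dozer expects (PaymentLogForm is a hypothetical source bean whose field name matches the request parameter):

public class PaymentLogForm {
    private String payment_code; // field name matches the request parameter
    // getter/setter ...
}

public class PaymentLogDTO {

    @Mapping("payment_code")     // refers to the field on the other bean, not a map key
    private String paymentCode;
    // getter/setter ...
}

PaymentLogDTO dto = new DozerBeanMapper().map(form, PaymentLogDTO.class);

For a Map source you would have to copy the entries into the bean yourself, or use a library that explicitly supports map-to-bean mapping.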