I am having difficulties getting multiple datasets out of my database with RestTemplate. I have many routines that extract a single row, with a format like:
IndicatorModel indicatorModel = restTemplate.getForObject(URL + id,
IndicatorModel.class);
and they work fine. However, if I try to extract a set of data, such as:
Map<String, List<S_ServiceCoreTypeModel>> coreTypesMap =
restTemplate.getForObject(URL + id, Map.class);
this returns values in a
Map<String, LinkedHashMap<>>
format. Is there an easy way to return a List<> or Set<> in the desired format?
Fundamentally, the issue is that your Java object model does not match the structure of your JSON document. You are attempting to deserialize a single JSON element into a Java List. Your JSON document looks like:
{
"serviceCoreTypes":[
{
"serviceCoreType":{
"name":"ALL",
"description":"All",
"dateCreated":"2016-06-23 14:46:32.09",
"dateModified":"2016-06-23 14:46:32.09",
"deleted":false,
"id":1
}
},
{
"serviceCoreType":{
"name":"HSI",
"description":"High-speed Internet",
"dateCreated":"2016-06-23 14:47:31.317",
"dateModified":"2016-06-23 14:47:31.317",
"deleted":false,
"id":2
}
}
]
}
But you cannot turn a serviceCoreTypes element into a List; you can only turn a JSON array into a List. For instance, if you removed the unnecessary wrapper elements from your JSON and your input document looked like:
[
{
"name": "ALL",
"description": "All",
"dateCreated": "2016-06-23 14:46:32.09",
"dateModified": "2016-06-23 14:46:32.09",
"deleted": false,
"id": 1
},
{
"name": "HSI",
"description": "High-speed Internet",
"dateCreated": "2016-06-23 14:47:31.317",
"dateModified": "2016-06-23 14:47:31.317",
"deleted": false,
"id": 2
}
]
You should then be able to deserialize THAT into a List<S_ServiceCoreTypeModel>. Alternatively, if you cannot change the JSON structure, you could create a Java object model that mirrors the JSON document by introducing some wrapper classes. Something like:
class ServiceCoreTypes {
    List<ServiceCoreTypeWrapper> serviceCoreTypes;
    ...
}
class ServiceCoreTypeWrapper {
    ServiceCoreType serviceCoreType;
    ...
}
class ServiceCoreType {
    String name;
    String description;
    ...
}
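With that model in place, the whole document can be bound in one call and the inner objects pulled out of the wrappers. A minimal sketch, assuming the classes above are given conventional getters (the accessor names here are illustrative):
ServiceCoreTypes response = restTemplate.getForObject(URL + id, ServiceCoreTypes.class);
List<ServiceCoreType> coreTypes = response.getServiceCoreTypes().stream()
        .map(ServiceCoreTypeWrapper::getServiceCoreType)  // unwrap each array element
        .collect(Collectors.toList());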
I'm assuming you don't actually mean a database, but rather a RESTful service, since you're using RestTemplate.
The problem you're facing is that you want to get a Collection back, but getForObject can only take a single type parameter and cannot work out what the element type of the returned collection is.
I'd encourage you to consider using RestTemplate.exchange(...),
which should allow you to request and receive back a collection type.
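A minimal sketch of that approach (assuming the endpoint returns a plain JSON array; with the wrapper structure shown earlier, the wrapper classes would still be needed instead):
ResponseEntity<List<S_ServiceCoreTypeModel>> response = restTemplate.exchange(
        URL + id,
        HttpMethod.GET,
        null,  // no request entity needed for a GET
        new ParameterizedTypeReference<List<S_ServiceCoreTypeModel>>() {});
List<S_ServiceCoreTypeModel> coreTypes = response.getBody();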
I have a solution that works, for now at least. I would prefer a solution such as the one proposed by Ben, where I can get the HTTP response body as a list of items in the format I chose, but at least here I can extract each individual item from the JSON node. The code:
S_ServiceCoreTypeModel endModel;
RestTemplate restTemplate = new RestTemplate();
// Fetch the whole payload as a Jackson tree
JsonNode node = restTemplate.getForObject(URL, JsonNode.class);
// Drill down to the "serviceCoreTypes" array and pick one element
JsonNode allNodes = node.get("serviceCoreTypes");
JsonNode oneNode = allNodes.get(1);
// Bind the selected node to the model class
ObjectMapper objectMapper = new ObjectMapper();
endModel = objectMapper.readValue(oneNode.toString(), S_ServiceCoreTypeModel.class);
If anyone has thoughts on how to make Ben's solution work, I would love to hear it.
My aim is to reduce the JSON file size, which by default contains the base64 image sections of the documents.
I am using the Document AI - Contract Processor in the US region, with the Node.js SDK.
It is my understanding that setting the fieldMask attribute in the batchProcessDocuments request filters out the properties that will be in the resulting JSON.
I want to keep only the entities property.
Here are my call parameters:
const documentai = require('@google-cloud/documentai').v1;
const client = new documentai.DocumentProcessorServiceClient(options);
let params = {
"name": "projects/XXX/locations/us/processors/3e85a4841d13ce5",
"region": "us",
"inputDocuments": {
"gcsDocuments": {
"documents": [{
"mimeType": "application/pdf",
"gcsUri": "gs://bubble-bucket-XXX/files/CymbalContract.pdf"
}]
}
},
"documentOutputConfig": {
"gcsOutputConfig": {
"gcsUri": "gs://bubble-bucket-XXXX/ocr/"
},
"fieldMask": {
"paths": [
"entities"
]
}
}
};
client.batchProcessDocuments(params, function(error, operation) {
if (error) {
return reject(error);
}
return resolve({
"operationName": operation.name
});
});
However, the resulting JSON still contains the full set of data.
Am I missing something here?
The auto-generated documentation for the Node.js Client Library is a little hard to follow, but it looks like the fieldMask should be a member of gcsOutputConfig instead of documentOutputConfig. (I'm surprised the API didn't throw an error.)
https://cloud.google.com/nodejs/docs/reference/documentai/latest/documentai/protos.google.cloud.documentai.v1.documentoutputconfig.gcsoutputconfig
The REST docs are a little clearer:
https://cloud.google.com/document-ai/docs/reference/rest/v1/DocumentOutputConfig#gcsoutputconfig
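Based on those docs, a hedged sketch of the adjusted output config (untested with the Node.js client) would simply move the mask one level down, inside gcsOutputConfig:
"documentOutputConfig": {
    "gcsOutputConfig": {
        "gcsUri": "gs://bubble-bucket-XXXX/ocr/",
        "fieldMask": {
            "paths": [
                "entities"
            ]
        }
    }
}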
Note: For a REST API call and for other client libraries, the fieldMask is structured as a string (e.g. text,entities,pages.pageNumber)
I haven't tried this with the Node Client libraries before, but I'd recommend trying this as well if moving the parameter doesn't work on its own.
https://cloud.google.com/document-ai/docs/send-request#async-processor
I am using Azure Data Factory and a data transformation flow. I have a CSV that contains a column with a JSON object string; below is an example including the header:
"Id","Name","Timestamp","Value","Metadata"
"99c9347ab7c34733a4fe0623e1496ffd","data1","2021-03-18 05:53:00.0000000","0","{""unit"":""%""}"
"99c9347ab7c34733a4fe0623e1496ffd","data1","2021-03-19 05:53:00.0000000","4","{""jobName"":""RecipeB""}"
"99c9347ab7c34733a4fe0623e1496ffd","data1","2021-03-16 02:12:30.0000000","state","{""jobEndState"":""negative""}"
"99c9347ab7c34733a4fe0623e1496ffd","data1","2021-03-19 06:33:00.0000000","23","{""unit"":""kg""}"
I want to store the data in a JSON structure like this:
{
"id": "99c9347ab7c34733a4fe0623e1496ffd",
"name": "data1",
"values": [
{
"timestamp": "2021-03-18 05:53:00.0000000",
"value": "0",
"metadata": {
"unit": "%"
}
},
{
"timestamp": "2021-03-19 05:53:00.0000000",
"value": "4",
"metadata": {
"jobName": "RecipeB"
}
}
....
]
}
The challenge is that metadata has dynamic content, meaning that it will always be a JSON object but its content can vary. Therefore I cannot define a schema for it. Currently the column "metadata" in the sink schema is defined as an object, but whenever I run the transformation I run into an exception:
Conversion from ArrayType(StructType(StructField(timestamp,StringType,false),
StructField(value,StringType,false), StructField(metadata,StringType,false)),true) to ArrayType(StructType(StructField(timestamp,StringType,true),
StructField(value,StringType,true), StructField(metadata,StructType(StructField(,StringType,true)),true)),false) not defined
We can get the output you expected; we need an expression to extract the object from the Metadata value.
Please follow my steps; here's my source:
Derived column expressions, create a JSON schema to convert the data:
@(id=Id,
name=Name,
values=@(timestamp=Timestamp,
value=Value,
metadata=@(unit=substring(split(Metadata,':')[2], 3, length(split(Metadata,':')[2])-6))))
Sink mapping and output data preview:
The key point is that your metadata value is an object that may have a different schema and content each time; the key might be 'unit', 'jobName', or something else. We can only build the schema manually; it doesn't support expressions. That's the limitation.
We can't achieve that within Data Factory.
HTH.
I am doing documentation for a REST service returning an object like this:
Map<String, HashMap<Long, String>>
and I find no way to describe the response fields for such an object.
Let's have a look at my code.
The service:
@RequestMapping(value = "/data", method = RequestMethod.GET)
public Map<String, HashMap<Long, String>> getData()
{
Map<String, HashMap<Long, String>> list = dao.getData();
return list;
}
My unit-test-based documentation:
@Test
public void testData() throws Exception
{
TestUtils.beginTestLog(log, "testData");
RestDocumentationResultHandler document = document(SNIPPET_NAME_PATTERN ,preprocessResponse(prettyPrint()));
document.snippets(
        // responseFields(
        //     fieldWithPath("key").description("key description").type("String"),
        //     fieldWithPath("value").description("value as a HashMap").type("String"),
        //     fieldWithPath("value.key").description("value.key description").type("String"),
        //     fieldWithPath("value.value").description("value.value description").type("String")
        // )
);
String token = TestUtils.performLogin(mockMvc, "user", "password");
mockMvc
.perform(get(APP_BUILD_NAME + "/svc/data").contextPath(APP_BUILD_NAME)
.header("TOKEN", token)
)
.andExpect(status().is(200))
.andExpect(content().contentType("application/json;charset=UTF-8"))
.andExpect(jsonPath("$").isMap())
.andDo(document);
TestUtils.endTestLog(log, "testData");
}
As you can see, the code for the response fields is commented out since I haven't found any solution for it yet. I am working on that, but I would really appreciate your help. Thank you in advance.
Your JSON contains a huge number of different fields. There appear to be over 1,000 different entries in the map. Each of those entries is itself a map with a single key-value pair, and those keys all appear to vary as well. Potentially, that gives you over 2,000 fields to document:
cancel
cancel.56284
year
year.41685
segment_de_clientele
segment_de_clientele.120705
…
This structure makes it hard to document and is also a strong indicator that it will be hard for clients to consume. Ideally, you would restructure the JSON so that each entry has the same keys and only the values vary from entry to entry. Something like this, for example:
{
"translations": [ {
"name": "cancel",
"id": 56284,
"text": "Exit"
}, {
"name": "year",
"id": 41685,
"text": "Year"
}, {
"name": "segment_de_clientele",
"id": 120705,
"text": "Client segment"
}]
}
This would mean that you only have a handful of fields to document:
translations[]
translations[].name
translations[].id
translations[].text
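With that structure, the commented-out responseFields block in your test could become something along these lines, placed inside the document.snippets(...) call (the descriptions here are placeholders):
responseFields(
        fieldWithPath("translations[]").description("The list of translations"),
        fieldWithPath("translations[].name").description("The key identifying the translation"),
        fieldWithPath("translations[].id").description("The numeric id of the translation"),
        fieldWithPath("translations[].text").description("The translated text"))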
If that's not possible, then I'd stop trying to use the response fields snippet to document the structure of the response. Instead, you should describe its structure manually in your main Asciidoctor source file.
There are two options:
1> Changing the Map to a List of objects so that the response fields can be described easily.
2> Putting the description manually into the index.adoc file.
In my case, I go for option 2 because I have to stick to the Map.
I am trying to convert JSON to a POJO using Jackson for my REST API. However, I'm not allowed to alter the object classes. One of the services consumes:
{
"user": {
"name": "adds",
"key": "adds"
},
"role": {
"example1": "adds",
"example12": "adds"
}
}
However, the PUT method only accepts one object; therefore, I have combined this into a single object UserRole. Now I'm trying to deserialise this into a Java object.
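A minimal sketch of what the combined wrapper might look like, assuming the existing classes are named User and Role (those names, and the accessors, are illustrative):
public class UserRole {
    private User user;  // bound from the "user" element
    private Role role;  // bound from the "role" element

    public User getUser() { return user; }
    public void setUser(User user) { this.user = user; }
    public Role getRole() { return role; }
    public void setRole(Role role) { this.role = role; }
}

// Deserialising the combined payload (readValue throws JsonProcessingException):
ObjectMapper mapper = new ObjectMapper();
UserRole userRole = mapper.readValue(json, UserRole.class);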
How do I render embedded objects in Apigility? For example, if I have a 'user' object and it composes a 'country' object, should I be rendering the 'country' object as an embedded object? And how should I do this?
I am using the Zend\Stdlib\Hydrator\ArraySerializable. My getArrayCopy() method simply returns an array of properties that I want exposed. The array keys are the property names. The array values are the property values. In the case of user->country, the value is an object, not a scalar.
When I return the user object from UserResource->fetch(), here's how it is rendered:
{
"id": "1",
"firstName": "Joe",
"lastName": "Bloggs",
"status": "Active",
"email": "test#example.com",
"country": {
"code": "AU",
"name": "Australia"
},
"settings": "0",
"_links": {
"self": {
"href": "http://api.mydomain.local/users/1"
}
}
}
Note that 'country' is not in an _embedded field. If it is supposed to be in _embedded, I would have thought that Apigility would automatically do that (since it automatically adds the _links object).
As a related issue, how do I go about returning other rel links, such as back, forward, etc?
The easiest way to get Apigility to render embedded resources is when there is an API resource associated with the embedded object. What I mean for your example is that you'd have an API resource with a country entity. In that case, if your getArrayCopy returned the CountryEntity, Apigility would render it automatically as an embedded resource.
If your getArrayCopy is returning country as an array with code and name, you'll end up with what you saw.
For the other part, the rel links for first, last, prev and next will come from the fetchAll method when you return a Paginator. Your collection extends from this already, but it needs an adapter. The code could look something like this:
public function fetchAll($params)
{
// Return a \Zend\Db\Select object that will retrieve the
// stuff you want from the database
$select = $this->service->fetchAll($params);
$entityClass = $this->getEntityClass();
$entity = new $entityClass();
$hydrator = new \Zend\Stdlib\Hydrator\ArraySerializable();
$prototype = new \Zend\Db\ResultSet\HydratingResultSet($hydrator, $entity);
$paginator = new \Zend\Paginator\Adapter\DbSelect($select, $this->sql, $prototype);
$collectionClass = $this->getCollectionClass();
return new $collectionClass($paginator);
}
There are other paginator adapters as well - an ArrayAdapter which will take in an array of any size and then paginate it so you only get the desired number of results. The downside to this, if you use it with database results, is that you'll potentially retrieve and discard a lot of rows. The DbSelect paginator will modify the $select object to add the limit and order clauses automatically so you only retrieve the bits you need. There are also adapters for DbTableGateway, Iterators, and even callbacks. You can also implement your own, of course.
Hope this helps. If you have more specific needs or clarification, please comment and I'll do my best.
I posted this example on GitHub:
https://github.com/martins-smb/apigility-renderCollection-example
Hope this helps.