Data from nested DTOs in NestJS are not fetched and not visible

I'm new to NestJS.
I'd like to build a DTO with a nested DTO (an array). The issue is that the array's data (from ParentsArray) is not fetched; I don't see it while debugging. Any help is appreciated!
In Swagger the structure is as it should be, but while debugging, the data entered in Swagger into the nested array object is not visible.
What should I do to have access to this data?
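For reference, a minimal sketch of how nested DTOs are usually declared with class-validator/class-transformer so the nested payload is actually instantiated (ParentDto, ChildDto, and the property names are hypothetical, not taken from the question). Without @Type() and a ValidationPipe with transform enabled, the nested array arrives as plain, untyped objects:

import { Type } from 'class-transformer';
import { IsString, ValidateNested } from 'class-validator';

class ChildDto {
  @IsString()
  name: string;
}

class ParentDto {
  @IsString()
  title: string;

  // Validate each element and tell class-transformer which class to instantiate
  @ValidateNested({ each: true })
  @Type(() => ChildDto)
  parentsArray: ChildDto[];
}

// In main.ts: enable transformation so the request body becomes DTO instances
// app.useGlobalPipes(new ValidationPipe({ transform: true }));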

Related

Projecting multiple fields to a POJO

Is there a way in hibernate-search 6 to project multiple fields and map them directly to a POJO, or should I handle it myself? I'm not sure that I understand the composite method described in the documentation. For example, I can do something like this:
SearchResult<List<?>> result = searchSession.search(indicies)
        .select(f -> f.composite(
                f.field("field1"), f.field("field2"),
                f.field("field3"), f.field("field4")))
        .where(SearchPredicateFactory::matchAll)
        .fetch(20);
I can then manually map the returned list of fields to a POJO. But is there a fancier way to do that without having to loop through the list of fields and set them on the POJO instance manually?
At the moment, projecting to a POJO is only possible for fairly simple POJOs with up to three fields, using the syntax shown in the documentation. For more fields than that, you have to go through a List.
If you're using the Elasticsearch backend, you can theoretically retrieve the document as a JsonObject and then use Gson to map it to a POJO.
There are plans to offer fancier solutions, but we're not there yet.
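For illustration, a hedged sketch of the documented up-to-three-fields form (MyPojo, its three-argument constructor, and the field types are assumptions, not taken from the original answer):

// Sketch only: assumes MyPojo(String, String, String) exists and that the
// fields are stored as projectable Strings in the index.
SearchResult<MyPojo> result = searchSession.search(indicies)
        .select(f -> f.composite(
                MyPojo::new,
                f.field("field1", String.class),
                f.field("field2", String.class),
                f.field("field3", String.class)))
        .where(f -> f.matchAll())
        .fetch(20);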

Array of objects from REST in a GraphQL

I'm trying to wrap a REST API with GraphQL, but I'm not able to put the API response (an array of objects) into a GraphQL type. Can anyone help me with how to put an array of objects in a GraphQL type and then resolve it?
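A minimal sketch of the usual pattern (the Item type, its fields, and the REST endpoint are hypothetical): declare a list type in the schema and return the fetched array from the resolver.

// graphql-js / Apollo-style SDL: [Item] is a list of Item objects
const typeDefs = `
  type Item {
    id: ID
    name: String
  }
  type Query {
    items: [Item]
  }
`;

const resolvers = {
  Query: {
    // Return the array from the REST call; GraphQL maps each element to Item
    items: () =>
      fetch('https://example.com/api/items').then(res => res.json()),
  },
};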

What benefits do I get from using ODataModel vs. JSONModel?

I'm reading data from HANA using a JSONModel and simply passing in the URL to the source and retrieving it as in the following:
var data = new sap.ui.model.json.JSONModel(urlPath);
Then I can bind it to my view: this.getView().setModel(data);
I have also seen the following way of doing it, where an ODataModel is created and then the JSONModel is created from the data.
var oModel = new sap.ui.model.odata.ODataModel(urlPath);
var oModelJson = new sap.ui.model.json.JSONModel();
oModel.read("/Items",
    null,
    ["$filter=ImItems eq 'imputParameter'"],
    null,
    function(oData, oResponse) {
        oModelJson.setData(oData);
    },
    null
);
What difference is there between creating the ODataModel first and creating the JSONModel directly? Assuming I'm getting back about 5,000 data points from the database, which approach should I use, or would there be no difference?
JSONModel is a client-side model used to get and set data on the view in JSON format.
ODataModel is a model implementation for OData protocol.
This allows CRUD operations on the OData entities. JSONModel doesn't support Create/Update/Delete/Batch operations.
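For example, a hedged sketch of the write operations the OData model offers (V2 API; the entity set, payload, and handlers are hypothetical):

// Create, update, and delete go straight to the OData service
oModel.create("/Items", {Name: "New item"}, {
    success: function() { /* created */ },
    error: function(oError) { /* handle failure */ }
});
oModel.update("/Items('42')", {Name: "Renamed"});
oModel.remove("/Items('42')");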
So, coming to your scenario, I would suggest always using the ODataModel for CRUD operations (including read), and then using a JSONModel to bind the data to the view.
Note that it's better to have one ODataModel per app and multiple JSONModels bound to views.
Consider using ODataModel V2, and since you have mentioned that you are dealing with 5K data points, you probably don't need all of that data in the UI at once. Use setSizeLimit to make sure you have set a proper upper bound.
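For instance, a small sketch (the service URL and the limit are placeholders):

// Assuming a V2 ODataModel; the default size limit for list bindings is 100
var oModel = new sap.ui.model.odata.v2.ODataModel("/my/odata/service/");
oModel.setSizeLimit(500); // upper bound on entities fetched per list binding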
Both models can be used without conflict. In fact, most applications will use both.
You want to use the OData model to send/retrieve data from the server. The OData model will handle building the URLs for you. For instance, if you want to filter, sort, or use paging in your data without the OData model, you will need to build the URL yourself.
yourUrl.com/EntitySet?$filter=Property1 eq 'Value'&$orderby=...&$top=... etc.
This will be difficult without the OData model, and makes the application more difficult to maintain and debug. Let the OData model do that for you:
oDataModel.read("/EntitySet", {
    filters: [new Filter("Property1", "EQ", "Value")]
});
The greatest benefit of the OData model in my opinion, though, is binding directly from the XML views.
<List items="{/EntitySet}">
    <items>
        <StandardListItem title="{objectTitle}"/>
    </items>
</List>
This will automatically call the backend, retrieve the data from the entity set, and bind it to the list. No need to construct any URLs, make any calls, etc.
Using a JSON model to retrieve the data from an OData service will only make things more difficult than they have to be.
But... that being said... the JSON model is a very powerful tool. You can use it to store configuration data or any data you want to hold in the UI and manipulate. You can use the JSON model as sort of a mini-database in your application that can pass data globally across your application.
To summarize, you should use the OData model to get/send data. You should use the JSON model for local data storage. There will not be conflicts trying to use both.
One major difference between both of them is:
A lot of controls in SAPUI5, for instance SmartTable, bind automatically to the OData entities, meaning the control dynamically creates the columns and the tuples based on the OData metadata XML file. In this scenario, you cannot use a JSONModel.
IMHO, I would go with OData because of this "automatic binding" that a lot of SAPUI5 components have. But I also ran into scenarios where the OData entities were not structured well, meaning the "automatic binding" of some SAP UI components did not work as expected.
In those scenarios, I had to get the JSON out of the OData model, create/destroy a few properties, and then bind to the mentioned SAP UI component.

Spring Data partial upsert not persisting type information

I am using Spring Data with MongoDB to store very dynamic config data in a toolkit. These Config objects consist of a few organizational fields, along with a data field of type Object. On some instances of Config, the data object refers to a more deeply nested subdocument (such as "data.foo.bar" within the database – this field name is set by getDataField() below). These Config objects are manipulated as they're sent to the database, so the storage code looks something like this:
MongoTemplate template; // This is autowired into the class.
Query query;            // This is the same query which (successfully) finds the object.
Config myConfig;        // The config to create or update in Mongo.

Update update = new Update()
        .set(getDataField(), myConfig.getData())
        .set(UPDATE_TIME_FIELD, new Date())
        .setOnInsert(CREATE_TIME_FIELD, new Date())
        .setOnInsert(NAME_FIELD, myConfig.getName());
template.upsert(query, update, Config.class);
Spring recursively converts the data object into a DBObject correctly, but neither the data document nor any of its subdocuments have "_class" fields in the database. Consequently, they do not deserialize correctly.
These issues seem quite similar to those previously reported in DATAMONGO-392, DATAMONGO-407, and DATAMONGO-724. Those, however, have all been fixed. (I am using spring-data-mongodb 1.4.2.RELEASE.)
Am I doing something incorrectly? Is there a possibility that this is a Spring issue?
I came across a similar issue. One solution is to write your own Converter for Config.class.
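A hedged sketch of that approach (MyPayload is a hypothetical concrete type stored in Config.data; the field copying and registration details depend on your spring-data-mongodb version):

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;

// Writes the payload together with an explicit _class entry so that the
// type information survives the partial upsert.
@WritingConverter
public class MyPayloadWritingConverter implements Converter<MyPayload, DBObject> {
    @Override
    public DBObject convert(MyPayload source) {
        DBObject dbo = new BasicDBObject();
        dbo.put("_class", MyPayload.class.getName()); // persist type info explicitly
        dbo.put("value", source.getValue());          // copy the payload fields
        return dbo;
    }
}

// Registration sketch (in your Mongo configuration), so the
// MappingMongoConverter picks the converter up:
// new CustomConversions(Arrays.asList(new MyPayloadWritingConverter()))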

Backbone Project Approach

I would like to make an application with backbone.js. I understand the basics of Backbone; however, I don't really know what the right approach to my problem might be.
I have a big JSONP file that is being retrieved from the server. So the next step would be to put the data from the JSONP file into a model. The data is blog-like, containing an imgurl/title/text.
Now I could start a new model like this:
var modelVar = new BackboneModel();
However, would that mean that I need to create a new variable for every post I want to retrieve, or could I let Backbone create a set of models containing the post data?
Any suggestions, books, or blogs are welcome.
Thanks
A quick answer could be "no": you can let Backbone load data into models using a Backbone Collection.
E.g.
new App.Photos([
    {url: "http://(...)_1.png", title: "photo1"},
    {url: "http://(...)_2.png", title: "photo2"},
    {url: "http://(...)_3.png", title: "photo3"}
]);
You just have to pass an array of objects as an argument when you create your collection. Backbone will automatically create models based on the model attribute defined in the collection object, as in the sketch below. It's particularly suited to your needs because you can pass in the parsed JSON response directly and your models will be created.
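A minimal sketch of the collection/model pair assumed above (App.Photo and App.Photos are hypothetical names from the answer's example):

var App = {};
App.Photo = Backbone.Model.extend({});
App.Photos = Backbone.Collection.extend({
    model: App.Photo // each object in the array becomes an App.Photo
});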
I suggest Backbone.Marionette, which is a good choice for starting with Backbone in order to pick up best practices.
https://github.com/derickbailey/backbone.marionette