How to Add Custom JSON Schema Validators - json-schema-validator

We are using the light-4j (networknt) json-schema-validator in our project, version 1.0.36.
Is it possible to add custom validators, for example minimum and maximum checks on ISO 8601 times/durations? If so, how do you add custom validation?
We realize these would not be part of the JSON Schema standard, but we're just using the library to validate our configuration JSON internally.

This has been discussed in the project's issue tracker, and a number of users have added their own custom validators there. Let me know if you still have any questions.
https://github.com/networknt/json-schema-validator/issues/32
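As a sketch of what that looks like in code (written against a 1.0.x release; the exact hooks have shifted between versions, so treat the signatures as an approximation): you implement the Keyword interface, return a JsonValidator from it, and register the keyword on a custom meta-schema. The maxDuration keyword below is hypothetical, invented for the ISO 8601 duration case in the question.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.networknt.schema.*;

import java.text.MessageFormat;
import java.time.Duration;
import java.time.format.DateTimeParseException;
import java.util.Set;

// Hypothetical "maxDuration" keyword: both the schema value and the
// instance value are ISO 8601 duration strings such as "PT30S".
public class MaxDurationKeyword implements Keyword {

    @Override
    public String getValue() {
        return "maxDuration"; // keyword name as written in your schemas
    }

    @Override
    public JsonValidator newValidator(String schemaPath, JsonNode schemaNode,
                                      JsonSchema parentSchema,
                                      ValidationContext validationContext) {
        final Duration max = Duration.parse(schemaNode.asText());
        final ErrorMessageType error = CustomErrorMessageType.of("maxDuration",
                new MessageFormat("{0}: duration exceeds the maximum of " + max));

        return new AbstractJsonValidator(getValue()) {
            @Override
            public Set<ValidationMessage> validate(JsonNode node, JsonNode rootNode, String at) {
                try {
                    if (Duration.parse(node.asText()).compareTo(max) <= 0) {
                        return pass(); // within the limit
                    }
                } catch (DateTimeParseException e) {
                    // not a parseable ISO 8601 duration -> fall through to fail
                }
                return fail(error, at);
            }
        };
    }

    // Wiring: register the keyword on a custom meta-schema and build a
    // factory that understands it. The meta-schema URI is arbitrary.
    public static JsonSchemaFactory factory() {
        JsonMetaSchema metaSchema = JsonMetaSchema
                .builder("https://example.com/custom-meta-schema", JsonMetaSchema.getV4())
                .addKeyword(new MaxDurationKeyword())
                .build();
        return new JsonSchemaFactory.Builder()
                .defaultMetaSchemaURI(metaSchema.getUri())
                .addMetaSchema(metaSchema)
                .build();
    }
}
```

Schemas loaded through this factory can then put "maxDuration": "PT30S" on any string property that holds an ISO 8601 duration.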

Related

NestJS @ResolveField alternative in REST

In NestJS, when we use it in combination with GraphQL, we can use the @ResolveField decorator. Do we have an alternative when we use REST?
I have worked with GraphQL once and used that decorator; now I'm working with REST and want to know whether NestJS has something similar without GraphQL.
REST does not have the idea of optionally adding extra fields to the response just by adding a sub-field to the query; it is not a query language the way GraphQL is. You would need to either add a query parameter that lets the client say "fetch extra data", or return the extra data by default and let the client ignore it if it is not wanted.
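A framework-neutral sketch of that query-parameter approach, in plain Java since the idea is not NestJS-specific (UserEndpoint, loadName and loadPosts are hypothetical stand-ins for your own handler and service layer):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the "expand" query-parameter pattern described above.
class UserEndpoint {

    // Handles e.g. GET /users/42?expand=posts, where the framework has
    // already parsed "expand" into a set of requested extras.
    Map<String, Object> getUser(long id, Set<String> expand) {
        Map<String, Object> body = new HashMap<>();
        body.put("id", id);
        body.put("name", loadName(id));
        if (expand.contains("posts")) {
            // resolved only on request, mimicking a GraphQL field resolver
            body.put("posts", loadPosts(id));
        }
        return body;
    }

    private String loadName(long id) { return "user-" + id; }           // stub
    private List<String> loadPosts(long id) { return List.of("post"); } // stub
}
```

The expand values play the role a @ResolveField resolver plays in GraphQL: the expensive sub-resource is only computed when the client asks for it.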

Azure DevOps - extract test steps to Power BI with OData query

I need to extract a table with the test steps that correspond to each test case, from Azure DevOps to Power BI.
I was able to retrieve a list of tables that I can extract with OData, but none of them contains test steps. I'm attaching the metadata request and an extract of its results.
I've read that another possibility would be to use an API query, but I'm not sure which one.
Does anyone know a possible solution?
Thank you.
According to the note in this documentation,
You can’t add fields with a data type of Plain Text (long text) or HTML (rich-text). These fields aren’t available from Analytics for the purposes of reporting.
And the type of the Steps field is Text (multiple lines), so it cannot be extracted with OData.
You can instead use the Work Items REST API to get the details of a test case, which include the Steps field.
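For reference, a minimal sketch of that call using Java's built-in HTTP client, assuming a personal access token in the AZDO_PAT environment variable; "myorg", "myproject" and the work item id 123 are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Read one test case through the Work Items REST API.
public class FetchTestCaseSteps {
    public static void main(String[] args) throws Exception {
        String pat = System.getenv("AZDO_PAT");
        // Azure DevOps PATs go in as HTTP basic auth with an empty username.
        String basic = Base64.getEncoder()
                .encodeToString((":" + pat).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://dev.azure.com/myorg/myproject"
                        + "/_apis/wit/workitems/123?api-version=5.1"))
                .header("Authorization", "Basic " + basic)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

The steps come back as an XML string inside the fields map, under the Microsoft.VSTS.TCM.Steps reference name, so you still need to parse that XML before loading the data into Power BI.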

Distinguishing a multi-select field in Azure DevOps

We are using the _apis/wit/workitemtypes/{workitemtype}/fields?$expand=all&api-version=5.1 API to fetch all fields for a particular work item type, and then the _apis/wit/fields/{fieldreferencename}?api-version=5.1 API to fetch extra details about each field.
From the output we receive, we are able to distinguish number, text, and single-select fields.
However, multi-select fields have no attribute that identifies them as multi-select. Is there any other API for that? Another problem is that we are not able to distinguish custom fields from default fields.
Regarding identifying multi-select fields: I'm afraid this is not supported by the current REST API. For now the REST API offers no way to check whether a field is multi-select.
Regarding distinguishing custom fields from default fields: the API in your question (_apis/wit/fields/{fieldreferencename}?api-version=5.1) can handle this. For custom fields, the referenceName is always in the format Custom.FieldName, so to determine whether a field is a custom one you just need to check the format of its referenceName. Hope this helps, and if I misunderstood anything, feel free to correct me.
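In code the check is a one-liner; a sketch (the class name is arbitrary, and note the Custom. prefix applies to fields created in an inherited process):

```java
public final class FieldClassifier {

    // Custom fields created in an inherited process get reference names
    // of the form "Custom.FieldName"; built-in fields live under
    // "System." or "Microsoft.VSTS.".
    static boolean isCustomField(String referenceName) {
        return referenceName.startsWith("Custom.");
    }
}
```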

Conditional routing in Apache NiFi

I'm using NiFi to get data from an Oracle database and put some of this data in Kafka (using the PutKafka processor).
I only want to route data that matches a condition, for example if the attribute "id" contains "aaabb".
Is that possible in Apache NiFi? How can I do it?
This should definitely be possible; the flow might look something like this:
1) ExecuteSQL or QueryDatabaseTable to get the data from the database; these produce Avro
2) ConvertAvroToJSON to convert the Avro to JSON
3) EvaluateJsonPath to extract the id field into an attribute
4) RouteOnAttribute to route flow files where the id attribute contains "aaabb"
5) PutKafka to deliver any of the matching results from RouteOnAttribute
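As a concrete configuration for steps 3 and 4 (assuming the id sits at the top level of the JSON): configure EvaluateJsonPath with Destination set to flowfile-attribute and a dynamic property id = $.id, then in RouteOnAttribute set Routing Strategy to "Route to Property name" and add a dynamic property such as toKafka = ${id:contains('aaabb')}. Flow files matching the expression are transferred to the toKafka relationship, which you connect to PutKafka; the property name toKafka is arbitrary.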
To add on to Bryan's example flow, I wanted to point you to some great documentation that should help introduce you to Apache NiFi.
Firstly, I would suggest checking out the NiFi documentation. It is very good and should help a lot. In addition to providing details on each of the processors Bryan mentioned, it also has general documentation for every type of user.
For a basic introduction to build a NiFi flow check out this video.
For example templates, check out this repo. It has an Excel file at its root level with a description and a list of processors for each template.

Need pointers on how report generation can happen in CQ5

We have created a set of forms in CQ5, and we have a requirement that the content of these forms be stored at a specific node. Our forms also interact with third-party services and fetch some data from them; this is stored on the same nodes.
Now we have to give authors permission to download these reports, governed by ACLs. I also have to provide them with a start and end date, and upon selecting these dates the content placed in these nodes should be exportable in CSV format.
Can anybody guide me on how to achieve this functionality? I have gone through report generation but need better clarity on how this can be achieved: how will I be able to use the QueryBuilder API, how can I export, and how do I provide the dates in the UI?
This was achieved as described below.
I had to override the default report generation mechanism and create my own custom report, following the report generation tutorial in the CQ documentation.
Once the report templates and components were written, I also overrode the CQ report page component and provided the date inputs in body.jsp using the Granite date component.
Once users selected the dates, I used the QueryBuilder API to search for nodes at the path specified by the author (which can differ per form's data). I had also set an artificial resource type on the nodes where the data was stored; passing that resource type to QueryBuilder led me to the exact nodes holding the data. The JSON returned as the QueryBuilder response was then fed to a piece of JavaScript that converted the data to CSV.
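A sketch of the QueryBuilder part under those assumptions (the path, the marker resource type myapp/components/formdata, and the use of jcr:created for the date range are hypothetical; queryBuilder and session come from your Sling context):

```java
import java.util.HashMap;
import java.util.Map;
import javax.jcr.Session;
import com.day.cq.search.PredicateGroup;
import com.day.cq.search.Query;
import com.day.cq.search.QueryBuilder;
import com.day.cq.search.result.SearchResult;

// Find form-data nodes carrying a marker resource type under an
// author-chosen path, restricted to a date range.
public final class FormDataQuery {

    static SearchResult findFormData(QueryBuilder queryBuilder, Session session,
                                     String searchPath, String lower, String upper) {
        Map<String, String> predicates = new HashMap<>();
        predicates.put("path", searchPath);                            // e.g. /content/usergenerated/myforms
        predicates.put("property", "sling:resourceType");
        predicates.put("property.value", "myapp/components/formdata"); // the artificial resource type
        predicates.put("daterange.property", "jcr:created");
        predicates.put("daterange.lowerBound", lower);                 // start date from the UI
        predicates.put("daterange.upperBound", upper);                 // end date from the UI
        predicates.put("p.limit", "-1");                               // fetch all hits

        Query query = queryBuilder.createQuery(PredicateGroup.create(predicates), session);
        return query.getResult();
    }
}
```

The daterange predicate maps directly to the start and end dates collected in body.jsp, and p.limit=-1 lifts the default 10-hit cap so the exported CSV covers everything in range.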