Validation fails but error messages are missing - Manatee.Json

I'm attempting to validate a JSON file against a specific schema using this code:
// Load the instance document and the schema from disk
string data = File.ReadAllText("../../../testFiles/create.json");
string schemaText = File.ReadAllText("../../../schemas/request-payload.schema.json");

// Parse both and deserialize the schema
var serializer = new JsonSerializer();
var json = JsonValue.Parse(data);
var schema = serializer.Deserialize<JsonSchema>(JsonValue.Parse(schemaText));

// Validate the instance against the schema
var result = schema.Validate(json);
Assert.IsTrue(result.IsValid);
The assertion fails because result.IsValid is false (which is correct: there is an intentional error in my JSON), but there is no indication of where the error occurs.
My schema does have sub-schemas in the definitions section. Could that have anything to do with it? Do I need to set some property to see the error information?
Update: Added schema and test JSON
My original schema was several hundred lines long, but I pared it down to a subset which still has the problem. Here is the schema:
{
  "$schema": "https://json-schema.org/draft/2019-09/schema#",
  "$id": "request-payload.schema.json",
  "type": "object",
  "propertyNames": { "enum": ["template"] },
  "required": ["template"],
  "properties": {
    "isPrivate": { "type": "boolean" },
    "template": {
      "type": "string",
      "enum": ["TemplateA", "TemplateB"]
    }
  },
  "oneOf": [
    {
      "if": {
        "properties": { "template": { "const": "TemplateB" } }
      },
      "then": { "required": ["isPrivate"] }
    }
  ]
}
And here is a test JSON object:
{
"template": "TemplateA"
}
The above JSON validates fine. Switch the value to TemplateB and the JSON fails validation (because isPrivate is missing and it is required for TemplateB), but the result doesn't contain any information about why it failed.
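For reference, the failing instance is simply:
{
  "template": "TemplateB"
}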
The code I'm using to run the validation test is listed above.

The issue is likely that you haven't set the output format. The default format is flag, which means that you'll only get a true/false indication of whether the value passed.
To get more details, you'll need to use a different format setting. You can do this via the schema options.
For example:
JsonSchemaOptions.OutputFormat = SchemaValidationOutputFormat.Detailed;
The available options are on the SchemaValidationOutputFormat enum.
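If you then want to report each failure, a minimal sketch could look like the following. The members used below (NestedResults, ErrorMessage, InstanceLocation) are assumptions based on Manatee.Json's SchemaValidationResults type; verify them against the version you're using.
// Request detailed output before validating (sketch; member names are
// assumptions, check them against your Manatee.Json version).
JsonSchemaOptions.OutputFormat = SchemaValidationOutputFormat.Detailed;
var result = schema.Validate(json);

// Walk the nested results recursively and print each failure's location.
void PrintErrors(SchemaValidationResults r)
{
    if (!r.IsValid && r.ErrorMessage != null)
        Console.WriteLine($"{r.InstanceLocation}: {r.ErrorMessage}");
    foreach (var nested in r.NestedResults)
        PrintErrors(nested);
}

PrintErrors(result);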

Related

Kafka Connect JSON Schema does not appear to support "$ref" tags

I am using Kafka Connect with JSON Schema and am in a situation where I need to convert the JSON schema manually (to "Schema") within a Kafka Connect plugin. I can successfully retrieve the JSON schema from the Schema Registry, and conversion works for simple schemas, but I am having difficulties with complex ones that contain valid "$ref" tags referencing components within a single JSON schema definition.
I have several questions:
The JsonConverter.java does not appear to handle "$ref". Am I correct, or does it handle it in another way elsewhere?
Does the Schema Registry handle the referencing of sub-definitions? If yes, is there code that shows how the dereferencing is handled?
Should the JSON schema be resolved to a string without references (i.e., inline the references) before submitting to the Schema Registry, thereby removing the "$ref" issue?
I am looking at the Kafka Source code module JsonConverter.java below:
https://github.com/apache/kafka/blob/trunk/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L428
An example of the complex schema (taken from the JSON Schema site) is shown below (notice the "$ref": "#/$defs/veggie" tag that references a later sub-definition):
{
  "$id": "https://example.com/arrays.schema.json",
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "description": "A representation of a person, company, organization, or place",
  "title": "complex-schema",
  "type": "object",
  "properties": {
    "fruits": {
      "type": "array",
      "items": {
        "type": "string"
      }
    },
    "vegetables": {
      "type": "array",
      "items": { "$ref": "#/$defs/veggie" }
    }
  },
  "$defs": {
    "veggie": {
      "type": "object",
      "required": ["veggieName", "veggieLike"],
      "properties": {
        "veggieName": {
          "type": "string",
          "description": "The name of the vegetable."
        },
        "veggieLike": {
          "type": "boolean",
          "description": "Do I like this vegetable?"
        }
      }
    }
  }
}
Below is the actual schema returned from the Schema Registry after the schema was successfully registered:
[
  {
    "subject": "complex-schema",
    "version": 1,
    "id": 1,
    "schemaType": "JSON",
    "schema": "{\"$id\":\"https://example.com/arrays.schema.json\",\"$schema\":\"https://json-schema.org/draft/2020-12/schema\",\"description\":\"A representation of a person, company, organization, or place\",\"title\":\"complex-schema\",\"type\":\"object\",\"properties\":{\"fruits\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"vegetables\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/$defs/veggie\"}}},\"$defs\":{\"veggie\":{\"type\":\"object\",\"required\":[\"veggieName\",\"veggieLike\"],\"properties\":{\"veggieName\":{\"type\":\"string\",\"description\":\"The name of the vegetable.\"},\"veggieLike\":{\"type\":\"boolean\",\"description\":\"Do I like this vegetable?\"}}}}}"
  }
]
The actual schema is embedded in the above returned string (the contents of the "schema" field) and contains the $ref references:
{\"$id\":\"https://example.com/arrays.schema.json\",\"$schema\":\"https://json-schema.org/draft/2020-12/schema\",\"description\":\"A representation of a person, company, organization, or place\",\"title\":\"complex-schema\",\"type\":\"object\",\"properties\":{\"fruits\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"vegetables\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/$defs/veggie\"}}},\"$defs\":{\"veggie\":{\"type\":\"object\",\"required\":[\"veggieName\",\"veggieLike\"],\"properties\":{\"veggieName\":{\"type\":\"string\",\"description\":\"The name of the vegetable.\"},\"veggieLike\":{\"type\":\"boolean\",\"description\":\"Do I like this vegetable?\"}}}}}
The JsonConverter in the Apache Kafka source code has no notion of JSON Schema, so no: "$ref" isn't handled, and the converter doesn't integrate with the Schema Registry either.
What you're looking for is Confluent's io.confluent.connect.json.JsonSchemaConverter class and its associated logic.
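For example, a minimal connector configuration using that converter might look like this (the Schema Registry URL is a placeholder for your environment):
# Sketch: route values through Confluent's JSON Schema converter, which
# understands the Schema Registry and resolves registered schemas for you.
value.converter=io.confluent.connect.json.JsonSchemaConverter
value.converter.schema.registry.url=http://localhost:8081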

How do I use an extended definition and not allow additional properties in a way that is compatible with multiple validators (JSON schema draft 7)?

I am creating a strict validator for a complex JSON file and want to re-use various definitions in order to keep the schema manageable and easier to update.
According to the documentation it is necessary to use allOf to extend a definition to add more properties. This is exactly what I've done, but I find that, without additionalProperties set to false, validation doesn't prevent arbitrary other properties from being added.
The following massively cut-down schema demonstrates what I'm doing:
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://example.com/schema/2021/02/example.json",
  "description": "This schema demonstrates how VSCode's JSON schema mechanism fails with allOf used to extend a definition",
  "definitions": {
    "valueProvider": {
      "type": "object",
      "properties": {
        "example": {
          "type": "string"
        },
        "alternative": {
          "type": "string"
        }
      },
      "oneOf": [
        {
          "required": [
            "example"
          ]
        },
        {
          "required": [
            "alternative"
          ]
        }
      ]
    },
    "selector": {
      "type": "object",
      "allOf": [
        {
          "$ref": "#/definitions/valueProvider"
        },
        {
          "required": [
            "operator",
            "value"
          ],
          "properties": {
            "operator": {
              "type": "string",
              "enum": [
                "IsNull",
                "Equals",
                "NotEquals",
                "Greater",
                "GreaterOrEquals",
                "Less",
                "LessOrEquals"
              ]
            },
            "value": {
              "type": "string"
            }
          }
        }
      ],
      "additionalProperties": false
    }
  },
  "properties": {
    "show": {
      "properties": {
        "name": {
          "type": "string"
        },
        "selector": {
          "description": "This property does not function correctly in VSCode",
          "allOf": [
            {
              "$ref": "#/definitions/selector"
            },
            {
              "additionalProperties": false
            }
          ]
        }
      },
      "additionalProperties": false
    }
  }
}
This works a treat in IntelliJ IDEA's JSON editor (2020.3.2 Ultimate Edition) when editing JSON against this schema (using a schema mapping). For example, the file ex-fail.json's content of:
{
  "show": {
    "name": "a",
    "selector": {
      "example": "a",
      "operator": "IsNull",
      "value": "false",
      "d": "a"
    }
  }
}
is correctly validated, with only "d" highlighted as not allowed.
However, when I use the very same schema and JSON file with VSCode (1.53.2) with a vanilla configuration (except for a schema mapping), VSCode erroneously marks "example", "operator", "value", and "d" as not allowed.
If I remove the additionalProperties definition from the show.selector property, both IDEA and VSCode indicate that all is well, including allowing the "d" property. In doing this I can simplify that property definition to:
"selector": {
"description": "This property does not function correctly in VSCode",
"$ref": "#/definitions/selector"
}
What can I do to the schema to support both IDEA and VSCode whilst disallowing additional properties where they should not appear?
PS: The schema mapping in VSCode is simply along the lines of:
{
  "json.schemas": [
    {
      "fileMatch": [
        "*/config/ex-*.json"
      ],
      "url": "file:///C:/my/path/to/example-schema.json"
    }
  ]
}
You cannot do what you ask with JSON Schema draft-07 or prior.
The reason is, when $ref is used in a schema object, all other properties MUST be ignored.
An object schema with a "$ref" property MUST be interpreted as a "$ref" reference. The value of the "$ref" property MUST be a URI Reference. Resolved against the current URI base, it identifies the URI of a schema to use. All other properties in a "$ref" object MUST be ignored.
https://datatracker.ietf.org/doc/html/draft-handrews-json-schema-01#section-8.3
We changed this to not be the case for draft 2019-09.
It sounds like VSCode is merging the properties in applicators upwards to the nearest schema object (which is wrong), and IntelliJ IDEA is doing something similar but in a different way (which is also wrong).
The correct validation result for your schema and instance is VALID. See the live demo here: https://jsonschema.dev/s/C6ent
additionalProperties relies on the values of properties and patternProperties within the SAME schema object. It cannot "see through" applicators such as $ref and allOf.
For draft 2019-09, we added unevaluatedProperties, which CAN "see through" applicator keywords (although it's a little more complex than that).
Update:
After reviewing your update, sadly the same is still true.
One approach makes it sort of possible but involves some duplication, and only works when you control the schemas you are referencing.
You would need to redefine your selector property like this...
"selector": {
"description": "This property did not function correctly in VSCode",
"allOf": [
{
"$ref": "#/definitions/selector"
},
{
"properties": {
"operator": true,
"value": true,
"example": true,
"alternative": true
},
"additionalProperties": false
}
]
}
The values in a properties object are schemas, and booleans are valid schemas. You don't need (or want) to repeat their validation here; you only need to say which properties are allowed, followed by additionalProperties: false.
You'll also need to remove the additionalProperties: false from your definition of selector, as that is preventing ALL properties (which I now guess is why you saw that issue in one of the editors).
It involves some duplication, but it's the only way I'm aware of that you can do this for draft-07 or earlier. As I said, it's not a problem for draft 2019-09 or above, thanks to the new keywords.
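For illustration, a minimal sketch of the draft 2019-09 equivalent (hypothetical; it reuses the selector definition from the schema above, with the inner additionalProperties: false removed as discussed):
"selector": {
  "allOf": [
    { "$ref": "#/definitions/selector" }
  ],
  "unevaluatedProperties": false
}
Because unevaluatedProperties can see the properties evaluated inside $ref and allOf, no duplication of property names is needed.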
additionalProperties is problematic because it depends on the properties and patternProperties within the same schema object. The result is that "additionalProperties": false effectively blocks schema composition. @Relequestual showed one alternative approach; here is another that is a little less verbose but still requires duplication of property names.
draft-06 and up
{
  "allOf": [{ "$ref": "#/definitions/base" }],
  "properties": {
    "bar": { "type": "number" }
  },
  "propertyNames": { "enum": ["foo", "bar"] },
  "definitions": {
    "base": {
      "properties": {
        "foo": { "type": "string" }
      }
    }
  }
}
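For example, against this schema the instance { "foo": "x", "bar": 1 } is valid, while { "foo": "x", "baz": 1 } fails, because "baz" is not in the propertyNames enum.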

Cannot parametrize any value under placement.managedCluster.config

My goal is to create a Dataproc workflow template from Python code. I also want the ability to parametrize the placement.managedCluster.config.gceClusterConfig.subnetworkUri field during template instantiation.
I read the template from a JSON file like this:
{
  "id": "bigquery-extractor",
  "placement": {
    "managed_cluster": {
      "config": {
        "gce_cluster_config": {
          "subnetwork_uri": "some-subnet-name"
        },
        "software_config": {
          "image_version": "1.5"
        }
      },
      "cluster_name": "some-name"
    }
  },
  "jobs": [
    {
      "pyspark_job": {
        "args": [
          "job_argument"
        ],
        "main_python_file_uri": "gs:///path-to-file"
      },
      "step_id": "extract"
    }
  ],
  "parameters": [
    {
      "name": "CLUSTER_NAME",
      "fields": [
        "placement.managedCluster.clusterName"
      ]
    },
    {
      "name": "SUBNETWORK_URI",
      "fields": [
        "placement.managedCluster.config.gceClusterConfig.subnetworkUri"
      ]
    },
    {
      "name": "MAIN_PY_FILE",
      "fields": [
        "jobs['extract'].pysparkJob.mainPythonFileUri"
      ]
    },
    {
      "name": "JOB_ARGUMENT",
      "fields": [
        "jobs['extract'].pysparkJob.args[0]"
      ]
    }
  ]
}
The code snippet I use:
import json

from google.api_core.client_options import ClientOptions
from google.api_core.exceptions import AlreadyExists
from google.cloud import dataproc_v1 as dataproc

options = ClientOptions(api_endpoint="{}-dataproc.googleapis.com:443".format(region))
client = dataproc.WorkflowTemplateServiceClient(client_options=options)

# Read the template definition (json.load is safer than eval for a JSON file)
with open(path_to_file, "r") as template_file:
    template_dict = json.load(template_file)
print(template_dict)

template = dataproc.WorkflowTemplate(template_dict)
full_region_id = "projects/{project_id}/regions/{region}".format(project_id=project_id, region=region)

try:
    client.create_workflow_template(parent=full_region_id, template=template)
except AlreadyExists as err:
    print(err)
When I try to run this code I get the following error:
google.api_core.exceptions.InvalidArgument: 400 Invalid field path placement.managed_cluster.configuration.gce_cluster_config.subnetwork_uri: Field gce_cluster_config does not exist.
The behavior is the same if I try to parametrize placement.managedCluster.config.softwareConfig.imageVersion; I then get:
google.api_core.exceptions.InvalidArgument: 400 Invalid field path placement.managed_cluster.configuration.software_config.image_version: Field software_config does not exist.
But if I exclude every field under placement.managedCluster.config from the parameters map, the template is created successfully.
I didn't find any documented restriction on parametrizing these fields. Is there one? Or am I just doing something wrong?
This doc lists the parameterizable fields. It appears that, within managedCluster, only managedCluster.name is parameterizable:
Managed cluster name. Dataproc will use the user-supplied name as the name prefix, and append random characters to create a unique cluster name. The cluster is deleted at the end of the workflow.
I don't see managedCluster.config listed as parameterizable.
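In other words, keeping only the parameters that target fields outside managedCluster.config (which your own test showed works) should let the template register, e.g.:
"parameters": [
  {
    "name": "CLUSTER_NAME",
    "fields": ["placement.managedCluster.clusterName"]
  },
  {
    "name": "MAIN_PY_FILE",
    "fields": ["jobs['extract'].pysparkJob.mainPythonFileUri"]
  }
]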

In Katalon Studio, how can I check JSON metadata?

I am configuring Katalon Studio for Web API testing. I want to test the metadata (schema) of the JSON received from the API. How can I do that? Please suggest.
You can check whether you receive the expected JSON by extending the code in the "Validation" tab of your REST method. The statement that allows you to do this is the following:
WS.verifyElementPropertyValue(response, propertyName, expectedValue)
So, if you expect a JSON like this: { "id": 1, "name": "Joe" }, you can use the following code:
WS.verifyElementPropertyValue(response, 'id', 1)
WS.verifyElementPropertyValue(response, 'name', 'Joe')
Katalon Studio offers another option too:
boolean successful = WS.validateJsonAgainstSchema(response, schema)
This way you can check against a previously defined schema. Remember the schema must be a String. Katalon Studio autogenerates the following example:
String jsonPass =
"""
{
  "\$id": "https://example.com/person.schema.json",
  "\$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Person",
  "type": "object",
  "properties": {
    "firstName": {
      "type": "string",
      "description": "The person's first name."
    },
    "lastName": {
      "type": "string",
      "description": "The person's last name."
    },
    "age": {
      "description": "Age in years which must be equal to or greater than zero.",
      "type": "integer",
      "minimum": 0
    }
  }
}
"""
boolean successful = WS.validateJsonAgainstSchema(response, jsonPass)
If you need more information about this (or a better explanation), check the Katalon documentation:
ws-send-request-and-verify
ws-validate-json-string-against-a-schema
Hope it helps you!

Using ADF Copy Activity with dynamic schema mapping

I'm trying to drive the columnMapping property from a database configuration table. My first activity in the pipeline pulls in the rows from the config table. My copy activity source is a JSON file in Azure blob storage and my sink is an Azure SQL database.
In the copy activity I'm setting the mapping using the dynamic content window. The code looks like this:
"translator": {
"value": "#json(activity('Lookup1').output.value[0].ColumnMapping)",
"type": "Expression"
}
My question is, what should the value of activity('Lookup1').output.value[0].ColumnMapping look like?
I've tried several different json formats but the copy activity always seems to ignore it.
For example, I've tried:
{
  "type": "TabularTranslator",
  "columnMappings": {
    "view.url": "url"
  }
}
and:
"columnMappings": {
"view.url": "url"
}
and:
{
  "view.url": "url"
}
In this example, view.url is the name of the column in the JSON source, and url is the name of the column in my destination table in Azure SQL database.
The issue is due to the dot (.) in your column name.
To use column mapping, you should also specify the structure in your source and sink datasets.
For your source dataset, you need to specify the format correctly, and since your column name contains a dot, you need to specify the JSON path for that nested field.
You could use the ADF UI to set up a copy for a single file first to get the related format, structure, and column mapping, then change it over to the lookup.
As I understand it, your first format should be the right one. If the value is already JSON, then you may not need the json() function in your expression.
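As a sketch of what that could look like on a legacy JsonFormat source dataset (the exact property layout here is an assumption; check it against the JSON the ADF UI generates for you):
"format": {
  "type": "JsonFormat",
  "jsonPathDefinition": {
    "view.url": "$.view.url"
  }
}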
There seems to be a disconnect between the question and the answer, so I'll hopefully provide a more straightforward answer.
When setting this up, you should have a source dataset with dynamic mapping. The sink doesn't require one, as we're going to specify it in the mapping.
Within the copy activity, format the dynamic json like the following:
{
  "structure": [
    {
      "name": "Address Number"
    },
    {
      "name": "Payment ID"
    },
    {
      "name": "Document Number"
    },
    ...
    ...
  ]
}
You would then specify your dynamic mapping like this:
{
  "translator": {
    "type": "TabularTranslator",
    "mappings": [
      {
        "source": {
          "name": "Address Number",
          "type": "Int32"
        },
        "sink": {
          "name": "address_number"
        }
      },
      {
        "source": {
          "name": "Payment ID",
          "type": "Int64"
        },
        "sink": {
          "name": "payment_id"
        }
      },
      {
        "source": {
          "name": "Document Number",
          "type": "Int32"
        },
        "sink": {
          "name": "document_number"
        }
      },
      ...
      ...
    ]
  }
}
Assuming these were set in separate variables, you would want to send the source as a string, and the mapping as json:
source: @string(json(variables('str_dyn_structure')).structure)
mapping: @json(variables('str_dyn_translator')).translator
VladDrak - You could skip the source dynamic definition by building the dynamic mapping like this:
{
  "translator": {
    "type": "TabularTranslator",
    "mappings": [
      {
        "source": {
          "type": "String",
          "ordinal": "1"
        },
        "sink": {
          "name": "dateOfActivity",
          "type": "String"
        }
      },
      {
        "source": {
          "type": "String",
          "ordinal": "2"
        },
        "sink": {
          "name": "CampaignID",
          "type": "String"
        }
      }
    ]
  }
}