How is the value NULL? - azure-devops

I am getting query results that determine whether a user story hasn't been changed (changedate) in the last day.
I'm following this article to build the logic app, as the intention is similar.
For some reason, despite the query returning a valid response (at least one user story result), the foreach expression is throwing this error:
ExpressionEvaluationFailed. The execution of template action 'For_each' failed: the result of the evaluation of 'foreach' expression '@body('Parse_JSON')?['body']?['value']' is of type 'Null'. The result must be a valid array.
How is it NULL when clearly there is a user story returned?
Get query results:
OUTPUTS:
[
{
"System.Id": 12345,
"System.WorkItemType": "User Story",
"System.State": "New",
"System.Title": "Experiment"
}
]
Parse JSON:
Inputs:
Content:
{
"value": [
{
"System.Id": 12345,
"System.WorkItemType": "User Story",
"System.State": "New",
"System.Title": "Experiment"
}
],
"#odata.nextLink": null
}
Schema:
{
"type": "object",
"properties": {
"body": {
"type": "object",
"properties": {
"value": {
"type": "array",
"items": {
"type": "object",
"properties": {
"System.AssignedTo": {
"type": "string"
},
"System.Id": {
"type": "integer"
},
"System.State": {
"type": "string"
},
"System.Tags": {
"type": "string"
},
"System.Title": {
"type": "string"
},
"System.WorkItemType": {
"type": "string"
}
},
"required": [
"System.Id",
"System.WorkItemType",
"System.State",
"System.AssignedTo",
"System.Title"
]
}
},
"#odata.nextLink": {}
}
},
"headers": {
"type": "object",
"properties": {
"Cache-Control": {
"type": "string"
},
"Content-Length": {
"type": "string"
},
"Content-Type": {
"type": "string"
},
"Date": {
"type": "string"
},
"Expires": {
"type": "string"
},
"Pragma": {
"type": "string"
},
"Set-Cookie": {
"type": "string"
},
"Strict-Transport-Security": {
"type": "string"
},
"Timing-Allow-Origin": {
"type": "string"
},
"Transfer-Encoding": {
"type": "string"
},
"Vary": {
"type": "string"
},
"X-Content-Type-Options": {
"type": "string"
},
"X-Frame-Options": {
"type": "string"
},
"x-ms-apihub-cached-response": {
"type": "string"
},
"x-ms-apihub-obo": {
"type": "string"
},
"x-ms-request-id": {
"type": "string"
}
}
},
"statusCode": {
"type": "integer"
}
}
}
Outputs:
{
"value": [
{
"System.Id": 12345,
"System.WorkItemType": "User Story",
"System.State": "New",
"System.Title": "Experiment"
}
],
"#odata.nextLink": null
}

Using the Value from the Get Query Results directly works.
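For comparison, a minimal sketch of the foreach source in each case (the 'Parse_JSON' name is taken from the error above; the internal name of the query action is assumed to be 'Get_query_results'):
@body('Parse_JSON')?['body']?['value'] returns Null, because the Parse JSON output shown above has value at the top level and no body wrapper to descend into.
@body('Parse_JSON')?['value'] matches the Parse JSON outputs shown above.
@body('Get_query_results')?['value'] uses the value from Get query results directly, as noted.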

Related

Open API 3.0 parameter dependencies: required parameters if type is "one of" (with shared parameters)

I'm creating an openapi.json (version 3.0.3) schema and I'm modelling a POST request. The body can look like this:
{
type: "A",
aParam: "string",
sharedParam1: "string",
sharedParam2: "integer",
sharedParam3: "string"
}
where type is one of A or B. If the type is A, the parameter aParam is required; if the type is B, aParam must be left out. Basically, the other way the request can look is:
{
type: "B",
sharedParam1: "string",
sharedParam2: "integer",
sharedParam3: "string"
}
How can I model this?
Here is what I tried:
{
"requestBody": {
"content": {
"application/json": {
"schema": {
"oneOf": [
{
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["A"]
},
"aParam": {
"type": "string"
},
"sharedParam1": {
"type": "string"
},
"sharedParam2": {
"type": "string"
},
"sharedParam3": {
"type": "string"
}
}
},
{
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": ["B"]
},
"sharedParam1": {
"type": "string"
},
"sharedParam2": {
"type": "string"
},
"sharedParam3": {
"type": "string"
}
}
}
]
}
}
}
}
}
Basically, I "overloaded" the request body by using oneOf but that has a lot of duplication.
You may extract the shared properties to a base schema. It won't make the definition much less verbose, but it will at least remove the duplicated property definitions and make them more maintainable:
"components": {
"schemas": {
"baseRequestBody": {
"type": "object",
"required": [
"type",
"sharedParam1",
"sharedParam2",
"sharedParam3"
],
"properties": {
"type": {
"type": "string",
"enum": [
"A",
"B"
]
},
"sharedParam1": {
"type": "string"
},
"sharedParam2": {
"type": "integer"
},
"sharedParam3": {
"type": "string"
}
}
},
"requestBodyA": {
"allOf": [
{
"$ref": "#/components/schemas/baseRequestBody"
},
{
"type": "object",
"required": [
"aParam"
],
"properties": {
"aParam": {
"type": "string"
}
}
}
]
},
"requestBodyB": {
"allOf": [
{
"$ref": "#/components/schemas/baseRequestBody"
}
]
}
}
}
Additionally, you might want to use a discriminator, which can be used by some tools such as code generators:
"requestBody": {
"content": {
"application/json": {
"schema": {
"oneOf": [
{
"$ref": "#/components/schemas/requestBodyA"
},
{
"$ref": "#/components/schemas/requestBodyB"
}
],
"discriminator": {
"propertyName": "type",
"mapping": {
"A": "#/components/schemas/requestBodyA",
"B": "#/components/schemas/requestBodyB"
}
}
}
}
}
}
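With these schemas in place, a request body like the following (a sketch with placeholder values) should validate against requestBodyB via the type discriminator:
{
  "type": "B",
  "sharedParam1": "foo",
  "sharedParam2": 42,
  "sharedParam3": "bar"
}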

Getting error on null and empty string while copying a csv file from blob container to Azure SQL DB

I tried every combination of data types for my data, but each time my Data Factory pipeline gives me this error:
{
"errorCode": "2200",
"message": "ErrorCode=UserErrorColumnNameNotAllowNull,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Empty or Null string found in Column Name 2. Please make sure column name not null and try again.,Source=Microsoft.DataTransfer.Common,'",
"failureType": "UserError",
"target": "xxx",
"details": []
}
My Copy data source code is something like this:
{
"name": "xxx",
"description": "uuu",
"type": "Copy",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "DelimitedTextSource",
"storeSettings": {
"type": "AzureBlobStorageReadSettings",
"recursive": true,
"wildcardFileName": "*"
},
"formatSettings": {
"type": "DelimitedTextReadSettings"
}
},
"sink": {
"type": "AzureSqlSink"
},
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"mappings": [
{
"source": {
"name": "populationId",
"type": "Guid"
},
"sink": {
"name": "PopulationID",
"type": "String"
}
},
{
"source": {
"name": "inputTime",
"type": "DateTime"
},
"sink": {
"name": "inputTime",
"type": "DateTime"
}
},
{
"source": {
"name": "inputCount",
"type": "Decimal"
},
"sink": {
"name": "inputCount",
"type": "Decimal"
}
},
{
"source": {
"name": "inputBiomass",
"type": "Decimal"
},
"sink": {
"name": "inputBiomass",
"type": "Decimal"
}
},
{
"source": {
"name": "inputNumber",
"type": "Decimal"
},
"sink": {
"name": "inputNumber",
"type": "Decimal"
}
},
{
"source": {
"name": "utcOffset",
"type": "String"
},
"sink": {
"name": "utcOffset",
"type": "Int32"
}
},
{
"source": {
"name": "fishGroupName",
"type": "String"
},
"sink": {
"name": "fishgroupname",
"type": "String"
}
},
{
"source": {
"name": "yearClass",
"type": "String"
},
"sink": {
"name": "yearclass",
"type": "String"
}
}
]
}
},
"inputs": [
{
"referenceName": "DelimitedTextFTDimensions",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "AzureSqlTable1",
"type": "DatasetReference"
}
]
}
Can anyone please help me understand the issue? I see in some blogs they suggest using treatnullasempty, but I am not allowed to modify the JSON. Is there a way to do that?
I suggest using a Data Flow Derived Column transformation; it can help you build an expression to replace the null column.
For example:
Derived Column: if Column_2 is null, return 'dd':
iifNull(Column_2,'dd')
Mapping the column
Reference: Data transformation expressions in mapping data flow
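For instance, applying the same idea to one of the columns from the mapping above (the fallback value 'unknown' is just a placeholder):
iifNull(fishGroupName, 'unknown')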
Hope this helps.
Fixed it. It was an easy fix: one of my columns in the destination was marked as NOT NULL; I changed it to allow nulls and it worked.

Conversion of JSON to Avro failed: Failed to convert JSON to Avro: Unknown union branch

I am trying to send a JSON message to a Kafka topic, using the Kafka REST service to serialize the JSON as an Avro object, but the JSON message fails to be accepted by Kafka REST with the following error:
Conversion of JSON to Avro failed: Failed to convert JSON to Avro: Unknown union branch postId
I suspect that there is an issue with the Avro schema I am using as it is a nested record type with nullable fields.
Avro schema:
{
"type": "record",
"name": "ExportRequest",
"namespace": "com.example.avro.model",
"fields": [
{
"name": "context",
"type": {
"type": "map",
"values": {
"type": "string",
"avro.java.string": "String"
},
"avro.java.string": "String"
}
},
{
"name": "exportInfo",
"type": {
"type": "record",
"name": "ExportInfo",
"fields": [
{
"name": "exportId",
"type": {
"type": "string",
"avro.java.string": "String"
}
},
{
"name": "exportType",
"type": {
"type": "string",
"avro.java.string": "String"
}
},
{
"name": "exportQuery",
"type": {
"type": "record",
"name": "ExportQuery",
"fields": [
{
"name": "postExport",
"type": [
"null",
{
"type": "record",
"name": "PostExport",
"fields": [
{
"name": "postId",
"type": {
"type": "string",
"avro.java.string": "String"
}
},
{
"name": "isCommentIncluded",
"type": "boolean"
}
]
}
],
"default": null
},
{
"name": "feedExport",
"type": [
"null",
{
"type": "record",
"name": "FeedExport",
"fields": [
{
"name": "accounts",
"type": {
"type": "array",
"items": {
"type": "string",
"avro.java.string": "String"
}
}
},
{
"name": "recordTypes",
"type": {
"type": "array",
"items": {
"type": "string",
"avro.java.string": "String"
}
}
},
{
"name": "actions",
"type": {
"type": "array",
"items": {
"type": "string",
"avro.java.string": "String"
}
}
},
{
"name": "contentTypes",
"type": {
"type": "array",
"items": {
"type": "string",
"avro.java.string": "String"
}
}
},
{
"name": "startDate",
"type": "long"
},
{
"name": "endDate",
"type": "long"
},
{
"name": "advancedSearch",
"type": [
"null",
{
"type": "record",
"name": "AdvancedSearchExport",
"fields": [
{
"name": "allOfTheWords",
"type": {
"type": "array",
"items": {
"type": "string",
"avro.java.string": "String"
}
}
},
{
"name": "anyOfTheWords",
"type": {
"type": "array",
"items": {
"type": "string",
"avro.java.string": "String"
}
}
},
{
"name": "noneOfTheWords",
"type": {
"type": "array",
"items": {
"type": "string",
"avro.java.string": "String"
}
}
},
{
"name": "hashtags",
"type": {
"type": "array",
"items": {
"type": "string",
"avro.java.string": "String"
}
}
},
{
"name": "keyword",
"type": {
"type": "string",
"avro.java.string": "String"
}
},
{
"name": "exactPhrase",
"type": {
"type": "string",
"avro.java.string": "String"
}
}
]
}
],
"default": null
}
]
}
],
"default": null
}
]
}
}
]
}
}
]
}
JSON message:
{
"context": {
"user_id": "1",
"group_id": "1",
"organization_id": "1"
},
"exportInfo": {
"exportId": "93874dd7-35d7-4f1f-8cf8-051c606d920b",
"exportType": "json",
"exportQuery": {
"postExport": {
"postId": "dd",
"isCommentIncluded": false
},
"feedExport": {
"accounts": [
"1677143852565319"
],
"recordTypes": [],
"actions": [],
"contentTypes": [],
"startDate": 0,
"endDate": 0,
"advancedSearch": {
"allOfTheWords": [
"string"
],
"anyOfTheWords": [
"string"
],
"noneOfTheWords": [
"string"
],
"hashtags": [
"string"
],
"keyword": "string",
"exactPhrase": "string"
}
}
}
}
}
I would appreciate it if someone could help me to understand what the issue is.
Both your JSON and your Avro schema look good.
You are facing the issue because the JSON doesn't conform to Avro's JSON encoding spec.
So, if you convert your JSON accordingly, it will look something like this:
{
"context": {
"user_id": "1",
"group_id": "1",
"organization_id": "1"
},
"exportInfo": {
"exportId": "93874dd7-35d7-4f1f-8cf8-051c606d920b",
"exportType": "json",
"exportQuery": {
"postExport": {
"com.example.avro.model.PostExport": {
"postId": "dd",
"isCommentIncluded": false
}
},
"feedExport": {
"com.example.avro.model.FeedExport": {
"accounts": [
"1677143852565319"
],
"recordTypes": [],
"actions": [],
"contentTypes": [],
"startDate": 0,
"endDate": 0,
"advancedSearch": {
"com.example.avro.model.AdvancedSearchExport": {
"allOfTheWords": [
"string"
],
"anyOfTheWords": [
"string"
],
"noneOfTheWords": [
"string"
],
"hashtags": [
"string"
],
"keyword": "string",
"exactPhrase": "string"
}
}
}
}
}
}
}
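Note that the wrapping applies only to the non-null branch. If one of these nullable fields is omitted, Avro's JSON encoding expects a plain null rather than a wrapped object, for example:
"postExport": null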

Search and replace JSON multiline using regex in VSCode

I have a really long JSON schema. Using VSCode, I need to change the partnerName type to string, null (it appears more than 20 times; the snippet below is just one occurrence).
How can I search and replace across multiple lines for the entire partnerName entry?
Based on another question, I've tried regexes like [\n\s]+ and (.*\n)+, ending up with
"partnerName": {(.*\n)+"type": "null"(.*\n)+}
But it's still not matching.
Search for:
"partnerName": {
"type": "null"
},
Replace with:
"partnerName": {
"type": "string, null"
},
Snippet example:
{
"type": "object",
"properties": {
"node": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"name": {
"type": "string"
},
"description": {
"type": "string"
},
"type": {
"type": "string"
},
"frequency": {
"type": "string"
},
"maxCount": {
"type": "integer"
},
"points": {
"type": "integer"
},
"startAt": {
"type": "string"
},
"endAt": {
"type": "string"
},
"partnerName": {
"type": "null"
},
"action": {
"type": "null"
}
},
"required": [
"id",
"name",
"description",
"type",
"frequency",
"maxCount",
"points",
"startAt",
"endAt",
"partnerName",
"action"
]
}
},
"required": [
"node"
]
},
Try this regex:
(partnerName".*\n\s*"type":\s*)"null"
and replace with:
$1"string, null"
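The parentheses capture everything from partnerName up to and including "type": (plus the following whitespace) into group 1, so $1 re-inserts that prefix and only the "null" literal changes. Each occurrence then becomes:
"partnerName": {
"type": "string, null"
},
Make sure the "Use Regular Expression" option (the .* icon) is enabled in the VSCode search box.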

swagger with list of elements in an array

I am new to Swagger. I have a query parameter 'Geschaeftsvorfall' which can have the string value A or P, and when I hit the endpoint I expect an array [validPsd2Ids] filled with integers.
I have formulated the code below and I don't know how to validate it. Can someone tell me if I am going wrong somewhere?
Also, what can I do to print a List instead of an array in my response?
"parameters": {
"Geschaeftsvorfall": {
"name": "Geschaeftsvorfall",
"in": "query",
"description": "Geschaeftsvorfall",
"required": true,
"type": "string",
"enum": [
"A",
"P"
]
}
},
"definitions": {
"ValidePsd2Ids": {
"type": "array",
"items": {
"properties": {
"ValidePsd2Ids": {
"type": "integer",
example: [100000005,
100000006,
100000007,
100000008,
100000009,
100000010,
100000011,
100000012,
100000013,
100000014,
100000015,
100000016,
100000017,
100000018,
100000019,
100000020,
100000021,
100000022,
100000023,
100000024,
100000025,
100000034,
100000035,
100000036,
100000037,
100000038,
100000039,
100000048,
100000049,
100000050,
100000054,
100000055,
100000056,
100000057,
100000058,
100000117,
100000163,
100000165,
100000195,
100000196,
100000197,
100000198,
100000199,
100000201,
100000214,
100000217,
100000218]
}
}
}
}
},
"paths": {
"/payments/validaccounttypes/": {
"get": {
"tags": [
"payments"
],
"summary": "Valid PSD2 relevant accounts",
"description": "Reads the list of valid PSD2 revelant IDs.",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"parameters": [
{
"$ref": "#/parameters/Geschaeftsvorfall"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"properties": {
"ValidePsd2Ids": {
"type": "integer"
}
}
},
"properties": {
"ValidePsd2Ids": {
"$ref": "#/definitions/ValidePsd2Ids"
}
}
}
}
}
}
}
}
The parameter definition is correct.
The response definition is not correct. You say that the response looks like
{"ValidePsd2Ids" : [1,2,3,4,5,6,7,...]}
In OpenAPI terms, this is a type: object with a property ValidePsd2Ids that contains an array of integers. This can be described as:
"definitions": {
"ValidePsd2Ids": {
"type": "object",
"properties": {
"ValidePsd2Ids": {
"type": "array",
"items": {
"type": "integer"
},
"example": [
100000005,
100000006,
100000007
]
}
}
}
},
and the responses should be:
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/ValidePsd2Ids"
}
}
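With that definition, a 200 response body would look like this (a sketch reusing a few of the example IDs):
{
"ValidePsd2Ids": [100000005, 100000006, 100000007]
}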
}