Create large text field in Eloqua using REST API

I was trying to create a custom object and its corresponding fields in Eloqua. While creating a field with the data type largeText, it throws a validation error. I can create fields with data types like date, text, numeric, etc. How can I create largeText fields?
This is my request body:
{
  "type": "CustomObject",
  "description": "TestObject",
  "name": "TestObject",
  "fields": [
    {
      "type": "CustomObjectField",
      "name": "Description",
      "dataType": "largeText",
      "displayType": "text"
    }
  ]
}
Response is [Status=Validation error, StatusCode=400]

You should use "displayType": "textArea" for creating largeText fields.
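For example, the same request body with only the display type changed (a sketch; everything else stays as in the question):
{
  "type": "CustomObject",
  "description": "TestObject",
  "name": "TestObject",
  "fields": [
    {
      "type": "CustomObjectField",
      "name": "Description",
      "dataType": "largeText",
      "displayType": "textArea"
    }
  ]
}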

Related

PingOne /attributes endpoint throwing error for multiple valued body

I was trying to add a few new attributes to a PingOne application using this endpoint. Since it doesn't have any kind of PowerShell support, I am trying to add values using the /environments/{{sourceEnvID}}/applications/{{appID}}/attributes endpoint with the help of Invoke-RestMethod. It works for a body having only one value, but if I pass multiple values as an array it throws a malformed data/invalid data error.
For example, the below body works:
{
  "name": "email",
  "update": "EMPTY_ONLY",
  "value": "${providerAttributes.email}"
}
but if we pass a body with the below structure, it fails:
[
  {
    "name": "email",
    "update": "EMPTY_ONLY",
    "value": "${providerAttributes.email}"
  },
  {
    "name": "profile",
    "update": "EMPTY_ONLY",
    "value": "${providerAttributes.profile}"
  }
]
Any help would be appreciated.

Mapping multiple csv files in Data Factory

Does anyone know if there is a way to pass a schema mapping to multiple CSVs without doing it manually? I have 30 CSVs passed through a data flow in a ForEach activity, so I can't detect or set the fields' types (I could only do that for the first one).
Thanks for your help! :)
A Copy Activity mapping can be parameterized and changed at runtime if explicit mapping is required. The parameter is just a JSON object that you'd pass in for each of the files you are processing. It looks something like this:
{
  "type": "TabularTranslator",
  "mappings": [
    {
      "source": {
        "name": "Id"
      },
      "sink": {
        "name": "CustomerID"
      }
    },
    {
      "source": {
        "name": "Name"
      },
      "sink": {
        "name": "LastName"
      }
    },
    {
      "source": {
        "name": "LastModifiedDate"
      },
      "sink": {
        "name": "ModifiedDate"
      }
    }
  ]
}
You can read more about it here: Schema and data type mapping in copy activity
So, you can either pre-generate these mappings and fetch them via a Lookup activity in a previous step of the pipeline, or, if they need to be dynamic, you can create them at runtime with code (e.g. have an Azure Function that looks up the current schema of the CSV and returns a properly formatted translator object).
Once you have the object as a parameter, you can pass it to the Copy activity: on the mapping properties of the Copy activity, just use Add Dynamic Content and select the appropriate parameter.
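As a rough illustration of the Azure Function approach, here is a minimal Node.js sketch that builds a TabularTranslator object from a CSV header line; the comma-delimited header and the identical source/sink column names are assumptions for the example:
// Hypothetical helper: build a TabularTranslator mapping from a CSV header line.
// Assumes the first line of the file holds the column names and that source
// and sink columns share the same names (adjust the sink names as needed).
function buildTranslator(csvHeaderLine) {
  const columns = csvHeaderLine.split(',').map(c => c.trim());
  return {
    type: 'TabularTranslator',
    mappings: columns.map(name => ({
      source: { name },
      sink: { name }
    }))
  };
}

// Example: returns a translator object for three columns.
console.log(JSON.stringify(buildTranslator('Id,Name,LastModifiedDate'), null, 2));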

Kafka Connect JSON Schema does not appear to support "$ref" tags

I am using Kafka Connect with JSON Schema and am in a situation where I need to convert the JSON schema manually (to "Schema") within a Kafka Connect plugin. I can successfully retrieve the JSON schema from the Schema Registry and can convert simple JSON schemas, but I am having difficulties with complex ones that contain valid "$ref" tags referencing components within a single JSON schema definition.
I have several questions:
The JsonConverter.java does not appear to handle "$ref". Am I correct, or does it handle it in another way elsewhere?
Does the Schema Registry handle the referencing of sub-definitions? If yes, is there code that shows how the dereferencing is handled?
Should the JSON schema be resolved to a string without references (i.e. inline the references) before submitting to the Schema Registry, thereby removing the "$ref" issue?
I am looking at the Kafka source code module JsonConverter.java below:
https://github.com/apache/kafka/blob/trunk/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L428
An example of the complex schema (taken from the JSON Schema site) is shown below (notice the "$ref": "#/$defs/veggie" tag that references a later sub-definition):
{
  "$id": "https://example.com/arrays.schema.json",
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "description": "A representation of a person, company, organization, or place",
  "title": "complex-schema",
  "type": "object",
  "properties": {
    "fruits": {
      "type": "array",
      "items": {
        "type": "string"
      }
    },
    "vegetables": {
      "type": "array",
      "items": { "$ref": "#/$defs/veggie" }
    }
  },
  "$defs": {
    "veggie": {
      "type": "object",
      "required": [ "veggieName", "veggieLike" ],
      "properties": {
        "veggieName": {
          "type": "string",
          "description": "The name of the vegetable."
        },
        "veggieLike": {
          "type": "boolean",
          "description": "Do I like this vegetable?"
        }
      }
    }
  }
}
Below is the actual response returned from the Schema Registry after the schema was successfully registered:
[
  {
    "subject": "complex-schema",
    "version": 1,
    "id": 1,
    "schemaType": "JSON",
    "schema": "{\"$id\":\"https://example.com/arrays.schema.json\",\"$schema\":\"https://json-schema.org/draft/2020-12/schema\",\"description\":\"A representation of a person, company, organization, or place\",\"title\":\"complex-schema\",\"type\":\"object\",\"properties\":{\"fruits\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"vegetables\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/$defs/veggie\"}}},\"$defs\":{\"veggie\":{\"type\":\"object\",\"required\":[\"veggieName\",\"veggieLike\"],\"properties\":{\"veggieName\":{\"type\":\"string\",\"description\":\"The name of the vegetable.\"},\"veggieLike\":{\"type\":\"boolean\",\"description\":\"Do I like this vegetable?\"}}}}}"
  }
]
The actual schema is embedded in the above returned string (the contents of the "schema" field) and contains the $ref references:
{\"$id\":\"https://example.com/arrays.schema.json\",\"$schema\":\"https://json-schema.org/draft/2020-12/schema\",\"description\":\"A representation of a person, company, organization, or place\",\"title\":\"complex-schema\",\"type\":\"object\",\"properties\":{\"fruits\":{\"type\":\"array\",\"items\":{\"type\":\"string\"}},\"vegetables\":{\"type\":\"array\",\"items\":{\"$ref\":\"#/$defs/veggie\"}}},\"$defs\":{\"veggie\":{\"type\":\"object\",\"required\":[\"veggieName\",\"veggieLike\"],\"properties\":{\"veggieName\":{\"type\":\"string\",\"description\":\"The name of the vegetable.\"},\"veggieLike\":{\"type\":\"boolean\",\"description\":\"Do I like this vegetable?\"}}}}}
The JsonConverter in the Apache Kafka source code has no notion of JSON Schema, so no, $ref doesn't work there, and it also doesn't integrate with the Registry.
You seem to be looking for the io.confluent.connect.json.JsonSchemaConverter class and its logic.
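For reference, that converter is typically wired in through the connector (or worker) configuration; a minimal sketch, with the connector name and Schema Registry URL as placeholders:
{
  "name": "my-sink-connector",
  "config": {
    "value.converter": "io.confluent.connect.json.JsonSchemaConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}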

How to validate http PATCH data with a JSON schema having required fields

I'm trying to validate a classic JSON schema (with Ajv and json-server) with required fields, but for an HTTP PATCH request.
It's okay for a POST, because the data arrives in full and nothing should be omitted.
However, the required fields of the schema are a problem when attempting to PATCH an existing resource.
Here's what I'm currently doing for a POST:
const schema = require('./movieSchema.json');
const validate = new Ajv().compile(schema);
// ...
movieValidation.post('/:id', function (req, res, next) {
  const valid = validate(req.body);
  if (!valid) {
    const [err] = validate.errors;
    let field = (err.keyword === 'required') ? err.params.missingProperty : err.dataPath;
    return res.status(400).json({
      errorMessage: `Erreur de type '${err.keyword}' sur le champs '${field}' : '${err.message}'`
    });
  }
  next();
});
... but if I do the same for a movieValidation.patch(...) and try to send only this chunk of data:
{
  "release_date": "2020-07-15",
  "categories": [
    "Action",
    "Aventure",
    "Science-Fiction",
    "Thriller"
  ]
}
... it will fail the whole validation (even though all the fields that are present are okay and valid against the schema).
Here's my complete movieSchema.json:
{
  "type": "object",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "properties": {
    "title": {
      "title": "Titre",
      "type": "string",
      "description": "Titre complet du film"
    },
    "release_date": {
      "title": "Date de sortie",
      "description": "Date de sortie du film au cinéma",
      "type": "string",
      "format": "date",
      "example": "2019-06-28"
    },
    "categories": {
      "title": "Catégories",
      "description": "Catégories du film",
      "type": "array",
      "items": {
        "type": "string"
      }
    },
    "description": {
      "title": "Résumé",
      "description": "Résumé du film",
      "type": "string"
    },
    "poster": {
      "title": "Affiche",
      "description": "Affiche officielle du film",
      "type": "string",
      "pattern": "^https?:\/\/"
    },
    "backdrop": {
      "title": "Fond",
      "description": "Image de fond",
      "type": "string",
      "pattern": "^https?:\/\/"
    }
  },
  "required": [
    "title",
    "release_date",
    "categories",
    "description"
  ],
  "additionalProperties": false
}
For now, I've done the trick using a different schema which is the same as the original one but without the required clause at the end. But I don't like this solution, as it duplicates code unnecessarily (and it's not elegant at all).
Is there any clever solution/tool to achieve this properly?
Thanks
If you're using HTTP PATCH correctly, there's another way to deal with this problem.
The body of a PATCH request is supposed to be a diff media type of some kind, not plain JSON. The diff media type defines a set of operations (add, remove, replace) to perform to transform the JSON. The diff is then applied to the original resource, resulting in the new state of the resource. A couple of JSON diff media types are JSON Patch (more powerful) and JSON Merge Patch (more natural).
If you validate the request body for a PATCH, you aren't really validating your resource, you are validating the diff format. However, if you apply the patch to your resource first, then you can validate the result with the full schema (then persist the changes or 400 depending on the result).
Remember, in REST it's resources and representations that matter, not requests and responses.
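A minimal sketch of that flow with Ajv, assuming the json-merge-patch npm package (RFC 7396) and a hypothetical getMovieById lookup for the stored resource:
const Ajv = require('ajv');
const mergePatch = require('json-merge-patch'); // assumption: an RFC 7396 merge-patch implementation
const schema = require('./movieSchema.json');
const validate = new Ajv().compile(schema);

movieValidation.patch('/:id', function (req, res, next) {
  const original = getMovieById(req.params.id); // hypothetical lookup of the stored resource
  // Apply the merge patch to a copy, then validate the RESULT against the
  // full schema, required fields included.
  const patched = mergePatch.apply(JSON.parse(JSON.stringify(original)), req.body);
  if (!validate(patched)) {
    return res.status(400).json({ errors: validate.errors });
  }
  next();
});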
It's not uncommon to have multiple schemas, one per payload you want to validate.
In your situation, it looks like you've done exactly the right thing.
You can de-duplicate your schemas using references ($ref), splitting your property subschemas into a separate file.
You end up with a schema which contains your model, and a schema for each representation of said model, but without duplication (where possible).
If you need more guidance on how exactly you go about this, please comment and I'll update the answer with more details.
Here is an example of how you could do what you want.
You will need to create multiple schema files, and reference the right schema when you need to validate POST or PATCH requests.
I've simplified the examples to only include "title".
In one schema, you have something like...
{
  "$id": "https://example.com/object/movie",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "definitions": {
    "movie": {
      "properties": {
        "title": {
          "$ref": "#/definitions/title"
        }
      },
      "additionalProperties": false
    },
    "title": {
      "title": "Titre",
      "type": "string",
      "description": "Titre complet du film"
    }
  }
}
Then you would have one for POST and PATCH...
{
  "$id": "https://example.com/movie/patch",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "allOf": [{
    "$ref": "/object/movie#/definitions/movie"
  }]
}
{
  "$id": "https://example.com/movie/post",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "allOf": [{
    "$ref": "/object/movie#/definitions/movie"
  }],
  "required": ["title"]
}
Change example.com to whatever domain you want to use for your ID.
You will then need to load all of your schemas into the implementation.
Once loaded in, the references will work, as they are based on URI resolution, using $id for each schema resource.
Notice the $ref values in the POST and PATCH schemas do not start with a #, meaning the target is not THIS schema resource.
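With Ajv, for example, loading the schemas and selecting one by its $id could look like this (a sketch; the file names are assumptions):
const Ajv = require('ajv');
const ajv = new Ajv();

// Register every schema resource so that $ref URIs resolve via each schema's $id.
ajv.addSchema(require('./movieDefinitions.json')); // the /object/movie schema
ajv.addSchema(require('./moviePatch.json'));       // the /movie/patch schema
ajv.addSchema(require('./moviePost.json'));        // the /movie/post schema

const validatePost = ajv.getSchema('https://example.com/movie/post');
const validatePatch = ajv.getSchema('https://example.com/movie/patch');

console.log(validatePost({}));  // false: "title" is required on POST
console.log(validatePatch({})); // true: nothing is required on PATCH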

json schema date validation not suitable for mongodb

I'm using the npm jsonschema module for Node.js, and my very simple JSON schema looks like:
"title": "ticket",
"type": "object",
"properties": {
"_id" : {"type": "string"},
"created": {"type": "string", "format": "date-time"}
},
"additionalProperties" : false
The data I validate through this schema is stored in MongoDB. The problem is that the created property has an index with expireAfterSeconds to auto-delete these records after a certain amount of time.
Now I have the following problem. If I send a string (no matter whether it's valid or invalid according to the JSON specification), the document in the database also gets a created property of type string, since MongoDB is schemaless and I can't predefine that property as a date type. For example, if I send the data with created set to the ISO string 2017-08-15T14:34:18.839Z, it still remains a string, even though Mongo date properties look very similar. Of course this breaks the expiry logic.
If I send my data with a real Date for the created property, JSON validation fails with
instance.created is not of a type(s) string
Of course I can transform all date fields from the validated string to a date type on insert and update, but this is not sufficient, because on read I will get data with real date types that will fail validation on the next update. I could also add a back-transformation from date to string on every read, but that solution is still not good enough for me.
Any other ideas?
The correct types for _id (which is an ObjectId) and for dates, when using MongoDB's own $jsonSchema collection validator (which understands bsonType), are:
"bsonType": "objectId" // for ObjectId
"bsonType": "date" // for ISODate
Example:
"title": "ticket",
"type": "object",
"properties": {
"_id" : {"bsonType": "objectId"},
"created": {"bsonType": "date"}
},
"additionalProperties" : false
I'm not familiar with jsonschema, but if you were to use Ajv you could make use of custom keywords and validation, such as:
{
  "type": "object",
  "properties": {
    "someDate": {
      "instanceof": "Date"
    }
  }
}
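The "instanceof" keyword above is not standard JSON Schema; it comes from the ajv-keywords package. A minimal sketch of wiring it up (assuming ajv and ajv-keywords are installed):
const Ajv = require('ajv');
const ajv = new Ajv();
require('ajv-keywords')(ajv, 'instanceof'); // registers the non-standard "instanceof" keyword

const validateTicket = ajv.compile({
  type: 'object',
  properties: {
    someDate: { instanceof: 'Date' }
  }
});

console.log(validateTicket({ someDate: new Date() }));   // true
console.log(validateTicket({ someDate: '2017-08-15' })); // false: a string is not a Date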