How can I define a managed storage schema for my Google Chrome App that supports a nested object structure?

I know the official reference is here.
I am having trouble understanding whether I can have a managed_schema structured like this:
schema = {};
schema.propertyA = {};
schema.propertyA.property1 = "1";
schema.propertyA.property2 = "2";
schema.propertyB = "B";
And if so, what would the uploaded policy config file look like?

You can nest properties in a managed_schema just fine with the "object" type. For your example, the schema would look like this:
{
  "type": "object",
  "properties": {
    "propertyA": {
      "type": "object",
      "properties": {
        "property1": {
          "type": "string"
        },
        "property2": {
          "type": "string"
        }
      }
    },
    "propertyB": {
      "type": "string"
    }
  }
}
And with this policy schema, the uploaded config file would have this format:
{
  "propertyA": {
    "Value": {
      "property1": "1",
      "property2": "2"
    }
  },
  "propertyB": {
    "Value": "B"
  }
}
I've found this page useful when configuring and testing Chrome apps with the managed storage API.
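Once the policy is pushed, the app reads it through the managed storage area. A minimal sketch (the key names match the schema above; error handling omitted):
// chrome.storage.managed is read-only from the app's point of view;
// values come from the admin-uploaded policy config.
chrome.storage.managed.get(null, function (policy) {
  // With the config above, policy.propertyA is the nested object.
  console.log(policy.propertyA.property1); // "1"
  console.log(policy.propertyB);           // "B"
});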

Related

MongoDB Stitch GraphQL Custom Mutation Resolver returning null

GraphQL is a newer feature for MongoDB Stitch, and I know it is in beta, so thank you for your help in advance. I am excited about using GraphQL directly in Stitch so I am hoping that maybe I just overlooked something.
The documentation for the return payload shows the use of bsonType, but when actually entering the JSON Schema for the payload type, it asks you to use "type" instead of "bsonType". Oddly, it still works with "bsonType" as long as at least one of the properties uses "type".
Below is the function:
exports = function(input){
  const mongodb = context.services.get("mongodb-atlas");
  const collection = mongodb.db("<database>").collection("<collection>");
  const query = { _id: BSON.ObjectId(input.id) };
  const update = {
    "$push": {
      "notes": {
        "createdBy": context.user.id,
        "createdAt": new Date,
        "text": input.text
      }
    }
  };
  const options = { returnNewDocument: true };
  collection.findOneAndUpdate(query, update, options).then(updatedDocument => {
    if (updatedDocument) {
      console.log(`Successfully updated document: ${updatedDocument}.`)
    } else {
      console.log("No document matches the provided query.")
    }
    return {
      _id: updatedDocument._id,
      notes: updatedDocument.notes
    }
  })
  .catch(err => console.error(`Failed to find and update document: ${err}`))
}
Here is the Input Type in the custom resolver:
{
  "type": "object",
  "title": "AddNoteToLeadInput",
  "required": [
    "id",
    "text"
  ],
  "properties": {
    "id": {
      "type": "string"
    },
    "text": {
      "type": "string"
    }
  }
}
Below is the Payload Type:
{
  "type": "object",
  "title": "AddNoteToLeadPayload",
  "properties": {
    "_id": {
      "type": "objectId"
    },
    "notes": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "createdAt": {
            "type": "string"
          },
          "createdBy": {
            "type": "string"
          },
          "text": {
            "type": "string"
          }
        }
      }
    }
  }
}
When entering the wrong "type" the error states:
Expected valid values are:[array boolean integer number null object string]
When entering the wrong "bsonType" the error states:
Expected valid values are:[string object array objectId boolean bool null regex date timestamp int long decimal double number binData]
I've tried every combination I can think of, including changing all "bsonType" to "type". I also tried changing the _id to a string when using "type", or to objectId when using "bsonType". No matter what combination I try, the mutation does what it is supposed to and adds the note to the lead, but the return payload always displays null. I need it to return the _id and note so that it will update the InMemoryCache in Apollo on the front end.
I noticed that you might be missing a return before your call to collection.findOneAndUpdate().
I tried this function (similar to yours) and got GraphiQL to return values (with String for all the input and payload types):
exports = function(input){
  const mongodb = context.services.get("mongodb-atlas");
  const collection = mongodb.db("todo").collection("dreams");
  const query = { _id: input.id };
  const update = {
    "$push": {
      "notes": {
        "createdBy": context.user.id,
        "createdAt": "6/10/10/10",
        "text": input.text
      }
    }
  };
  const options = { returnNewDocument: true };
  // Returning the promise is what lets the resolver wait for the result.
  return collection.findOneAndUpdate(query, update, options).then(updatedDocument => {
    if (updatedDocument) {
      console.log(`Successfully updated document: ${updatedDocument}.`)
    } else {
      console.log("No document matches the provided query.")
    }
    return {
      _id: updatedDocument._id,
      notes: updatedDocument.notes
    }
  })
  .catch(err => console.error(`Failed to find and update document: ${err}`))
}
Hi Bernard, there is an unfortunate bug in the custom resolver form UI at the moment which doesn't allow you to use only bsonType in the input/payload types; we are working on addressing this. In actuality, you should be able to use either type or bsonType, or a mix of the two, as long as they agree with your data. I think the payload type definition you want is likely:
{
  "type": "object",
  "title": "AddNoteToLeadPayload",
  "properties": {
    "_id": {
      "bsonType": "objectId"
    },
    "notes": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "createdAt": {
            "bsonType": "date"
          },
          "createdBy": {
            "type": "string"
          },
          "text": {
            "type": "string"
          }
        }
      }
    }
  }
}
If that doesn't work, it might be helpful to give us a sample of the data that you would like returned.
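For reference, once the resolver returns the document, a mutation of roughly this shape in GraphiQL should come back populated (the field name addNoteToLead is an assumption based on the type names above):
mutation {
  addNoteToLead(input: { id: "<lead-id>", text: "Follow up next week" }) {
    _id
    notes {
      createdAt
      createdBy
      text
    }
  }
}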

How do I add custom queries in GraphQL using Strapi?

I'm using graphQL to query a MongoDB database in React, using Strapi as my CMS. I'm using Apollo to handle the GraphQL queries. I'm able to get my objects by passing an ID argument, but I want to be able to pass different arguments like a name.
This works:
{
  course(id: "5eb4821d20c80654609a2e0c") {
    name
    description
    modules {
      title
    }
  }
}
This doesn't work, giving the error "Unknown argument \"name\" on field \"course\" of type \"Query\"":
{
  course(name: "course1") {
    name
    description
    modules {
      title
    }
  }
}
From what I've read, I need to define a custom query, but I'm not sure how to do this.
The model for Course looks like this currently:
{
  "kind": "collectionType",
  "collectionName": "courses",
  "info": {
    "name": "Course"
  },
  "options": {
    "increments": true,
    "timestamps": true
  },
  "attributes": {
    "name": {
      "type": "string",
      "unique": true
    },
    "description": {
      "type": "richtext"
    },
    "banner": {
      "collection": "file",
      "via": "related",
      "allowedTypes": [
        "images",
        "files",
        "videos"
      ],
      "plugin": "upload",
      "required": false
    },
    "published": {
      "type": "date"
    },
    "modules": {
      "collection": "module"
    },
    "title": {
      "type": "string"
    }
  }
}
Any help would be appreciated.
Referring to the Strapi GraphQL Query API documentation: you can use where with the courses query to filter on your fields. You will get a list of courses instead of a single course.
This should work:
{
  courses(where: { name: "course1" }) {
    name
    description
    modules {
      title
    }
  }
}
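On the React side, note that courses comes back as a list, so you take the first element. A minimal sketch with Apollo Client (component and query names are illustrative; unique course names are assumed):
import { gql, useQuery } from "@apollo/client";

// The query from the answer above; `courses` returns a list.
const GET_COURSE = gql`
  {
    courses(where: { name: "course1" }) {
      name
      description
      modules {
        title
      }
    }
  }
`;

function Course() {
  const { data, loading, error } = useQuery(GET_COURSE);
  if (loading || error) return null;
  // Take the first match, assuming course names are unique.
  const course = data.courses[0];
  return course ? <h2>{course.name}</h2> : null;
}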

JSON Schema - can array / list validation be combined with anyOf?

I have a JSON document I'm trying to validate, of this form:
...
"products": [{
  "prop1": "foo",
  "prop2": "bar"
}, {
  "prop3": "hello",
  "prop4": "world"
},
...
There are multiple different forms an object may take. My schema looks like this:
...
"definitions": {
  "products": {
    "type": "array",
    "items": { "$ref": "#/definitions/Product" }
  },
  "Product": {
    "type": "object",
    "oneOf": [
      { "$ref": "#/definitions/Product_Type1" },
      { "$ref": "#/definitions/Product_Type2" },
      ...
    ]
  },
  "Product_Type1": {
    "type": "object",
    "properties": {
      "prop1": { "type": "string" },
      "prop2": { "type": "string" }
    }
  },
  "Product_Type2": {
    "type": "object",
    "properties": {
      "prop3": { "type": "string" },
      "prop4": { "type": "string" }
    }
  },
...
On top of this, certain properties of the individual product array objects may be indirected via further usage of anyOf or oneOf.
I'm running into issues in VS Code, using the built-in schema validation, where it throws errors for every item in the products array that doesn't match Product_Type1.
So it seems the validator latches onto the first oneOf subschema it finds and won't validate against any of the other types.
I didn't find any documented limitations of the oneOf mechanism on jsonschema.org, and there is no mention of it in the page specifically dealing with arrays: https://json-schema.org/understanding-json-schema/reference/array.html
Is what I'm attempting possible?
Your general approach is fine. Let's take a slightly simpler example to illustrate what's going wrong.
Given this schema
{
  "oneOf": [
    { "properties": { "foo": { "type": "integer" } } },
    { "properties": { "bar": { "type": "integer" } } }
  ]
}
And this instance
{ "foo": 42 }
At first glance, this looks like it matches /oneOf/0 and not /oneOf/1. It actually matches both schemas, which violates the one-and-only-one constraint imposed by oneOf, so the oneOf fails.
Remember that every keyword in JSON Schema is a constraint. Anything that is not explicitly excluded by the schema is allowed. There is nothing in the /oneOf/1 schema that says a "foo" property is not allowed. Nor does it say that "foo" is required. It only says that if the instance has a property "foo", then it must be an integer.
To fix this, you will need required and maybe additionalProperties, depending on the situation. I show here how you would use additionalProperties, but I recommend you don't use it unless you need to, because it does have some problematic behaviors.
{
  "oneOf": [
    {
      "properties": { "foo": { "type": "integer" } },
      "required": ["foo"],
      "additionalProperties": false
    },
    {
      "properties": { "bar": { "type": "integer" } },
      "required": ["bar"],
      "additionalProperties": false
    }
  ]
}
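You can confirm the behavior with any validator. A quick sketch using the Ajv library in Node (the schema is the corrected one above):
const Ajv = require("ajv");
const ajv = new Ajv();

const schema = {
  oneOf: [
    {
      properties: { foo: { type: "integer" } },
      required: ["foo"],
      additionalProperties: false
    },
    {
      properties: { bar: { type: "integer" } },
      required: ["bar"],
      additionalProperties: false
    }
  ]
};

const validate = ajv.compile(schema);
console.log(validate({ foo: 42 }));         // true: matches only the first branch now
console.log(validate({ bar: 7 }));          // true: matches only the second branch
console.log(validate({ foo: 42, bar: 7 })); // false: additionalProperties excludes it from both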

Using ADF Copy Activity with dynamic schema mapping

I'm trying to drive the columnMapping property from a database configuration table. My first activity in the pipeline pulls the rows in from the config table. My copy activity source is a JSON file in Azure Blob Storage, and my sink is an Azure SQL database.
In the copy activity I'm setting the mapping using the dynamic content window. The code looks like this:
"translator": {
  "value": "@json(activity('Lookup1').output.value[0].ColumnMapping)",
  "type": "Expression"
}
My question is, what should the value of activity('Lookup1').output.value[0].ColumnMapping look like?
I've tried several different JSON formats, but the copy activity always seems to ignore them.
For example, I've tried:
{
  "type": "TabularTranslator",
  "columnMappings": {
    "view.url": "url"
  }
}
and:
"columnMappings": {
  "view.url": "url"
}
and:
{
  "view.url": "url"
}
In this example, view.url is the name of the column in the JSON source, and url is the name of the column in my destination table in Azure SQL database.
The issue is due to the dot (.) in your column name.
To use column mapping, you should also specify the structure in your source and sink datasets.
For your source dataset, you need to specify the format correctly, and since your column name contains a dot, you need to specify the JSON path accordingly, as sketched below.
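For example, with the legacy JsonFormat settings on the source dataset, the dotted source field can be mapped via a JSON path. A sketch (property names per the JsonFormat documentation; your file layout may differ):
"typeProperties": {
  "format": {
    "type": "JsonFormat",
    "filePattern": "setOfObjects",
    "jsonPathDefinition": { "view.url": "$['view.url']" }
  }
}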
You could use the ADF UI to set up a copy for a single file first to get the related format, structure, and column mapping format, then change it to use the lookup.
As I understand it, your first format should be the right one. If the value is already JSON, you may not need to use the json() function in your expression.
There seems to be a disconnect between the question and the answer, so I'll hopefully provide a more straightforward answer.
When setting this up, you should have a source dataset with dynamic mapping. The sink doesn't require one, as we're going to specify it in the mapping.
Within the copy activity, format the dynamic json like the following:
{
  "structure": [
    {
      "name": "Address Number"
    },
    {
      "name": "Payment ID"
    },
    {
      "name": "Document Number"
    },
    ...
  ]
}
You would then specify your dynamic mapping like this:
{
  "translator": {
    "type": "TabularTranslator",
    "mappings": [
      {
        "source": {
          "name": "Address Number",
          "type": "Int32"
        },
        "sink": {
          "name": "address_number"
        }
      },
      {
        "source": {
          "name": "Payment ID",
          "type": "Int64"
        },
        "sink": {
          "name": "payment_id"
        }
      },
      {
        "source": {
          "name": "Document Number",
          "type": "Int32"
        },
        "sink": {
          "name": "document_number"
        }
      },
      ...
    ]
  }
}
Assuming these were set in separate variables, you would want to send the source as a string, and the mapping as json:
source: @string(json(variables('str_dyn_structure')).structure)
mapping: @json(variables('str_dyn_translator')).translator
VladDrak: You could skip the source dynamic definition by building the dynamic mapping like this:
{
  "translator": {
    "type": "TabularTranslator",
    "mappings": [
      {
        "source": {
          "type": "String",
          "ordinal": "1"
        },
        "sink": {
          "name": "dateOfActivity",
          "type": "String"
        }
      },
      {
        "source": {
          "type": "String",
          "ordinal": "2"
        },
        "sink": {
          "name": "CampaignID",
          "type": "String"
        }
      }
    ]
  }
}
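To tie this back to the original question: the ColumnMapping cell in the config table would then hold a translator object serialized as a string, so that the json() call in the expression can parse it. For the single column in the question it might look like this (a sketch; for a nested JSON source the source side may need a path instead of a name):
{"type":"TabularTranslator","mappings":[{"source":{"name":"view.url"},"sink":{"name":"url"}}]}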

CloudFormation - Access Output of Parent Stack in Child Nested stack

I have a master CloudFormation template which invokes two child templates. My first template runs, and its outputs are captured in its Outputs section. I have tried many approaches to using the ChildStack01 output values in the second, nested template, and I am not sure why I get "Template format error: Unresolved resource dependencies [XYZ] in the Resources block of the template". Here is my master template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "LambdaStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/bucket1/cloudformation/Test1.json",
        "TimeoutInMinutes": "60"
      }
    },
    "PermissionsStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/bucket1/cloudformation/Test2.json",
        "Parameters": {
          "LambdaTest": {
            "Fn::GetAtt": ["LambdaStack", "Outputs.LambdaTest"]
          }
        },
        "TimeoutInMinutes": "60"
      }
    }
  }
}
Here is my Test1.json template:
{
  "Resources": {
    "LambdaTestRes": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Description": "Testing AWS cloud formation",
        "FunctionName": "LambdaTest",
        "Handler": "lambda_handler.lambda_handler",
        "MemorySize": 128,
        "Role": "arn:aws:iam::3423435234235:role/lambda_role",
        "Runtime": "python2.7",
        "Timeout": 300,
        "Code": {
          "S3Bucket": "bucket1",
          "S3Key": "cloudformation/XYZ.zip"
        }
      }
    }
  },
  "Outputs": {
    "LambdaTest": {
      "Value": {
        "Fn::GetAtt": ["LambdaTestRes", "Arn"]
      }
    }
  }
}
Here is my Test2.json, which has to use the output of Test1.json:
{
  "Resources": {
    "LambdaPermissionLambdaTest": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:invokeFunction",
        "FunctionName": {
          "Ref": "LambdaTest"
        },
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": {
          "Fn::Join": ["", ["arn:aws:execute-api:", {
            "Ref": "AWS::Region"
          }, ":", {
            "Ref": "AWS::AccountId"
          }, ":", {
            "Ref": "TestAPI"
          }, "/*"]]
        }
      }
    }
  },
  "Parameters": {
    "LambdaTest": {
      "Type": "String"
    }
  }
}
It is not enough to just have an output; you need to export that output.
Look here: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html
So you need something like:
"Outputs": {
  "LambdaTest": {
    "Value": {
      "Fn::GetAtt": ["LambdaTestRes", "Arn"]
    },
    "Export": {
      "Name": "LambdaTest"
    }
  }
}
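With the output exported, another template can then consume it with Fn::ImportValue instead of a passed parameter, something like:
"FunctionName": {
  "Fn::ImportValue": "LambdaTest"
}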
You have two unresolved Ref resource dependencies in Test2.json: one to LambdaTest and one to TestAPI.
For LambdaTest, it looks like you're trying to pass this as a parameter from the parent stack, but you haven't specified it as an input parameter in the child Test2.json template. Add an entry in Test2.json's Parameters section, like this:
"Parameters": {
  "LambdaTest": {
    "Type": "String"
  }
},
Regarding TestAPI, this reference doesn't appear anywhere else in your templates, so you should either specify it as a fixed string directly, or add another input parameter to your Test2.json stack (see above) and then provide it from the parent stack, as sketched below.
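For example, the PermissionsStack properties in the parent could pass both values (RestApiResource is a hypothetical logical ID for an API Gateway REST API defined in the parent):
"Parameters": {
  "LambdaTest": {
    "Fn::GetAtt": ["LambdaStack", "Outputs.LambdaTest"]
  },
  "TestAPI": {
    "Ref": "RestApiResource"
  }
}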
The error is coming from Test1.json (LambdaStack).
Logical ID
An identifier for the current output. The logical ID must be alphanumeric (a-z, A-Z, 0-9) and unique within the template.
It seems you have two logical IDs with the same name, "LambdaTest": one in the Resources section and the other in the Outputs section.