Fetch data using a where clause on a relational model - MongoDB

In LoopBack 3.0, I am trying to fetch data from multiple related models while applying a where clause on a relation.
Below is the example JSON for the models:
modelA.json
{
  "properties": {
    "id": {
      "type": "string",
      "required": true,
      "length": 20
    },
    "modelBid": {
      "type": "string"
    },
    ...
  },
  "relations": {
    "modelB-rel": {
      "type": "belongsTo",
      "model": "modelB",
      "foreignKey": "modelBid",
      "primaryKey": "id"
    },
    ...
  }
}
modelB.json
{
  "properties": {
    "id": {
      "type": "string",
      "required": false,
      "length": 20
    },
    "name": {
      "type": "string",
      "length": 255
    },
    ...
  }
}
SQL equivalent of the query I want to perform:
SELECT * FROM modelA a
LEFT JOIN modelB b
ON a.modelBid = b.id
WHERE b.name = 'abcd'
The LoopBack filter object I am using, which is not fetching the intended results:
modelA.find({
  include: [
    {
      relation: 'modelB-rel',
      scope: {
        fields: ['id', 'name'],
      },
    }
  ],
  where: {
    'modelB-rel': {
      name: 'abcd',
    },
  }
})
...
Please help me correct the filter object so it returns the same results as the SQL query above.

In order to apply filters on related models, you have to put the filter inside the scope of the included model. You can use something like this:
modelA.find({
  include: [
    {
      relation: 'modelB-rel',
      scope: {
        fields: ['id', 'name'],
        where: { name: 'abcd' }
      },
    }
  ],
})

Related

Swagger API specs Request object design

I have written an API spec following the OpenAPI/Swagger Specification:
{
  "post": {
    "tags": [
      "UserController"
    ],
    "operationId": "getUsers",
    "parameters": [
      {
        "name": "accountID",
        "in": "path",
        "required": true,
        "schema": {
          "type": "number"
        }
      },
      {
        "name": "sortKey",
        "in": "query",
        "required": false,
        "schema": {
          "type": "string"
        }
      },
      {
        "name": "sortOrder",
        "in": "query",
        "required": false,
        "schema": {
          "type": "string"
        }
      }
    ],
    "responses": {
      "200": {
        "description": "default response",
        "content": {
          "*/*": {
            "schema": {
              "$ref": "#/components/schemas/UserResponse"
            }
          }
        }
      }
    }
  }
}
The API request takes accountID, sortKey and sortOrder. Should they be wrapped in a top-level request object (GetUsersRequest)? What is the best practice?
{
  "GetUsersRequest": {
    "accountID": "String",
    "sortKey": "String",
    "sortOrder": "String"
  }
}
vs
{
  "accountID": "String",
  "sortKey": "String",
  "sortOrder": "String"
}
Usually you would just use the flat properties. A "wrapper" object can be useful if the parameters fall into multiple groups.
For example, if you have an API with paging:
/query?filter=findme&page=5&size=5
I see two groups of parameters:
the filter that limits the query result, which is the main purpose of the API;
the page & size parameters, which are more of a technical aid to limit the number of results.
You can use a (wrapper) object to communicate clearly that two of the three parameters belong together and are used for paging.
As YAML:
/query:
  get:
    description: ...
    parameters:
      - name: filter
        description: filters the data by the given value
        in: query
        schema:
          type: string
      - name: paging
        description: page selection
        in: query
        required: false
        schema:
          $ref: '#/components/schemas/Paging'
components:
  schemas:
    Paging:
      type: object
      properties:
        page:
          type: integer
        size:
          type: integer
So in your example you could group sortKey & sortOrder as a view group, while accountID is the main parameter of the API.
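Applied to the spec in the question, a sketch of the grouped version might look like this (the Sorting schema name is invented for illustration; note that an object-valued query parameter needs a serialization style such as deepObject so it arrives as ?sorting[sortKey]=name&sorting[sortOrder]=asc):
{
  "name": "sorting",
  "in": "query",
  "required": false,
  "style": "deepObject",
  "explode": true,
  "schema": {
    "$ref": "#/components/schemas/Sorting"
  }
}
with the schema defined under components:
{
  "Sorting": {
    "type": "object",
    "properties": {
      "sortKey": { "type": "string" },
      "sortOrder": { "type": "string" }
    }
  }
}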

How do I add custom queries in GraphQL using Strapi?

I'm using GraphQL to query a MongoDB database in React, using Strapi as my CMS. I'm using Apollo to handle the GraphQL queries. I'm able to get my objects by passing an ID argument, but I want to be able to pass different arguments, like a name.
This works:
{
  course(id: "5eb4821d20c80654609a2e0c") {
    name
    description
    modules {
      title
    }
  }
}
This doesn't work, giving the error "Unknown argument \"name\" on field \"course\" of type \"Query\"":
{
  course(name: "course1") {
    name
    description
    modules {
      title
    }
  }
}
From what I've read, I need to define a custom query, but I'm not sure how to do this.
The model for Course looks like this currently:
"kind": "collectionType",
"collectionName": "courses",
"info": {
"name": "Course"
},
"options": {
"increments": true,
"timestamps": true
},
"attributes": {
"name": {
"type": "string",
"unique": true
},
"description": {
"type": "richtext"
},
"banner": {
"collection": "file",
"via": "related",
"allowedTypes": [
"images",
"files",
"videos"
],
"plugin": "upload",
"required": false
},
"published": {
"type": "date"
},
"modules": {
"collection": "module"
},
"title": {
"type": "string"
}
}
}
Any help would be appreciated.
Refer to the Strapi GraphQL Query API.
You can use where with the courses query to filter on your fields. You will get a list of courses instead of a single course.
This should work:
{
  courses(where: { name: "course1" }) {
    name
    description
    modules {
      title
    }
  }
}
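On the client side, a sketch of how this could be consumed with Apollo (assuming Apollo Client 3 in a React app; names like GET_COURSE_BY_NAME are made up for illustration):
import { gql, useQuery } from '@apollo/client';

const GET_COURSE_BY_NAME = gql`
  query CourseByName($name: String!) {
    courses(where: { name: $name }) {
      name
      description
      modules {
        title
      }
    }
  }
`;

function Course({ name }) {
  const { loading, error, data } = useQuery(GET_COURSE_BY_NAME, {
    variables: { name },
  });
  if (loading) return <p>Loading...</p>;
  if (error) return <p>Error: {error.message}</p>;
  // where returns a list of matches; take the first one
  const course = data.courses[0];
  return course ? <h2>{course.name}</h2> : <p>No course found</p>;
}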

JSON Schema - can array / list validation be combined with anyOf?

I have a JSON document I'm trying to validate, of this form:
...
"products": [{
  "prop1": "foo",
  "prop2": "bar"
}, {
  "prop3": "hello",
  "prop4": "world"
},
...
There are multiple different forms an object may take. My schema looks like this:
...
"definitions": {
  "products": {
    "type": "array",
    "items": { "$ref": "#/definitions/Product" }
  },
  "Product": {
    "type": "object",
    "oneOf": [
      { "$ref": "#/definitions/Product_Type1" },
      { "$ref": "#/definitions/Product_Type2" },
      ...
    ]
  },
  "Product_Type1": {
    "type": "object",
    "properties": {
      "prop1": { "type": "string" },
      "prop2": { "type": "string" }
    }
  },
  "Product_Type2": {
    "type": "object",
    "properties": {
      "prop3": { "type": "string" },
      "prop4": { "type": "string" }
    }
  }
...
On top of this, certain properties of the individual product array objects may be indirected via further usage of anyOf or oneOf.
I'm running into issues in VSCode using the built-in schema validation: it throws errors for every item in the products array that doesn't match Product_Type1.
So it seems the validator latches onto the first oneOf entry it finds and won't validate against any of the other types.
I didn't find any documented limitations of the oneOf mechanism on jsonschema.org, and there is no mention of it in the page specifically dealing with arrays: https://json-schema.org/understanding-json-schema/reference/array.html
Is what I'm attempting possible?
Your general approach is fine. Let's take a slightly simpler example to illustrate what's going wrong.
Given this schema
{
  "oneOf": [
    { "properties": { "foo": { "type": "integer" } } },
    { "properties": { "bar": { "type": "integer" } } }
  ]
}
And this instance
{ "foo": 42 }
At first glance, this looks like it matches /oneOf/0 and not /oneOf/1. It actually matches both schemas, which violates the one-and-only-one constraint imposed by oneOf, so the oneOf fails.
Remember that every keyword in JSON Schema is a constraint. Anything that is not explicitly excluded by the schema is allowed. There is nothing in the /oneOf/1 schema that says a "foo" property is not allowed. Nor does it say that "foo" is required. It only says that if the instance has a property "foo", then it must be an integer.
To fix this, you will need required and maybe additionalProperties, depending on the situation. I show here how you would use additionalProperties, but I recommend you don't use it unless you need to, because it does have some problematic behaviors.
{
  "oneOf": [
    {
      "properties": { "foo": { "type": "integer" } },
      "required": ["foo"],
      "additionalProperties": false
    },
    {
      "properties": { "bar": { "type": "integer" } },
      "required": ["bar"],
      "additionalProperties": false
    }
  ]
}
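Applied back to the schema in the question, the product type definitions would gain a required list (a sketch; add "additionalProperties": false in the same way if the two types must not overlap at all):
"Product_Type1": {
  "type": "object",
  "properties": {
    "prop1": { "type": "string" },
    "prop2": { "type": "string" }
  },
  "required": ["prop1", "prop2"]
},
"Product_Type2": {
  "type": "object",
  "properties": {
    "prop3": { "type": "string" },
    "prop4": { "type": "string" }
  },
  "required": ["prop3", "prop4"]
}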

Ingesting multi-valued dimension from comma-separated string

I have event data from Kafka with the following structure that I want to ingest into Druid:
{
  "event": "some_event",
  "id": "1",
  "parameters": {
    "campaigns": "campaign1, campaign2",
    "other_stuff": "important_info"
  }
}
Specifically, I want to transform the dimension "campaigns" from a comma-separated string into an array / multi-valued dimension so that it can be nicely filtered and grouped by.
My ingestion spec so far looks as follows:
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "event-data",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": {
          "column": "timestamp",
          "format": "posix"
        },
        "flattenSpec": {
          "fields": [
            {
              "type": "root",
              "name": "parameters"
            },
            {
              "type": "jq",
              "name": "campaigns",
              "expr": ".parameters.campaigns"
            }
          ]
        },
        "dimensionsSpec": {
          "dimensions": [
            "event",
            "id",
            "campaigns"
          ]
        }
      }
    },
    "metricsSpec": [
      {
        "type": "count",
        "name": "count"
      }
    ],
    "granularitySpec": {
      "type": "uniform",
      ...
    }
  },
  "tuningConfig": {
    "type": "kafka",
    ...
  },
  "ioConfig": {
    "topic": "production-tracking",
    ...
  }
}
This, however, leads to campaigns being ingested as a string.
I could neither find a way to generate an array with a jq expression in the flattenSpec, nor did I find something like a string-split expression that could be used in a transformSpec.
Any suggestions?
Try setting useFieldDiscovery: false in your flattenSpec. When this flag is set to true (the default), Druid interprets all fields with singular values (not a map or list) and flat lists (lists of singular values) at the root level as columns.
Here is a good example and reference link for using the flatten spec:
https://druid.apache.org/docs/latest/ingestion/flatten-json.html
It looks like since Druid 0.17.0, Druid expressions support typed constructors for creating arrays, so using the string_to_array expression should do the trick!
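If you go that route, a sketch of the transformSpec to add under dataSchema might look like this (the ', ' delimiter assumes the values are separated by a comma and a space, as in the sample event; a transform that reuses the name of an input field shadows it):
"transformSpec": {
  "transforms": [
    {
      "type": "expression",
      "name": "campaigns",
      "expression": "string_to_array(campaigns, ', ')"
    }
  ]
}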

Loopback indexes - how to specify different index types in model definition?

In Loopback (v3), when defining indexes in my model.json files, how do I specify different types of indexes (such as a BRIN)? Also, how do I specify index conditions (such as if I want to create a partial index)? I'm using postgres for the database, if that's relevant.
You can configure the index type via the type field.
{
  "name": "MyModel",
  "properties": {
    // ...
  },
  "indexes": {
    "myindex": {
      "columns": "name, email",
      "type": "BRIN",
      // ...
    }
  }
}
I am afraid LoopBack does not support index conditions (partial indexes) yet. Feel free to open a new issue in https://github.com/strongloop/loopback-connector-postgresql/issues.
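As a workaround, you can create the partial index yourself with raw SQL, for example from a boot script, since the connector exposes execute. A sketch, where the datasource name (postgres), table, and index definition are all placeholders:
// server/boot/create-partial-index.js
module.exports = function (app) {
  var ds = app.dataSources.postgres; // hypothetical datasource name
  var sql = 'CREATE INDEX IF NOT EXISTS mymodel_active_brin ' +
    'ON mymodel USING BRIN (created_at) WHERE active = true';
  ds.connector.execute(sql, [], function (err) {
    if (err) console.error('Could not create partial index:', err);
  });
};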
I was trying to add indexes in LB4. It's pretty straightforward there (should be the same for LB3 as well, I hope):
@model({
  name: 'tablename',
  settings: {
    indexes: {
      idx_tablename: {
        columnA: '',
        columnB: '',
        columnC: ''
      }
    }
  }
})
Once the build is done, an index named idx_tablename spanning the three columns will be created.
In PostgreSQL and LoopBack 3 you can specify a multi-column index like this.
The following LoopBack model JSON creates an index in Postgres where the fields message and type are unique together:
{
  "name": "notification",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "message": {
      "type": "string",
      "required": true
    },
    "type": {
      "type": "string",
      "required": true
    },
    "seen": {
      "type": "boolean",
      "required": true,
      "default": false
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {},
  "indexes": {
    "message_type_index": {
      "keys": "message, type",
      "options": {"unique": true}
    }
  }
}