Given a resource like:
GET: /api/examples/1
{
"id": 1,
"direction": "North"
}
Which also supports POST, PUT, how should the possible values for "direction" be specified?
Additionally, is there a solution which allows the consumer to know which values will be available if those values are contextual? e.g. if the example is made more complicated:
GET: /api/examples/
[
  {
    "id": 1,
    "startLocation": "Kentucky, USA",
    "direction": "North"
  },
  {
    "id": 2,
    "startLocation": "North Pole",
    "direction": "South"
  }
]
(with something vaguely like):
"options": [
{
"value": "North",
"validWhen": "startLocation !== `North Pole`"
},
{
"value": "East",
"validWhen": "true"
},
...
]
Is there a better solution than another resource linked from each example which returns the currently valid options? If not, how does the consumer know that changing "startLocation" changes the valid set of values for "direction"?
I think what you might be looking for is a JSON-Schema. This allows you to strictly describe what options are available in your JSON document, and you can link to the document using a describedBy link.
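For example, a schema for the resource above might declare the allowed values with an enum (a sketch assuming the four compass points; substitute your real value set):
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "id": { "type": "integer" },
    "direction": {
      "type": "string",
      "enum": ["North", "East", "South", "West"]
    }
  }
}
The response can then point at this schema, for instance with a Link: </schemas/example.json>; rel="describedby" header, so consumers can discover it.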
To expand on what @Justas said in his comment, if I understand your requirements correctly, your resource might look something like:
GET /examples/1
{
"startLocation": "Kentucky, USA",
...
"_links": {
"travel-north": "/some/url",
...
}
}
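The presence or absence of a link then answers the contextual question: when startLocation changes, the server simply changes which link relations it serves. A sketch for the second example (same hypothetical relations as above):
GET /examples/2
{
  "startLocation": "North Pole",
  ...
  "_links": {
    "travel-south": "/some/url",
    ...
  }
}
Since travel-north is absent here, the client knows it is not currently a valid option.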
I would like to create a custom attribute that is a list of lists of objects in Infobip via the API.
I have tried sending a POST request, but it doesn't work.
Can you please give me an example of a POST request that creates a list of lists?
Thanks!
Official docs: https://www.infobip.com/docs/api#customer-engagement/people/create-a-custom-attribute
Request examples:
1.
{
"name": "payrollData",
"dataType": "LIST_OF_OBJECTS",
"objectSchema": {
"DATE": "DATE",
"Transactions": "LIST_OF_OBJECTS"
}
}
2.
{
"name": "payrollData",
"dataType": "LIST_OF_OBJECTS",
"objectSchema": {
"DATE": "DATE",
"Transactions":{
"Input": "DECIMAL",
"Output": "DECIMAL"
}
}
}
Responses:
1.
{
"errorCode": 40001,
"errorMessage": "Bad Request",
"validationErrors": [
{
"code": 501,
"message": "Custom attribute schema is invalid.",
"propertyPath": "valueSchema"
}
]
}
2.
{
"requestError": {
"serviceException": {
"messageId": "BAD_REQUEST",
"text": "Bad request"
}
}
}
At this point it is not possible to achieve what you are trying to do via the Infobip API.
The functionality for creating custom attributes is limited to the top layer only and does not support nesting of list objects.
The team in charge will include that explanation in the API docs, but there are no plans to add further complexity layers to this endpoint in the near future.
So generally, you can use LIST_OF_OBJECTS as described at the endpoint URL you linked to, but you cannot nest it any further.
Valid example for LIST_OF_OBJECTS usage:
{
"name": "ShoppingCartList",
"dataType": "LIST_OF_OBJECTS",
"objectSchema": {
"productName": "STRING",
"productPrice": "DECIMAL",
"productCategory": "STRING",
"productImage": "STRING"
}
}
I know this does not resolve your issue the way you had hoped, but it is the correct answer. :)
I am creating a strict validator for a complex JSON file and want to re-use various definitions in order to keep the schema manageable and easier to update.
According to the documentation it is necessary to use allOf to extend a definition with more properties. This is exactly what I've done, but I find that without additionalProperties set to false, validation doesn't prevent arbitrary other properties from being added.
The following massively cut-down schema demonstrates what I'm doing:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://example.com/schema/2021/02/example.json",
"description": "This schema demonstrates how VSCode's JSON schema mechanism fails with allOf used to extend a definition",
"definitions": {
"valueProvider": {
"type": "object",
"properties": {
"example": {
"type": "string"
},
"alternative": {
"type": "string"
}
},
"oneOf": [
{
"required": [
"example"
]
},
{
"required": [
"alternative"
]
}
]
},
"selector": {
"type": "object",
"allOf": [
{
"$ref": "#/definitions/valueProvider"
},
{
"required": [
"operator",
"value"
],
"properties": {
"operator": {
"type": "string",
"enum": [
"IsNull",
"Equals",
"NotEquals",
"Greater",
"GreaterOrEquals",
"Less",
"LessOrEquals"
]
},
"value": {
"type": "string"
}
}
}
],
"additionalProperties": false
}
},
"properties": {
"show": {
"properties": {
"name": {
"type": "string"
},
"selector": {
"description": "This property does not function correctly in VSCode",
"allOf": [
{
"$ref": "#/definitions/selector"
},
{
"additionalProperties": false
}
]
}
},
"additionalProperties": false
}
}
}
This works a treat in IntelliJ IDEA's JSON editor (2020.3.2 ultimate edition) when editing JSON against this schema (using a schema mapping). For example, the file ex-fail.json's content of:
{
"show": {
"name": "a",
"selector": {
"example": "a",
"operator": "IsNull",
"value": "false",
"d": "a"
}
}
}
Is correctly validated, with only "d" highlighted as not allowed.
However, when I use the very same schema and JSON file with VSCode (1.53.2) with a vanilla configuration (except for a schema mapping), VSCode erroneously marks "example", "operator", "value" and "d" as not allowed.
If I remove the additionalProperties definition from the show.selector property, both IDEA and VSCode indicate that all is well, including allowing the "d" property - in doing this I can simplify that property definition to:
"selector": {
"description": "This property does not function correctly in VSCode",
"$ref": "#/definitions/selector"
}
What can I do to the schema to support both IDEA and VSCode whilst disallowing additional properties where they should not appear?
PS: The schema mapping in VSCode is simply along the lines of:
{
"json.schemas": [
{
"fileMatch": [
"*/config/ex-*.json"
],
"url": "file:///C:/my/path/to/example-schema.json"
}
]
}
You cannot do what you ask with JSON Schema draft-07 or prior.
The reason is, when $ref is used in a schema object, all other properties MUST be ignored.
An object schema with a "$ref" property MUST be interpreted as a
"$ref" reference. The value of the "$ref" property MUST be a URI
Reference. Resolved against the current URI base, it identifies the
URI of a schema to use. All other properties in a "$ref" object MUST
be ignored.
https://datatracker.ietf.org/doc/html/draft-handrews-json-schema-01#section-8.3
We changed this to not be the case for draft 2019-09.
It sounds like VSCode is merging the properties in applicators upwards to the nearest schema object (which is wrong), and IntelliJ IDEA is doing something similar but in a different way (which is also wrong).
The correct validation result for your schema and instance is VALID. See the live demo here: https://jsonschema.dev/s/C6ent
additionalProperties relies on the values of properties and patternProperties within the SAME schema object. It cannot "see through" applicators such as $ref and allOf.
For draft 2019-09, we added unevaluatedProperties, which CAN "see through" applicator keywords (although it's a little more complex than that).
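Under draft 2019-09, a sketch of the show.selector property could therefore look like the following (this also assumes the additionalProperties: false inside the definitions/selector is removed, for the reason covered below):
"selector": {
  "allOf": [
    { "$ref": "#/definitions/selector" }
  ],
  "unevaluatedProperties": false
}
Because unevaluatedProperties can "see through" the $ref and allOf, any property not matched by the referenced definitions fails validation, with no need to re-list the allowed names.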
Update:
After reviewing your update, sadly the same is still true.
One approach makes it sort of possible but involves some duplication, and only works when you control the schemas you are referencing.
You would need to redefine your selector property like this...
"selector": {
"description": "This property did not function correctly in VSCode",
"allOf": [
{
"$ref": "#/definitions/selector"
},
{
"properties": {
"operator": true,
"value": true,
"example": true,
"alternative": true
},
"additionalProperties": false
}
]
}
The values of a properties object are schemas, and booleans are valid schemas. You don't need (or want) to do any validation here, only to declare which properties are allowed, followed by "additionalProperties": false.
You'll also need to remove the additionalProperties: false from your definition of selector, as that is preventing ALL properties (which I now guess is why you saw that issue in one of the editors).
It involves some duplication, but it is the only way I'm aware of to do this for draft-07 or earlier. As I said, this is not a problem for draft 2019-09 or above, thanks to the new keywords.
additionalProperties is problematic because it depends on the properties and patternProperties in the same schema object. The result is that "additionalProperties": false effectively blocks schema composition. @Relequestual showed one alternative approach; here is another that is a little less verbose, but still requires duplication of property names.
draft-06 and up
{
"allOf": [{ "$ref": "#/definitions/base" }],
"properties": {
"bar": { "type": "number" }
},
"propertyNames": { "enum": ["foo", "bar"] },
"definitions": {
"base": {
"properties": {
"foo": { "type": "string" }
}
}
}
}
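Against this sketch, an instance such as {"foo": "x", "bar": 1} validates, while {"foo": "x", "baz": 1} fails, because "baz" is not in the propertyNames enum, even though no properties entry constrains "baz".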
I have an endpoint that may contain a large number of resources. They are returned in a paginated list. Each resource has a unique id, a rank field and some other data.
Semantically the resources are ordered with respect to their rank. Users should be able to change that ordering. I am looking for a RESTful interface to change the rank field in many resources in a large collection.
Reordering one resource may result in a change of the rank fields of many resources. For example consider moving the least significant resource to the most significant position. Many resources may need to be "shifted down in their rank".
The collection being paginated makes the problem a little tougher. There has been a similar question before about a small collection.
The rank field is an integer type. I could change its type if it results in a reasonable interface.
For example:
GET /my-resources?limit=3&marker=234 returns:
{
"pagination": {
"prevMarker": 123,
"nextMarker": 345
},
"data": [
{
"id": 12,
"rank": 2,
"otherData": {}
},
{
"id": 35,
"rank": 0,
"otherData": {}
},
{
"id": 67,
"rank": 1,
"otherData": {}
}
]
}
Considered approaches:
1) A PATCH request for the list.
We could modify the rank fields with a standard json-patch request. For example, the following:
[
{
"op": "replace",
"path": "/data/0/rank",
"value": 0
},
{
"op": "replace",
"path": "/data/1/rank",
"value": 1
},
{
"op": "replace",
"path": "/data/2/rank",
"value": 2
}
]
The problems I see with this approach:
a) Using array indexes in the path of patch operations. Each resource already has a unique ID; I would rather use that.
b) I am not sure what the array index should refer to in a paginated collection. I guess it should refer to the global index once all pages are received and merged back to back.
c) The index of a resource in a collection may be changed by other clients. What the current client thinks is at index 1 may not be at that index anymore. I guess one could add test operations to the patch request first. So the full patch request would look like:
[
{
"op": "test",
"path": "/data/0/id",
"value": 12
},
{
"op": "test",
"path": "/data/1/id",
"value": 35
},
{
"op": "test",
"path": "/data/2/id",
"value": 67
},
{
"op": "replace",
"path": "/data/0/rank",
"value": 0
},
{
"op": "replace",
"path": "/data/1/rank",
"value": 1
},
{
"op": "replace",
"path": "/data/2/rank",
"value": 2
}
]
2) Make the collection a "dictionary"/JSON object and use a patch request for a dictionary.
The advantage of this approach over 1) is that we could use the unique IDs in path in patch operations.
The "data" in the returned resources would not be a list anymore:
{
"pagination": {
"prevMarker": 123,
"nextMarker": 345
},
"data": {
"12": {
"id": 12,
"rank": 2,
"otherData": {}
},
"35": {
"id": 35,
"rank": 0,
"otherData": {}
},
"67": {
"id": 67,
"rank": 1,
"otherData": {}
}
}
}
Then I could use the unique ID in the patch operations. For example:
{
"op": "replace",
"path": "/data/12/rank",
"value": 0
}
The problems I see with this approach:
a) The my-resources collection can be large, and I am having difficulty with the meaning of a paginated JSON object, or a paginated dictionary. I am not sure whether an iteration order can be defined on such a large object.
3) Have a separate endpoint for modifying the ranks with PUT
We could add a new endpoint, PUT /my-resource-ranks, and expect the complete ordered list of ids to be passed in the PUT request. For example:
[
{
"id": 12
},
{
"id": 35
},
{
"id": 67
}
]
We would make MyResource.rank a readOnly field so it cannot be modified through other endpoints, as sketched below.
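A sketch of how that read-only marking might be expressed, assuming the resource is described with a JSON Schema (the readOnly annotation is advisory, so the server still has to reject writes to the field):
{
  "properties": {
    "rank": {
      "type": "integer",
      "readOnly": true
    }
  }
}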
The problems I see with this approach:
a) The need to send the complete ordered list. The PUT request to /my-resource-ranks would include only the unique ids of the resources, no other data. That is less severe than sending the full resources, but the complete ordered list can still be large.
4) Drop the MyResource.rank field and let the rank be implied by the order of the /my-resources response.
The returned resources would not have a "rank" field; instead they would already be sorted by rank in the response.
{
"pagination": {
"prevMarker": 123,
"nextMarker": 345
},
"data": [
{
"id": 35,
"otherData": {}
},
{
"id": 67,
"otherData": {}
},
{
"id": 12,
"otherData": {}
}
]
}
The user could change the ordering with the move operation in json-patch.
[
{
"op": "test",
"path": "/data/2/id",
"value": 12
},
{
"op": "move",
"from": "/data/2",
"path": "/data/0"
}
]
The problems I see with this approach:
a) I would prefer the server to have the freedom to return /my-resources in an "arbitrary" order from the client's point of view. As long as the order is consistent, the optimal order for a "simpler" server implementation may differ from the rank defined by the application.
b) Same concern as 1)b). Does the index in the patch operation refer to the global index once all pages are received and merged back to back, or to the index in the current page?
Update:
Does anyone know of further examples from an existing public API? I am looking for more inspiration. So far I have:
Spotify's Reorder a Playlist's Tracks
Google Tasks: change order, move
I would:
1) Use PATCH.
2) Define a specialized content-type specifically for updating the order.
The application/json-patch+json type is pretty great for doing straight-up modifications, but I think your use-case is unique enough to warrant a minimal, specialized content-type.
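For example, a hypothetical media type (every name here is illustrative, not from any registered spec) could carry just the moves, addressing resources by id rather than by array index:
PATCH /my-resources
Content-Type: application/vnd.example.reorder+json
[
  { "op": "move-before", "id": 12, "target": 35 }
]
This would move resource 12 immediately before resource 35, leaving the server free to renumber the rank values however it likes.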
If you look at the 'Tags' option when you post a question, it gives an auto-complete/IntelliSense-style assist as you type a tag, say SQL Server, for example. I need to get the whole list of technologies, the itemsSource of this assist. You may see a similar use-case on LinkedIn when adding a technology/skill you have learnt to your profile. Can someone please tell me the source from which we may get this whole list of technologies?
You mean like
http://api.stackoverflow.com/1.1/tags
which gives:
{
"total": 35690,
"page": 1,
"pagesize": 70,
"tags": [
{
"name": "c#",
"count": 576794,
"fulfills_required": false
},
{
"name": "java",
"count": 554383,
"fulfills_required": false
},
{
"name": "javascript",
"count": 523194,
"fulfills_required": false
},
...
Have a look at the Stack Exchange API.
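Judging by the page and pagesize fields in the response above, the listing is paginated, so you would presumably walk it page by page with something like this (parameter names assumed from the response body, so check the API docs):
http://api.stackoverflow.com/1.1/tags?page=2&pagesize=70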
Is this example of nesting generally accepted as good or bad practice (and why)?
A collection called users:
user
basic
name : value
url : value
contact
email
primary : value
secondary : value
address
en-gb
address : value
city : value
state : value
postalcode : value
country : value
es
address : value
city : value
state : value
postalcode : value
country : value
Edit: From the answers in this post I've updated the schema applying the following rules (the data is slightly different from above):
Nest, but only one level deep
Remove unnecessary keys
Make use of arrays to make objects more flexible
{
"_id": ObjectId("4d67965255541fa164000001"),
"name": {
"0": {
"name": "Joe Bloggs",
"il8n": "en"
}
},
"type": "musician",
"url": {
"0": {
"name": "joebloggs",
"il8n": "en"
}
},
"tags": {
"0": {
"name": "guitar",
"points": 3,
"il8n": "en"
}
},
"email": {
"0": {
"address": "joe.bloggs#example.com",
"name": "default",
"primary": 1,
"il8n": "en"
}
},
"updates": {
"0": {
"type": "news",
"il8n": "en"
}
},
"address": {
"0": {
"address": "1 Some street",
"city": "Somecity",
"state": "Somestate",
"postalcode": "SOM STR",
"country": "UK",
"lat": 49.4257641,
"lng": -0.0698241,
"primary": 1,
"il8n": "en"
}
},
"phone": {
"0": {
"number": "+44 (0)123 4567 890",
"name": "Home",
"primary": 1,
"il8n": "en"
},
"1": {
"number": "+44 (0)098 7654 321",
"name": "Mobile",
"il8n": "en"
}
}
}
Thanks!
In my opinion the above schema is not 'generally accepted', but it looks great. However, I suggest some improvements that will help you query your documents in the future:
User
Name
Url
Emails {email, emailType(primary, secondary)}
Addresses{address, city, state, postalcode, country, language}
Nesting is always good, but nesting two or three levels deep can create additional trouble in querying/updating.
I hope my suggestions help you make the right schema design choice.
You may want to take a look at schema design in MongoDB, and specifically the advice on embedding vs. references.
Embedding is preferred as "Data is then colocated on disk; client-server turnarounds to the database are eliminated". If the parent object is in RAM, then access to the nested objects will always be fast.
In my experience, I've never found any "best practices" for what a MongoDB record actually looks like. The question to really answer is, "Does this MongoDB schema allow me to do what I need to do?"
For example, if you had a list of addresses and needed to update one of them, it'd be a pain, since you'd need to iterate through all of them or know at which position a particular address was located. You're safe from that since there is a key-value pair for each address.
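For instance, with the key-per-address layout above, a single address can be targeted directly with dot notation; a sketch of the update document you would pass to db.users.update (field values are illustrative):
{ "$set": { "address.0.city": "Someothercity" } }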
However, I'd say nix the basic and contact keys. What do these really give you? If you index name, it'd be basic.name rather than just name. AFAIK, there are some performance impacts of long vs. short key names.
Keep it simple enough to do what you need to do. Try something out and iterate on it...you won't get it right the first time, but the nice thing about mongo is that it's relatively easy to rework your schema as you go.
That is acceptable practice. There are some problems with nesting an array inside of an array. See SERVER-831 for one example. However, you don't seem to be using arrays in your collection at all.
Conversely, if you were to break this up into multiple collections, you would have to deal with a lack of transactions and the resulting race conditions in your data access code.