How can I duplicate a Contentful content model via https://github.com/contentful/contentful-migration? It can definitely be done via the Web UI.
There seems to be no such option in the Content Management API https://www.contentful.com/developers/docs/references/content-management-api/
Indeed, it appears there is no single request that will duplicate a content model.
Instead, you would need to make two requests: first fetch the content model you want to duplicate, then create the new model derived from it:
1. Fetch the source model:
GET /spaces/{space_id}/environments/{env_id}/content_types/{content_type_id}
2. Prepare the request body for the duplicate: from the response to step 1, remove the sys object and change name to differentiate it from the source model.
3. Create the new model, using the request body from step 2:
PUT /spaces/{space_id}/environments/{env_id}/content_types/{new_content_type_id}
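For example, with the contentful-management JavaScript SDK (rather than contentful-migration) the two requests could look roughly like this; this is a minimal sketch, assuming a CMA token, and all IDs are placeholders:

const contentful = require('contentful-management')
const client = contentful.createClient({ accessToken: '<CMA_TOKEN>' })

async function duplicateContentType() {
  const space = await client.getSpace('<space_id>')
  const environment = await space.getEnvironment('<env_id>')

  // Step 1: fetch the source content type
  const source = await environment.getContentType('<content_type_id>')

  // Step 2: build the request body without sys and with a new name
  const body = {
    name: source.name + ' (copy)',
    description: source.description,
    displayField: source.displayField,
    fields: source.fields
  }

  // Step 3: create the duplicate under a new ID and publish it to activate it
  const copy = await environment.createContentTypeWithId('<new_content_type_id>', body)
  await copy.publish()
}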
With the Copy Data activity it is possible to retrieve data from a REST call (an array of flat JSON objects, similar to OData) and copy the contents to a flat table, keeping the data types from the source but without having to define the schema for that specific data.
When I try to recreate this with a Data Flow, I can't get it to work. When I check the Data Preview of my source I get a hierarchy with a body (containing my OData-like data) and a header, and if I send that to my sink (Avro) it gets saved in this same hierarchical structure (including the header). I know I can fix this manually by using a Select operation (body.column1, body.column2, etc.), but I want to make my Data Flow dynamic so I'm able to use it with multiple tables/endpoints.
So I receive it like this with my REST source:
[screenshot: hierarchical data with a header and a body containing the columns]
And I want it to be like this at my sink, without hardcoding my schema:
[screenshot: a flat table with just the body columns]
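Roughly, with simplified, made-up column names, the shape is:

Source projection from the REST call:

{
  "header": { "status": "OK" },
  "body": [
    { "column1": "a", "column2": 1 },
    { "column1": "b", "column2": 2 }
  ]
}

Desired output at the sink (just the body columns, flat):

column1 | column2
a       | 1
b       | 2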
The only workaround I can come up with is retrieving the data using Copy Data, putting it somewhere temporarily, and then using my Data Flow to transform it further. Is there an easier way to do this? I can't imagine that I'm the only one who has this issue.
Hopefully it's clear and somebody is able to help. Thank you very much in advance.
The Data Flow projection gets its schema from the API, including the body and the header. Hence, when you use auto mapping, everything is going to be saved.
Below are the workarounds you can consider:
As you mentioned, use Copy Data first and then a Data Flow to transform further.
Use Select or Derived Column transformations to transform your data and get all the column names, and then finally use the sink. For this you can use column pattern matching syntax, so that one condition can match multiple columns to transform.
Check the link below to learn about column pattern mappings.
https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-column-pattern
We are designing a Web API for our software for managing e-commerce product information. We want to provide (among many others) two operations:
Simple one: allow the user to add/modify existing product information:
don't create a new product if it doesn't exist
don't delete any information from the existing product that was not provided in this request
In my opinion the HTTP PATCH method is the proper way to handle this scenario (with json-patch or json-merge-patch), with a URL like this: /products/{ID} (see the sketch after this list)
Harder one: allow the user to add/modify an existing product or create one:
create the product if it doesn't exist in the DB
don't delete any information from the existing product that was not provided in this request (same behaviour as in the first case)
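For the first case, a request could look roughly like this (the product fields are made up for illustration):

PATCH /products/42 HTTP/1.1
Content-Type: application/merge-patch+json

{
  "name": "Updated product name",
  "price": 19.99
}

With JSON merge patch, only the members present in the body are changed; everything else stored for the product stays as it is.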
I'm struggling with designing a REST endpoint for this second use case. I have a few options, but none of them fits the REST principles perfectly for me:
a) Add a custom HTTP header to the endpoint designed for the first case (PATCH) to let the caller control the "not found" behaviour, e.g. create-entity-when-not-exists: true/false - but in my opinion PATCH shouldn't be used for creating resources.
b) Design a new endpoint using PUT with a special header "preserve-not-provided-data" - this, on the other hand, violates PUT semantics for me, because PUT is a create-or-replace method, not create-or-update.
c) Allow PATCH on the /products URL (without {ID} at the end) - in this case we are updating the whole collection (resource) of products, so if a product exists we can update it, or create a new one if it doesn't.
For now solution c) looks fine to me, with one exception: if in the future we want to support batch operations (for both use cases 1 and 2), we would like to use the /products URL, and that would conflict with the URL from solution c).
What do you think? Do you have any other ideas?
PUT and PATCH have differing message semantics, but the core context ("remote authoring") is the same. In both cases, the client request is "Please, server, make your representation of this resource match my local copy".
For example, I GET a JSON document from the server. I make local edits to it. Now I want to "save" my changes on the server. If the document is modest in size, I might just send the entire revised document over the network. If the document is very large, and my changes are modest, then I might send a patch instead.
If you imagine using HTTP to publish edits of HTML web pages to a server, then you've got the right frame of reference. There's not a lot of practical difference between "please patch the title of your copy of the document" and "here is a complete new copy of the document, with my edit to the title". The bytes on disk are going to be the same in either case.
Given that, it would be very odd if those two methods for publishing a new revision of the document were to have vastly different side effects.
Your third approach, based on modifying /products, is potentially fine for both your individual and batch cases. The server gets the new representation of /products (or the patch document describing the changes), decides whether to accept the changes, and if so computes what it needs to do to its own database to make things work.
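For example, if the /products representation were a JSON object keyed by product ID (just an assumption for illustration), a merge patch against the collection could create or update one product without touching the others:

PATCH /products HTTP/1.1
Content-Type: application/merge-patch+json

{
  "42": {
    "name": "New or updated product",
    "price": 19.99
  }
}

Products not mentioned in the patch are left unchanged, which matches the "don't delete what wasn't provided" requirement, and whether "42" is created or updated depends on whether it already exists in the server's representation.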
Note:
A PUT request applied to the target resource can have side effects on other resources.
The HTTP specification is relatively strict about what the message means, but offers the server a lot of leeway in how it behaves in response.
I'm creating a web API and I have a scenario where users will want to load a bunch of data in bulk, which would then be loaded into the database as multiple separate entries. This data could be brand new and thus created, or data may already exist and thus be updated. The definitions for POST and PUT seem to expect to work on only a single piece of data at a time, and the created status code reflects that in providing a location.
I already have methods that allow for a single piece of data to be created or updated. Should I write additional methods to facilitate the creation and updating of this bulk data or should I expect the user to make individual calls (perhaps hundreds of thousands of times) to load their data? What should I be returning as far as status codes and other data is concerned? Which request verbs should define these bulk calls?
The RESTful way to create multiple items inside a collection is to use PUT on the whole collection.
This way you are making a request to replace the whole collection, so you need to pass both old and new items, but the new ones will be created by the server.
Suppose you had only one item in the /items collection called "old item". Here you request to update a collection so that it has two new items.
PUT /items
[{ Name: "old item"}, { Name: "new item 1"}, { Name: "new item 2"}]
You don't need to return any content inside a successful PUT response, because success in this case means that the exact state you requested was applied. That leaves status code 204.
And since you are updating a whole collection resource, you don't return 201, regardless of whether new items were created or not.
There are 2 models: Entity and Subentity. An Entity can have many connected Subentities (a one-to-many relation).
There is a method on the server that returns a new Subentity (let's call it GetEmptySubentity). The point is, when you want to create a new Subentity, you press a button, and a model comes from the server with some fields pre-filled. Some of those pre-filled Subentity values depend on the corresponding Entity, so I need to pass an Entity id in this request.
So should the correct URL to get the empty Subentity be something like /Entity/{id}/Subentity/empty? Or am I getting something wrong?
Yes, you are. According to the uniform interface / HATEOAS constraint you should send hyperlinks to your REST clients, and they should use the API by following those hyperlinks. In order to do this you need a hypermedia format, for example HTML, ATOM+XML, HAL+JSON, JSON-LD & Hydra, etc. (use Google). So with HTML, the result would contain an HTML form with input fields having default values, etc. You should add semantics to that form with RDFa, so by processing the HTML your REST client will know that the link is about creating a new resource. Of course, it is easier to parse the other hypermedia formats; with them you can use the same concept with RDF (with JSON-LD or ATOM for example), or you can use link relations with vendor-specific MIME types (with HAL or ATOM for example), or your own custom solution which describes those input fields. So you usually get the necessary information along with the hyperlink, and you don't have to send another request to get the default values.
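For instance, a response for the SubEntity collection could carry a creation template with pre-filled defaults; a rough sketch in the spirit of HAL-FORMS, where the property names and default values are made up:

GET /Entity/42/SubEntity

{
  "_links": {
    "self": { "href": "/Entity/42/SubEntity" }
  },
  "_templates": {
    "create": {
      "method": "POST",
      "target": "/Entity/42/SubEntity",
      "properties": [
        { "name": "title", "value": "" },
        { "name": "category", "value": "default-taken-from-entity-42" }
      ]
    }
  }
}

A client that understands the media type can render or submit that form directly, without a separate request for an empty SubEntity.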
If you want to make things complicated, you can send a request for the default values to the entity itself, returning just the property values rather than a form with input fields. Optionally you can send a request which returns the entire link; for example GET /Entity/{id}/SubEntity?offset=0&count=0 could return an empty array of subentities along with the form for creation. You can use additional query or path parameters if that form is really big and you don't want to send it with every response related to the SubEntity collection. The URL specification only says that the path should contain the hierarchical part of the URL and the query the non-hierarchical part.
Btw. REST is just a delivery method; you don't have to map it to your database entities. The REST resource and URL structure can be completely different from your database, since you can use any type of data storage mechanism with REST, even the file system...
Can a custom REST service be used as a data source for a Dojo data grid? I need to combine data from three different databases into one data grid. The column data will need to be sortable. The response from the REST service looks to be correct, but I am having trouble binding the JSON data to the Dojo grid columns.
Very interesting -- I tested and saw the same thing with a custom REST service -- it doesn't work when referenced as the storeComponentId of the grid.
I got it to work with the following steps:
Include two dojo modules in the page resources to set up the data store
A pass-thru script tag with code to set up a JSON data store for the grid (uses the dojo modules that the resources specify)
The grid’s store property is set to the variable set up for the data source in the tag. (storeComponentId needs an XPages component name)
Here are some snippets that show the changes:
<xp:this.resources>
<xp:dojoModule name="dojo.store.JsonRest"></xp:dojoModule>
<xp:dojoModule name="dojo.data.ObjectStore"></xp:dojoModule>
</xp:this.resources>
...
<xe:restService id="restService1" pathInfo="gridData">
...
<script>
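// Point a JsonRest store at the REST service (the target path is relative to the XPage)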
var jsonStore = new dojo.store.JsonRest(
{target:"CURRENT_PAGE_NAME_HERE.xsp/gridData"}
);
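// Wrap the REST store in the dojo.data API that the Dojo data grid expects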
var dataStore = new dojo.data.ObjectStore({objectStore: jsonStore});
</script>
...
<xe:djxDataGrid id="djxDataGrid1" store="dataStore">
There's more information and a full sample here:
http://xcellerant.net/dojo-data-grid-33-reading-custom-rest-service/
The easiest way is to start with the Extension Library. There's a sample for a custom JSON REST service. While it pulls data from one source, it is easy to extend to pull data from more than one. I strongly suggest you watch out for overall performance.
What I would do:
create a bean that spits out the JSON to the grid
test it with one database
learn about threads in XPages
use one thread per database; it cuts down your load time
use a ConcurrentSkipListMap with a comparator so you have the initial JSON in the sort order most useful to the user (or the one from the preferences or the last run)
Nota bene: the Java Collections Framework is your friend (a sometimes difficult one).
Let us know how it goes!