Pagination on API data in Flutter - flutter

I get data from an API like this demo:
{ "draw": 0, "recordsTotal": 210, "recordsFiltered": 210, "data": [...], "input": [] }
I want to apply pagination to the API data (in a grid view or list view).
How can I do that?

Your JSON structure needs to look like the following to apply pagination with ease.
Use the current-page and last-page fields to determine which page you are on and how many pages remain.
You can directly use the next link to fetch the next page's data (see the sketch after the example).
{
"meta": {
"page": {
"current-page": 2,
"per-page": 15,
"from": 16,
"to": 30,
"total": 50,
"last-page": 4
}
},
"links": {
"first": "http://localhost/api/v1/posts?page[number]=1&page[size]=15",
"prev": "http://localhost/api/v1/posts?page[number]=1&page[size]=15",
"next": "http://localhost/api/v1/posts?page[number]=3&page[size]=15",
"last": "http://localhost/api/v1/posts?page[number]=4&page[size]=15"
},
"data": [...]
}
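Here is a rough sketch of how a client can walk those links, written in TypeScript for illustration (the flow is the same in Dart): fetch a page, append its rows to the list/grid data source, then follow links.next. The Page shape mirrors the response above; the endpoint URL is whatever your API exposes.

interface Page<T> {
    meta: { page: { "current-page": number; "last-page": number } };
    links: { next?: string | null };
    data: T[];
}

// Follow the server-provided next links until the last page.
async function fetchAllPages<T>(firstUrl: string): Promise<T[]> {
    const items: T[] = [];
    let url: string | null = firstUrl;
    while (url) {
        const res = await fetch(url);
        const page: Page<T> = await res.json();
        items.push(...page.data); // append this page's rows to the list/grid source
        url = page.links.next ?? null; // absent or null on the last page
    }
    return items;
}

In an infinite-scroll list view you would fetch one page at a time from a scroll listener instead of looping, but the next-link bookkeeping is identical.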


Update and append decoded JSON into struct?

I'm receiving a paginated JSON response through my API that looks something like this:
page 1:
{
"count": 96,
"next": "http://127.0.0.1:8000/id=5/?page=2",
"previous": null,
"results": [
{
"id": 10,
"book_name": "Book name",
"book_id": 70,
"chapter_number": 1,
"verse": "Verse title here",
"verse_number": 1,
"chapter": 96
},
{
"id": 198,
"book_name": "Book Name",
"book_id": 70,
"chapter_number": 8,
"verse": "Text here",
"verse_number": 5,
"chapter": 103
}
]
}
As I move through the paginated results, i.e. calling http://127.0.0.1:8000/id=5/?page=2, I'll get new values in the results array.
page 2:
{
"count": 96,
"next": "http://127.0.0.1:8000/id=5/?page=3",
"previous": "http://127.0.0.1:8000/id=5/",
"results": [
{
"id": 206,
"book_name": "Book on Second page",
"book_id": 70,
"chapter_number": 8,
"verse": "yadda yadda",
"verse_number": 13,
"chapter": 103
},
{
"id": 382,
"book_name": "Book on second page..",
"book_id": 70,
"chapter_number": 15,
"verse": "Verse here",
"verse_number": 12,
"chapter": 110
}
]
}
How would I structure my struct / fix my decoding so I can append the values of results from the JSON, while still updating count, next and previous as I go through ..?page=3, ..?page=4, etc.?
Currently, my struct looks like this:
struct VersesSearchResult: Decodable {
    var results: [Verse]
    var count: Int
    var next: String?
    var previous: String?
}
When doing the decoding, I'm not sure what the syntax should be to append results to the struct and update next, previous, and count. So far I have this:
...
let verses = try JSONDecoder().decode(VersesSearchResult.self, from: safeData)
DispatchQueue.main.async {
    // somehow need to get this to work - not entirely sure how:
    // results should be appended, while the rest of the JSON values get updated
    // obviously I can't do it as below:
    self.verseSearchResult.append(contentsOf: verses.results)
    self.verseSearchResult = verses
}
...
Decode to a temporary instance of the type, then process those properties into your main instance. Depending on how you want to process the data, it'll look something like this:
extension VersesSearchResult {
    mutating func update(with new: VersesSearchResult) {
        results.append(contentsOf: new.results)
        count = new.count
        next = new.next
        previous = new.previous
    }
}
and then, in your completion block:
let newVerses = try JSONDecoder().decode(VersesSearchResult.self, from: safeData)
self.verses.update(with: newVerses)
If need be, force the update onto the main thread. You should also handle failure in the 'try', either with a do...catch or another approach.

gatling - Saved whole response data in a previous API call, but how to extract values from the response variable in the next API call?

In Gatling, using Scala scripting, I need to extract the choiceIds for each question according to the questionTypes and questionIds from the previous response, and submit one randomly chosen choice in the next API call.
From the response of API_01 I am saving the questionTypes and questionIds, and also saving the whole response data in Resp, using JSONPath.
The loop then iterates according to the number of questionIds, which is an array (if there are 4 questionIds then there are 4 question types and it will iterate 4 times). It also checks the question type on each iteration, and a switch is used to run the right function for that question type.
Inside each question type I need to answer, and for that I need the choiceIds, which differ from question to question (a question may have 2, 3 or 5 choices).
So I have saved the whole response in the previous API_01 call; now I need to extract values from it and add them to the API_02 POST request JSON file. I already have the questionId and questionType.
API_01, to get the questionId, questionType and choiceId:
private val Assessment = exec(http("assessment")
.post("/Test-graphql")
.headers(commonHeaders)
.body(ElFileBody("/createAssessment.json"))
.check(jsonPath("""$.data.assessment.items[*].question.id""").findAll.saveAs("questionIds"))
.check(jsonPath("""$.data.assessment.items[*].question.type""").findAll.saveAs("questionType"))
.check(jsonPath("""$.data.assessment""").findAll.saveAs("ResponseData"))
)
Response of the above API call:
{
"data": {
"assessment": {
"id": "sample01",
"items": [
{
"question": {
"id": "sample02",
"code": "E_01",
"version": 1,
"type": "MULTIPLE_SELECTION",
"language": "E",
"body": {
"choices": {
"minChoice": 1,
"maxChoice": 7,
"choiceItems": [
{
"choiceId": 2, --> How to get these choiceID for answer Submission
},
{
"choiceId": 3, --> How to get these choiceID
},
{
"choiceId": 115, --> How to get these choiceID
},
{
"choiceId": 196, --> How to get these choiceID
}
]
},
},
},
"submissions": [
],
},
{
"question": {
"id": "sample02",
"code": "E_01",
"version": 1,
"type": "FILL_IN_THE_BLANK",
"language": "E",
"body": {
"choices": {
"minChoice": 1,
"maxChoice": 7,
"choiceItems": [
{
"choiceId": 20, --> How to get these choiceID
},
{
"choiceId": 15,
},
{
"choiceId": 11,
},
{
"choiceId": 156,
}
]
},
},
},
"submissions": [
],
}
]
}
}
}
Method during Answer submission:
private val Answers = foreach("${questionIds}", "id", "i") {
    doSwitch("${questionType(i)}")(
        "MULTIPLE_CHOICE" -> answerMCQ,
        "MULTIPLE_SELECTION" -> answerMSQ,
        "FILL_IN_THE_BLANK" -> answerFIB
    )
}
Sample answer function:
private val answerFIB = exec(
http("Submit Answer")
.post("/submitanswer-grapql")
.headers(commonHeaders)
.body(ElFileBody("data/AnswerFIB.json"))
.check(status.is(200))
.check(jsonPath("$.data.assessmentAnswerSubmit.id").is("${assessmentId}"))
)
Sample AnswerFIB.json for answer submission:
"answer": {
"ChoiceId": 354,--> Here I need to provide Choice ID which changes for each question answer
"answer": "Sample01",
"type": "FILL_IN_THE_BLANK"
}

How to implement a RESTful API for order changes on large collection entries?

I have an endpoint that may contain a large number of resources. They are returned in a paginated list. Each resource has a unique id, a rank field and some other data.
Semantically the resources are ordered with respect to their rank. Users should be able to change that ordering. I am looking for a RESTful interface to change the rank field in many resources in a large collection.
Reordering one resource may result in a change of the rank fields of many resources. For example consider moving the least significant resource to the most significant position. Many resources may need to be "shifted down in their rank".
The collection being paginated makes the problem a little tougher. There has been a similar question before about a small collection.
The rank field is an integer type; I could change its type if that results in a more reasonable interface.
For example:
GET /my-resources?limit=3&marker=234 returns:
{
"pagination": {
"prevMarker": 123,
"nextMarker": 345
},
"data": [
{
"id": 12,
"rank": 2,
"otherData": {}
},
{
"id": 35,
"rank": 0,
"otherData": {}
},
{
"id": 67,
"rank": 1,
"otherData": {}
}
]
}
Considered approaches:
1) A PATCH request for the list.
We could modify the rank fields with a standard JSON Patch request, for example:
[
{
"op": "replace",
"path": "/data/0/rank",
"value": 0
},
{
"op": "replace",
"path": "/data/1/rank",
"value": 1
},
{
"op": "replace",
"path": "/data/2/rank",
"value": 2
}
]
The problems I see with this approach:
a) Using array indexes in the path of the patch operations. Each resource already has a unique ID; I would rather use that.
b) I am not sure what the array index should refer to in a paginated collection. I guess it should refer to the global index once all pages are received and merged back to back.
c) The index of a resource in the collection may be changed by other clients; what the current client thinks is at index 1 may not be at that index anymore. I guess one could add test operations to the patch request first, so the full patch request would look like:
[
{
"op": "test",
"path": "/data/0/id",
"value": 12
},
{
"op": "test",
"path": "/data/1/id",
"value": 35
},
{
"op": "test",
"path": "/data/2/id",
"value": 67
},
{
"op": "replace",
"path": "/data/0/rank",
"value": 0
},
{
"op": "replace",
"path": "/data/1/rank",
"value": 1
},
{
"op": "replace",
"path": "/data/2/rank",
"value": 2
}
]
2) Make the collection a "dictionary"/JSON object and use a patch request for a dictionary.
The advantage of this approach over 1) is that we could use the unique IDs in the path of the patch operations. The "data" in the returned resources would no longer be a list:
{
"pagination": {
"prevMarker": 123,
"nextMarker": 345
},
"data": {
"12": {
"id": 12,
"rank": 2,
"otherData": {}
},
"35": {
"id": 35,
"rank": 0,
"otherData": {}
},
"67": {
"id": 67,
"rank": 1,
"otherData": {}
}
}
}
Then I could use the unique ID in the patch operations. For example:
{
"op": "replace",
"path": "/data/12/rank",
"value": 0
}
The problems I see with this approach:
a) The my-resources collection can be large, and I am having difficulty with the meaning of a paginated JSON object or dictionary. I am not sure whether an iteration order can be defined on such a large object.
3) Have a separate endpoint for modifying the ranks with PUT.
We could add a new endpoint, PUT /my-resource-ranks, and expect the complete ordered list of IDs to be passed in the PUT request. For example:
[
{
"id": 12
},
{
"id": 35
},
{
"id": 67
}
]
We would make the MyResource.rank field read-only so it cannot be modified through other endpoints.
The problems I see with this approach:
a) The need to send the complete ordered list. The PUT request to /my-resource-ranks would not include any other data, only the unique IDs of the resources. That is less severe than sending the full resources, but the complete ordered list can still be large.
4) Avoid the MyResource.rank field and let the "rank" be the order of the /my-collections response.
The returned resources would not have a "rank" field; they would already be sorted by rank in the response:
{
"pagination": {
"prevMarker": 123,
"nextMarker": 345
},
"data": [
{
"id": 35,
"otherData": {}
},
{
"id": 67,
"otherData": {}
},
{
"id": 12,
"otherData": {}
}
]
}
The user could change the ordering with the move operation in JSON Patch:
[
{
"op": "test",
"path": "/data/2/id",
"value": 12
},
{
"op": "move",
"from": "/data/2",
"path": "/data/0"
}
]
The problems I see with this approach:
a) I would prefer the server to have the freedom to return /my-collections in an "arbitrary" order from the client's point of view. As long as the order is consistent, the optimal order for a "simpler" server implementation may differ from the rank defined by the application.
b) The same concern as 1)b): does the index in the patch operation refer to the global index once all pages are received and merged back to back, or to the index within the current page?
Update:
Does anyone know further examples from an existing public API? I am looking for further inspiration. So far I have:
Spotify's Reorder a Playlist's Tracks
Google Tasks: change order, move
I would:
Use PATCH.
Define a specialized content-type specifically for updating the order.
The application/json-patch+json type is pretty great for doing straight-up modifications, but I think your use-case is unique enough to warrant a useful, minimal specialized content-type. A sketch of what such a request could look like follows.
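For illustration only, here is a minimal sketch (in TypeScript) of what a request with such a specialized content-type could look like. The media type name application/vnd.example.reorder+json and the single move-style operation in the body are assumptions made for this example, not an existing standard.

// Ask the server to move one resource before another. The server
// recomputes all affected rank fields itself, so the client never
// has to send the complete ordered list.
async function moveResource(id: number, beforeId: number | null): Promise<void> {
    const res = await fetch("/my-resources", {
        method: "PATCH",
        headers: { "Content-Type": "application/vnd.example.reorder+json" },
        // A null beforeId could mean "move to the most significant position".
        body: JSON.stringify({ op: "move", id, before: beforeId }),
    });
    if (!res.ok) throw new Error(`Reorder failed: ${res.status}`);
}

The appeal of a dedicated content-type is that the body stays tiny and self-describing regardless of the collection size, and the server keeps full control over how ranks are recomputed.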

Algolia AND search through an array

I am looking for a way to search in Algolia for records where at least one element of an array meets several conditions.
As an example, imagine this kind of record:
{
"name": "Shoes",
"price": 100,
"prices": [
{
"start": 20160101,
"end": 20160131,
"price": 50,
},
{
"start": 20160201,
"end": 20160229,
"price": 80,
}
]
}
I am looking for a way to run a query like the following:
prices.price < 60 AND prices.start <= 20160210 AND prices.end >= 20160210
(a product whose price is less than 60 for the given date)
That query should not return anything, because the price condition is not met for that date, but the record is returned anyway. Probably because the conditions are met "globally" among all prices.
I am a beginner with Algolia and trying to learn. Is there a way to express the desired request, or will I have to go with a separate index for prices and use multiple queries?
Thanks.
When a facetFilter or tagFilter is applied to an array, Algolia's engine checks whether any element of the array matches, then moves on to the next condition.
The reason it behaves that way, and not the way you expected, is simple: let's assume you have an array of strings (like tags):
{ tags: ['public', 'cheap', 'funny', 'useless'] }
When a user wants to find a record that is "useless" and "funny", they are not looking for a single tag that is both "useless" and "funny" at the same time, but for a record containing both tags in the array.
The solution is to denormalize your objects: transform a record holding an array of objects into multiple records holding one object each.
So you would transform
{
"name": "Shoes",
"price": 100,
"prices": [
{ "start": 20160101, "end": 20160131, "price": 50 },
{ "start": 20160201, "end": 20160229, "price": 80 }
]
}
into
[
{
"name": "Shoes",
"default_price": 100,
"price": { "start": 20160101, "end": 20160131, "price": 50 }
},
{
"name": "Shoes",
"default_price": 100,
"price": { "start": 20160201, "end": 20160229, "price": 80 }
}
]
You could either do this in the same index (and use distinct to de-duplicate), or have one index per month or day. It really depends on your use-case, which makes it a bit hard to give you the right solution. A query against the denormalized records is sketched below.
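To make that concrete, here is a sketch of the query against the denormalized records, using the JavaScript/TypeScript API client. The index name, the credentials and the attributeForDistinct setting are assumptions for this example.

import algoliasearch from "algoliasearch";

const client = algoliasearch("YOUR_APP_ID", "YOUR_SEARCH_KEY");
const index = client.initIndex("products"); // assumed index name

async function findDiscountedOn(date: number) {
    // Each denormalized record carries a single `price` object, so all
    // three conditions must hold for the same price period.
    const { hits } = await index.search("", {
        filters: `price.price < 60 AND price.start <= ${date} AND price.end >= ${date}`,
        // Collapse the duplicated products back to one hit each; requires
        // attributeForDistinct (e.g. "name") in the index settings.
        distinct: true,
    });
    return hits;
}

For example, findDiscountedOn(20160210) then only returns products that actually have a price below 60 valid on that date.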

Confluence content REST API result size mismatch issue

I am trying the Confluence REST API to read the comments for a given page, using the REST calls below, but it gives a different result when I add the expand (_expandable) body.view parameter.
Total: the comment count for the given ID (some page id) is 28.
Request 1) is able to get 25 comments in the first call and the remaining 3 via the next link.
Request 2) is not able to get 25 in the first call.
Request 1:
/rest/api/content/ID/child/comment
Results:
{
results : [ from 1 to 25 ],
"start": 0,
"limit": 25,
"size": 25,
"_links": {
"self": "/rest/api/content/ID/child/comment?limit=25",
"next": "/rest/api/content/ID/child/comment?limit=25&start=25",
"base": "baseurl",
"context": ""
}
}
Request 2: the only change is adding the expand parameter:
/rest/api/content/ID/child/comment?expand=body.view
Results:
{
results : [ from 1 to 20 ],
"start": 0,
"limit": 25,
"size": 20,
"_links": {
"self": "/rest/api/content/ID/child/comment?limit=25",
"next": "***0/***rest/api/content/ID/child/comment?limit=25&start=25",
"base": "baseurl",
"context": ""
}
}
Any suggestions as to why?