Parse.com REST API Relation Query

If this has already been answered, please feel free to direct me to a relevant Q&A. The only answer I have found describes what I am already trying, and I am not getting the result I am hoping for. I have also looked over the REST API documentation for Parse with no luck in figuring this out. That being said...
I have two classes in Parse.com: Invites and Events
Each Invite has a relation to an Event. Ideally I would like to query the Invite but also receive event_name and event_address from the Events object related to the invite.
This is the curl query I have been trying (pulled from the other answer I mentioned and Parse.com docs):
curl -X GET \
-H "X-Parse-Application-Id: xxxxx" \
-H "X-Parse-REST-API-Key: xxxxxx" \
--data-urlencode 'where={"$relatedTo":{"object":{"__type":"Pointer","className":"Events","objectId":"Yp04m8QT2N"},"key":"event_name"}}' \
https://api.parse.com/1/classes/Invites/XXQ0191KSv
Yp04m8QT2N is the related 'Events' objectId; event_name is a column in the Events class.
I am getting results from the Invites class but was expecting event_name to be included in the results. What am I missing? I haven't gotten as far as including multiple columns from Events, so if you could save me another question it would be much appreciated!
Response:
{"createdAt":"2014-06-07T20:26:04.877Z","event_objectId":{"__type":"Relation","className":"Events"},"invite_email":"test#test.com","invite_firstname":"Steven","invite_lastname":"Carlton","invite_opendate":{"__type":"Date","iso":"2014-06-08T19:16:00.000Z"},"invite_phone":"1234567890","invite_sentdate":{"__type":"Date","iso":"2014-06-07T20:26:00.000Z"},"objectId":"XXQ0191KSv","updatedAt":"2014-06-08T19:17:17.980Z"}
Thanks for your help!

If it really is a pointer, then as the docs say in the 'Retrieving Objects' section, you can just do
--data-urlencode 'include=$pointer'
where $pointer is the field name in Invites whose value is the pointer to Events.
If you are using a relation type instead of a pointer, then you cannot flatten the query in this way.
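For example, assuming event_objectId in Invites were a Pointer to Events rather than a Relation, a full request might look like this (a sketch reusing the keys and object ID from the question; note the -G flag so curl appends the data to the GET URL):
curl -X GET \
-H "X-Parse-Application-Id: xxxxx" \
-H "X-Parse-REST-API-Key: xxxxxx" \
-G \
--data-urlencode 'include=event_objectId' \
https://api.parse.com/1/classes/Invites/XXQ0191KSv
The response would then contain the full Events object (including event_name and event_address) nested under event_objectId.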

Related

Why did Prometheus return historical data?

I have grabbed some metrics via Prometheus, but it seems like I got some historical data.
I ran the command curl -X GET $APISERVER/metrics --header "Authorization: Bearer $TOKEN" --insecure | grep apiserver_flowcontrol_dispatched_requests_total three times in a row; the results are shown in the picture.
The result of the second command shows no priority_level="global-default" data, which is indicated by a red underline in the results of the other two commands. The value for priority_level="global-default", indicated by a yellow underline, is a counter data type, yet the result of the second command is less than that of the first.
I suspect my Prometheus returned historical data.
How can I resolve this problem?
I'm not sure this is a problem.
Yes. Prometheus is a time series database, meaning that every time you query a metric (to get a value) you get the value of that specific metric at the particular time when the query happened.
More details: https://prometheus.io/docs/introduction/overview/
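To illustrate, an instant query against the Prometheus HTTP API can be pinned to a specific evaluation time via the time parameter (the host below is an assumption; the metric name is taken from the question):
curl -G 'http://localhost:9090/api/v1/query' \
--data-urlencode 'query=apiserver_flowcontrol_dispatched_requests_total' \
--data-urlencode 'time=2023-01-01T00:00:00Z'
Querying at a different time can legitimately return different values, including series that did not yet exist at that time.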

Metrics returned in random order when querying the CrUX Report API

I query the CrUX Report API, as the dev docs show.
Instead of origin I use url to get data for certain URLs, so my query looks like:
curl https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY \
--header 'Content-Type: application/json' --data '{"url":"https://www.spiegel.de/schlagzeilen/"}'
I do this one by one for different URLs.
My problem: the responses come back with the metrics in a different order: for the first query CLS comes as the first metric, for the second query FID, and so on.
This issue doesn't depend on how I run the queries: cURL in the terminal, Postman, or Google Apps Script in Google Sheets.
I tried to set an explicit metrics order in the request, like
curl https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY \
--header 'Content-Type: application/json' --data '{"url":"https://www.spiegel.de/schlagzeilen/","metrics":["cumulative_layout_shift","first_contentful_paint","first_input_delay","largest_contentful_paint"]}'
but the responses still come in a random order.
Q: Is there a way to force the metric order I want in the response?
While the metrics input parameter does allow you to list the metrics that get output in the results, it doesn't control the ordering of the metrics. There is no other input mechanism to enforce a particular metric ordering.
That said, the metrics response is a JSON object, which is an inherently unordered data structure. The ordering of the object keys may affect how the object is iterated; for example, Object.entries(response.record.metrics) will iterate over the metrics in the order they appear.
If the order is critical to your application, I would recommend deterministically looping through a constant array of metric IDs rather than iterating over the object keys. For example:
const METRICS = ['first_contentful_paint', 'largest_contentful_paint', 'first_input_delay', 'cumulative_layout_shift'];
const cruxData = METRICS.map(metric => response.record.metrics[metric]);
I see you're using cURL to issue the requests, so you can adapt this strategy to whichever programming language you use to parse the results.

Finding all the users in Jira using the REST API

I'm trying to list all the users in Jira using the REST API. I'm currently using the user search feature via GET: https://docs.atlassian.com/jira/REST/server/#api/2/user-findUsers
The docs say that the result will by default contain the first 50 users and that this can be expanded up to 1000. Unlike other features available in the REST API, pagination is not specified here.
An example is the group member feature : https://docs.atlassian.com/jira/REST/server/#api/2/group-getUsersFromGroup
So I ran a test: with my test Jira containing 2 members, I tried to get only one result and see whether there was some sort of indication referring to the rest of the results.
The response only gives the results, with no way of knowing whether there were more than 1000 (or 1 in my example). That may be logical, but for an organization with more than 1000 members, listing all the users with http://jira/rest/api/2/user/search?username=.&maxResults=1000&includeInactive=true will only give at most 1000 results.
I'm matching all users regardless of their names by using . as the matching character.
Thanks for your help!
What you can do is calculate the number of users manually.
Let's say you have 98 users in your system.
The first search will give you 50 users. Now you have an array, and you can get the length of that array, which is 50.
Since you do not know if there are 50 or 51 users, you execute another search with the parameter &startAt=50.
This time the array length is 48 instead of 50 and you know that you've reached all the users in the system.
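As a sketch, using the same endpoint and parameters as in the question (host and credentials are placeholders), the paging would look like this:
curl -u username:password 'http://jira/rest/api/2/user/search?username=.&includeInactive=true&maxResults=50&startAt=0'
curl -u username:password 'http://jira/rest/api/2/user/search?username=.&includeInactive=true&maxResults=50&startAt=50'
Keep increasing startAt by maxResults until a request returns fewer than maxResults users.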
From speaking to Atlassian support, it seems like the user/search endpoint has a bug where it will only ever return the first 1,000 results at most.
One possible other way to get all of the users in your JIRA instance is to use the Crowd API's /rest/usermanagement/1/search endpoint:
curl -X GET \
'https://jira.url/rest/usermanagement/1/search?entity-type=user&start-index=0&max-results=1000&expand=user' \
-H 'Accept: application/json' -u username:password
You'll need to create a new JIRA User Server entry to create Crowd credentials (the username:password parameter above) for your application to use in its REST API calls:
Go to User Management.
Select JIRA User Server.
Add an application.
Enter the application name and password that the application will use when accessing your JIRA server application.
Enter the IP address, addresses, or IP CIDR block of the application, and click Save.

Can firebase server timestamps be written without making two requests?

The Firebase REST API describes how to write server values (currently only timestamps are supported) at a location, but it appears that one must submit a separate request in order to do this. Is there (or is there any plan for) a way of setting timestamps (like createdAt) at the same time one submits other data? It seems like this would really help reduce traffic and improve performance.
Sure, this is possible. The documentation is admittedly a little unclear, but all you need to do is include the {".sv": "timestamp"} object as part of your JSON payload. Here's an example that saves it to a key timestamp.
curl -X PUT -d '{"something":"something", "timestamp":{".sv": "timestamp"}}' https://abc.firebaseio-demo.com/.json

Limit Number of Posts coming from /feed Facebook Graph API

When I use /{page_id}/feed?access_token=xxxx, this gives me all the posts on the page, both by users and by the page itself. I want to limit and control the posts. I want to put constraints like:
Timestamp (that is to get posts after a particular timestamp)
Post id (to get post after a particular post)
Getting all the posts from the feed is irrelevant and ineffective. Is there any way to accomplish this?
You can use
GET /{page_id}/feed?limit={nr_of_posts_to_return}&since={timestamp}
to be able to limit the number of results and specify the starting timestamp. Have a look at the reference here:
https://developers.facebook.com/docs/graph-api/using-graph-api/v2.0#paging
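As a concrete sketch in curl (the page ID, token, and timestamp are placeholders; since takes a Unix timestamp):
curl -G 'https://graph.facebook.com/v2.0/PAGE_ID/feed' \
-d 'access_token=YOUR_ACCESS_TOKEN' \
-d 'limit=10' \
-d 'since=1401580800'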
For your second use case you'd need the Batch API, IMHO, because with a single Graph API request you can't filter relative to a specific post. Instead, you split this into two queries with the Batch API, as described here:
https://developers.facebook.com/docs/graph-api/making-multiple-requests/#operations
The request would then look like this:
curl \
-F 'access_token={your_access_token}' \
-F 'batch=[{ "method":"GET","name":"get-post","relative_url":"{your_post_id}?fields=created_time"},{"method":"GET","relative_url":"{your_page_id}/feed?since={result=get-post:$.created_time}&limit={nr_of_posts_to_return}"}]' \
https://graph.facebook.com/
In the Graph Explorer, you have to change the HTTP method to POST, then add a new field called batch. Leave the URL blank for now. Paste this as the batch value:
[{ "method":"GET","name":"get-post","relative_url":"​293088074081904_400071946716849?fields=created_time"},{"method":"GET","relative_url":"293088074081904/feed?since={result=get-post:$.created_time}&limit=1"}]
This works at least for me.
For others looking for a solution: it appears that a 'since' applied at the 'comment' and 'reply' levels is ignored, which means this is not a solution for me.
The query Tobi provided returns all the posts after the first 'since', but also every comment and reply on those posts, regardless of what you set their 'since' to.
Further to this, if you wish to search for new comments regardless of the age of the post, this fails as well. For example: remove the first 'since', change to limit=1000, and request only comments as a field using 'since'; this will return the last 1000 posts and all comments for all of those 1000 posts.
That said, thank you Tobi for your time and for showing me how to get everything I need in a single call. I may experiment with parsing the complete recordset every time (maybe too much traffic though!).