Getting 200 OK and 400 Bad Request on the same request from the Google Drive REST API

My Google Drive app checks on start whether its settings files (the files are all on Drive) have changed, so on every start it sends a files.list request to the Google Drive API to get the files' metadata. The app has been used for more than 4 months with no suspicious behaviour from this request, but in the last few days a strange thing has been happening. The problem is that from time to time (on roughly 1 out of 10-20 starts) the request gets this response:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "Invalid Value",
        "locationType": "parameter",
        "location": "q"
      }
    ],
    "code": 400,
    "message": "Invalid Value"
  }
}
In all other cases the request gets an adequate response (mostly 200 OK :)).
This is very strange, because the request is the same every time, and the query string (the q parameter) is of course the same, i.e.
GET https://www.googleapis.com/drive/v2/files?corpus=DEFAULT&q=(title%3D'%5BClosed+Captions%5D.profile'+or+title%3D'Profiles'+or+title%3D'%5BGeneral%5D.profile'+or+title%3D'%5BTeletext%5D.profile'+or+title%3D'ezt5RoamingSettings.ini'+or+title%3D'ezt5ProjectSettings.ini'+or+title%3D'Project+Templates'+or+title%3D'ezt5IOPresets.ini'+or+title%3D'dicAutoCorrect.xml'+or+title%3D'dicShortForms.xml'+or+title%3D'dicDictionaries.xml')+and+trashed%3Dfalse&fields=items(fileSize%2C+id%2C+title%2C+modifiedDate%2C+md5Checksum%2C+parents(id%2C+isRoot))&key={MY_API_KEY}
Any ideas why this is happening will be much appreciated...
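Not an answer, but for debugging intermittent failures like this, a small retry loop that logs the failing attempt can help separate a transient server-side hiccup from a genuinely malformed request. Below is a minimal sketch in Python with the requests library, mirroring the request from the question; the API key, retry count, and backoff are placeholders, not the app's actual code.

# Sketch only: repeat the files.list call a few times when the API intermittently
# returns an error, and log the response body so the occasional 400 can be
# correlated with exactly what was sent.
import time
import requests

API_KEY = "MY_API_KEY"  # placeholder
QUERY = (
    "(title='[Closed Captions].profile' or title='Profiles' or title='[General].profile' "
    "or title='[Teletext].profile' or title='ezt5RoamingSettings.ini' "
    "or title='ezt5ProjectSettings.ini' or title='Project Templates' "
    "or title='ezt5IOPresets.ini' or title='dicAutoCorrect.xml' "
    "or title='dicShortForms.xml' or title='dicDictionaries.xml') and trashed=false"
)
FIELDS = "items(fileSize, id, title, modifiedDate, md5Checksum, parents(id, isRoot))"

def list_settings_files(max_attempts=3):
    params = {"corpus": "DEFAULT", "q": QUERY, "fields": FIELDS, "key": API_KEY}
    for attempt in range(1, max_attempts + 1):
        resp = requests.get("https://www.googleapis.com/drive/v2/files", params=params)
        if resp.status_code == 200:
            return resp.json()
        print(f"attempt {attempt}: HTTP {resp.status_code}: {resp.text}")
        time.sleep(2 ** attempt)  # simple exponential backoff
    resp.raise_for_status()

Logging the exact URL sent on a failing attempt also makes it possible to confirm that the q value really is byte-for-byte identical to the one that succeeds.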

Related

Getting error 400 on the API endpoint runReport, the front end doesn't give me errors with the same dimensions and metrics

I'm trying to use the API for GA4. Before December 5 I was able to make the following call:
POST https://analyticsdata.googleapis.com/v1beta/properties/307276000:runReport
{
  "dimensions": [
    {"name": "date"},
    {"name": "deviceCategory"},
    {"name": "pageLocation"},
    {"name": "pageTitle"},
    {"name": "sessionCampaignName"},
    {"name": "sessionSourceMedium"}
  ],
  "metrics": [
    {"name": "screenPageViews"},
    {"name": "newUsers"},
    {"name": "engagedSessions"}
  ],
  "dateRanges": [
    {"startDate": "2022-12-02", "endDate": "2022-12-12"}
  ],
  "limit": "10000",
  "offset": 0,
  "keepEmptyRows": true
}
and it was successful, but since December 5 I get the following error:
{
  "error": {
    "code": 400,
    "message": "Please remove pageLocation to make the request compatible. The request's dimensions & metrics are incompatible. To learn more, see https://ga-dev-tools.web.app/ga4/dimensions-metrics-explorer/",
    "status": "INVALID_ARGUMENT"
  }
}
The front end gives me the results for the same dimensions and metrics without an error, yet I keep getting this API error since December 5. I'm trying to understand what the problem is.
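For what it's worth, the GA4 Data API exposes a checkCompatibility method that reports which of the requested dimensions and metrics cannot be combined in a single report. A minimal sketch in Python, assuming an OAuth access token (the token value below is a placeholder) and the property ID from the question:

# Sketch: ask the GA4 Data API which of these dimensions/metrics conflict.
import requests

ACCESS_TOKEN = "ya29...."  # placeholder OAuth2 token with an Analytics scope
PROPERTY = "properties/307276000"

body = {
    "dimensions": [{"name": n} for n in [
        "date", "deviceCategory", "pageLocation", "pageTitle",
        "sessionCampaignName", "sessionSourceMedium"]],
    "metrics": [{"name": n} for n in ["screenPageViews", "newUsers", "engagedSessions"]],
}

resp = requests.post(
    f"https://analyticsdata.googleapis.com/v1beta/{PROPERTY}:checkCompatibility",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
# Each entry in the response is marked COMPATIBLE or INCOMPATIBLE for this request.
print(resp.json())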

How do I call the Google Drive API to create permissions via Delphi

I'm trying to use Delphi to call the Google Drive Permissions API to create a permission on a team drive. All goes well with the settings in the Google "Try it" section, but my Delphi code isn't working.
reqPermissions.Body.Add('{role: "organizer",type: "user",emailAddress: "it_admin#mycompany.com"}');
I've tried supplying the type field as a parameter in the Request component, but this also doesn't work. It's driving me nuts at the moment, so any help is much appreciated.
I always get the following response:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "required",
        "message": "The permission type field is required.",
        "locationType": "other",
        "location": "permission.type"
      }
    ],
    "code": 400,
    "message": "The permission type field is required."
  }
}
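One thing the error hints at is that the body may not be reaching the API as valid JSON: the field names in the Delphi string are unquoted, so the parser may never see the type field. Below is a hedged sketch of the equivalent REST call in Python rather than Delphi, with a properly quoted JSON body, an explicit Content-Type header, and the supportsAllDrives flag that shared-drive permission calls need; the access token, file ID, and email address are placeholders.

# Sketch of the Drive v3 permissions.create REST call (not the asker's Delphi code).
import requests

ACCESS_TOKEN = "ya29...."          # placeholder OAuth2 token
FILE_ID = "TEAM_DRIVE_OR_FILE_ID"  # placeholder

resp = requests.post(
    f"https://www.googleapis.com/drive/v3/files/{FILE_ID}/permissions",
    params={"supportsAllDrives": "true"},
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    # Field names must be quoted; with unquoted keys the API cannot read "type"
    # and reports it as missing.
    json={"role": "organizer", "type": "user", "emailAddress": "it_admin@mycompany.com"},
)
print(resp.status_code, resp.json())

In Delphi the same idea would apply: send a body whose keys are quoted and make sure the request's content type is application/json.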

Facebook Graph IP filtering

I've read a few posts about the limit of 600 requests in 600 seconds that the Facebook Graph API sets on requests.
This question is about getting some clarification on the issue I'm facing.
I'm making quite simple requests to the FB Graph.
So, from my home I run:
curl https://graph.facebook.com/v2.0/?id=https://www.example.com/article/the-name-of-the-article/
(Having the trailing slash is not trivial)
which gives me these results:
{
  "share": {
    "comment_count": 0,
    "share_count": 605
  },
  "og_object": {
    "id": "XXXXX6ZZ70301002",
    "description": "text",
    "title": "title",
    "type": "article",
    "updated_time": "2019-03-09T00:15:06+0000"
  },
  "id": "https://www.example.com/article/the-name-of-the-article"
}
I took the URL from the JS code on the website.
Running the Scrapy crawler on the same URL, still from the home network, gives me the same as above:
{
  "share": {
    "comment_count": 0,
    "share_count": 605
  },
  "og_object": {
    "id": "XXXXX6ZZ70301002",
    "description": "text",
    "title": "title",
    "type": "article",
    "updated_time": "2019-03-09T00:15:06+0000"
  },
  "id": "https://www.example.com/article/the-name-of-the-article"
}
This is more than fine for now, and the JS-code-scraping system seems to be working; the results contain all the information from the JS calls to the FB Graph.
On the server side, however, the crawler runs as expected, but taking a closer look at the results, the information coming from the JS code execution is not there.
I've checked the whole code against another URL which also fires JS actions to provide HTML content, and there the code actually works fine.
Then, repeating the simple:
curl https://graph.facebook.com/v2.0/?id=https://www.example.com/article/the-name-of-the-article
this time from the server IP, it replies:
{
  "error": {
    "message": "(#4) Application request limit reached",
    "type": "OAuthException",
    "is_transient": true,
    "code": 4,
    "fbtrace_id": "ErXXXXZZrOn"
  }
}
Regarding IP blocks, the code couldn't have delivered more than 600 requests; it actually sent fewer than 10 requests to the Graph API.
Obviously, the information coming from the JS requests to the FB Graph API from the server side is missing.
I tried from different servers at different providers, to check whether there was an IP filter at the cloud providers, but that does not seem to be the case, as the results are the same on every server.
What is going on here?
Why do the JS requests not get valid response data when they are fired from server IP addresses? (The curl command also gives the OAuthException "Application request limit reached" error from there.)
Thanks for any clue.
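One way to narrow this down is to make the same lookup with an explicit app access token and inspect the x-app-usage response header, which the Graph API uses to report how much of the app's rate-limit budget has been consumed. A rough sketch, with the token and URL as placeholders:

# Sketch: repeat the URL lookup with an app access token and print the usage header.
import requests

ACCESS_TOKEN = "APP_ID|APP_SECRET"  # placeholder app access token
ARTICLE_URL = "https://www.example.com/article/the-name-of-the-article/"

resp = requests.get(
    "https://graph.facebook.com/v2.0/",
    params={"id": ARTICLE_URL, "access_token": ACCESS_TOKEN},
)
print(resp.status_code)
print(resp.headers.get("x-app-usage"))  # rate-limit usage reported by the Graph API
print(resp.json())

If the server-side calls are going out without a token (or all sharing one), they may be counted against a much smaller shared budget than the calls made from the home network.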

Uber API - Wrong response status code when deleting current ride

I am getting response code 204 when calling DELETE https://sandbox-api.uber.com/v1.2/requests/current to delete my current ride. I am getting the same response code (204) even when there is no current trip. Here is the response of GET https://sandbox-api.uber.com/v1.2/requests/current:
{
  "meta": {},
  "errors": [
    {
      "status": 404,
      "code": "no_current_trip",
      "title": "User is not currently on a trip."
    }
  ]
}
So if there is no current trip, DELETE https://sandbox-api.uber.com/v1.2/requests/current should return a 404 status code. Is there any mistake on my side, or is it a bug in the API?
In order to clean up and delete specific requests, it is recommended to use the following endpoint:
DELETE https://sandbox-api.uber.com/v1.2/requests/{request_id}
In this case you will delete a specific request identified by its ID, and there will be no confusion about whether a current trip is in place or not. Check the documentation for more info here.
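A rough sketch of that flow, assuming a sandbox OAuth token (placeholder below): fetch the current request first, then delete it by ID, so a missing trip shows up as a 404 on the GET rather than an ambiguous 204 on the DELETE.

# Sketch: look up the current request, then delete it by its request_id.
import requests

TOKEN = "SANDBOX_ACCESS_TOKEN"  # placeholder
BASE = "https://sandbox-api.uber.com/v1.2"
headers = {"Authorization": f"Bearer {TOKEN}"}

current = requests.get(f"{BASE}/requests/current", headers=headers)
if current.status_code == 404:
    print("No current trip to delete.")
else:
    request_id = current.json()["request_id"]
    resp = requests.delete(f"{BASE}/requests/{request_id}", headers=headers)
    print(resp.status_code)  # 204 expected on successful deletion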

Google BigQuery Error: "code": 501, "message": "Not Implemented: Cannot cancel a job of this type"

I'm getting the following error when trying to cancel a (not too big, 2.6 GB) copy job in BigQuery:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "notImplemented",
        "message": "Not Implemented: Cannot cancel a job of this type"
      }
    ],
    "code": 501,
    "message": "Not Implemented: Cannot cancel a job of this type"
  }
}
The status of the job is the following:
"status": {
"state": "RUNNING"
}
I also can't delete the table it's copying from (which kind of makes sense), but there is no error message: I can click 'Delete table', but nothing happens.
Has someone encountered the same problem, or does someone have a solution for this? It's been running for an hour now and I need to move on.
Table copy jobs cannot be cancelled (this is what the error message is saying), but BigQuery is going to implement support for cancelling copy jobs soon. In the meantime, if you could share the job ID, the BigQuery team can investigate why this particular one got stuck.
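For inspecting the stuck job and reproducing the error programmatically, here is a rough sketch using the google-cloud-bigquery Python client; the project ID, job ID, and location are placeholders:

# Sketch: look up the job's type and state, then attempt a cancel and surface the API error.
from google.api_core.exceptions import GoogleAPICallError
from google.cloud import bigquery

client = bigquery.Client(project="my-project")        # placeholder project
job = client.get_job("job_abc123", location="US")     # placeholder job ID / location
print(job.job_type, job.state)                        # e.g. "copy", "RUNNING"

try:
    client.cancel_job(job.job_id, location="US")
except GoogleAPICallError as exc:
    # Copy jobs currently report 501 Not Implemented here.
    print("Cancel failed:", exc)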