API error when requesting full-size data from Alpha Vantage

I am working on analysing some historical stock market data for Australian shares, using Alpha Vantage as the API to retrieve the data.
My problem relates specifically to the TIME_SERIES_DAILY function with the full outputsize. For some shares, I receive an error message in response to the API call:
https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=SUL.AUS&outputsize=full&apikey=XXXXXXXXXXXXXX
{
"Error Message": "Invalid API call. Please retry or visit the documentation (https://www.alphavantage.co/documentation/) for TIME_SERIES_DAILY."
}
If I change the outputsize argument to 'compact' it works, but it only returns a subset of the data I am after.
The bizarre thing is that the full-size request works for about 60% of the stocks I am after. After some trial and error, I deduced that the API returns an error for the same specific shares every time, and not for others.
I presume there is some feature of these specific shares that causes the call to fail - I just don't know what.

This is a known issue with Alpha Vantage. When there is a problem with data more than 100 days old, instead of returning bad data, the API discards the response entirely so as not to throw off trading algorithms.
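
Given that behaviour, a practical workaround is to detect the error body and fall back to a compact request, so the call at least returns recent data. A minimal sketch (the fallback strategy and the function name are my own, not part of the Alpha Vantage API):

import requests

BASE_URL = "https://www.alphavantage.co/query"

def fetch_daily(symbol, api_key):
    # Try the full history first; fall back to compact if the API
    # rejects the full-size request for this particular symbol.
    for outputsize in ("full", "compact"):
        resp = requests.get(BASE_URL, params={
            "function": "TIME_SERIES_DAILY",
            "symbol": symbol,
            "outputsize": outputsize,
            "apikey": api_key,
        })
        data = resp.json()
        # Alpha Vantage signals failure in the JSON body, not the HTTP status.
        if "Error Message" not in data:
            return data
    raise ValueError("Both full and compact requests failed for " + symbol)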

Related

What HTTP status code should I use for a REST response that only contains part of the answer?

Say I am building an API that serves the content of old magazines. The client can request the content of the latest edition prior to a given date, something like:
GET /magazines/<some_magazine_id>?date=2015-03-15
The response includes general data about the magazine, such as its name, country of distribution, date of creation, and so on; if my database has content available for an edition prior to the given date, it returns that as well.
What HTTP status code should I use if the client requests data for a date before anything I have available? I might have data in the future, when I expand the content of my database. So this is sort of a temporary failure, but it's unclear how long it may take before it is fixed.
Based on other questions, I feel like:
404 is not right: in some cases I have no data at all about a magazine, in which case I'd return a 404. But this is different: I would like the user to get the partial data, with an indication that it's only partial.
4xx codes are client-side errors, and I feel the client didn't do anything wrong here.
206 Partial Content seems intended for returning a byte range of a resource, not for incomplete application data.
3xx: I thought about using a 302 or similar, pointing to the closest edition available, but I am not sure that is right either, because I would be pointing to something semantically different from what was asked.
5xx codes indicate server errors and, I think, should not contain any data.
My best guess would be something like a 2xx "No Details Available (Yet)" status indicating that the request was "somewhat successful", but I can't find anything that fits in the registered list.
I would go with a 200 OK. You did find the magazine and you are returning data about it. While the data is not as complete as it might have been, it is a complete response that can be understood. Presumably you would return an empty array or a null reference where the edition(s) would have been?
The problem with many of the more specific responses is that they are really intended for something more low-level. You are not returning partial content; you are returning the full content. It is just that the higher-level application data is not as complete as you might have wished (no old edition found). At the REST/HTTP level, the response is just fine.
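
To make that concrete, here is a minimal sketch of the 200 OK approach using Flask (the endpoint shape and field names are illustrative, and find_edition_before stands in for a real database query):

from flask import Flask, jsonify, request

app = Flask(__name__)

def find_edition_before(magazine_id, date):
    return None  # placeholder: query the database for the latest edition here

@app.route("/magazines/<magazine_id>")
def get_magazine(magazine_id):
    # General data about the magazine; an illustrative stand-in for a real lookup.
    magazine = {"id": magazine_id, "name": "Example Weekly", "country": "AU"}
    edition = find_edition_before(magazine_id, request.args.get("date"))
    # 200 OK either way: the magazine resource itself was found.
    # An empty list signals "no edition available (yet)" to the client.
    magazine["editions"] = [edition] if edition else []
    return jsonify(magazine), 200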

Facebook API - "reduce the amount of data you're asking for, then retry your request", even for 1 row

I have the following logic for my ad insights requests:
If Facebook asks me to reduce the amount of data I'm requesting, I halve the date range. If the date range cannot be shrunk any further, I halve the limit instead.
It gets to the point where I send this request:
https://graph.facebook.com/v3.2/{account}/insights?level=ad&time_increment=1&limit=1&time_range={"since":"2019-03-29","until":"2019-03-29"}&breakdowns=country&after=MjMwNwZDZD
But I still get that error:
Please reduce the amount of data you're asking for, then retry your request
There is no more reducing I can do; my back-off logic bottoms out, roughly as in the sketch below.
Note that this only happens sometimes.
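
For reference, the reduction logic looks roughly like this (a simplified sketch; the names are illustrative):

from datetime import timedelta

def shrink(since, until, limit):
    # Halve the date range first; once it can shrink no further,
    # halve the limit instead.
    span_days = (until - since).days
    if span_days > 0:
        return since, since + timedelta(days=span_days // 2), limit
    if limit > 1:
        return since, until, max(1, limit // 2)
    raise RuntimeError("Nothing left to reduce")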
One way to avoid the error, once you are down to requesting a single item (limit=1), is to start splitting the fields and request half of the fields in each request.
Another way is to run an asynchronous report, which is not subject to such a tight time limit.
Official Facebook API team response:
It looks like you are requesting a lot of fields; this is likely the cause of this error. This will cause the request to time out. Could you try using asynchronous requests as described here (https://developers.facebook.com/docs/marketing-api/insights/best-practices/#asynchronous)? Async requests have a much longer time limit; this will likely resolve your issue.
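
Following that advice, an async report run looks roughly like the sketch below: start the run, poll its status, then fetch the results. This uses plain HTTP against the documented report-run flow, with error handling omitted:

import time
import requests

GRAPH = "https://graph.facebook.com/v3.2"

def run_async_insights(account_id, token, params):
    # Starting the report run returns a report_run_id immediately.
    run = requests.post(f"{GRAPH}/{account_id}/insights",
                        params=dict(params, access_token=token)).json()
    report_run_id = run["report_run_id"]

    # Poll until the job finishes; async jobs get a much longer time budget.
    while True:
        status = requests.get(f"{GRAPH}/{report_run_id}",
                              params={"access_token": token}).json()
        if status.get("async_status") == "Job Completed":
            break
        time.sleep(5)

    # Fetch the finished results (paged like a normal insights response).
    return requests.get(f"{GRAPH}/{report_run_id}/insights",
                        params={"access_token": token}).json()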

In which cases does the error "Restricted dimension(s): ga:userAgeBracket, ga:userGender can only be queried under certain conditions" occur?

I'm using the Google Analytics Core Reporting API v4. When I query using the dimensions ga:userAgeBracket and ga:userGender, I get the following error:
Restricted dimension(s): ga:userAgeBracket, ga:userGender can only be queried under certain conditions
Can someone tell me why this error occurs?
Not all dimensions and metrics can be queried together. This can be for several reasons: it may not make sense to mix them, or a relation between them may simply not exist.
My guess would be that there is no relation between ga:userAgeBracket and ga:userGender, since gender comes from the DoubleClick cookie.
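
For reference, a request that triggers the error looks something like the sketch below (using google-api-python-client; the view ID is a placeholder). As I understand it, demographic dimensions are only returned when the report satisfies Google's conditions, such as its privacy thresholds, so the same request can fail for one view and succeed for another:

from googleapiclient.discovery import build

def query_demographics(credentials, view_id="ga:XXXXXXXX"):
    # Assumes OAuth credentials have already been obtained elsewhere.
    analytics = build("analyticsreporting", "v4", credentials=credentials)
    body = {
        "reportRequests": [{
            "viewId": view_id,
            "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
            "metrics": [{"expression": "ga:sessions"}],
            # The restricted demographic dimensions from the error message.
            "dimensions": [{"name": "ga:userAgeBracket"},
                           {"name": "ga:userGender"}],
        }]
    }
    return analytics.reports().batchGet(body=body).execute()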

Which response code for a resource's max limit in a REST API?

I am designing a REST API for registering enrollments in classes. In one of my endpoints, I can POST an enrollment:
POST to http://my-api/class/learn-rest/enrollment
This creates a new enrollment. However, in this case, there can only be a fixed number of enrollments, let's say 5.
Which HTTP response code should I return when the user tries to add the 6th enrollment?
Good question. Not sure why it's down-voted.
I'd suggest 400. It's the code to use when you can't find a more specific, appropriate status code for a 4xx error.
409: inappropriate because a 409 usually implies the request is retriable, but retrying in your case would not resolve the problem.
429: also implies the request is retriable.
I did some more research into practices used by well-known API providers:
LimitExceededException: "Returned if the request results in one of the following limits being exceeded: a vault limit, a tags limit, or the provisioned capacity limit." Returned as 400 Bad Request.
https://docs.aws.amazon.com/amazonglacier/latest/dev/api-error-responses.html
Unless a more specific error status is appropriate for the given request, services SHOULD return "400 Bad Request" and an error payload conforming to the error response guidance provided in the Microsoft REST API Guidelines.
https://github.com/Microsoft/api-guidelines/blob/vNext/Guidelines.md#1521-error-response
While suggesting a specific HTTP code may be an opinion-based answer, there is one thing you should keep in mind: this should be a 4xx client error:
4xx Client errors: This class of status code is intended for situations in which the error seems to have been caused by the client.
Among the existing codes, the following looks the most suitable for you:
409 Conflict: Indicates that the request could not be processed because of conflict in the request, such as an edit conflict between multiple simultaneous updates.
I think so because of the following possible scenario: say you set 5 as the enrollment limit, 4 enrollments already exist in the system, and the server receives 2 requests to create a new enrollment at the same time. In this case, only one of the requests (whichever the server processes first) can succeed.
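
A minimal sketch of that behaviour (Flask, with an in-memory store; the lock stands in for the atomic check a real database would provide in the concurrent scenario above):

from threading import Lock

from flask import Flask, jsonify

app = Flask(__name__)
MAX_ENROLLMENTS = 5
enrollments = {}  # class_id -> list of enrollments (illustrative store)
lock = Lock()     # stand-in for a database-level atomic check

@app.route("/class/<class_id>/enrollment", methods=["POST"])
def create_enrollment(class_id):
    with lock:  # only one of two simultaneous requests can win
        current = enrollments.setdefault(class_id, [])
        if len(current) >= MAX_ENROLLMENTS:
            return jsonify({"error": "enrollment limit reached"}), 409
        current.append({"id": len(current) + 1})
        return jsonify(current[-1]), 201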

Facebook Graph API v1.0 data size limit for JSON return object?

Does Facebook's Graph API have some sort of limit on the size of the JSON object returned from its queries?
When I request a lot of a user's friends' information, I sometimes get back error code 1 - unknown error. This happens when I run the following query for a user who has a lot of Facebook friends (200 and up):
me/friends/?fields=id,name,gender,birthday,cover,significant_other,languages,education,work,
checkins.limit(1).fields(place,id,created_time),
likes.limit(5).fields(id,name,created_time),
statuses.limit(5).fields(message,updated_time),
movies.limit(5).fields(name,created_time,id),
music.limit(5).fields(name,created_time,id),
books.limit(5).fields(name,created_time,id),
games.limit(5).fields(name,created_time,id),
interests.limit(5).fields(name)
I tried this in the Graph API Explorer and it returned this error:
{
  "error": "Request failed"
}
If I run the same request for fewer friends (125 or so), I get back all the data I expect.
It seems like the error happens because the number of bytes in the returned JSON is larger than some threshold, but I haven't seen anything in the docs to corroborate this.
What would cause this error to happen? Has anyone faced this issue before? Any ideas on how to mitigate it?
Solutions I've Considered
Limit the number of friends returned, and if the error still occurs, lower that limit for the next batch, and if it still occurs, lower it again, and so on - this solution isn't ideal but will probably work for most cases (a sketch of this approach appears below)
Split the query into multiple requests - this would increase the number of API calls significantly (risking throttling), since it is no longer a single paged request
Use FQL instead of the Graph API - I haven't done enough research into this, but I believe I would have to query each entity (likes, checkins, etc.) one at a time, which would also increase the API calls significantly and risk throttling
In the end, all of these solutions are still subject to the same unknown error to some degree, since I can't predict the size of the object that will be returned (a status message could be a few words or a few paragraphs). It would be ideal to understand why this error happens before implementing a workaround.
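
For what it's worth, the first option would look something like this (a sketch; the starting limit, the error check, and the trimmed field list are my own guesses):

import requests

FIELDS = "id,name,gender,birthday"  # trimmed; the full field list is shown above

def fetch_friends(token, limit=100):
    # Page through friends with an adaptive batch size: halve the
    # limit and retry the same page whenever the API returns an error.
    after = None
    while True:
        params = {"access_token": token, "fields": FIELDS, "limit": limit}
        if after:
            params["after"] = after
        data = requests.get("https://graph.facebook.com/me/friends",
                            params=params).json()
        if "error" in data:
            if limit <= 1:
                raise RuntimeError("Request fails even at limit=1")
            limit //= 2
            continue
        for friend in data.get("data", []):
            yield friend
        paging = data.get("paging", {})
        if "next" not in paging:
            return
        after = paging["cursors"]["after"]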
Thanks in advance!