I saw in the Stocktwits API documentation that it only allows retrieving the most recent 30 messages from a stream. However, I need to extract messages from Stocktwits by searching for a certain user or certain symbols over a specified period, e.g. Jan 2017 to Jan 2019. For example, I want to extract all the messages sent by a user from 2017-2019, or all the messages with the AAPL tag from 2017-2019. Is it possible?
According to GitHub's REST API events documentation (https://docs.github.com/en/rest/activity/events), I should be getting events made by a user in the past 90 days (max 300 events). But for some usernames, I am not able to get all 300 events even though they fall within the 90-day window.
A minimum working example is as follows:
https://api.github.com/users/github-actions[bot]/events?per_page=100&page=1 - gives 100 events
https://api.github.com/users/github-actions[bot]/events?per_page=100&page=2 - gives 100 events
https://api.github.com/users/github-actions[bot]/events?per_page=100&page=3 - gives 85 to 95 events (rarely 100)
The time difference between the first event on page 1 and the last event on page 3 is less than 5 minutes. At this rate of account activity, I should be able to get the latest 300 events, but I am not getting them.
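For reference, a minimal sketch that reproduces the count with the requests library (unauthenticated here; sending a token only raises the rate limit and does not change the result):

    import requests

    # Fetch three pages of 100 events each and count what comes back.
    USER = "github-actions[bot]"
    total = 0
    for page in (1, 2, 3):
        resp = requests.get(
            f"https://api.github.com/users/{USER}/events",
            params={"per_page": 100, "page": page},
            headers={"Accept": "application/vnd.github+json"},
        )
        resp.raise_for_status()
        events = resp.json()
        print(f"page {page}: {len(events)} events")
        total += len(events)
    print(f"total: {total} events")  # usually fewer than 300 on page 3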
Kindly let me know if anyone knows a reason for this and/or a workaround to get all the events.
Thank you.
I don't see an option in the documentation for just "subscribers", but I can take subscribersGained and subtract subscribersLost. However, the calculated number is a bit lower than the actual figure. Is there a reason for this, or is there a way to get the raw subscriber count?
As of writing (June 2021), the YouTube Analytics API has no metric or dimension that returns the total number of subscribers, although, as you have already noted, it is possible to collect the number of people that have subscribed and unsubscribed.
A workaround to get the total number of subscribers would be to collect all subscribers gained and all subscribers lost (using the subscribersGained and subscribersLost metrics) from when the channel was created, and subtract one from the other, e.g.:
Total number of subscribers = subscribersGained - subscribersLost
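As an illustration, a minimal sketch of that workaround using the google-api-python-client library. The date range is a placeholder, and it assumes an authorized OAuth2 credentials object has already been obtained elsewhere (e.g. via google-auth-oauthlib):

    from googleapiclient.discovery import build

    # `credentials` is assumed to be an authorized OAuth2 object; this only
    # sketches the shape of the Analytics query.
    analytics = build("youtubeAnalytics", "v2", credentials=credentials)

    response = analytics.reports().query(
        ids="channel==MINE",
        startDate="2005-01-01",   # from (before) channel creation
        endDate="2021-06-30",
        metrics="subscribersGained,subscribersLost",
    ).execute()

    gained, lost = response["rows"][0]
    print(f"Approximate total subscribers: {gained - lost}")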
I am looking at http://stocktwits.com/developers/docs/parameters and am wondering if anyone has used pagination before.
The doc says there is a limit of 800 messages; how does that interact with the request limit? Could I, in theory, query 200 different stock tickers every hour and get back (up to) 800 messages?
If so that sounds like a great way to get around the 30 message limit.
The documentation is unclear on this and we are rolling out new documentation that explains this more clearly.
Every stream request has a default and maximum limit of 30 messages per response, regardless of whether the cursor params are present. So you could query 200 different stock streams every hour and get up to 6,000 messages, or 12,000 if sending your access token along with the request: 200 requests per hour for unauthenticated requests and 400 for authenticated requests.
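As an illustration, a sketch of walking one symbol stream backwards with the cursor parameters described in the linked docs, using the requests library. The endpoint shape and the cursor fields (more, max) are taken from those docs and should be verified against the current documentation:

    import requests

    # Page back through a symbol stream 30 messages at a time using the
    # `max` cursor (returns messages with IDs below the given value).
    url = "https://api.stocktwits.com/api/2/streams/symbol/AAPL.json"
    params = {}
    messages = []
    while True:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        data = resp.json()
        messages.extend(data["messages"])     # up to 30 per response
        cursor = data.get("cursor", {})
        if not cursor.get("more"):
            break
        params["max"] = cursor["max"]         # continue further back in time
    print(len(messages))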
I am researching Atom feeds as a way of distributing event data as part of our organisation's internal REST APIs. I can control the feeds and ensure:
there is a "head" feed containing time-ordered events with an etag which updates if the feed changes (and short cache headers).
there are "archive" feeds containing older events with a fixed etag (and long cache headers).
the events are timestamped and immutable, i.e. they happened and can't change.
The question is, what must the consumer remember to be sure to synchronize itself with the latest data at any time, without double processing of events?
The last etag it processed?
The timestamp of the last event it processed?
I suppose it needs both? The etag to efficiently ask the feed whether there have been any changes (using HTTP If-None-Match), and if so, the timestamp to apply only the changes from that updated feed that haven't already been processed...
The question is nothing particularly to do with REST or the technology used to consume the feed. It would apply for anyone writing code to consume an Atom based RSS feed reader, for example.
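For the polling half, a conditional GET with the last seen etag is enough to detect changes cheaply. A minimal sketch with the requests library (the feed URL is hypothetical):

    import requests

    # Poll with If-None-Match so an unchanged feed costs only a 304
    # response with no body.
    FEED_URL = "https://example.org/events/head.atom"
    last_etag = None

    resp = requests.get(
        FEED_URL,
        headers={"If-None-Match": last_etag} if last_etag else {},
    )
    if resp.status_code == 304:
        pass                                  # nothing new since last poll
    else:
        last_etag = resp.headers.get("ETag")
        # ...parse the feed body and process new entries here...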
UPDATE
Thinking about it - some of the events may have the same timestamp, as they get "detected" at the same time in batches. It could then be awkward for the consumer to rely on the timestamp of the last event successfully processed, in case its processing dies halfway through a batch with the same timestamp... This is why I hate timestamps!
In that case does the feed need to send an id with every event that the consumer has to remember instead? Wouldn't that id have to increment to eternity, and never ever be reset? What are the alternatives?
Your events should all carry a unique ID. A client is required to track those IDs, and that is enough to prevent double-processing.
In that case does the feed need to send an id with every event that the consumer has to remember instead?
Yes. An atom:entry is required to have an atom:id that is unique. If your events are immutable, uniqueness of the ID is enough. In general, entries aren't required to be immutable. atom:updated contains the last significant change:
the most recent instant in time when an entry or feed was modified in a way the publisher considers significant
So a general client would need to consider the pair of id and updated.
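As an illustration, a sketch of that rule using the feedparser library. The feed URL is hypothetical, process() is a placeholder handler, and in a real consumer the `seen` set would be persisted across runs:

    import feedparser

    # Track (id, updated) pairs so an entry is reprocessed only when the
    # publisher marks a significant change.
    seen = set()

    feed = feedparser.parse("https://example.org/events/head.atom")
    for entry in feed.entries:
        key = (entry.id, entry.get("updated"))
        if key in seen:
            continue              # already processed this revision
        process(entry)            # hypothetical handler
        seen.add(key)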
Is there a limit on how often a user can remove and re-add their song to a group (or on the number of API requests in general), say per minute/hour/day etc.? I ask because I have created a script which automatically removes and re-adds all 5 of my songs within the same 75 groups; however, before one cycle completes I get the 429 error and seem to be blocked for the day.
Yes, there is a limit. The HTTP 429 status code indicates:
The user has sent too many requests in a given amount of time.
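A common mitigation is to back off when a 429 arrives, honouring the Retry-After header if the server sends one. A minimal sketch with the requests library (the retry policy is an assumption, not anything the API documents):

    import time
    import requests

    def get_with_backoff(url, max_retries=5):
        """Retry on HTTP 429, honouring Retry-After when present."""
        for attempt in range(max_retries):
            resp = requests.get(url)
            if resp.status_code != 429:
                return resp
            # Fall back to exponential backoff if no Retry-After header.
            delay = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(delay)
        raise RuntimeError("still rate-limited after retries")

Spacing requests out this way won't raise the daily quota, but it avoids hammering the API once the limit is hit.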