Bittrex API - delay in candle (OHLC) data - REST

(originally asked in bitcoin.stackexchange but didn't get much traction there ... )
I'm noticing that the Bittrex API (both v1.1 and v2.0) has a 3 to 4 minute delay when returning data, e.g. GetLatestTick: https://bittrex.com/api/v2.0/pub/market/GetLatestTick?marketName=BTC-NEO&tickInterval=hour
So if you make a request at, say, 8:00PM, it will not return the 7:00 to 8:00PM candle (OHLC) data until 8:03PM, sometimes 8:04PM ...
I thought about constructing my own candles by querying the API every few seconds with /getticker - but with more than a few cryptos that gets lengthy, and I might get banned for the number of requests per second ...
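(For what it's worth, a rough sketch of that polling idea, assuming the v1.1 public getticker endpoint with a market parameter; the naive JSON handling and fixed interval are for illustration only, and real code should respect rate limits:)

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sample getticker every few seconds and fold the Last price
// into a running OHLC candle.
public class TickerPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        double open = Double.NaN, high = -Double.MAX_VALUE, low = Double.MAX_VALUE, close = Double.NaN;
        for (int i = 0; i < 12; i++) { // one minute at a 5-second interval
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("https://bittrex.com/api/v1.1/public/getticker?market=BTC-NEO"))
                    .build();
            String body = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
            double last = parseLast(body);
            if (Double.isNaN(open)) open = last;
            high = Math.max(high, last);
            low = Math.min(low, last);
            close = last;
            Thread.sleep(5_000);
        }
        System.out.printf("O=%.8f H=%.8f L=%.8f C=%.8f%n", open, high, low, close);
    }

    // Naive extraction of the "Last" field; use a JSON library in practice.
    private static double parseLast(String json) {
        int start = json.indexOf("\"Last\":") + 7;
        int end = start;
        while (end < json.length() && json.charAt(end) != ',' && json.charAt(end) != '}') end++;
        return Double.parseDouble(json.substring(start, end));
    }
}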
Contacted Bittrex, but never had a response back.
Does anyone know any other method to get candle data sooner?
Thanks!

ClueDex is what I'm using to get up-to-the-second candle OHLCV data. They claim to be between two and four minutes faster than the Bittrex API. That matters a lot when you're trying to integrate the data into a trading platform or build a high-frequency trading bot, particularly with the Parabolic SAR indicator, which is a high-confidence trading signal.

Try Enigma's Catalyst - you get minute data from Bitfinex, Bittrex and Poloniex for backtesting and live trading
https://enigma.co/

Related

binance-api - how to get close price and trade volume for last minute?

I went through several Binance APIs and all the examples I saw seem pretty complex compared to what I need. I would like to get, each minute, the close price and trade volume for the last minute. Is there an example that I could use, please? By stream or HTTP GET? Thank you very much!
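A minimal sketch via HTTP GET, assuming Binance's public /api/v3/klines endpoint (the symbol is just an example); in each returned kline array, index 4 is the close price and index 5 the volume:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Fetch the most recent 1-minute kline for BTCUSDT.
public class LastMinuteKline {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.binance.com/api/v3/klines"
                        + "?symbol=BTCUSDT&interval=1m&limit=1"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // Response shape: [[openTime, open, high, low, close, volume, closeTime, ...]]
        System.out.println(response.body());
    }
}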

Grafana - Alert based on a minimum of requests

I hope you can help me.
What I have is some metric data I am collecting. I want to send out an alert, when I reach a specific error-rate on these metrics.
To make clear, my data looks something like this:
Timestamp
value (the runtime of a query)
state (error, success)
api-endpoint called
I have a Grafana board doing some calculations that outputs something like this:
error-rate
api-endpoint
number of calls made to the api endpoint
Fine for now - based on what I can read out of Grafana, I am able to send error messages/warnings if the error rate is too high. Works like a charm. But now comes the catch:
If, say, the first two calls to a specific API fail, I will instantly receive an alarm sent by Grafana. I do not want that!
Is it possible - and if so, how? - to alert me ONLY if this specific request was executed at least 5 times? It is no problem if this is a generic alert like "hey, something is wrong!" - but I need to figure out whether the request triggering the alarm with a 50-100% error rate was executed at least a certain number of times before alarming.
It has to be done based on tags/fields; I do not want to add a separate query for each of my 35+ APIs (and the number is growing).
Any ideas, anybody?
Using Grafana 8.0
Using InfluxDb 1.8 (with Flux enabled)
Just to make it clear for anyone who ever wants to do the same: Flux is king - you can use its filter functionality to base the alert only on datasets where the value count is greater than X.
Yes: Flux is damn hot. I've loved it for a few months now.
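For anyone who wants a concrete starting point, a minimal Flux sketch of that idea (the bucket, measurement and column names are made up for illustration):

// Count calls per endpoint over the window, keep only endpoints with
// at least 5 calls, and base the alert on those datasets only.
from(bucket: "metrics")
  |> range(start: -15m)
  |> filter(fn: (r) => r._measurement == "api_calls")
  |> group(columns: ["api-endpoint"])
  |> count()
  |> filter(fn: (r) => r._value >= 5)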

How to make sense of the micrometer metrics using SpringBoot 2, InfluxDB and Grafana?

I'm trying to configure a SpringBoot application to export metrics to InfluxDB to visualise them using a Grafana dashboard. I'm using this dashboard as an example which uses Prometheus as a backend.
For some metrics I have no problem figuring out how to create graphs for them but for some others I don't know how to create the graphs or even if it's possible at all. So I enumerate the things I'm not really sure about in the following points:
Is there any documentation where the value units are described? The application I'm using as an example doesn't have any load on it, so sometimes I don't know whether a value is a bit, a byte, a second, a millisecond, a count, etc.
Some measurements contain the tag 'metric_type = histogram' with fields 'count', 'sum', 'mean' and 'upper'. Again, here I don't know what the value units are, what 'upper' means, or how I'm supposed to plot them. Examples of this are 'http_server_requests' or 'jvm_gc_pause'.
From what I see in the Grafana dashboard example, it seems I should use these measurements of type histogram to create both a graph with counts and graphs with duration. For example I see I should be able to create a graph with the number of requests and another one with their duration. Or for the garbage collector, I should be able to provide a graph for the number of minor and major GCs and another for their duration.
As an example of the measurements that get inserted into InfluxDB:

time        = 1625579637946000000
count       = 1
exception   = None
mean        = 0.892144
method      = GET
metric_type = histogram
outcome     = SUCCESS
status      = 200
sum         = 0.892144
upper       = 0.892144
uri         = /actuator/health

or

time        = 1625581132316000000
action      = end of minor GC
cause       = Allocation Failure
count       = 1
mean        = 2
metric_type = histogram
sum         = 2
upper       = 2
I agree the documentation for micrometer is not great. I've had to dig through the code to find answers...
Regarding your questions about jvm_gc_pause, it is a Timer and the implementation is AbstractTimer which is a class that wraps a Histogram among other components. This particular metric is registered by the JvmGcMetrics class. The various measurements that are published to InfluxDB are determined by the InfluxMeterRegistry.writeTimer(Timer timer) method:
sum: timer.totalTime(getBaseTimeUnit()) // The total time of recorded events
count: timer.count() // The number of times stop has been called on the timer
mean: timer.mean(getBaseTimeUnit()) // totalTime()/count()
upper: timer.max(getBaseTimeUnit()) // The max time of a single event
The base time unit is milliseconds.
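To see the mapping concretely, here's a small sketch (using a SimpleMeterRegistry purely for illustration) that records two timings and prints the values that would be written as sum, count, mean and upper:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import java.util.concurrent.TimeUnit;

public class TimerFieldsDemo {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        Timer timer = registry.timer("jvm.gc.pause");

        timer.record(2, TimeUnit.MILLISECONDS);
        timer.record(4, TimeUnit.MILLISECONDS);

        // The values InfluxMeterRegistry.writeTimer would publish:
        System.out.println("sum   = " + timer.totalTime(TimeUnit.MILLISECONDS)); // 6.0
        System.out.println("count = " + timer.count());                          // 2
        System.out.println("mean  = " + timer.mean(TimeUnit.MILLISECONDS));      // 3.0
        System.out.println("upper = " + timer.max(TimeUnit.MILLISECONDS));       // 4.0
    }
}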
Similarly, http_server_requests appears to be a Timer as well.
I believe you are correct that the sensible thing is to chart on two separate Grafana panels: one panel for GC pause duration using sum (or mean or upper), and one panel for GC event counts using count.

Reactive systems - Reacting to time passing

Let's say we have a reactive sales forecasting system.
Every time we make a sale we re-calculate our Forecast for future sales.
This works beautifully if there are lots of sales triggering our re-forecasting.
What happens, however, if sales go from 100 events per second to 0, and stay at 0 for a long time?
The forecast we published back when sales were good remains the most up-to-date forecast.
How would you model, in this situation, an event that represents 'no sales happening', without falling back to some batch hourly/minutely/arbitrary time-segment event that says 'X time has passed'?
This is a specific case of a generic question - how do you model time passing with nothing happening in an event-based system, without using a ticking-clock-style event which would wake everyone up to reconsider their current values [an implementation which would not scale]?
The only option I have considered that makes sense:
Every time we take a sale, we also schedule a deferred event 2 hours in the future that asks us to reconsider our assessment of that sale.
In handling that deferred event we may then choose to schedule further deferred events for re-considering.
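A minimal sketch of that idea with a simple in-process scheduler (a real system would want a durable scheduler or delayed queue so events survive restarts):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Each sale schedules a future reconsideration; handling that event
// may schedule a further one, so idle periods still get re-evaluated.
public class DeferredReassessment {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void onSale(String saleId) {
        scheduler.schedule(() -> reconsider(saleId), 2, TimeUnit.HOURS);
    }

    private void reconsider(String saleId) {
        if (reforecast(saleId)) {
            scheduler.schedule(() -> reconsider(saleId), 2, TimeUnit.HOURS);
        }
    }

    private boolean reforecast(String saleId) {
        // Placeholder: recompute the forecast and report whether this
        // sale should be reconsidered again later.
        return false;
    }
}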
Considering this is a very generic scenario, you've made a rather large assumption that it's not possible to come up with a design for re-evaluating past sales in a scalable way unless it's done one sale at a time.
There are many different scale related numbers in the scenario, and you're only looking at the one whereby a single scheduled forecast updater may attempt to process a very large number of past sales at the same time.
Other scalability issues I can think of:
Reevaluating the forecast for every single new sale doesn't sound great if you're expecting 100s of sales per second. If you're talking about a financial forecasting model for accounting, it's unlikely it needs to be updated every single time the organisation makes a sale, if the organisation is making hundreds of sales a second.
If you're talking about a short-term predictive engine to be used for financial markets (i.e. predicting how much cash you'll need in the next 10 seconds, or energy, or other resources), then it sounds like you have constant volatility and you're not really likely to have a situation where nothing happens for hours. And if you do need forecasts updated very frequently, waiting a couple of hours before triggering a re-update is not likely to get you the kind of information you need in the way you need it.
With your approach, you will end up with one future scheduled event per product (which could be large), and every time you make a sale, you'll be dropping the old scheduled event and scheduling a new one. So for frequently selling products, you'll be doing repetitive work to constantly kick the can down the road a bit further, when you're not likely to ever get there.
What constitutes a good design is going to be based on the real scenario. The generic case is interesting to think about, but good designs need to be shaped to their circumstances.
Here are a few ideas I have that might be appropriate:
If you want an updated forecast per product when that product has a sale, but some products can sell very frequently, then a good approach may be to throttle or buffer the sales on a per-product basis (a sketch of this appears after these ideas). If a product is selling 50 times a second, you can probably afford to wait 1 second, 10 seconds, 2 hours, whatever, and evaluate all those sales at once rather than re-forecasting 50 times a second. Especially if your forecasting process is heavy, doing it for every sale is likely to cause high load for low value, as the information will be outdated almost straight away by the next sale.
You could also have a generic timer that updates forecasts for all products that haven't sold in the last window, but handle the products in buffers. For example, every hour you could pick the 10 products with the oldest forecasts and update them. This prevents the single timer from taking on re-forecasting the entire product set in one hit.
You could use only the single timer approach above and forget the forecast updates on every sale if you want something dead simple.
If you're worried about load from batch forecasting, this kind of work should be done on different hardware from the ones handling sales.
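As referenced above, a sketch of the per-product throttling idea (the window length and the forecasting call are placeholders):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Buffer sales per product and re-forecast at most once per window,
// rather than once per individual sale.
public class ForecastThrottler {
    private final ScheduledExecutorService scheduler =
            Executors.newScheduledThreadPool(1);
    private final ConcurrentHashMap<String, Boolean> pending = new ConcurrentHashMap<>();
    private final long windowSeconds;

    public ForecastThrottler(long windowSeconds) {
        this.windowSeconds = windowSeconds;
    }

    public void onSale(String productId) {
        // Only the first sale in a window schedules a re-forecast;
        // later sales within the window are absorbed by that run.
        if (pending.putIfAbsent(productId, Boolean.TRUE) == null) {
            scheduler.schedule(() -> {
                pending.remove(productId);
                reforecast(productId);
            }, windowSeconds, TimeUnit.SECONDS);
        }
    }

    private void reforecast(String productId) {
        // Placeholder for the actual forecasting model.
        System.out.println("Re-forecasting " + productId);
    }
}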

How to get P&L on a trade through Interactive Brokers TWS Java API

Is there any way to get Profit and Loss (daily & total to date) on a particular trade made on IB TWS through its Java API?
You can, but not in the way you seem to be asking. All profit and loss in the API is calculated by you until the trade is closed, and then you can use the commissionReport method of the wrapper. A commissionReport is sent after every execDetails. API doc
You can always check your statements for previous profits and losses.
The flow is like this.
place trade and get fill price from execDetails
get opening commission from commissionReport
on every tick, calculate the open-position profit; use bid/ask for realism (it's all forex has anyway)
close trade and get price from execDetails
get commission from commissionReport again
calculate closed trade profit/loss
also note that commissionReport has a field m_realizedPNL you can use, but I've never tried it.
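A plain sketch of that flow, keeping the inputs as simple doubles since the exact field/accessor names vary across API versions; wire them up from your EWrapper's execDetails and commissionReport callbacks:

public class PnlTracker {
    private double openPrice;   // fill price from the opening execDetails
    private double quantity;    // filled quantity from the opening execDetails
    private double commissions; // accumulated from commissionReport callbacks

    public void onOpenFill(double fillPrice, double filledQty) {
        openPrice = fillPrice;
        quantity = filledQty;
    }

    public void onCommission(double commission) {
        commissions += commission;
    }

    // Open-position P&L, marked to the current bid/ask on every tick.
    public double openPnl(double bid, double ask) {
        double mark = (bid + ask) / 2.0; // or use bid/ask directly for realism
        return (mark - openPrice) * quantity - commissions;
    }

    // Closed-trade P&L once the closing execDetails arrives.
    public double closedPnl(double closePrice) {
        return (closePrice - openPrice) * quantity - commissions;
    }
}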
In the TWS v9.72+ API there is a reqPnL method on the EClient which can be used to subscribe to real-time PnL (unrealized and realized) updates for a full portfolio via the associated callback on the EWrapper:
https://interactivebrokers.github.io/tws-api/classIBApi_1_1EClient.html#a0351f22a77b5ba0c0243122baf72fa45
Additionally, for a single contract ID, you can use reqPnLSingle on the EClient:
https://interactivebrokers.github.io/tws-api/interfaceIBApi_1_1EWrapper.html#aebeb008f2b763d7bed2969b66bbd1b33
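For example, a minimal sketch of the subscriptions (the account code and contract ID below are placeholders; callback parameter types vary slightly across API versions):

import com.ib.client.EClientSocket;

public class PnlSubscriptions {
    // Subscribe to portfolio-level and single-contract P&L updates.
    // Assumes an already-connected EClientSocket whose EWrapper
    // implements the pnl(...) and pnlSingle(...) callbacks.
    public static void subscribe(EClientSocket client) {
        client.reqPnL(1001, "DU123456", "");               // whole account, empty model code
        client.reqPnLSingle(1002, "DU123456", "", 265598); // single conId, e.g. AAPL
    }
}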
You may also pre-submit the order to see all the calculations, like the commission and margin impact of the order.
To do that, set whatIf=True in the order definition.
You'll then receive openOrder events with all the calculations made for you.
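A small sketch of that, assuming the Java API's setter-style Order fields (setter names and quantity types vary slightly across API versions):

import com.ib.client.Order;

public class WhatIfDemo {
    // Build a what-if order: placing it via placeOrder triggers an
    // openOrder callback whose OrderState carries the commission and
    // margin impact, without actually transmitting the order.
    public static Order whatIfOrder() {
        Order order = new Order();
        order.action("BUY");
        order.orderType("MKT");
        order.totalQuantity(100); // quantity type differs by API version
        order.whatIf(true);       // ask TWS for the calculations only
        return order;
    }
}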