So I'm trying to fetch historical stock data from IQFeed. I have a list of symbols I want to fetch data for. The problem is that the IQFeed timeseries function returns data asynchronously, so I can't just use a simple for loop to fetch all the data.
I assume there is a way to do this using an event handler, but the default one goes way over my head.
Try using IQML (Matlab connector to IQFeed), which runs in Matlab and connects directly to IQFeed. IQML supports both blocking (synchronous snapshot) and non-blocking (asynchronous streaming) queries.
In answer to the OP's question, here's an example of fetching historic IQFeed data synchronously (i.e., blocking) into Matlab using IQML:
>> data = IQML('history', 'symbol','IBM', 'dataType','day')
data =
100×1 struct array with fields:
Symbol
Datestamp
Datenum
High
Low
Open
Close
PeriodVolume
OpenInterest
>> data(1)
ans =
Symbol: 'IBM'
Datestamp: '2017-10-10'
Datenum: 736978
High: 148.95
Low: 147.65
Open: 147.71
Close: 148.5
PeriodVolume: 4032601
OpenInterest: 0
IQML supports the entire IQFeed API, including:
Both blocking (synchronous snapshot) and non-blocking (asynchronous streaming) data queries
Live Level1 top-of-book market data (quotes and trades)
Live Level2 market-depth data
Historic, intra-day and live market data (individual ticks or interval bars)
Fundamental info on assets
Options and futures chains lookup (with latest market data and Greeks)
Symbols and market codes lookup
News headlines, story-counts and complete news stories, with user-specified filters
Ability to attach user-defined Matlab callback functions to IQFeed messages and market events
User-defined custom alerts on streaming market events (news/quotes/interval-bar/regional triggers)
Connection stats and programmatic connect/disconnect
Users can combine all of the above functionality into a full-fledged, end-to-end automated trading system using plain Matlab.
IQML works on all recent Matlab/IQFeed releases and platforms (Windows, Linux, Mac).
It is reliable, easy-to-use, and lightning-fast (including optional parallelization). IQML comes with a detailed User Guide packed with usage examples, sample Matlab scripts, and implementation tips.
IQML needs only the core Matlab to run - no toolboxes are required (parallelization uses the Parallel Computing Toolbox, but IQML runs well even without it).
Yair Altman
IQML.net, https://UndocumentedMatlab.com/IQML, https://github.com/altmany/IQML
We've been using kdb to handle a number of calculations focused more on traditional desktop sources. We have now deployed our web application and are trying to work out how best to pick up data changes and re-calculate in kdb so we can render a "real-time" view of the data as it changes.
From what I've been reading, using data loaders (feed handlers) to feed our own equivalent of a "ticker plant" as a data store is the most documented solution. So far we have been pushing data into kdb directly and calculating as part of a script, so we are trying to make the leap from calculation-on-demand to a "live" calculation as data inputs are edited by the user.
I'm trying to understand how to manage the feed handlers and the timing of updates. We really only want to move data when it changes (this is a web front end, so we are trying to figure out how best to trigger updates, e.g. on save or on losing focus in an editable data grid). We are also considering using our database itself as the "ticker plant", which may minimize feed handlers.
I found the reference below, and it looks like it's running a forever loop, which feels excessive, but I understand the original use case of kdb and streaming data.
Feedhandler - sending data to tickerplant
Does this sound like a solid workflow?
Many thanks in advance!
Resources we've been referencing:
Official manual: https://github.com/KxSystems/kdb/blob/master/d/tick.htm
kdb+ Tick overview: http://www.timestored.com/kdb-guides/kdb-tick-data-store
Source code: https://github.com/KxSystems/kdb-tick
There's a lot to parse here but some general thoughts/ideas:
Yes, most examples of feedhandlers are set up as forever loops, but this is often just for convenience when demoing.
Ideally a live data flow should work based on event handling, i.e. on-event triggers. Kdb/q has this out of the box in the form of the .z handlers; other languages have similar concepts of event handling.
Some more examples of python/java feeders are here: https://github.com/exxeleron (a minimal event-driven feeder sketch follows after these points)
There's also some details on the official Kx site: https://code.kx.com/q/wp/capi/#publishing-to-a-kdb-tickerplant
It still might be a viable option to have a forever loop, or at least a short timer in the event you want to batch data.
Depending on the amount of dataflow, a tickerplant might be overkill for your use case, but a tickerplant is still useful for (a) separating your processing from the processing of the dataflow (i.e. data can still flow through the tickerplant while another process is consuming/calculating) and (b) logging data for recovery purposes.
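To make the event-driven option concrete, here is a minimal feeder sketch in Python. It assumes the exxeleron qPython library linked above and a standard kdb+ tickerplant exposing the usual .u.upd entry point on port 5010; the table name, column layout, and trigger function are illustrative and must match your own tickerplant schema.

import numpy as np
from qpython import qconnection

# open one persistent connection to the tickerplant at startup
q = qconnection.QConnection(host='localhost', port=5010)
q.open()

def on_user_edit(sym, price, size):
    # called by the web front end's save / lost-focus trigger, so data is only
    # published when it actually changes -- no forever loop required
    # (np.string_ follows qPython's documented type mapping; use np.bytes_ on newer NumPy)
    row = [np.string_(sym), float(price), int(size)]
    q.sendAsync('.u.upd', np.string_('trade'), row)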
I'm doing x86-64 binary obfuscation research and, fundamentally, one of the key challenges in the offense/defense cat-and-mouse game of executing a known-bad program and detecting it (even when obfuscated) is system call sequence analysis.
Put simply, obfuscation is just achieving the same effects on the system through a different sequence of instructions and memory states in order to minimize observable analysis channels. But at the end of the day, you need to execute certain system calls in a certain order to achieve certain input / output behaviors for a program.
Or do you? The question I want to study is this: Could the intended outcome of some or all system calls be achieved through different system calls? Let's say system call D, when executed 3 times consecutively, with certain parameters can be heuristically attributed to malicious behavior. If system calls A, B, and C could be found to achieve the same effect (perhaps in addition to other side-effects) desired from system call D, then it would be possible to evade kernel hooks designed to trace and heuristically analyze system call sequences.
To determine how often this system call outcome overlap exists in a given OS, I don't want to use documentation and manual analysis for a few reasons:
undocumented behavior
lots of work, repeated for every OS and even different versions
So rather, I'm interested in performing black-box analysis to fuzz system calls with various arguments and observing the effects. My problem is I'm not sure how to measure the effects. Once I execute a system call, what mechanism could I use to observe exactly which changes result from it? Is there any reliable way, aside from completely iterating over entire forensic snapshots of the machine before and after?
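For what it's worth, here is a crude Python sketch of the "forensic snapshot" baseline just mentioned, only to make the measurement problem concrete: snapshot a slice of kernel-visible state before and after invoking one syscall directly, then diff. The watched path and syscall number (mkdir on x86-64) are illustrative; a real harness would have to cover far more state (procfs, sockets, memory maps) and drive the arguments from a fuzzer.

import ctypes, os

libc = ctypes.CDLL(None, use_errno=True)
WATCHED = '/tmp/fuzz-target'             # illustrative slice of observable state
os.makedirs(WATCHED, exist_ok=True)

def snapshot(root):
    # record mode/size/mtime for every file and directory under root
    state = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            p = os.path.join(dirpath, name)
            try:
                st = os.lstat(p)
                state[p] = (st.st_mode, st.st_size, st.st_mtime_ns)
            except OSError:
                pass
    return state

before = snapshot(WATCHED)
SYS_mkdir = 83                           # x86-64 syscall number, used here as an example
libc.syscall(SYS_mkdir, os.path.join(WATCHED, 'newdir').encode(), 0o755)
after = snapshot(WATCHED)

# paths that appeared, disappeared, or changed metadata
changed = set(before.keys() ^ after.keys())
changed |= {p for p in before.keys() & after.keys() if before[p] != after[p]}
print(changed)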
I'm looking for some ideas/hints for streaming protocol (similar to video/audio streaming) to send any data in so called real-time.
In simple words:
I'm producing some data each second (let's say one array with 1 MB of data per second) and I'm sorting that data from most important to least important (like putting it into priority queues or similar)
I would like to keep streaming that data via some protocol, and in the perfect case I would like to send all of it
If that is not possible (bandwidth, dropped packets, etc.), I would like to send as much as possible from each produced array (the first n bytes), just to keep the data flowing (it is important to start sending each newly produced array every second).
And now: I'm looking for a protocol/library that will handle the adaptive-bit-rate part for arbitrary data. I would expect it to tell me how much data I can send (put into send buffers or a similar approach). The closest analogy is video/audio streaming, where in poor network conditions the (en)coder changes quality depending on the network.
It is also OK if I miss some sent data (so UDP deep down is fine), but preferably I would like to send as much data as possible per second without losing anything from those first n bytes that were sent.
Do you have any ideas of what protocol/libraries I could use for client/server? (hopefully some libs in python, C or C++).
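To make the intent concrete, here is a minimal sketch (Python, plain UDP) of the behaviour I'm after. The producer stub, destination address, and the fixed per-second byte budget are placeholders; the budget is exactly the part I'd like a protocol/library to adapt for me based on network feedback.

import socket, time

DEST = ('203.0.113.10', 9000)        # placeholder receiver address
CHUNK = 1200                         # keep datagrams under a typical MTU
BUDGET = 200_000                     # bytes per second; should adapt to conditions

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def produce_sorted_array():
    # stand-in for the real producer: ~1 MB per second, already ordered by importance
    return bytes(1_000_000)

def send_frame(frame_id, payload):
    data = payload[:BUDGET]          # keep only the most important leading bytes
    for off in range(0, len(data), CHUNK):
        header = frame_id.to_bytes(4, 'big') + off.to_bytes(4, 'big')
        sock.sendto(header + data[off:off + CHUNK], DEST)

frame_id = 0
while True:
    send_frame(frame_id, produce_sorted_array())
    frame_id += 1
    time.sleep(1)                    # a new frame starts every second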
I think IPFIX (the generic NetFlow standard) has everything you need.
You can avoid a timestamp per sample by sending a samplingInterval update every time you change your rate; the sampling-rate change can also be sent asynchronously as its own update.
As for where to put your data: you can create a new field or just use an existing one that has the datatype you want. For example, if you are just sending uint64 sample values, it might be easier to use packetDeltaCount than to create your own field definition.
There are plenty of IPFIX libraries.
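If you want to see the shape of what such a library produces, here is a hand-rolled sketch of a minimal IPFIX export over UDP, carrying uint64 samples in the standard packetDeltaCount element (IANA element ID 2, 8 bytes) under a single template. It is only meant to illustrate the message layout; a real IPFIX library manages templates, sequence numbers, and periodic template retransmission for you, and the collector address is a placeholder.

import socket, struct, time

COLLECTOR = ('203.0.113.20', 4739)   # 4739 is the registered IPFIX port
TEMPLATE_ID = 256                    # data templates start at 256
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def ipfix_message(sets, seq):
    body = b''.join(sets)
    # version 10, total length, export time, sequence number, observation domain id
    return struct.pack('!HHIII', 10, 16 + len(body), int(time.time()), seq, 1) + body

def template_set():
    # set id 2 = template set; one template with one field: packetDeltaCount (2), 8 bytes
    record = struct.pack('!HHHH', TEMPLATE_ID, 1, 2, 8)
    return struct.pack('!HH', 2, 4 + len(record)) + record

def data_set(samples):
    records = b''.join(struct.pack('!Q', s) for s in samples)
    return struct.pack('!HH', TEMPLATE_ID, 4 + len(records)) + records

# templates should be (re)announced periodically over UDP, then sample batches exported
seq = 0
samples = [42, 17, 99]
sock.sendto(ipfix_message([template_set(), data_set(samples)], seq), COLLECTOR)
seq += len(samples)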
I am using OMNeT++ as my simulation engine for an arbitrary network topology simulation. I have created different custom OMNeT modules to simulate different entities in my simulation. I am also using OMNeT signals and statistics for result gathering.
I am wondering whether I can collect data originating from different modules via separate signals, but have it gathered, processed, and recorded in the output file by the same statistic?
I know I could probably get away with just registering and using separate statistics per module, but since the documentation states that the collection and recording happen at a higher level in the OMNeT inheritance hierarchy, and thus across different instances of a module, I am thinking this should be possible.
So it turns out I can get the intended result by retrieving a reference to the module instance that registered the statistic and signal, and emitting the value I want through it, even when handling an event in a different module.
A relevant code snippet is below:
// retrieve the module instance that registered the signal and the statistic
auto *ref = dynamic_cast<ModuleClass *>(getParentModule()->getSubmodule("ModuleName"));
if (ref == nullptr)
{
    throw cRuntimeError("Submodule 'ModuleName' not found or has an unexpected type");
}
// emit through the owning module so the value is recorded by that module's statistic
ref->emit(ref->relevantSignal, ValueToEmit);
I've been testing Google Cloud NL API v1.0 recently. I mainly use Traditional Chinese (a.k.a. zh-Hant) data. After testing, I find the quality unsatisfactory: the classification is not right, there are too many one-character terms (many of them should be stop words), and the worst quality is in unknown-word recognition.
Also, some analysis methods (e.g. entity sentiment) don't support zh-Hant, so I can only use 'en' to run zh-Hant data, a pity.
Does anyone know whether the NL API provides any way (e.g. a configuration setting, parameters, or some process to run) to improve the results?
Does anyone have actual experience using NL API results to add a value-added feature to a business product or service?
Also, if I want to feed high-volume data, is there a library or SDK I can use to write code for batch-in, batch-out processing?
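For context, the kind of batch loop I have in mind would look something like the sketch below, using the official Python client (google-cloud-language). The method choice is just an example, and whether zh-Hant is accepted depends on the particular method.

from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def analyze_batch(texts, lang='zh-Hant'):
    results = []
    for text in texts:
        doc = language_v1.Document(
            content=text,
            type_=language_v1.Document.Type.PLAIN_TEXT,
            language=lang,
        )
        # analyze_entities is just one example; swap in analyze_sentiment, etc.
        results.append(client.analyze_entities(request={'document': doc}))
    return results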