Alternative to MQL5 - MATLAB

I am starting with Expert Advisors on the MetaTrader Terminal software and I have many algorithms to use with it. These algorithms were developed in MATLAB using its powerful built-in functions ( e.g. svd, pinv, fft ).
To test my algorithms I have some alternatives:
1) Write all the algorithms in MQL5.
2) Write the algorithms in C++ and then build a DLL to call from MQL5.
3) Write the algorithms in Python, embed them in C, and then build a DLL.
4) Convert the MATLAB source code to C and then build a DLL.
The problems, in the same order:
1) Impracticable, because MQL5 lacks these built-in functions, so I would have to implement each one by hand.
2) I have not tried this yet, but I think it would take a long time to implement the algorithms ( I wrote some of them in C; it took a good while and the result was not as fast as MATLAB ).
3) I am getting a lot of errors when compiling to a DLL, but if I compile to an executable there are no errors ( this would be a good alternative, since converting MATLAB to Python is quick and simple ).
4) I am trying this now, but I think there is a lot of work to do.
I researched other pieces of software similar to MetaTrader Terminal, but I didn't find a good one.
I would like to know if there is a simpler ( and fast ) way to embed another language into MQL5, or some other alternative for my issue.
Thanks.

Yes, there is an alternative ... 5 ) Go Distributed :
Having a similar motivation for using non-MQL4 code for fast & complex mathematics in external quantitative models for FX trading, I started to use both { MATLAB | python | ... } and MetaTrader Terminal environments in an interconnected form of a heterogeneous distributed processing system.
MQL4 part is responsible for:
anAsyncFxMarketEventFLOW processing
aZmqInteractionFRAMEWORK setup and participation in message-patterns handling
anFxTradeManagementPOLICY processing
anFxTradeDetectorPolicyREQUESTOR sending analysis RQST-s to remote AI/ML-predictor
anFxTradeEntryPolicyEXECUTOR processing upon remote node(s) indication(s)
{ MATLAB | python | ... } part is responsible for:
aZmqInteractionFRAMEWORK setup and participation in message-patterns handling
anFxTradeDetectorPolicyPROCESSOR receiving & processing analysis RQST-s from the remote { MQL4 | ... } -requestor
anFxTradeEntryPolicyREQUESTOR sending trade entry requests to remote { MQL4 | other-platform | ... }-market-interfacing-node(s)
Why start thinking in a distributed way?
The core advantage is in re-using the strengths of MATLAB and other COTS AI/ML packages, without any need to reverse-engineer the still-creeping MQL4 interfacing options ( yes, in the last few years the DLL interfaces have taken several dirty hits from newer updates ( strings ceased to be strings and became a struct (!!!) etc. ) -- many man*years of pain with a code base under maintenance, so there is some unforgettable experience about what ought to be avoided ... ).
The next advantage is the ability to add failure resilience. A distributed system can run in a ( 1 + N ) protected configuration.
The next advantage is the ability to increase performance. A distributed system can provide a pool of processors, be it in a { SEQ | PAR }-mode of operations ( a pipelined process or a parallel-form process execution ).
MATLAB node just joins:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MATLAB script to set up
%%                                                 zeromq-matlab
clear all;
if ~ispc
    s1 = zmq( 'subscribe', 'ipc', 'MATLAB' );        %% using IPC transport on <localhost>
else
    disp( '0MQ IPC not supported on Windows.' )
    disp( 'Setup TCP transport class instead' )
    disp( 'Setting up TCP' )                          %% using TCP transport on <localhost>
    s1 = zmq( 'subscribe', 'tcp', 'localhost', 5555 );
end
recv_data1 = [];                                      %% set up RECV buffer
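As a counterpart sketch ( an assumption about how the other side might look, not part of the original setup ), a remote Python node could publish ticks or analysis results that the MATLAB subscriber above would receive; with pyzmq this is only a few lines:
import time
import zmq                                   # pyzmq assumed installed

aCtx = zmq.Context.instance()
aPub = aCtx.socket( zmq.PUB )
aPub.bind( "tcp://*:5555" )                  # same TCP port the MATLAB SUB node connects to

while True:
    aPub.send_string( "EURUSD 1.0923" )      # placeholder payload; a real node would publish analysis output
    time.sleep( 1.0 )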
This said, one can preserve the strengths of each side and avoid any duplication of already implemented, natively tuned, high-performance libraries, while the distributed mode of operations also adds some brand-new potential benefits to the Expert Advisor modus operandi:
one may add a remote keyboard interface to an EA automation and use some custom-specific commands ( CLI )
a fast, non-blocking, distributed remote logging
GPU / GPU-grid computing being used from inside MetaTrader Terminal
you may also like to check other posts on extending the MetaTrader Terminal programming model
A Distributed System, on top of a Communication Framework:
MATLAB already has an available port of the ZeroMQ Communication Framework, the same framework that MetaTrader Terminal has thanks to Austin CONRAD's wrapper ( though the MQH interfaces to a ver 2.1.11 DLL, the services needed work like a charm ), so you are ready to use it straight away on each side, and these types of nodes are ready to join, in their respective roles, any form of truly heterogeneous distributed system one can design.
My recent R&D uses several instances of python-side processes to operate an AI/ML-predictor, r/KBD, r/RealTimeANALYSER and centralised r/LOG services, which are actively used, over many PUSH/PULL + XREQ/XREP + PUB/SUB Scalable Formal Communication Patterns, by several instances of MetaTrader Terminal through their respective MQL4 code.
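For illustration only ( the names and the port are assumptions, not the actual R&D code ), such a centralised r/LOG service can be as small as a PULL-side collector that the MQL4 / MATLAB nodes PUSH their log records into:
import zmq

aCtx  = zmq.Context.instance()
aPull = aCtx.socket( zmq.PULL )
aPull.bind( "tcp://*:5556" )                          # assumed port; remote nodes PUSH-connect here

with open( "distributed_ea.log", "a" ) as aLogFILE:
    while True:
        aRecord = aPull.recv_string()                 # blocks until some node sends a record
        aLogFILE.write( aRecord + "\n" )
        aLogFILE.flush()                              # keep the shared log durable as records arrive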
MATLAB functions could be re-used in the same way.

Related

Porting word2vec to RISC-V... potential proxy kernel issue?

We are trying to port word2vec to RISC-V. Towards this end, we have compiled word2vec with a cross compiler and are trying to run it on Spike.
The cross compiler compiles the standard RISC-V benchmarks and they run without failure on Spike, but when we use the same setup for word2vec, it fails with "bad syscall #179!". We tried two different versions; both fail around the same place, a minute or two into the run, while executing these instructions. After going through the loop several hundred thousand times, we see C1 and C2 printed and then the crash. We think this is more of a Spike/pk issue than a word2vec issue.
Has anyone had similar experiences when porting code to RISC-V? Any ideas on how we might track down whether it's the proxy kernel?
A related question is about getting gdb working with Spike... I will post that separately.
Thank you.
The riscv-pk does not support all possible syscalls. You'll need to track down which syscall it is and whether you can implement it in riscv-pk, or whether you need to move to running on a different kernel. For example, riscv-pk does not support any threading-related syscalls, as multithreaded kernel support is explicitly a riscv-pk non-goal.
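If it helps, in the generic Linux syscall table that RISC-V uses, number 179 is usually sysinfo, but verify that against your own toolchain's headers. A rough Python helper for the lookup ( the header path below is an assumption; point it at your riscv-gnu-toolchain sysroot ) could look like this:
import re
import sys

HEADER = "/opt/riscv/sysroot/usr/include/asm-generic/unistd.h"   # assumed install location

def syscall_name(number, header=HEADER):
    # lines in the header look like:  #define __NR_sysinfo 179
    pattern = re.compile(r"#define\s+__NR(?:3264)?_(\w+)\s+(\d+)\b")
    with open(header) as f:
        for line in f:
            m = pattern.match(line.strip())
            if m and int(m.group(2)) == number:
                return m.group(1)
    return None

if __name__ == "__main__":
    print(syscall_name(int(sys.argv[1])))                         # e.g. 179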
I would also be wary of using riscv-pk in general. It's a very simple, thin kernel, which is great for running newlib user applications in the beginning, but it lacks rigorous testing and validation efforts, so running applications that stress the virtual memory system, rely on lots of syscalls (ioctl and friends), or expect a more glibc-like environment may prove problematic.

Specman beginner's questions

I am new to Specman.
I have a couple of questions:
I am trying to use the agent methodology. After writing the env, agent, bfm, etc., what is the recommended way to create clock and reset? By writing a tb.v (instantiating the top Verilog module), or is there a better way?
How do I link the Specman env file to the TB? (Or maybe it's enough to link the ports of the different Specman files to the Verilog files with a signals_map?)
Most important, how do I run the environment with irun?
I was thinking of creating a file listing all the Verilog files, e.g. veri.lst,
and a Specman top file that imports all the Specman files, e.g. spec_top.e:
irun -access +wrc veri.lst spec_top.e
Should that be OK?
Should I mention the top-level module in the command?
Should I specify the test name in a special way in the command?
Thanks a lot for all the help!
Cadence recommends driving clocks from inside an HDL testbench (i.e. written in Verilog in your case). This is because every time the simulator yields control to Specman to execute, it wastes processor time; you want to minimize the number of switches as much as possible.
Linking the env to the TB is done by connecting the Verilog signals of interest to the corresponding Specman ports (using hdl_path()).
Regarding running it, there are two things to keep in mind: e code can be executed in compiled or in interpreted mode, and compiled code is faster but can't be debugged. You have to tell irun what you want compiled and what you want interpreted:
irun -f veri.lst \
    compiled_top.e \
    -snload interpreted_top.e
What you typically compile are files which you don't expect to change (verification components that you buy or reuse from other projects, for example). The rest of your files you'd load interpreted to be able to easily debug.
Adding to Tudor's great answer -
First - yes, connecting the e TB to the DUT is done using hdl_path() and binding the ports to external. You usually have one unit designated for the interface, so configuring it would look something like this:
extend signal_map {
    // name of the instance of the Verilog module you interface
    keep hdl_path() == "sub_system_a";
    keep bind (sig_clock, external);
    // name of the clock signal
    keep sig_clock.hdl_path == "clk";
};
Please take a look in the IES release, at the UVM Examples.
They are in
specman/uvm/uvm_examples
For example, check out specman/uvm/uvm_examples/xserial/e/xserial_collector_h.e.
And about the clock -
Connecting a clock in the e TB to the design is very simple. Something like this -
unit synch {
    sig_clock : in simple_port of bit is instance;
    keep bind(sig_clock, external);
    event clock is rise(sig_clock$) @sim;
    // can also be defined on fall or change
};
Now the clock event can be used as the sampling event for TCMs and temporals. This is a simple, fast way of using the clock in the TB.
Another way to use the clock is more "acceleration ready". In this methodology, you implement a clock agent in Verilog, and it provides "clock services" to the TB. Following this methodology, the TB has no "wait cycles" in it; instead, it calls the clock agent task wait_cycles() and waits for an indication that the required number of clock cycles has passed.
This is a rather new methodology, oriented to be Acceleration Ready.
It will be demonstrated in the UVM examples in the next IES release, 15.1.
/efrat

Fake microphone input for MATLAB

I'm working on a MATLAB project that records and processes sound. I'm completely fed up with playing the same sounds over and over again during development.
Is there some way to "fake" the microphone, i.e. play a file on my computer and get it into MATLAB with the same code I use to record from my mic?
Thanks for any help.
PS: I'm on Mac OS X Yosemite
It depends how you've implemented your code - if you post the relevant sections you'll get more specific suggestions - but in general you might be able to replace the part of the code that captures input from the microphone with a call that reads a file from disk - wavread (or audioread in newer MATLAB releases) would be useful for this (http://uk.mathworks.com/help/matlab/ref/wavread.html).
If you're doing realtime stuff then it may or may not work, but if not, you could play the sound file in a third-party application and use something to internally rewire the output to the input. Soundflower is one tool that can do this; there are others.
There are more pieces of the puzzle to address.
If just an asynchronous mode of work is possible
If you just wish to work in silence, and the MATLAB process under development does not require synchronisation with the sound replay ( it does not depend on where the sound sample starts and just needs "some" sound-related data to be input once the MATLAB code gets ready ), then the easiest way would be to plug a jack connector into the MIC input, have the sound replayed in an endless loop by an external device ( an MP3 player et al ) and enjoy the silence.
In case a synchronous mode of operations is needed
In case your MATLAB code requires synchronised processing, aligned with the start of the sound sample and terminating the replay process once the MATLAB code has finished, then you need something a bit more complex than just a rewired ( be it physically or virtually ) sound delivery.
There are ways to let MATLAB communicate with external processes and thus trigger synchronised events on the remote side ( sending a message like HeyPythonProcess.startTheSoundREPLAY() ), making the whole sound processing both silent ( for example, Python audio services can move sound bytes into the respective audio-mixer paths under your full, i.e. programmable, control ) and fully synchronous ( via an event-driven messaging layer such as ZeroMQ ), thus keeping the process exactly as needed.
Does this sound complicated? Yes, it is complicated, but it is both realistic and possible. MATLAB allows inter-process communication / messaging in a fully autonomous multi-agent manner ( no subordination, indeed a fully autonomous mode of work ), and that gives you immense power for the future, perhaps once you enter distributed cloud/grid processing projects.
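A minimal sketch of such a remote replay agent ( the port, the message format and the use of the stock macOS player afplay are all assumptions, not a fixed recipe ): it waits for a start message over ZeroMQ, replays the requested file, and replies once the sample has finished.
import subprocess
import zmq                                        # pyzmq

ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)
rep.bind("tcp://127.0.0.1:5557")                  # MATLAB (or any client) sends requests here

while True:
    msg = rep.recv_string()                       # e.g. "PLAY test_tone.wav"
    if msg.startswith("PLAY "):
        subprocess.call(["afplay", msg.split(" ", 1)[1]])   # blocks until the replay ends
        rep.send_string("DONE")                   # the MATLAB side now knows the sample finished
    else:
        rep.send_string("UNKNOWN_COMMAND")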
Use a side-effect of a bridged mode of MATLAB operations
There is also another synchronous way: use the python-MATLAB bridge, where the Python side "enforces" synchronicity ( controls the experiment ) and starts / stops the MATLAB part of the work ( thus, as a side-effect, aligning the replay with the MATLAB processing ):
from pymatbridge import Matlab as aMATLAB       # get ready
#
# mlab = aMATLAB()                              # a class instance ( empty / defaults )
#
mlab = aMATLAB( matlab = '...aMatlabCODE' )     # a class instance ( initialised )
mlab.start()                                    # returns True once connected
#
# start playing the sound here
# ... and make the MATLAB-beyond-the-bridge process it
#
results = mlab.run_code( 'a=1;' )               # process code / vars on the MATLAB side

trying to know more about verilog language, vhdl,and assembly language

I would like to know what is the difference between verilog and assembly language.
Next semester we will be working with microcontrollers, and I would like to learn a little about them before the semester begins. I've been doing a lot of research on low-level programming, and so far I have gained a good understanding of assembly language, but I get confused trying to understand Verilog and VHDL.
Verilog and VHDL are completely different from assembly: they are languages for describing hardware, typically for the purpose of programming FPGAs.
FPGAs are devices that can be programmed on the fly to implement any sort of digital logic (and sometimes analog too).
So using Verilog or VHDL, you can design a circuit that creates a couple of latches, some two's-complement adders, a mux, and a clock source, and suddenly you've just designed a circuit that can calculate. You could then take the output from the VHDL compiler (or whatever it's called), "download" it to the FPGA, and now you actually have some hardware that can be used to do calculations.
Of course, you can use FPGAs to implement all sorts of complicated stuff - even a fully custom CPU. One uses Verilog and VHDL to design the circuits that are programmed onto FPGAs. Those circuits could implement something simple like a ripple counter, something more complex like an LCD driver, or something even more complex like a USB transceiver. You can go from as simple as a few latches to as complicated as a fully operating CPU; as long as it's digital hardware, you can make whatever you want with VHDL and some FPGAs.
To clarify further -
"Assembly language" typically refers to raw instructions given to some sort of CPU. Of course, there are many different types of CPUs (x86, ARM, SPARC, MIPS) and further many different variants of those types of CPUs. Each CPU has its own instruction set.
Machine code is complete, fully specified, ready-to-execute instructions. Assembly languages let you type instructions from your CPU's instruction set in plain text, use labels and such, and describe the memory layout of the program. Put the assembly through an assembler and out comes machine code in your CPU's machine instruction set.
You could design your own CPU from scratch using VHDL. As you design the CPU, you would have it implement your own custom instruction set. From there, you could take the VHDL for your CPU, compile it, write it to an FPGA and have your own custom CPU. Then you could start writing programs for your made-up CPU using your custom instruction set, by writing a custom assembler. Some friends of mine in college did this for giggles.
For example, you know how most CPUs are load-store, register-based CPUs? Instructions tend to go something like this:
Load the value '1' into register A
Load the value '2' into register B
Add register A and register B, storing result in register A
(You just added 1 + 2! Heh)
That sort of model of computation happens to be the most popular, but it's not the only way you could do computation. What if you had a stack-based CPU, where you push values onto a hardware stack, and computations work with the values on top of the stack, pushing results back onto the stack?
For instance:
Push 1 onto the stack (stack currently contains: 1)
Push 2 onto the stack (stack currently contains: 2 1)
Push 3 onto the stack (stack currently contains: 3 2 1)
Add
'Add' takes the top two elements on the stack, adds them together, and pushes the result on the top of the stack.
Stack now contains: 5 1
Add
Stack now contains: 6
Neat, isn't it? As far as computation models go, it has its advantages - instructions tend to be short, since the operands are implicit, so they need fewer bits, and smaller instructions mean the CPU can be faster.
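To make the model concrete, here is a toy software simulation of that stack machine in Python (an illustration of the idea only, not how the hardware would be built):
def run(program):
    stack = []                                    # index 0 is the top of the stack
    for op, *args in program:
        if op == "PUSH":
            stack.insert(0, args[0])
        elif op == "ADD":
            a, b = stack.pop(0), stack.pop(0)     # take the two topmost values
            stack.insert(0, a + b)                # push the sum back on top
        print(op, args, "-> stack:", stack)
    return stack

run([("PUSH", 1), ("PUSH", 2), ("PUSH", 3), ("ADD",), ("ADD",)])   # ends with stack: [6]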
The problem is that hardly any processors like this exist anymore.
But if you knew what you were doing, you could design one in VHDL, program it onto an FPGA, and suddenly you'd have one of the only operating stack-based processors in existence.
Say you were doing a master's thesis, for instance: you might dig around and find out that virtual-machine-based programming languages like C# and Java compile down to bytecode for a CPU that doesn't really exist, but whose model proves useful for making code portable. You might find out that the imaginary machines used by these languages are based on stack-based processor models. If you were looking for something interesting to do, perhaps you could write, in VHDL, a processor that natively implements the Java bytecode language. Now you'd be the only person with a computer that can directly run Java.
Verilog and VHDL are both HDLs (hardware description languages), used mainly for describing digital electronics. Their target may be an FPGA or an ASIC (custom silicon).
Assembly, on the other hand, uses a processor's instruction set to perform a series of calculations. Everything executed on a computer eventually ends up as assembly-level instructions. One example of an instruction set is the x86 ISA.
Summary: Verilog and VHDL describe hardware. Assembly is the low-level program being executed on a processor.

rpc mechanism to use with select driven daemon

I want to add an RPC service to my Unix daemon. The daemon is written in C and has an event-driven loop implemented using select(). I've looked at a number of RPC implementations, but they all seem to involve calling a library routine, or auto-generated code, which blocks indefinitely.
Are there any RPC frameworks out there where the library code / auto-generated code doesn't block or start threads? Ideally, I'd like to create the input/output sockets myself and pass them into my select loop.
Regards,
Alex - first time poster! :-)
I'm assuming that you can use C++. Apache Thrift is good; FAST RPC is also useful.
I evaluated a fair few libraries at the start of 2012 and eventually ended up going with ZeroMQ, as it was more adaptable and (I found it) easier and a lot more flexible. I did consider using a Google protobuf implementation but ended up using a simpler structured-command text approach.
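One detail worth knowing for the select() integration: every ZeroMQ socket exposes a plain file descriptor via the ZMQ_FD option, which you can drop into your existing select() loop; because that descriptor is edge-triggered you then drain the socket by checking ZMQ_EVENTS. A sketch (shown in Python/pyzmq for brevity; libzmq exposes the same two options to C via zmq_getsockopt, and the port here is an arbitrary example value):
import select
import zmq

ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)
rep.bind("tcp://127.0.0.1:5558")                  # arbitrary example port
zfd = rep.getsockopt(zmq.FD)                      # raw descriptor, usable with select()/poll()

while True:
    readable, _, _ = select.select([zfd], [], [], 1.0)    # add your daemon's other fds here
    # ZMQ_FD is edge-triggered: after waking, drain by polling ZMQ_EVENTS
    while rep.getsockopt(zmq.EVENTS) & zmq.POLLIN:
        request = rep.recv()
        rep.send(b"ack:" + request)               # a real daemon would dispatch to its RPC handlers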
I probably wouldn't consider doing this in C unless I had to, in which case I'd probably start with the standard rpc(3) stuff; for a good introduction, see an overview of Remote Procedure Calls (RPC).