I'm working on a project that requires a value to be passed in real time between MATLAB's Simulink and LabVIEW on networked systems (currently running MATLAB 2010b and LabVIEW 7.0). I've been trying to do this with the UDP Send/Receive functions in either program; however, LabVIEW only seems to deal in strings with UDP/TCP-IP, while Simulink only reads int/double values from UDP ports.
Is there a way for me to convert these values AFTER the read-in operation, or otherwise get around the type restriction? Any advice (or alternative ways to pass a value between the two programs) would really be appreciated. Unfortunately, due to hardware restrictions, I'm stuck with these program versions.
Thanks!
The TCP/UDP functions in LV use strings because a string is a convenient way to represent an array of bytes, which is what a TCP stream basically is. You can take the data and convert it so that it's usable. Assuming Simulink encodes values the same way (plain binary for ints, IEEE 754 representation for floats), you can simply use the Type Cast or Flatten To String/Unflatten From String functions to convert the data. You might need to change the order of the bytes to account for endianness.
You can look at the TCP examples in LV and the documentation on flattened data to understand more on how this works.
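LabVIEW does this graphically, but at the byte level the operation is just reinterpreting 8 received bytes as an IEEE 754 double. Purely as an illustration of that idea (the port number is made up, and the byte order your Simulink UDP Send block actually uses is something you need to check), here is the equivalent in Python:

```python
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 25000))        # hypothetical port, match your UDP Send block

data, _addr = sock.recvfrom(1024)    # raw bytes, like the string from LabVIEW's UDP Read

# LabVIEW's flatten/unflatten defaults to big-endian; a Simulink model on a PC will
# often emit little-endian bytes, so pick the byte order explicitly:
value_le = struct.unpack("<d", data[:8])[0]   # little-endian IEEE 754 double
value_be = struct.unpack(">d", data[:8])[0]   # big-endian, if the bytes arrive swapped
print(value_le, value_be)
```

If the decoded number looks absurd (e.g. around 1e-300), the byte order is almost certainly the wrong one.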
As a side point, UDP is lossy and is mainly suitable if you need to broadcast or get data quickly, like when streaming video. If the data is important, you should use TCP.
I've recently started working with AudioWorklets and am trying to figure out how to determine the pitch(es) from the input. I found a simple algorithm to use with a ScriptProcessorNode, but the input values an AudioWorklet receives are different, so it doesn't work as-is. Plus, each input array is only 128 samples long. So, how can I determine pitch using an AudioWorklet? As a bonus question, how do the values relate to the actual audio going in?
If it worked with a ScriptProcessorNode, it will work in an AudioWorklet, but you'll have to buffer the data in the worklet because, as you noted, you only get 128 frames per call. The ScriptProcessorNode gets anywhere from 256 to 16384 frames per call.
The values going to the worklet are the actual values that are produced from the graph connected to the input. These are exactly the same values that would go to the script processor, except you get them in chunks of 128.
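The worklet itself has to be JavaScript, but the buffering idea is language-independent: accumulate the 128-frame blocks until you have a full analysis window, then run your pitch algorithm on that window. A rough sketch of the pattern in Python (the window size, sample rate, and the naive autocorrelation estimate are my assumptions, not part of the AudioWorklet API):

```python
import numpy as np

SAMPLE_RATE = 48000   # assumed; in a worklet, use the global sampleRate
WINDOW = 2048         # analysis window, e.g. what your old ScriptProcessor used

buffer = np.zeros(0)

def process_block(block):
    """Called once per 128-sample block, like AudioWorkletProcessor.process()."""
    global buffer
    buffer = np.concatenate([buffer, block])
    if len(buffer) < WINDOW:
        return None                     # not enough data yet, keep accumulating
    window, buffer = buffer[:WINDOW], buffer[WINDOW:]
    return estimate_pitch(window)

def estimate_pitch(x):
    """Very naive autocorrelation pitch estimate, for illustration only."""
    x = x - x.mean()
    corr = np.correlate(x, x, mode="full")[len(x) - 1:]    # lags 0..N-1
    lo, hi = SAMPLE_RATE // 1000, SAMPLE_RATE // 50        # roughly 50 Hz .. 1 kHz
    lag = lo + int(np.argmax(corr[lo:hi]))
    return SAMPLE_RATE / lag
```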
I am using the UDP receive block for the Parrot Mambo drone, which in my case outputs a [2x1] array of singles. However, I would like to split the output into two separate signals. Which block should I use for this?
It was easier than I thought, but nevertheless hard to find on the internet: the solution is to use the Demux block.
This question may come across as too broad, but I will try to make every sub-topic as specific as possible.
My setting:
Large binary input (2-4 KB per sample) (no images)
Large binary output of the same size
My target: Using Deep Learning to find a mapping function from my binary input to the binary output.
I have already generated a large training set (> 1'000'000 samples), and can easily generate more.
With my (admittedly limited) knowledge of neural networks and deep learning, my plan was to build a network with 2000 or 4000 input nodes and the same number of output nodes, and to try different numbers of hidden layers.
Then I would train the network on my data set (waiting several weeks if necessary) and check whether there is a correlation between input and output.
Would it be better to input my binary data as single bits into the net, or as larger entities (like 16 bits at a time, etc)?
For bit-by-bit input:
I have tried "Neural Designer", but the software crashes when I try to load my data set (even on small ones with 6 rows), and I had to edit the project save files to set Input and Target properties. And then it crashes again.
I have tried OpenNN, but it tries to allocate a matrix of size (hidden_layers * input nodes) ^ 2, which, of course, fails (sorry, no 117GB of RAM available).
Is there a suitable open-source framework available for this kind of binary mapping function regression? Do I have to implement my own?
Is Deep learning the right approach?
Has anyone experience with these kind of tasks?
Sadly, I could not find any papers on deep learning + binary mapping.
I will gladly add further information, if requested.
Thank you for providing guidance to a noob.
You have a dataset containing pairs of binary valued vectors, with a max length of 4,000 bits. You want to create a mapping function between the pairs. On the surface, that doesn't seem unreasonable - imagine a 64x64 image with binary pixels – this only contains 4,096 bits of data and is well within the reach of modern neural networks.
As you're dealing with binary values, a multi-layered Restricted Boltzmann Machine would seem like a good choice. How many layers you add to the network really depends on the level of abstraction in the data.
You don’t mention the source of the data, but I assume you expect there to be a decent correlation. Assuming the location of each bit is arbitrary and is independent of its near neighbours, I would rule out a convolutional neural network.
A good open source framework to experiment with is Torch - a scientific computing framework with wide support for machine learning algorithms. It has the added benefit of utilising your GPU to speed up processing thanks to its CUDA implementation. This would hopefully avoid you waiting several weeks for a result.
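To make the shape of such a network concrete, here is a minimal sketch of a plain dense binary-to-binary mapping. It uses PyTorch rather than the Lua-based Torch mentioned above (my substitution, just for readability), and the layer sizes, loss, and optimiser are placeholder assumptions to be tuned, not recommendations:

```python
import torch
import torch.nn as nn

N_BITS = 4000   # matches the 2000/4000 input nodes mentioned in the question

# Plain fully connected network mapping a binary vector to a binary vector.
model = nn.Sequential(
    nn.Linear(N_BITS, 1024),
    nn.ReLU(),
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Linear(1024, N_BITS),
    nn.Sigmoid(),            # per-bit probability of outputting a 1
)

loss_fn = nn.BCELoss()       # per-bit binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    """x, y: float tensors of shape (batch, N_BITS) containing 0.0 / 1.0 values."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the call; replace with your real samples.
x = torch.randint(0, 2, (32, N_BITS)).float()
y = torch.randint(0, 2, (32, N_BITS)).float()
print(train_step(x, y))
```

Whether this learns anything useful depends entirely on how much structure the mapping actually has, which is why the source of the data matters so much.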
If you provide more background, then maybe we can home in on a solution…
I'm trying to send and receive data through a serial port using Simulink (MATLAB 7.1) and dSPACE. The values I want to send and receive are doubles. Unfortunately for me, the send and receive blocks use uint8 values. My question is: how can I convert doubles into an array of uint8 values and vice versa? Are there Simulink blocks for this, or should I use Embedded MATLAB functions?
Use the aptly named Data Type Conversion block, which does just that.
EDIT following discussion in the comments
Regarding scaling, here's a snapshot of something I did a long time ago. It's using CAN rather than serial, but the principle is the same. Here it's slightly easier, in that the signals are always positive, so I don't have to worry about scaling a negative number. 65535 is the max value for a uint16, and I would do the reverse scaling on the receiving end. When converting to uint16 (or uint8 in your case), the block automatically rounds the value, and you can specify that behaviour in the block mask.
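To make the scaling concrete, here is the same arithmetic sketched in Python rather than Simulink blocks (the 0..500 physical range is an invented example, not from the original model):

```python
import numpy as np

FULL_SCALE = 500.0   # hypothetical physical range 0..500 of the signal
U16_MAX = 65535

def encode(value):
    """Scale a positive physical value into the uint16 range, as the gain +
    Data Type Conversion blocks would before sending."""
    return np.uint16(round(value / FULL_SCALE * U16_MAX))

def decode(raw):
    """Reverse scaling on the receiving end."""
    return float(raw) / U16_MAX * FULL_SCALE

print(encode(123.4))           # -> 16174
print(decode(encode(123.4)))   # -> approximately 123.4 (quantised)
```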
There are Pack and Unpack blocks in Simulink; search for them in the Simulink Library Browser. You might need some additional product, I'm not sure which.
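Whichever blocks you end up using, the underlying operation is just reinterpreting the 8 bytes of a double as 8 uint8 values and back. Sketched in Python (not Simulink code) to show what the pack/unpack step has to achieve:

```python
import struct

def double_to_uint8(value, little_endian=True):
    """Split one double into a list of 8 uint8 values (0..255)."""
    fmt = "<d" if little_endian else ">d"
    return list(struct.pack(fmt, value))

def uint8_to_double(bytes8, little_endian=True):
    """Reassemble 8 uint8 values into the original double."""
    fmt = "<d" if little_endian else ">d"
    return struct.unpack(fmt, bytes(bytes8))[0]

raw = double_to_uint8(3.14159)
print(raw)                     # eight integers in 0..255
print(uint8_to_double(raw))    # -> 3.14159, recovered exactly
```

Both ends just have to agree on the byte order.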
I have a client application (iPhone) that collects 10,000 doubles. I'd like to send this double array over HTTP to an appengine server (java). I'm looking for the best way to do this.
Best can be defined as some combination of ease of programming and compactness of representation as the amount of data can be quite high.
My current idea is to convert the entire array of doubles to a string representation and send that as a POST parameter, then parse the string on the server and convert it back to a double array. That seems inefficient, though...
Thanks!
I think you kind of answered your own question :) The big thing to beware of is differences between the floating-point representation on the device and the server. These days they're both going to be little-endian and (mostly) IEEE 754 compliant. However, there can still be some subtle differences in implementation that might bite, e.g. handling of denormals and infinities, but you can likely get away with ignoring them. I seem to recall a few of the edge cases in NEON (used in the iPhone's Cortex-A8) aren't handled the same as on x86.
If you do send as a string, you'll end up with a decimal and binary conversion between, and potentially lose accuracy. This isn't that inefficient, though - it's only 10,000 numbers. Unless you're expecting thousands of devices pumping this data at your server non-stop.
If you'd like some efficiency in the wire transfer and on the device side, then one approach is to just send the doubles in their raw binary form. On the server, reparse them into doubles (Double.longBitsToDouble). Make sure you get the endianness right when you grab the data as longs (it'll be fairly obvious when it's wrong).
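As a sketch of the raw-binary layout (Python standing in here for the Objective-C client and Java server code, just to show the byte framing): the whole payload is simply 8 bytes per double in one agreed byte order. Big-endian is what Java's DataInputStream.readLong() expects; with ByteBuffer you can set the order explicitly instead.

```python
import struct

values = [1.5, -2.25, 3.141592653589793]   # stand-in for the 10,000 doubles

# Client side: one flat byte buffer, 8 bytes per double, fixed (here big-endian) order.
payload = struct.pack(">%dd" % len(values), *values)
assert len(payload) == 8 * len(values)

# Server side: reverse the operation. If the byte order is wrong you'll see absurd
# magnitudes (e.g. numbers around 1e-300), which makes the bug easy to spot.
decoded = struct.unpack(">%dd" % len(values), payload)
print(decoded)
```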
I guess there are lots and lots of different ways to do this. If it were me, I would probably just serialize to an array of bytes and then base64 encode it; most other mechanisms will significantly increase the volume of data being passed.
10k doubles is 80k binary bytes, which comes to about 107k characters once base64 encoded (3 doubles is 24 binary bytes is 32 base64 characters). There's tons of base64 conversion example source code available.
This is far preferable to any decimal representation conversions, since the decimal conversion is slower and, worse, potentially lossy.
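A quick sketch of that pipeline, with Python standing in for the Objective-C/Java code, mainly to show the sizes involved:

```python
import base64
import random
import struct

values = [random.random() for _ in range(10000)]

raw = struct.pack("<10000d", *values)        # 80,000 bytes of binary data
encoded = base64.b64encode(raw)              # ~106,668 ASCII characters
decoded = struct.unpack("<10000d", base64.b64decode(encoded))

print(len(raw), len(encoded))                # 80000 106668
assert list(decoded) == values               # lossless round trip, no decimal conversion
```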
JSON. On the iPhone, encode with yajl-obj-c; on the Java side, read it with JSONArray.
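Whatever libraries produce and parse it, the JSON itself is just an array of numbers. Sketched in Python purely to show the shape of the payload (yajl-obj-c and JSONArray are the Objective-C/Java equivalents named above):

```python
import json

values = [1.5, -2.25, 3.141592653589793]

body = json.dumps(values)     # e.g. "[1.5, -2.25, 3.141592653589793]"
parsed = json.loads(body)     # back to a list of floats on the receiving side

# Python's json round-trips doubles exactly; other encoder/parser pairs may lose
# a bit of precision in the decimal conversion, as noted in the answer above.
assert parsed == values
```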
If you have a working method, and you haven't identified a performance problem, then the method you have now is just fine.
Don't go trying to find a better way to do it unless you know it doesn't meet your needs.
On inspection, it seems that on the Java side a double (64 bits) occupies about the same space as 4 characters (16 bits × 4, since a Java char is 16 bits). Now, for your average double, say 10 digits plus a decimal point plus some delimiter like a space or semicolon, you're looking at about 12 characters per value. That's only 3x as much.
So you originally had 80k of data, and now you have 240k of data. Is that really that much of a difference?