How to receive serial strings in Simulink - matlab

I am trying to read a stream of strings from serial input (COM Port) and then display the strings. The issue I am having is that the strings are of variable length and Simulink only seems to be able to read a string if I specify its length in the data size field.
For instance, if I specify a data size of [1 18], I only receive strings of length 18 or greater. If I specify [1 80], I only receive strings of length 80 or greater, and so forth. If the string is longer than the specified size, it is truncated to that size.
I have not been able to resolve this issue and the documentation is very light on how to use (and specify) this parameter. The data size is described as "Output data size, or the number of values that should be read at each simulation time step. This parameter is specified as a multidimensional numeric array. The data size does not include the header or terminator values".
Any help in solving this issue is highly appreciated.

Related

Getting values from energy meter to Schneider PLC

I am trying to read the values from an energy meter, and convert them to REAL (32bit float).
In this case I am reading phase 1 voltage.
Each value is read across two registers.
So I have received two WORDs with values 17268 (MSW) and 2456 (LSW), converted them into a DWORD, and then to a REAL value after multiplying by 0.1, but I am not getting the answer I'm expecting.
I should be getting 245.0375 volts.
However I am getting 1.13E+08
Please see snip of structured text with live values.
snip
The problem is that DWORD_TO_REAL is trying to do a type conversion; that is, make the value of a DWORD match the format of a REAL. In your case, the contents of MSW and LSW are simply an IEEE754 value split in half, and they just need to be forced into the corresponding bits of a REAL variable. With TwinCAT (Beckhoff) I would do a direct memory copy:
MEMCPY(ADR(realValue)+2, ADR(MSW), 2); // MSW into the upper two bytes (x86 is little-endian)
MEMCPY(ADR(realValue), ADR(LSW), 2);   // LSW into the lower two bytes
I would assume Schneider has a similar command.
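For what it's worth, the same reinterpretation is easy to verify in Matlab (the variable names here are mine): reassemble the two words into 32 bits and typecast them, which reinterprets the bits rather than converting the value, exactly what the MEMCPY trick does in place:
msw = uint16(17268);                                  % most significant word
lsw = uint16(2456);                                   % least significant word
raw = bitor(bitshift(uint32(msw), 16), uint32(lsw));  % reassemble the DWORD
value = typecast(raw, 'single')                       % ~244.04 for these sample words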

Add missing/extra values in data array in Matlab

I have recorded WiFi CSI sensor data: 5000 packets in 5 seconds (5000 packets x 57 subcarriers). But due to dynamic hardware configuration I sometimes only receive 4998 x 57. I want to estimate and append 2 rows so that my design consistently has 5000 rows x 57 columns.
As you can see, some data are 5000x57 and some are 4998x57.
You can achieve your desired output using the mean() function combined with the concatenation operator [] and repmat(), like this:
A = randi(100, 4998, 57);         % stand-in for a recording with two missing rows
A = [A; repmat(mean(A), 2, 1)];   % append the column means twice
Most of the functions in Matlab that take arrays as input operate on each column, unless the input array has just 1 row. The mean function behaves this way, so you can simply append mean's output to your arrays.
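A quick toy check of that column-wise behaviour:
M = [1 2; 3 4];
mean(M)          % returns [2 3], one mean per column
mean([1 2 3])    % single-row input: returns 2, the mean of that row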
If you show me the code that you used to import the data, I might be able to help you create a cleaner data structure and thus automatically process all of your arrays. The way the data is currently designed, this is only possible with dynamic variable names, which is considered bad programming practice.

converting large numerical value to power 10 format

Is there a way to convert a large numerical value to the *10-to-the-power format in SAS?
Eg: 88888888383383838383 to 8.8*10^19
Thanks in advance.
You can use the format ew., where w is the total number of output characters. Using e8. will result in 8.9E+19. But beware that SAS stores numeric values internally as floating point with a maximum of 8 bytes, so your example value would be rounded to 88,888,888,383,383,830,528 (no matter how it's formatted).

Is there a way to write non-8-bit-aligned data to a binary file in Matlab?

I need to write data to a binary file. While the data as a whole is byte-aligned, it is made of several fragments which are not aligned to bytes:
The total size of the data is 96 bits, comprised of:
RGB color: 3*8-bit numbers (24 bits)
1st property value: 7-bit number
2nd property value: 7-bit number
number of objects: 26-bit number
memory offset: 32-bit number
totaling 96 bits, or 12 bytes.
The reason the data is split this way is that each number has significance, and it's easier to create the data by putting the numbers separately in their correct order. I'm using fwrite for this, but the function only writes values in whole bytes. I found a way around it by using a "hack":
% pack the fields into one 64-bit word with integer bit operations;
% doing this arithmetic in plain doubles would silently lose low-order
% bits once the packed value exceeds 2^53
num = uint64(red);
num = bitor(bitshift(num, 8),  uint64(green));
num = bitor(bitshift(num, 8),  uint64(blue));
num = bitor(bitshift(num, 7),  uint64(first_prop_val));
num = bitor(bitshift(num, 7),  uint64(second_prop_val));
num = bitor(bitshift(num, 26), uint64(number_of_objects));
fwrite(fid, num, 'uint64');
fwrite(fid, memory_offset, 'uint32');
This works because all the values are nonnegative, but it's ugly. Is there a less "hacky" way to accomplish what I need?
*The property values are 7 bits because they range from 0 to 100; giving them an extra bit just to align the data would mean fewer objects can be counted.
For a signed 32bit integer, you can get the binary representation using:
bi=dec2bin(typecast(int32(-23),'uint32'))=='1'
Signed: you can drop the leading bits as long as they all equal the first bit you keep (the sign information is then preserved):
bw=7
assert(all(bi(1:end-bw+1)) | all(~bi(1:end-bw+1)))
bi=bi(end-bw+1:end)
For an unsigned one, use:
bi=dec2bin(uint32(23))=='1'
Unsigned: you can remove the leading zeros:
assert(all(~bi(1:end-bw)))
bi=bi(end-bw+1:end)
Put this into a function, convert all your integers, concatenate them into one binary array, cut it into 32-bit parts, and write those using the uint32 format.
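A concrete sketch of that recipe (write_packed is my own name; this version packs to whole bytes rather than 32-bit words to sidestep endianness, and assumes nonnegative values that fit their declared widths):
function write_packed(fid, values, widths)
% Pack nonnegative integers into the given bit widths (MSB first) and
% write the result to fid as bytes; the total width must be a multiple of 8.
bits = false(1, 0);
for k = 1:numel(values)
    b = dec2bin(values(k), widths(k)) == '1';     % fixed-width bit vector
    assert(numel(b) == widths(k), 'value %d does not fit its width', k);
    bits = [bits, b]; %#ok<AGROW>
end
assert(mod(numel(bits), 8) == 0, 'total width must be whole bytes');
bytes = reshape(bits, 8, []).' * 2.^(7:-1:0).';   % one byte per row, MSB first
fwrite(fid, bytes, 'uint8');
end
For the layout in the question that would be:
fid = fopen('objects.bin', 'w');
write_packed(fid, [red green blue first_prop_val second_prop_val ...
    number_of_objects memory_offset], [8 8 8 7 7 26 32]);
fclose(fid);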

Simple compression algorithm in C++ interpretable by matlab

I'm generating ~1 million text files containing arrays of doubles, tab delimited (these are simulations for research). Example output below. Each million text files comes to ~5 TB, which is unacceptable. So I need to compress.
However, all my data analysis will be done in matlab. And every matlab script will need to access all million of these text files. I can't decompress the whole million using C++ and then run the matlab scripts, because I lack the HD space. So my question is: are there some very simple, easy-to-implement algorithms or other ways of reducing my text file sizes, so that I can write the compression in C++ and read it in matlab?
example text file
0.0220874 0.00297818 0.000285954 1.70E-05 1.52E-07
0.0542912 0.00880725 0.000892849 6.94E-05 4.51E-06
0.0848582 0.0159799 0.00185915 0.000136578 7.16E-06
0.100415 0.0220033 0.00288016 0.000250445 1.38E-05
0.101889 0.0250725 0.00353148 0.000297856 2.34E-05
0.0942061 0.0256 0.00393893 0.000387219 3.01E-05
0.0812377 0.0238492 0.00392418 0.000418365 4.09E-05
0.0645259 0.0206528 0.00372185 0.000419891 3.23E-05
0.0487525 0.017065 0.00313825 0.00037539 3.68E-05
If it matters: the complete text files represent joint probability mass functions, so they sum to 1. And I need lossless compression.
UPDATE Here is an IDIOT'S guide to writing binary in C++ and reading it in Matlab, with some very basic explanation along the way.
C++ code to write a small array to a binary file.
#include <iostream>
#include <cstdio>   // FILE, fopen, fwrite, fclose
using namespace std;
int main()
{
float writefloat;
const int rows=2;
const int cols=3;
float JPDF[rows][cols];
JPDF[0][0]=.19493;
JPDF[0][1]=.111593;
JPDF[0][2]=.78135;
JPDF[1][0]=.33333;
JPDF[1][1]=.151535;
JPDF[1][2]=.591355;
JPDF is an array of type float that I save 6 values to. It's a 2x3 array.
FILE * out_file;
out_file = fopen ( "test.bin" , "wb" );
The first line declares out_file as a pointer to a FILE. The second line, fopen, says: make a new file for writing (the 'w' of the second parameter), and make it a binary file (the 'b' of the 'wb').
fwrite(&rows,sizeof(int),1,out_file);
fwrite(&cols,sizeof(int),1,out_file);
Here I encode the size of my array (# rows, # cols). Note that we pass fwrite the addresses of rows and cols, not the variables themselves (& takes the address). The second parameter is the size of each element; since rows and cols are both ints, I use sizeof(int). The third parameter, 1, says write 1 such element. And out_file is our pointer to the file we're writing to.
for (int i=0; i<3; i++)
{
for (int j=0; j<2; j++)
{
writefloat=JPDF[j][i];
fwrite (&writefloat , sizeof(float), 1, out_file);
}
}
fclose (out_file);
return 0;
}
Now I'll iterate through my array and write each value in bytes to my file. The indexing looks a little backwards in that the inner loop walks down each column rather than across each row. We'll see why in a sec. Again, I'm passing the address of writefloat, which takes on the value of the current array element in each iteration. Since each array element is a float, I'm using sizeof(float) here instead of sizeof(int).
Just to be incredibly, stupidly clear, here's a diagram of how I think of the file we've just created.
[4 bytes: rows][4 bytes: cols][4 bytes: JPDF[0][0]][4 bytes: JPDF[1][0]] ...
[4 bytes: JPDF[1][2]]
..where each chunk of bytes is written in binary (0s and 1s).
To interpret in MATLAB:
FID=fopen('test.bin');
sizes=fread(FID,2,'int')
FID works like a file handle (in Matlab it is just an integer file identifier). Then we use fread, which operates very similarly to C's fread. FID is our handle to the file. The 'int' tells the function the type, and therefore the size, of each chunk to read. So sizes=fread(FID,2,'int') says: read 2 chunks of size int, and return the 2 elements in vector form. Now sizes(1)=rows and sizes(2)=cols.
s=fread(FID,[sizes(1) sizes(2)],'float')
This part wasn't completely clear to me originally: I thought I'd have to tell fread to skip the 'header' of the binary that contains the row/col info. However, Matlab maintains the file position internally and picks up where the last read left off. So now I read out the rest of the binary file, using the fact that I know the dimensions of the array. Note that while the second parameter [M,N] is [rows,cols], fread fills the result in column order, which is why we wrote the array data in column order.
The one caveat is that the sizes of 'int' and 'float' must agree between the C++ program and Matlab (e.g., both platforms use 32-bit ints); this is not guaranteed across architectures.
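One way to remove that doubt is to pin the element sizes explicitly; here is a variant of the read that assumes the C++ side wrote 32-bit ints and 32-bit floats:
FID = fopen('test.bin');
sizes = fread(FID, 2, 'int32');                  % element size no longer depends on the platform
s = fread(FID, [sizes(1) sizes(2)], 'float32');
fclose(FID);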
The output is:
sizes =
2
3
s =
0.194930002093315 0.111593000590801 0.781350016593933
0.333330005407333 0.151535004377365 0.59135502576828
To do better than four bytes per number, you need to determine to what precision you need these numbers. Since they are probabilities, they are all in [0,1]. You should be able to specify a precision as a power of two, e.g. that you need to know each probability to within 2^-n of the actual. Then you can simply multiply each probability by 2^n, round to the nearest integer, and store just the n bits in that integer.
In the worst case, I can see that you are never showing more than six digits for each probability. You can therefore code them in 20 bits, assuming a constant fixed precision past the decimal point. Multiply each probability by 2^20 (1048576), round, and write out 20 bits to the file. Each probability will take 2.5 bytes. That is smaller than the four bytes for a float value.
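A rough Matlab sketch of the quantization step (the C++ writer would mirror it; the actual 20-bit packing could reuse a bit-packing helper like the one in the earlier answer):
p = [0.0220874 0.00297818 0.000285954 1.70e-05 1.52e-07];  % one row of the example
q = round(p * 2^20);      % 20-bit fixed-point integers
p_back = q / 2^20;        % dequantized values on the reader side
max(abs(p - p_back))      % error is bounded by 2^-21, about 4.8e-7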
And either way is way smaller than the average of 11.3 bytes per value in your example file.
You can get better compression even than that if you can exploit known patterns in your data. Assuming that there are any. I see that in your example, on each line the values go down by some factor at each step. If that is real and not just an artifact of the generation of the example, then you can successively use fewer bits for each sample. Also if the first sample is really always less than 1/8, then you can drop the top three bits off that one, since those bits would always be zero. If the second column is always less than 1/32, you can drop the first five bits off all of those. And so on. Assuming that the magnitudes in the example are maximums across all of the data sets (obviously not true, but just using that as an illustrative case), and assuming you need six decimal digits after the decimal point, I could code each row of six values in 50 bits, for an average of a little over one byte per probability.
And for one last smidgen of compression, since the values add to one, you don't have to store the last value.
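Recovering the dropped value on the Matlab side is then a one-liner (assuming p holds the stored values):
p(end+1) = 1 - sum(p);    % valid because the mass function sums to 1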
Matlab can read binary files. Why not save your files as binary instead of text?
Saving each number as a float would require only 4 bytes; you could use doubles, but it appears that you aren't using the full double resolution anyway. Under your current scheme, each digit of every number consumes a byte of space. All of your numbers are easily 4+ characters long, some as long as 10 characters. Implementing this change should cut down your file sizes by more than 50%.
Additionally, you might consider using a more elegant data format like HDF5, which both supports compression and is supported by Matlab.
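For example, the Matlab side of an HDF5 workflow is a single call (this assumes the C++ writer created a dataset named '/JPDF' in each file; the file name is hypothetical):
data = h5read('sim_000001.h5', '/JPDF');   % returns the dataset as a numeric array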
Update:
There are lots of examples of how to write binary files in C++, just google it.
Additionally, to read a binary file in Matlab, simply use fread.
The difference between representing a number as ASCII vs binary is really simple. All files are written using binary; the difference is in how that information gets interpreted. Text files are generally read using ASCII, which provides a mapping between an 8-bit word and characters. When you see a string like "255", what you have is an array of bytes where each byte encodes one character of the array. However, when you are storing numbers it's really wasteful to store each digit using a different byte. A single byte can store values between 0-255. So why use three bytes to store the string "255" when I can use a single byte to store the value 255?
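A tiny Matlab illustration of the difference, writing to a throwaway file:
fid = fopen('demo.bin', 'w');
fwrite(fid, '255');           % ASCII: three bytes, the character codes 50 53 53
fwrite(fid, 255, 'uint8');    % binary: one byte holding the value 255
fclose(fid);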
You can always go ahead and zip everything using a standard library like zlib. Afterwards you could use a custom DLL written in C++ that unzips your data in chunks you can manage. So basically:
Data --> Zip --> DLL (loaded by Matlab via loadlibrary) --> Matlab
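As a side note: if the C++ side writes standard gzip files (gzip is built on zlib's DEFLATE), Matlab can decompress them one at a time with the built-in gunzip, no custom DLL needed. A minimal sketch (the file name is hypothetical):
names = gunzip('sim_000001.txt.gz');   % writes a decompressed copy to disk
data = dlmread(names{1}, '\t');        % read the tab-delimited array
delete(names{1});                      % reclaim the disk space immediately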