Collect more than 10,000 data points from a Tektronix oscilloscope? (MATLAB)

I am building a MATLAB GUI to do data collection from a Tektronix DPO4104 oscilloscope (MATLAB driver here).
I am playing around with tmtool and with my GUI code and have found that the driver will only collect 10,000 data points, regardless of whether the oscilloscope is set to show more than 10k points. I found this post on CCSM, but it hasn't been terribly helpful. (Mine is the last post in the thread, if you care to read it.) I am using the DPO4104 driver, whereas that post discusses the DPO4100 driver, I believe.
As far as I can tell, the steps are:
Edit the driver's readwaveform function to account for the current recordLength - in my case, say, 100,000 points.
Manually edit the driver's MaxNumberPoint from 10,000 to 100,000. (In my case, the default value was 0; I changed it to 100,000.)
Manually edit EndingPoint. I set this to 100,000 also.
Before creating a device object, call set(interfaceObj, 'InputBufferSize', 2.5*recordLength); that is, make sure the input buffer can hold more than 100,000 points. It's recommended to use at least double the expected transfer size; I used 2.5 just because.
Build device object and waveform object, connect() to it, and readwaveform. Profit.
I am still unable to collect more than 10,000 points, either through tmtool or through my GUI. Any help would be appreciated.

I got hold of a Tektronix engineer; he basically told me to just use the SCPI commands directly and skip the driver. While annoying, this might be the simplest solution.
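For anyone curious, here is roughly what the driverless approach looks like. This is only a sketch; the VISA resource string, channel, and record length below are illustrative placeholders, not values from my setup:
% Rough sketch of the raw-SCPI approach; resource string, channel,
% and record length are placeholders.
interfaceObj = visa('tek', 'USB0::0x0699::0x0401::C000000::INSTR');
interfaceObj.InputBufferSize = 12e6;   % room for the binblock plus header
fopen(interfaceObj);
fprintf(interfaceObj, 'HORizontal:RECordlength 100000');
fprintf(interfaceObj, 'DATA:SOURCE CH1');
fprintf(interfaceObj, 'DATA:ENCDG SRIBINARY');   % signed ints, LSB first
fprintf(interfaceObj, 'DATA:WIDTH 2');
fprintf(interfaceObj, 'DATA:START 1');
fprintf(interfaceObj, 'DATA:STOP 100000');
fprintf(interfaceObj, 'CURVE?');
raw = binblockread(interfaceObj, 'int16');
fread(interfaceObj, 1);   % discard the trailing terminator
fclose(interfaceObj);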

Is it possible for you to collect the data points 10,000 at a time, then save them somewhere, collect the next 10,000, append them to the saved points, repeat?
It's a work-around, sure.
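Roughly, assuming an open interface object and a scope already configured for the full record (untested, and all names below are illustrative):
recordLength = 100000;   % total points in the scope's record
chunk = 10000;           % points per transfer
raw = [];
for startPt = 1:chunk:recordLength
    stopPt = min(startPt + chunk - 1, recordLength);
    fprintf(interface, sprintf('DATA:START %d', startPt));
    fprintf(interface, sprintf('DATA:STOP %d', stopPt));
    fprintf(interface, 'CURVE?');
    raw = [raw; binblockread(interface, 'int16')]; %#ok<AGROW>
    fread(interface, 1);  % discard the trailing terminator
end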

I figured it out! I think. Taking a couple weeks to step back and refresh really helped. Here's what I did:
1) Edit the driver's init function to configure a larger buffer size. Complete init code:
function init(obj)
% This method is called after the object is created.
% OBJ is the device object.
% End of function definition - DO NOT EDIT

% Extract the interface object. It must be closed before the
% buffer sizes can be changed.
interface = get(obj, 'Interface');
fclose(interface);

% Configure the buffer sizes to allow for large waveform transfers.
set(interface, 'InputBufferSize', 12e6);
set(interface, 'OutputBufferSize', 12e6); % originally set to 50,000
I originally tried to set the buffer sizes to 22e6 (I wanted to get 10 million points) but got out-of-memory errors. Supposedly the buffer should be a little more than double what you expect to transfer, plus space for headers. I probably don't need 2 million points' worth of "header", but eh.
2) Edit the driver's readwaveform() so it first queries the scope for the user-settable record length, then writes SCPI commands ensuring that the number of points transferred equals the number of points the user wants. The following snippet does the trick in readwaveform:
try
    % Specify the source channel.
    fprintf(interface, ['DATA:SOURCE ' trueSource]);
    %----------BEGIN CODE TO HANDLE MORE THAN 10k POINTS----------
    % Transfer exactly as many points as the scope is recording.
    recordLength = query(interface, 'HORizontal:RECordlength?');
    fprintf(interface, 'DATA:START 1');
    fprintf(interface, sprintf('DATA:STOP %d', str2double(recordLength)));
    %----------END CODE TO HANDLE MORE THAN 10k POINTS----------
    % Issue the curve transfer command.
    fprintf(interface, 'CURVE?');
    raw = binblockread(interface, 'int16');
    % Tektronix scopes send an extra terminator after the binblock.
    fread(interface, 1);
3) In the user code, send a SCPI command to the underlying interface object to set the record length:
% interfaceObj is a VISA object.
fprintf(interfaceObj, 'HORizontal:RECordlength 5000000');
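Putting the pieces together, the user code ends up looking roughly like this; the resource string, driver file name, and channel are illustrative placeholders:
% End-to-end sketch with the modified driver; resource string,
% driver file name, and channel are placeholders.
interfaceObj = visa('tek', 'USB0::0x0699::0x0401::C000000::INSTR');
deviceObj = icdevice('tektronix_dpo4104.mdd', interfaceObj);
connect(deviceObj);
fprintf(interfaceObj, 'HORizontal:RECordlength 5000000');
groupObj = get(deviceObj, 'Waveform');
[y, x] = invoke(groupObj(1), 'readwaveform', 'channel1');
disconnect(deviceObj);
delete([deviceObj interfaceObj]);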
There you have it. Hopefully this helps anyone else who's trying to figure out this issue.

Here's a bad idea. Start collecting 10,000 points. When you get to 5,000 points, start collecting again (you might need to run that in a separate thread). Keep going back and forth until all the data you need is stored in twenty-some data structures. Then combine the structures into one by lining up the data points. This might be more work than calling the SCPI commands directly, and it might have some nasty caveats I haven't thought of. But as I said, it's a bad idea...

Related

Efficient way to update plot data

I'd like to improve the efficiency of my GUI in App Designer, even if it involves frontloading figure generation once to save time in subsequent views/updates.
I'm trying to update a UIAxes which includes 4 patch() handles and approximately 10 plot3() handles referencing approximately 30 lines. The goal is to generate the figure and then have the ability to update the location of all of the data over 120 different time points ("playing" through the results).
The problem is that it takes approximately 1.5 seconds to update the figure once. Updating the patch() handles is approximately an order of magnitude faster than the plot3() handles. While my code doesn't need to run instantly, I was hoping it might update much faster (under 0.5 seconds per time point).
I'm using the following syntax to update (as an example) one of my plot3 handles, which includes 3 distinct line objects (thus the cell referencing of {'XData'}):
set(p1.foo1,{'XData'},num2cell([foo1.fem.nds(:,1,1) foo1.tib.nds(:,1,1)],2));
set(p1.foo1,{'YData'},num2cell([foo1.fem.nds(:,2,1) foo1.tib.nds(:,2,1)],2));
set(p1.foo1,{'ZData'},num2cell([foo1.fem.nds(:,3,1) foo1.tib.nds(:,3,1)],2));
This takes approximately 0.3 seconds to run, and it is only 1 of 5 plot3 handles. I've also tried running the set() command inside a loop to avoid the num2cell call, which I assumed was slow. Unfortunately, that slowed things down even more.
So I'm wondering if anyone is familiar with another solution to either:
1) Updating the plot data in a faster, more efficient way than I've described here.
2) Frontloading all of these figure assemblies (120 time points, 120 figures) and placing them into my GUI one at a time as I play through the time series, adding and removing each figure from my UIAxes as I cycle through the 120 points. I realize this will take more memory, but I'd rather spend memory than time.
I hope this was clear; any suggestions would be appreciated.
It seems as if you're asking for general advice. If you'd like more specific answers, try creating a minimal reproducible example.
Otherwise, some general tips:
Don't store data in cells. The set() method for line objects can be used with standard numeric arrays; see the primitive line documentation. A sketch follows these tips.
Structs in MATLAB have some overhead associated with them. It looks like you have multiple nested structs holding numeric arrays, and retrieving the data from those structs might be slow. You can always use tic/toc to measure how slow. In general, avoid structs when possible and store the numeric data as its own variable. For more info, see some advice on arrays of structs vs. structs of arrays.
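As a minimal sketch of the first tip, assuming h is an array of line handles and X, Y, Z are numeric matrices with one column per line (all names here are illustrative):
% Update each line with plain numeric vectors instead of cell arrays.
for k = 1:numel(h)
    set(h(k), 'XData', X(:,k), 'YData', Y(:,k), 'ZData', Z(:,k));
end
drawnow limitrate   % flush updates without rendering every frame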

How to plot all the stream lines in paraview?

I am simulating the lid-driven cavity case and trying to get all the streamlines with ParaView's stream tracer, but I only get the ones that intersect the seed line, so some vortices are not visible. How can I see all the streamlines in the domain?
Thanks a lot in advance.
To add a little bit to Mathieu's answer, if you really want streamlines everywhere, then you can create a Stream Tracer With Custom Source (as Mathieu suggested) and set your data to both the Input and the Seed Source. That will create a streamline originating from every point in your dataset, which is pretty much what you asked for.
However, while you can do this, you will probably not be happy with the results. First of all, unless your data is trivially small, this will take a long time to compute and create a large amount of data. Even worse, the result will be so dense that you won't be able to see anything. You will get all those interesting streamlines through vortices, but they will be completely hidden by all the boring streamlines around them.
Thus, you are better off with trying to derive a data set that contains seed points that are likely to trace a stream through the vortices that you are interested in. One thing you might want to try is to compute the vorticity of your vector field (Gradient Of Unstructured Data Set when turning on advanced option Compute Vorticity), find the magnitude of that (Calculator), and then use the Threshold filter to pull out the cells with large vorticity. Then use that as your Seed Source.
Another (probably better) option if your data is 2D or you can extract an interesting surface along the flow of your data is to use the Surface LIC plugin. Details can be found at https://www.paraview.org/Wiki/ParaView/Line_Integral_Convolution.
You have to choose a representative source for your streamlines.
You could use a "Sphere Source", selected in the StreamTracer properties.
If that fails, you can use a Stream Tracer With Custom Source with your own seed source, which you will have to create yourself first.

Serial input buffer size in MATLAB

I'm trying to read a lot of data coming from my Arduino, and I've set my input buffer to 500000 to make sure it can handle all of it. My data are 4 sensor readings, each sampled at 250 Hz. With the default buffer size (712), I used to get snags when plotting the readings in real time: the samples got disordered, which made the plot go crazy. I solved this by increasing the buffer size to 50000. But now, while this works for a while, if I run it for 15 minutes I get the same misbehavior after about 5 minutes, and on top of that the plotting gets slower. I do have some processing code along with the live plotting, but it shouldn't behave like this with such a big buffer.

I want to know whether the buffer will contain all the data from the beginning until it's full, or whether it keeps erasing older data as it fills (knowing that I've already saved the data in another vector and plotted it). I truly don't understand why this keeps happening.
Kind regards,
I.H
When the buffer gets full and new data arrives, the old data is erased. The behavior you are seeing is because your processing and plotting are slower than the flow of the data.
Try to make sure your processing is optimized.
Make sure the plotting is done via drawnow (drawnow limitrate skips queued graphics updates when the renderer can't keep up).
Try to avoid saving and keeping all the data.
If the problem is still there, you can implement a timer to make sure you read your data at a consistent rate; a rough sketch is below.
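A minimal sketch of the timer idea, using the same old-style serial interface as the question; the port, rate, and processing step are illustrative:
s = serial('COM6', 'BaudRate', 9600);
s.InputBufferSize = 500000;
fopen(s);

% Poll the port ten times a second and drain whatever has arrived.
t = timer('ExecutionMode', 'fixedRate', 'Period', 0.1, ...
          'TimerFcn', @(~, ~) drainPort(s));
start(t);

function drainPort(s)
    n = s.BytesAvailable;
    if n > 0
        data = fread(s, n, 'uint8');  % read everything waiting
        % ... process and plot data here ...
        drawnow limitrate             % keep the GUI responsive
    end
end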

How to write "Big Data" to a text file using Matlab

I am getting readings off an accelerometer connected to an Arduino, which is in turn connected to MATLAB through serial communication. I would like to write the readings to a text file. A 10-second reading will write around 1000 entries, which makes the text file around 1 kB.
I will be using the following code:
%%%%%// Communication %%%%%
arduino = serial('COM6', 'BaudRate', 9600);
fopen(arduino);
fileID = fopen('Readings.txt', 'w');
%%%%%// Reading from serial %%%%%
vib = [];                  % accumulated readings (Samples is set elsewhere)
for i = 1:Samples
    scan = fscanf(arduino, '%f');
    if isfloat(scan)
        vib = [vib; scan]; %#ok<AGROW>
        fprintf(fileID, '%0.3f\r\n', scan);
    end
end
fclose(fileID);
fclose(arduino);
Any suggestions on improving this code? Will it have a time or size limit? This code is to be run for 3 days.
Do not use text files; use binary files. 42718123229.123123 is 18 bytes in ASCII but only 8 bytes as a binary double. Don't waste space unnecessarily. If your data is going to be used later in MATLAB, then I'd just suggest you save .mat files.
Do not use a single file! Choose a reasonable file size (e.g. 100 MB) and switch to a new file whenever you reach that much data. You could do this by, e.g., saving one file per hour. This way you minimize the data lost if the software crashes 2 minutes before finishing.
Now, knowing the real dimensions of your problem, writing a text file is totally fine; nothing special is required to process such small data. But there is a problem with your code: the variable vib grows over time. That can cause bad performance, because you are not preallocating, and it may consume a lot of memory. I strongly recommend not keeping this variable; if you need the data, read it back from the file afterwards.
Another thing you should consider is verification of your data. What do you do when you receive fewer samples than you expect? Include timestamps! Be aware that these timestamps are not precise, because you add them after reception, but they let you tell whether just some random samples are missing (which may be interpolated afterwards) or a consecutive run of maybe 100 samples is missing. A rough sketch combining these suggestions follows.
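This sketch layers binary output, hourly file rotation, and a timestamp per sample on top of the question's loop; all names and sizes are illustrative:
t0 = tic;
currentHour = -1;
fid = -1;
for i = 1:Samples
    scan = fscanf(arduino, '%f');
    if isfloat(scan)
        elapsed = toc(t0);
        h = floor(elapsed / 3600);
        if h ~= currentHour                      % roll over to a new file hourly
            if fid > 0, fclose(fid); end
            fid = fopen(sprintf('readings_%03d.bin', h), 'w');
            currentHour = h;
        end
        fwrite(fid, [elapsed; scan], 'double');  % timestamp + sample(s)
    end
end
if fid > 0, fclose(fid); end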

Bootstrapping in MATLAB - how many original data points are used?

I have data sets for two groups, one much smaller than the other. For that reason, I am using the MATLAB bootstrapping function to estimate the performance of the smaller group. I have code that draws on my original data and generates 1000 'new' means. However, it is not clear to me how many of the original data points are used each time. Obviously, if all the original data were used (and none repeated), the same mean would be generated every time.
Can anyone help me out with this?
Bootstrapping means sampling with replacement. Each bootstrap sample uses the same number of points as the original data, but some of them are repeated. There are variants of bootstrapping that work slightly differently, however; see https://en.wikipedia.org/wiki/Bootstrapping_(statistics). A quick illustration follows.
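For instance, using bootstrp from the Statistics and Machine Learning Toolbox (the data below is made up; x stands in for the smaller group):
x = randn(25, 1);                 % illustrative data for the small group
% Each of the 1000 resamples draws numel(x) points from x with
% replacement, so repeats occur and the resampled means differ.
means = bootstrp(1000, @mean, x);
histogram(means)                  % spread of the bootstrap means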