I'm trying to read a lot of data coming from my Arduino, and I've set my input buffer to 500000 to make sure it can handle all this data. The data are 4 sensor readings, each sampled at 250 Hz. With the default buffer size (712), I used to get snags when plotting the readings in real time: the samples would get disordered, which made the plot go crazy. I solved this by increasing the buffer size to 500000. But now it only works for a while: if I run it for 15 minutes, I get the same misbehavior after about 5 minutes, and on top of that the plotting gets slower. I do have some processing code along with the live plotting, but it shouldn't behave like this with such a big buffer. I want to know whether the buffer will contain all the data from the beginning until it's full, or whether it keeps erasing older data once it gets full (knowing that I have already saved the data in another vector and plotted it). I truly don't understand why this keeps happening.
Kind regards,
I.H
When the buffer is full and new data arrives, the old data gets erased. The behavior you are seeing happens because your processing and plotting are slower than the incoming data flow.
Try to make sure that you optimize your processing.
Make sure the plotting is done with drawnow (or drawnow limitrate), so that the graphics queue does not build up.
Try to avoid saving and keeping all the data in memory.
If the problem is still there, you can implement a timer to make sure you read your data at consistent intervals; a minimal sketch follows.
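For example, a rough sketch of a timer-driven read loop (the port name, rates, and callback body are my assumptions, not your code):

s = serial('COM3', 'BaudRate', 115200, 'InputBufferSize', 500000);
fopen(s);
t = timer('Period', 0.05, ...               % fire every 50 ms
          'ExecutionMode', 'fixedRate', ...
          'TimerFcn', @(~,~) readAndPlot(s));
start(t);

function readAndPlot(s)
    % Drain everything that has accumulated since the last tick,
    % then update graphics without letting the queue build up.
    while s.BytesAvailable > 0
        scan = fscanf(s, '%f');
        % ... process/store scan here ...
    end
    drawnow limitrate
end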
TL;DR:
Can I get Grafana to show me the previous data point when the currently selected time period does not contain a data point? I have an example which sounds ridiculous, but at least it's simple to understand: I send data every 1 minute, and I wish to zoom into the last 30 seconds and still see data. You may ask "why not just zoom out to 2 minutes", but the reason is that other data on the same graph updates more often, and I wish to compare against it. See also the lengthier reasons below.
If not, how can I achieve what I want to achieve, see below?
Context
For a few years, I have been monitoring the water level in three of our basement sumps (which have pumps installed) by sending this data from Node-RED to InfluxDB, then visualising the sump levels in Grafana. I have set up three waterproof ultrasonic distance sensors, each pointed down a pipe that is inserted vertically into each sump. The water fills the pipe and the distance sensor, connected to an Arduino, sends me the reading. The Arduino also has other sensors connected (temp / humidity) and deals with distance calibrations to calculate the percent full of each sump. All this data is sent to Node-RED. In total, I am sending 4 values per sump: distance measurement in mm, percent full, temp, humidity. So that's 12 fields. Data is sent every 2 seconds, because I wished to have a reasonably high resolution to see nice curves in graphs.
I also decided to store all this data so that I could later troubleshoot issues (we have had sewage floods where water could not be pumped away, etc.) and design warning systems for these issues based on the data.
Storing 12 values every 2 seconds, over the course of a number of years, takes up a lot of space (8 GB).
Nature of the data
Storing this resolution of data has also helped me be able to describe the nature of the data. I will do so here.
(1) Non-meaningful NOISE (see below): the percent-full reading goes up and down by 1 or 2 percent every couple of seconds.
(2) Meaningful DRIFT (see below): I don't mean sensor drift; I am referring to actual water levels changing slowly over time, e.g. over 1 day or 1 week. Perhaps condensation on the walls drips down into the sump, or water evaporates from it, and the value can waver by a few percent over the course of a day. Each sump has slightly different characteristics.
(3) Meaningful MONITORING DATA: during wet weather, depending on rainfall amount, the sumps fill up over the course of, say, 30 minutes to 3 hours. Then the pumps run, the water level drops again, wavers a bit, and the sumps continue to fill. If the rain has stopped, you can see a lovely curve as the inflow slows progressively (see the green line below).
Solution to downsample
I know Influx has its own downsampling possibilities, but because of the nature of the data (it can barely vary for 2 months, yet when it does vary I really need to capture it in detail), I don't think lowering the sample rate is a great idea.
I have some understanding of digital filters (e.g. low-pass, etc.) but have never programmed one myself. So I have written a basic filter in JavaScript (a Node-RED function) to filter the data in real time as follows: only send a reading when it has changed from the previously sent one by some amount x (and update the stored previous value when that occurs).
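The core of that function, sketched here in MATLAB for illustration (the vector name readings and the threshold value are my assumptions, not the actual Node-RED code):

% Change-threshold ("report by exception") filter
x = 2;                          % minimum change required to report
last = readings(1);
keep = false(size(readings));
keep(1) = true;                 % always keep the first reading
for k = 2:numel(readings)
    if abs(readings(k) - last) >= x
        keep(k) = true;
        last = readings(k);     % update the reference only when we emit
    end
end
filtered = readings(keep);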
This has already vastly reduced the amount of data being stored, and I can vary x to filter out the noise shown in my first graph above, at the expense of resolution when the pumps run. Even with x set to 2, it still vastly reduces the data over long periods of dry weather.
So, on to my problem! Now data is not logged to InfluxDB unless there is some meaningful change, which means that when I zoom in to, e.g., a 15-minute timeframe, there is nothing to see.
Grafana does have the option of "fill (previous)", but this draws a line between points on the existing graph rather than showing the previous value as if it hasn't changed since that point. Now my Grafana dashboard looks a bit sad :(
One proposed solution is, in addition to sending "delta" data, to send "summary" data: a full suite of values every 1 minute regardless of whether anything changed. But then we get the noise back again, plus pointless storage.
Any other ideas?
I have some data which is time-stamped by an NMEA GPS string that I decode in order to obtain the individual fields Year, Month, Day, and so on.
The problem is that on a few occasions the GPS (probably due to some signal loss) goes haywire and spits out very wrong values. This generates spikes in the timestamp data, as you can see from the attached picture, which plots the vector of Days as output by the GPS.
As you can see, the GPS data are generally well behaved: the days run from 1 to 30/31 each month before falling back to 1 at the start of the next month. At certain moments, though, the GPS spits out a random day.
I tried all the standard MATLAB functions for despiking (such as medfilt1 and findpeaks), but either they are not suited to the task or I do not know how to set them up properly.
My other idea was to loop over the differences between adjacent elements, but the vector is so big that the computer cannot really handle it.
Is there a vectorized way to go down that road and detect those spikes?
Thanks so much!
You need to filter your data using a simple low-pass filter to get rid of the outliers:
windowSize = 5;                          % length of the moving-average window
b = (1/windowSize)*ones(1,windowSize);   % FIR coefficients: equal weights
a = 1;                                   % no feedback, so a plain moving average
FILTERED_DATA = filter(b,a,YOUR_DATA);   % apply the causal moving-average filter
Just play a bit with the windowSize until you get the smoothness you want.
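If you need to detect the spikes outright rather than smooth them, the diff idea from the question vectorizes cleanly. A minimal sketch, assuming the day numbers are in a vector days and that the thresholds suit your data:

d = [0; diff(days(:))];               % jump from the previous sample
rollover = (d >= -31) & (d <= -27);   % legitimate month wrap: 28/29/30/31 back to 1
spikes = (abs(d) > 2) & ~rollover;    % any other jump of more than 2 days
% Each spike is flagged on entry and again on exit; the exit flag hits one
% good sample, which the fill below repairs harmlessly.
days(spikes) = NaN;
days = fillmissing(days, 'previous'); % patch flagged samples (R2016b+)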
I am getting some readings off an accelerometer connected to an Arduino, which is in turn connected to MATLAB through serial communication. I would like to write the readings into a text file. A 10-second reading writes around 1000 entries, which makes the text file around 1 kbyte.
I will be using the following code:
%% Communication
arduino = serial('COM6','BaudRate',9600);
fopen(arduino);
fileID = fopen('Readings.txt','w');
%% Reading from serial
vib = [];                              % collected samples (grows each iteration)
for i = 1:Samples                      % Samples = total number of reads
    scan = fscanf(arduino,'%f');
    if isfloat(scan)
        vib = [vib; scan];
        fprintf(fileID,'%0.3f\r\n',scan);
    end
end
fclose(fileID);
fclose(arduino);
Any suggestions on improving this code? Will it hit a time or size limit? This code is to be run for 3 days.
Do not use text files; use binary files. 42718123229.123123 is 18 bytes in ASCII, but only 4 bytes as a single-precision binary value. Don't waste space unnecessarily. If your data is going to be used later in MATLAB, then I suggest you save to .mat files.
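A minimal sketch of the binary route (the file name is my choice):

fid = fopen('readings.bin','w');
fwrite(fid, vib, 'single');            % 4 bytes per sample vs ~18 ASCII characters
fclose(fid);

% Reading it back later:
fid = fopen('readings.bin','r');
data = fread(fid, Inf, 'single');
fclose(fid);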
Do not use a single file! Choose a reasonable file size (e.g. 100 MB) and make sure that when you reach that much data you switch to another file. You could do this by, e.g., saving one file per hour. This way you minimize the damage if the software crashes 2 minutes before finishing.
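For example, a sketch of hourly rotation (the file-name pattern and the surrounding loop are assumptions):

currentHour = ''; fid = -1;
for i = 1:Samples
    h = datestr(now, 'yyyymmdd_HH');
    if ~strcmp(h, currentHour)         % the hour changed: switch files
        if fid > 0, fclose(fid); end
        fid = fopen(sprintf('readings_%s.bin', h), 'w');
        currentHour = h;
    end
    fwrite(fid, fscanf(arduino,'%f'), 'single');
end
if fid > 0, fclose(fid); end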
Now, knowing the real dimensions of your problem, writing a text file is totally fine; nothing special is required to process such small data. But there is a problem with your code: you are growing the variable vib over time. That can cause bad performance, because you are not preallocating, and it may consume a lot of memory. I strongly recommend not keeping this variable; if you need the data, read it back from the file afterwards.
Another thing you should consider is verification of your data. What do you do when you receive fewer samples than you expect? Include timestamps! Be aware that these timestamps are not precise, because you add them on receipt, but they let you tell whether just some random samples are missing (those can be interpolated afterwards) or a consecutive run of maybe 100 samples is missing.
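For instance (the timestamp format is my choice):

scan = fscanf(arduino, '%f');
fprintf(fileID, '%s %0.3f\r\n', datestr(now,'yyyy-mm-dd HH:MM:SS.FFF'), scan);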
I have a gyro sensor connected to my Python program (over a UDP socket) that sends data to the Python console in real time at 200 Hz.
I want to change the frequency of the data coming to my console but could not find a good way to do it.
I was thinking about doing it with filters, like a mean filter. Any ideas?
If you want regular updates, use a windowing mechanism: take the last n values and store their average; then discard the next two incoming values and take the last n values again. This example would yield values at a frequency of 200 Hz / 2 = 100 Hz.
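A sketch of the idea in MATLAB (the window length n and the input vector x are assumptions):

n = 4;
y = movmean(x, [n-1 0]);   % causal mean over the last n values
y = y(1:2:end);            % keep every 2nd output: 200 Hz -> 100 Hz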
If you only want to see events when changes have occurred, store the last value, compare the current value with the stored one, and emit an event if it has changed, updating the stored value. As you're dealing with sensors (and thus a little fuzziness), you probably want to implement a hysteresis.
You can even raise the frequency by creating extra values between the received ones through interpolation. For a steady output frequency you would have to take care of your timing, though.
I am building a MATLAB GUI to do data collection from a Tektronix DPO4104 oscilloscope (MATLAB driver here).
I am playing around with tmtool and with my GUI code and have found that the driver can only collect 10,000 data points, regardless of whether the oscilloscope is set to show more than 10k points. I found this post on CCSM, but it hasn't been terribly helpful. (I'm the last post on there if you care to read it.) I am using the DPO4104 driver, whereas that post discusses the DPO4100 driver, I believe.
As far as I can tell, the steps are (a rough sketch in code follows this list):
Edit the driver's readwaveform function to account for the current recordLength - in my case, say 100,000 points.
Manually edit the driver's MaxNumberPoint from 10,000 to 100,000. (In my case the default number was 0; I changed it to 100,000.)
Manually edit EndingPoint. I set this to 100,000 also.
Before creating a device object, set(interfaceObj, 'InputBufferSize', 2.5*recordLength), that is, make sure the input buffer can hold more than 100,000 points. It's recommended to use at least double the expected transfer size; I used 2.5 just because.
Build the device object and waveform object, connect() to it, and readwaveform. Profit.
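Putting the steps together, roughly (the VISA resource string and driver file name are placeholders, not my actual values):

recordLength = 100000;
interfaceObj = visa('tek', 'USB0::0x0699::0x0401::C000001::INSTR');
set(interfaceObj, 'InputBufferSize', 2.5*recordLength);       % step 4
deviceObj = icdevice('tektronix_dpo4104.mdd', interfaceObj);  % the edited driver (steps 1-3)
connect(deviceObj);
wf = deviceObj.Waveform(1);
[y, x] = invoke(wf, 'readwaveform', 'channel1');              % step 5
disconnect(deviceObj);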
I am still unable to collect more than 10,000 points, either through tmtool or through my GUI. Any help would be appreciated.
I got hold of a Tektronix engineer; he basically told me to just use the SCPI commands directly and skip the driver. While annoying, this might be the simplest solution.
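For anyone going that route, a minimal sketch of raw SCPI over a bare VISA object (the resource string and record length are assumptions; the commands mirror the ones in the answer below):

vo = visa('tek', 'USB0::0x0699::0x0401::C000001::INSTR');
vo.InputBufferSize = 12e6;
fopen(vo);
fprintf(vo, 'HORizontal:RECordlength 100000');
fprintf(vo, 'DATA:SOURCE CH1');
fprintf(vo, 'DATA:START 1');
fprintf(vo, 'DATA:STOP 100000');
fprintf(vo, 'CURVE?');
y = binblockread(vo, 'int16');
fread(vo, 1);                 % consume the extra terminator the scope sends
fclose(vo);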
Is it possible for you to collect the data points 10,000 at a time, then save them somewhere, collect the next 10,000, append them to the saved points, repeat?
It's a work-around, sure.
I figured it out! I think. Taking a couple weeks to step back and refresh really helped. Here's what I did:
1) Edit the driver's init function to configure a larger buffer size. Complete init code:
function init(obj)
% This method is called after the object is created.
% OBJ is the device object.
% End of function definition - DO NOT EDIT
% Extract the interface object.
interface = get(obj, 'Interface');
fclose(interface);
% Configure the buffer size to allow for waveform transfer.
set(interface, 'InputBufferSize', 12e6);
set(interface, 'OutputBufferSize', 12e6); % originally set to 50,000
I originally tried to set the buffer sizes to 22e6 (I wanted to get 10 million points), but I got out-of-memory errors. Supposedly the buffer should be a little more than double what you expect to get out, plus space for headers. I probably don't need 2 million points' worth of "header", but eh.
2) Edit the driver's readwaveform() to first query the user-settable number of points to collect, then write SCPI commands to the scope to ensure that the number of data points to be transferred equals the number the user desires. The following snippet does the trick in readwaveform:
try
% Specify source
fprintf(interface,['DATA:SOURCE ' trueSource]);
%----------BEGIN CODE TO HANDLE MORE THAN 10k POINTS----------
recordLength = query(interface, 'HORizontal:RECordlength?');
fprintf(interface, 'DATA:START 1');
fprintf(interface, 'DATA:STOP %d', str2num(recordLength));
%----------END CODE TO HANDLE MORE THAN 10k POINTS----------
% Issue the curve transfer command.
fprintf(interface, 'CURVE?');
raw = binblockread(interface, 'int16');
% Tektronix scopes send an extra terminator after the binblock.
fread(interface, 1);
3) In the user code, send a SCPI command to the underlying interface object to change the record length:
% interfaceObj is a VISA object.
fprintf(interfaceObj, 'HORizontal:RECordlength 5000000');
There you have it. Hopefully this helps out anyone else that's trying to figure out this issue.
Here's a bad idea. Start collecting 10,000 points; when you get to 5,000, start collecting again (you might need to run that in a new thread). Keep going back and forth until all the data you need is stored in 20-some data structures. Then combine the structures into one by lining up the data points. This might be more work than calling the SCPI commands directly and might have some nasty caveats I haven't thought of. But as I said, it's a bad idea...