Finding where data levels off and rise times - MATLAB

My data looks like this:
I am trying to find the following info about my data:
Rate of rise on the "transient" portion
Time to steady state and the steady-state average
I think that stepinfo is my best bet to do this, but it seems to want to take the final value as the steady-state value, which isn't giving me the best result. And I cannot find the average value of the steady region until I know when it begins. Is there a way to set some bounds on the steady-state search? In the picture I linked, it could be that data staying within +/- 0.25 for 50 data points counts as steady state?
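For reference, the "+/- 0.25 for 50 data points" bound can be checked directly, without stepinfo. A minimal sketch, assuming the samples are in a vector x (the variable names are illustrative):

tol = 0.25; winLen = 50;              % the proposed steady-state bounds
nWin = numel(x) - winLen + 1;
isSteady = false(1, nWin);
for i = 1:nWin                        % slide a 50-sample window over x
    w = x(i:i+winLen-1);
    isSteady(i) = all(abs(w - mean(w)) <= tol);
end
iStart = find(isSteady, 1);           % first sample of the steady region
ssAvg = mean(x(iStart:end));          % steady-state average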

What you can do is:
1. Decide what the slope of the curve should be at the transition between transient and steady state
2. Smooth your signal
3. Find the difference between successive points on the graph
4. Find the first place where the difference between points is lower than the value you selected in step 1
To do this, keep in mind:
The differences at the beginning average out to zero, so you have to skip those values.
One way to do this is simply: x(x < 0.1 * max(x)) = []; That way you remove the entire start of the curve. You won't need it for this part anyway. Remember to keep a backup of x.
A simple way to smooth the signal is: smooth_x = arrayfun(@(t) mean(x(t:t+k)), 1:numel(x)-k); You need to find an appropriate value for k.
Even a smoothed curve will have "bumps", thus you might want to compare points that are not adjacent, for instance check x(k+10) - x(k). If the average incline between those two points is lower than the value you selected in step 1, you're happy. A combination of find and diff should do the trick here; a combined sketch appears at the end of this answer.
Once you have smoothed it, you can use diff to find the average inclination for both the transient and the stable part.
There is no way for me to tell where the curve goes from transient to stable; that's a decision you need to make. It could, for instance, be less than 0.2 l/min per 10 seconds.
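A minimal sketch combining the steps above (x is the data vector; k and slopeLimit are values you would tune yourself, per step 1 and the notes):

xBackup = x;                          % keep a backup, as noted above
x(x < 0.1 * max(x)) = [];             % drop the flat start of the curve
k = 5;                                % smoothing window; tune to your data
smooth_x = arrayfun(@(t) mean(x(t:t+k)), 1:numel(x)-k);
incline = (smooth_x(11:end) - smooth_x(1:end-10)) / 10;   % 10-sample incline
slopeLimit = 0.02;                    % hypothetical threshold from step 1
iSteady = find(incline < slopeLimit, 1);   % first index past the transient
ssAvg = mean(smooth_x(iSteady:end));  % steady-state average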


What's the most efficient way to use large data from Excel in my C# code?

I ran a computer simulation of my pendulum to measure the time taken to reach the lowest point, for every velocity and every angle.
As you can imagine there is a lot of data: thousands of lines covering all angles and velocities.
On every frame, I will be measuring the velocity and angle of the pendulum, and will look for the closest data in my Excel spreadsheet.
How can I go about this to make sure it's not too CPU-intensive?
Should I create a massive array where every element corresponds to a certain angle: for example, myArray[30] would hold all velocities and times for all my data between 30.0 degrees and 30.999? (That way it will avoid lots of if statements.)
Or should I keep everything in my Excel spreadsheet?
Any suggestions?
The best approach in my opinion would be to divide your data into intervals, since you have to access that data on every frame. Then, when you measure the velocity and angle, you can look up the right interval and access only that part of your data.
I would find the maximum and minimum of your data points while importing into Unity and then split that range into intervals of width (maximum - minimum) / numOfIntervals. Let's say your interval width is 5 degrees. When you get an angle of 17 you can do (int)(17 / 5) = 3 (assuming indexes start from zero) and use index 3 in your structure. This can be a dictionary or an array of instances of a class you define, depending on your data.
I can try to help further if you can share the structure of your data. But in my opinion an even distribution of data across the intervals is important.
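A minimal sketch of the interval lookup described above, written in MATLAB for brevity since the arithmetic is language-agnostic (minAngle, maxAngle, numIntervals, and buckets are assumed names, not anything from the question):

minAngle = 0; maxAngle = 360;         % assumed range of the imported angles
numIntervals = 72;                    % gives a width of 5 degrees per interval
width = (maxAngle - minAngle) / numIntervals;
% buckets{i} would hold only the rows whose angle falls in interval i,
% precomputed once at import time.
angle = 17;                           % the angle measured on a given frame
idx = floor((angle - minAngle) / width) + 1;   % 1-based interval index
% Only buckets{idx} now needs to be searched for the closest data point.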

Tuning gain tables to match two curves

I have two data sets; let us name them "actual speed" and "desired speed". My main objective is to match the actual speed to the desired speed.
But to do that, in my case, I need to tune an FF (1x10), an Integral (10x8), and a Proportional (10x8) gain table.
My approach so far has been as follows:
1. First, start the iteration with 0.1 as the initial value in the first cell (FF[0]) of the FF table.
2. Then find the R-squared or correlation between the two data sets (i.e. actual speed and desired speed).
3. Increment the value of the first cell (FF[0]) by 0.25 and again compute the R-squared or correlation of the two data sets.
4. Once the cell value (FF[0]) reaches 2 (the gain's maximum value, already defined by the lab), evaluate the R-squared values and write back into FF[0] the gain value that gives the minimum error between the two curves.
5. Then tune the Integral and Proportional tables in the same way for the same RPM range.
6. Once they are tuned, move on to the next RPM range and repeat steps 2-5 (RPM ranges: 800-1000; 1000-1200; ...; 3000-3200).
Now the problem is that this process (sketched below) takes far too long to complete; for example, it takes around an hour to tune a single cell of FF, which is very slow.
If possible, please suggest any other approach I could try for tuning the tables. I am using MATLAB R2010a and cannot move to another version because my controller can only communicate with this one, and I cannot use an app for tuning since my GUI is already communicating with the controller and the two data sets are generated in real time.
In the given figure, let us take the (X1,Y1) curve as the desired speed and the (X2,Y2) curve as the actual speed.
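For concreteness, steps 1-4 above amount to the following per-cell sweep; a minimal sketch, where getActualSpeed and desired are hypothetical stand-ins for the real-time controller run and the desired-speed data set:

candidates = 0.1:0.25:2;              % initial value, increment, lab maximum
err = zeros(size(candidates));
for i = 1:numel(candidates)
    FF(1) = candidates(i);            % write the trial gain into the table
    actual = getActualSpeed(FF);      % hypothetical: one real-time run
    err(i) = sum((actual - desired).^2);   % or 1 - R^2, as in step 2
end
[~, best] = min(err);
FF(1) = candidates(best);             % keep the gain with the minimum error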

How to approximate time-series data

I'm not sure if this is the right term, but I think I want to approximate (rather than smooth) a data set. I have 30 data points, presented in the chart below (the red line with dots).
I want to approximate the dataset so it can be described with fewer data points. The black line represents what I want to achieve.
I want to be able to define an approximation level which will control how much the resulting data set differs from the original one.
The approximated data set should contain a set of data points which I can connect together using straight lines.
What is the right algorithm or math function to solve this problem? I don't expect an implementation here, but rather some suggestions on where to start.
I wrote my own implementation of the approximation algorithm. It works in most cases, but there are certain situations in which it returns non-optimal data.
The example below shows three dotted lines: the thin red line is the original dataset, the thick red-black dotted line is generated by my algorithm, and the green line is what I'd like to achieve.
function approximate(array, tolerance) {
    var previousValue;
    return array.map(function (dataPoint, index) {
        var approximation = dataPoint;
        if (index > 0) {
            // Within tolerance: repeat the previous value so the segment stays flat.
            if (Math.abs(previousValue - dataPoint) < tolerance) {
                approximation = previousValue;
            } else {
                // Too far away: start a new flat segment at this point.
                previousValue = dataPoint;
            }
        } else {
            previousValue = dataPoint;
        }
        return approximation;
    });
}
There are two cases here:
1. the "glitch" shown in the data is significant, meaning that you cannot smooth it away.
2. all the data shown can be approximated and the "glitch" is insignificant.
In case (1), you may consider approximating by templates (e.g. wavelets) or using basic differential analysis to detect and keep the "glitch" (e.g. meshes).
In case (2), you may fit an MA or ARIMA model, where the "glitch" can be analyzed further through the roots.
Okay, point of clarification: are you looking to smooth the data or to approximate it? If you are going to smooth the data then, by definition, it will get rid of the little bumps and dips in the series. On the other hand, if the goal is to accurately portray all those dips and bumps, then you do NOT want smoothing. I'm going to talk about smoothing; tell me if you want the other.
Okay, the best way I know to smooth data is to use an alpha value. The equation is T(n+1) = (1 - α)·T(n) + α·Data(n+1). What this means is that you choose what portion of the next function point is determined by your series history and what portion by the current data point.
Example graph with alpha = 0.5
Take a look at this data. Here α = 0.5, so the function conforms to the data, but not closely. The one below is the same, but with α = 0.25, so the data is followed even less closely and the function is much smoother. There is also a third option where α decreases over time: initially it can be very high, so you quickly follow the data, but as α decreases the trend becomes smoother and stays smooth. Finally, you can set a hard limit on the minimum α; this ensures that you always retain some minimum responsiveness to the data.
Example graph with alpha = 0.25
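A minimal sketch of the smoothing equation above, in MATLAB for brevity (data is an assumed vector of samples; the decreasing-α variant is noted in a comment):

alpha = 0.25;                         % smaller alpha gives a smoother trend
T = zeros(size(data));
T(1) = data(1);                       % seed the trend with the first sample
for n = 1:numel(data)-1
    T(n+1) = (1 - alpha)*T(n) + alpha*data(n+1);
end
% Decreasing-alpha variant with a hard floor, as described above:
% inside the loop, use alpha_n = max(alphaMin, alpha0/n) instead of alpha.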

MATLAB 'step' response size doesn't change rise time?

When using MATLAB's step command to find the step response of a system's transfer function, it's possible to change the step amplitude from the default of 1 to something else (e.g. 1e-2), like so:
stepOpt = stepDataOptions('StepAmplitude', 1e-2);
step(TF_closed_loop, stepOpt);
In this case the TF is a physical system, e.g. a motor. However, although the resulting step amplitude is indeed different, the time scale doesn't change at all. E.g. if it took 100 seconds to reach 1, it still takes 100 seconds to reach 1e-2... and this is not a reasonable result for a physical system, which would take less time to travel a shorter distance.
Is there another required setting in MATLAB to make this accurate?
It's already accurate. By changing the step amplitude you are only multiplying the input by a constant newA/oldA. Because the system is linear, the response is the same as in the first case, just multiplied by that same constant. So of course it is going to take the same amount of time to reach a given percentage of the steady-state value.
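A minimal sketch demonstrating this (assumes the Control System Toolbox; the first-order system is an arbitrary example, not the asker's motor):

TF = tf(1, [10 1]);                   % example system with a 10 s time constant
[y1, t1] = step(TF, stepDataOptions('StepAmplitude', 1));
[y2, t2] = step(TF, stepDataOptions('StepAmplitude', 1e-2));
% y2 is just 1e-2 * y1 on the auto-chosen time grid, so the rise times match:
S1 = stepinfo(y1, t1);
S2 = stepinfo(y2, t2);
fprintf('rise times: %.2f s vs %.2f s\n', S1.RiseTime, S2.RiseTime);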

Getting the value of a filter at an arbitrary time

Context: I'm trying to improve the values returned by the iPhone CLLocationManager, although this is a more generally applicable problem. The key issue is that CLLocationManager returns data on the current velocity as and when it feels like it, rather than at a fixed sample rate.
I'd like to use a feedback equation to improve accuracy
v=(k*v)+(1-k)*currentVelocity
where currentVelocity is the speed returned by didUpdateToLocation:fromLocation: and v is the output velocity (and also used for the feedback element).
Because of the "as and when" nature of didUpdateToLocation:fromLocation: I could calculate the time interval since it was last called, and do something like
for (i=0;i<timeintervalsincelastcalled;i++) v=(k*v)+(1-k)*currentVelocity
which would work, but is wasteful of cycles, especially as I probably want timeintervalsincelastcalled to be measured in tenths of a second.
Is there a way to solve this without the loop? I.e. can I rework (integrate?) the formula so that I put an interval into the equation and get the same answer as I would have gotten by iterating?
If you write your original equation as
v = k*vCurrent + (1-k)*v
you can apply the answer from another SO question.
Instead of iterating, you could just choose the value of k based on the size of the interval. For example, if the interval length is an hour, you'd probably want k to be 0.
It would be easy to precompute k for a variety of interval sizes so as to give the same answer as the iteration would. Just compute the change by iterating (you already have code for that), and then compute the value of k that would give you that result algebraically.
It's a common programmer jedi trick to have a table of lookup values in place of expensive calculations. (There, now my answer has something to do with code!)
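The algebra alluded to above has a closed form, because the recurrence is linear: iterating v = k*v + (1-k)*c for n ticks is the same as one update with k^n, i.e. v_n = k^n * v_0 + (1 - k^n) * c. A minimal sketch (in MATLAB for brevity; the numbers are made up):

k = 0.9; c = 5; v0 = 2; n = 17;       % hypothetical filter state and interval
v = v0;
for i = 1:n
    v = k*v + (1-k)*c;                % the original per-tick update
end
vClosed = k^n * v0 + (1 - k^n) * c;   % single-step equivalent
fprintf('iterated: %.6f   closed form: %.6f\n', v, vClosed);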