Implementing runtime-constant persistent "LUT"s in MATLAB functions

I am implementing (for class, so no built-ins!) a variant of the Hough Transform to detect circles in an image. I have a working product, and I get correct results. In other words, I'm done with the assignment! However, I'd like to take it one step further and try to improve the performance a little. Also, to forestall the inevitable responses: I know MATLAB isn't exactly a performance-oriented language, and I know the Hough transform isn't exactly a cheap algorithm either, but hear me out.
When generating the Hough accumulation space, I end up needing to draw a LOT of circles (approximately 75 for every edge pixel in the image; one for each search radius). I wrote a nifty function to do this, and it's already fairly optimized. However, I end up recalculating lots of circles of the same radius (expensive) at different locations (cheap).
An easy optimization was to precalculate one circle of each radius centered at zero, and then just select the proper circle and shift it into the correct position. This was easy, and it works great!
The trouble comes when trying to access this lookup table of circles.
I initially made it a persistent variable, as follows:
function [x_subs, y_subs] = get_circle_indices(circ_radius, circ_x_center, circ_y_center)

persistent circle_lookup_table;

% Make sure the table has already been generated; if not, generate it.
if (isempty(circle_lookup_table))
    circle_lookup_table = generate_circles(100); % upper bound on circle size
end

% Get the right circle from the struct, and center it at the requested center.
x_subs = circle_lookup_table(circ_radius).x_coords + circ_x_center;
y_subs = circle_lookup_table(circ_radius).y_coords + circ_y_center;

end
However, it turns out this is SLOW!
Over 200,000 function calls, MATLAB spent an average of 9 microseconds per call just establishing that the persistent variable exists (not the isempty() check, but the actual variable declaration). This is according to MATLAB's built-in profiler.
This added back most of the time gained from implementing the lookup table.
I also tried implementing it as a global variable (similar time spent checking whether the variable is declared) and passing it in as an argument (which made each function call much more expensive).
So, my question is this:
How do I provide fast access inside a function to runtime-constant data?
I look forward to some suggestions.

It is NOT runtime-constant data, since your function is able to generate the table itself. So your main task is to remove that step from the function: before any calls to this critical function, make sure the array has already been generated elsewhere, outside the function.
However, there is a nice trick I've seen in MATLAB's own files, more specifically in bwmorph. For each operation of that particular function which requires a LUT, they created a function which returns the LUT itself (the LUT is written explicitly in the file). They also add the instruction coder.inline('always') to ensure that this function will be inlined. It seems to be quite efficient!
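A rough sketch of that pattern (the table below is a tiny placeholder, not bwmorph's actual LUT; coder.inline only has an effect during code generation and is a no-op in ordinary MATLAB execution):

function lut = neighborhood_lut()
% Sketch of the bwmorph-style pattern: the table is written out literally
% in its own small function instead of living in a persistent variable.
coder.inline('always');   % inlining hint for code generation; harmless otherwise
lut = [0 1 1 0 1 0 1 1 1 1 0 0 0 1 0 1];   % placeholder values
end

For the circle case, the equivalent would be a small function that simply returns the precomputed circle table, built once outside the hot path.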

Related

Efficient way to update plot data

I'd like to improve the efficiency of my App Designer GUI, even if it means frontloading the figure generation once so as to save time in subsequent views/updates.
I'm trying to update a UIAxes which includes 4 patch() handles, and approximately 10 plot3() handles referencing approximately 30 lines. The goal is to generate the figure, and then have the ability to update the location of all of the data over 120 different timepoints. ("Play" through the results)
The problem is that it takes approximately 1.5 seconds to update the figure once. Updating the patch() handles is approximately an order of magnitude faster than the plot3() handles. While my code doesn't need to run instantly, I was hoping it might update much faster (< 0.5 seconds per timepoint).
I'm using the following syntax to update (as an example) one of my plot3 handles, which includes 3 distinct line objects (thus the cell referencing of {'XData'}):
set(p1.foo1,{'XData'},num2cell([foo1.fem.nds(:,1,1) foo1.tib.nds(:,1,1)],2));
set(p1.foo1,{'YData'},num2cell([foo1.fem.nds(:,2,1) foo1.tib.nds(:,2,1)],2));
set(p1.foo1,{'ZData'},num2cell([foo1.fem.nds(:,3,1) foo1.tib.nds(:,3,1)],2));
This takes approximately 0.3 seconds to run, and that is only 1 of my 5 plot3 handles. I've also tried running the set() command inside a loop to avoid the num2cell call, as I assumed that was the slow part. Unfortunately that slowed things down even more.
So I'm wondering if anyone is familiar with another solution to either:
1) Updating the plot data in a faster, more efficient way than I've described here.
2) Frontloading all of these figure assemblies (120 time points, 120 figures) and placing them into my GUI one at a time, adding and removing each individual figure from my UIAxes as I play through the 120 points of my time series. I realize this will take more memory, but I'd rather spend memory than time.
I hope this was clear, any suggestions would be appreciated.
It seems as if you're asking for general advice. If you'd like more specific answers, try creating a minimal reproducible example.
Otherwise, some general tips:
Don't store data in cells. The set() method for line objects can be used with standard numeric arrays (see the primitive line documentation, and the sketch after these tips).
Structs in MATLAB have some overhead associated with them. It looks like you have multiple nested structs holding numeric arrays, and retrieving the data from those structs might be slow. You can always use tic/toc to see how slow it is, but in general, avoid structs when possible and store the numeric data as its own variable. For more info, see some advice on arrays of structs vs. structs of arrays.
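To illustrate both tips, here is a hedged sketch: p1.foo1 is your array of line handles, while xdata, ydata and zdata are hypothetical N-by-numLines matrices pulled out of the nested structs once, before the update loop:

% Sketch: update each line handle with plain numeric vectors, and time it.
tic
for k = 1:numel(p1.foo1)
    set(p1.foo1(k), 'XData', xdata(:, k), ...
                    'YData', ydata(:, k), ...
                    'ZData', zdata(:, k));
end
drawnow limitrate   % coalesce the redraws instead of forcing one per set call
toc

Whether this beats the num2cell version is something to measure on your actual data, since you mention that one looped variant was actually slower for you.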

How to plot all the stream lines in paraview?

I am simulating the lid-driven cavity case and I am trying to get all the streamlines with ParaView's Stream Tracer, but I only get the ones that intersect the seed line, and because of that there are vortices that are not visible. How can I see all the streamlines in the domain?
Thanks a lot in advance.
To add a little bit to Mathieu's answer, if you really want streamlines everywhere, then you can create a Stream Tracer With Custom Source (as Mathieu suggested) and set your data to both the Input and the Seed Source. That will create a streamline originating from every point in your dataset, which is pretty much what you asked for.
However, while you can do this, you will probably not be happy with the results. First of all, unless your data is trivially small, this will take a long time to compute and create a large amount of data. Even worse, the result will be so dense that you won't be able to see anything. You will get all those interesting streamlines through vortices, but they will be completely hidden by all the boring streamlines around them.
Thus, you are better off with trying to derive a data set that contains seed points that are likely to trace a stream through the vortices that you are interested in. One thing you might want to try is to compute the vorticity of your vector field (Gradient Of Unstructured Data Set when turning on advanced option Compute Vorticity), find the magnitude of that (Calculator), and then use the Threshold filter to pull out the cells with large vorticity. Then use that as your Seed Source.
Another (probably better) option if your data is 2D or you can extract an interesting surface along the flow of your data is to use the Surface LIC plugin. Details can be found at https://www.paraview.org/Wiki/ParaView/Line_Integral_Convolution.
You have to choose a representative source for your streamlines.
You could, for example, use a "Sphere Source" in the Stream Tracer properties.
If that fails, you can use a Stream Tracer With Custom Source and point it at your own seed source, which you will have to create yourself first.

Change simulink parameters at runtime from the code/block flow

My initial problem is that I have a continuous transfer function which coefficients change with time.
Currently the TF's coefficients are expressed in function of the block mask parameters. These parameters are tunable, and if I change the value in the mask parameters dialog during a simulation the response seems to react appropriately.
However, how can I do just that in the code/block flow? Basically, I have the block parameter 'maskParam' which is set using the mask parameters dialog, and in the mask initialization commands: param = maskParam. param is used in the transfer function, and I would like to change it in real time (as param = maskParam*f(t)).
I have already looked around and found relevant solutions, but either they are unbelievably complicated, or the only transfer function we are allowed to modify at runtime is the discrete one, and 1) I would like to avoid z-transforming my quite complex TF (I don't have the Control System Toolbox), and 2) the sampling time seems to be fixed. None of them uses this "dirty" technique of updating parameters; maybe that's the way around it?
I am assuming that you want to change your simulation parameters while the simulation is running?
One solution is to run your simulation with an infinite stop time and use/change a workspace variable during the simulation to make the changes take effect.
For example, if you look at the w block, you can set its value at runtime by doing this:
set_param('my_model_name/w', 'Value', '100'); % will change to 100 immediately
You can do similar things with arrays (i.e. a list of coefficients in your case).
HINT FOR YOU
You are using a discrete transfer function block. Try the following:
1) Give your block a name, e.g. fcn_1.
2) In your script, type set_param('your_model_name/fcn_1', 'Numerator', '[1 2]'); This will set the numerator coefficients to [1 2]. Do the same for the denominator.
3) Through this exercise you should be able to work out how to handle the parameter names etc., so that you can change/get them using set_param/get_param (a sketch follows below).
I leave you to investigate further.
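As a hedged sketch of how this might be wired up (the model and block names are placeholders, and the exact parameter name, 'Numerator' for a Discrete Transfer Fcn block, may differ for other block types or releases):

% Sketch: start the model, then push new coefficients into the block while it runs.
mdl = 'my_model_name';
load_system(mdl);
set_param(mdl, 'SimulationCommand', 'start');   % run with an inf stop time

for k = 1:10
    pause(0.5);                                 % let the simulation advance a bit
    newNum = [1 2*k];                           % example time-varying coefficients
    set_param([mdl '/fcn_1'], 'Numerator', mat2str(newNum));
end

set_param(mdl, 'SimulationCommand', 'stop');

Note that keeping the same number of coefficients on every update avoids changing the block's structure mid-run.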
The short answer is that Simulink blocks are not really designed to do this. By definition, a transfer function is Linear Time-Invariant, meaning its characteristics (read: coefficients) do not vary with time.
Having said that, there are some workarounds, such as the ones you mentioned in your question. Those are, I'm afraid, the correct ways to approach the problem, other than the set_param method suggested by @ha9u63ar. See also this blog post on the subject on the MathWorks web site.

Matlab parametric plotting gui - vary parameters via sliders

I often have functions such as:
sin(a*w*t + p)
where:
w = natural frequency
t = time
a,p = parameters (which I can vary)
As you can see, if you want to vary a and p you can do so via the standard interface, but it's not very convenient. So I thought I'd look for a GUI which has a slider for each parameter. Does such a thing exist?
I've never seen one, so I thought I'd quickly write one myself. However, I'm worried that, due to lack of time and MATLAB knowledge, I will cause problems such as generating too many plot commands when the slider is moved instead of just one. I also have the problem that I want a field where the user can specify the function, e.g. by typing sin(a*w*t + p) in a text box, and then specify what each variable means; I currently don't know how to do that (it looks like a parsing task). Can I do this, or should I go with a predefined set of functions?
You can find similar projects on the MATLAB File Exchange to use as examples.
For instance:
Integral Tool
Function Parameter Slider
I didn't have a look at the code, but according to the screenshots they should help you.
Regarding the function input feature, you can use the function eval (with a few checks on the input if you need reliability). If you want to allow arbitrary parameter variables, it may be harder.
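For the slider part, here is a minimal hedged sketch using uifigure/uislider; the layout values and the fixed w are placeholders, and the function is hard-coded rather than parsed from a text field:

% Sketch: two sliders driving a plot of sin(a*w*t + p).
w = 2*pi;                       % natural frequency (placeholder)
t = linspace(0, 2, 1000);       % time vector

fig = uifigure('Name', 'Parameter sliders');
ax  = uiaxes(fig, 'Position', [20 120 520 280]);
ln  = plot(ax, t, sin(1*w*t + 0));

sA = uislider(fig, 'Position', [60 90 440 3], 'Limits', [0 5],    'Value', 1);
sP = uislider(fig, 'Position', [60 40 440 3], 'Limits', [0 2*pi], 'Value', 0);

% One cheap YData update per slider change, rather than a new plot command.
redraw = @(~, ~) set(ln, 'YData', sin(sA.Value*w*t + sP.Value));
sA.ValueChangedFcn = redraw;
sP.ValueChangedFcn = redraw;

ValueChangedFcn fires when the slider is released; if you want live updates while dragging, ValueChangingFcn can be used instead, reading the new value from the event data.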

Sane cubic interpolation for "large" data set, alternative to interp1d?

I am working with audio data, so my data sets are usually around 40000 to 120000 points (1 to 3 seconds). Currently I am using linear interpolation for some task and I would like to use cubic interpolation to improve some results.
I have been using interp1d with kind='linear' to generate an interpolation function. This works great and is very intuitive.
However, when I switch to kind='cubic', my computer goes nuts --- the memory starts thrashing, the Emacs window goes dark, the mouse pointer starts moving very slowly, and the harddrive becomes very active. I assume this is because it's using a lot of memory. I am forced to (very slowly) open a new terminal window, run htop, and kill the Python process. (I should have mentioned I am using Linux.)
My understanding of cubic interpolation is that it only needs to examine 5 points of the data set at a time, but maybe this is mistaken.
In any case, how can I most easily switch from linear interpolation to cubic interpolation without hitting this apparent brick wall of memory usage? All the examples of interp1d use very few data points, and it's not mentioned anywhere in the docs that it won't perform well for higher orders, so I have no clue what to try next.
Edit: I just tried UnivariateSpline, and it's almost what I'm looking for. The problem is that the interpolation does not touch all data points. I'm looking for something that generates smooth curves that pass through all data points.
Edit2: It looks like maybe InterpolatedUnivariateSpline is what I was looking for.
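For reference, a minimal sketch of that approach (the array sizes and sample rate here are placeholders):

# Sketch: cubic spline that passes through every sample.
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

x = np.arange(120000) / 44100.0      # ~2.7 s of sample timestamps (placeholder rate)
y = np.random.randn(x.size)          # placeholder audio samples

spline = InterpolatedUnivariateSpline(x, y, k=3)   # k=3 -> cubic, interpolating
x_new = np.linspace(x[0], x[-1], 500000)
y_new = spline(x_new)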
I had a similar problem in ND interpolation. My solution was to split the data into domains and construct interpolation functions for each domain.
In your case, you can split your data into bunches of 500 points and interpolate over each bunch, depending on where you are; a sketch follows at the end of this answer.
f1 = [0, ..., 495]
f2 = [490, ..., 990]
f3 = [985, ..., 1485]
and so on.
Also make sure the intervals of adjacent functions overlap. In the example above, the overlap is 5 points. I guess you will have to do some experimenting to find the optimal overlap.
I hope this helps.
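A hedged sketch of that idea (block size, overlap, and the dispatch logic are all placeholders to experiment with; queries are assumed to lie inside [x[0], x[-1]], and each block is assumed to keep at least the 4 points a cubic fit needs):

# Sketch: one cubic interpolant per overlapping block, dispatched by query position.
import numpy as np
from scipy.interpolate import interp1d

def chunked_cubic(x, y, chunk=500, overlap=5):
    funcs, starts = [], []
    i = 0
    while i < len(x) - 1:
        j = min(i + chunk, len(x))
        funcs.append(interp1d(x[i:j], y[i:j], kind='cubic'))
        starts.append(x[i])
        i += chunk - overlap

    starts = np.asarray(starts)

    def evaluate(xq):
        xq = np.atleast_1d(np.asarray(xq, dtype=float))
        out = np.empty_like(xq)
        # For each query, use the last block whose start lies at or before it.
        idx = np.clip(np.searchsorted(starts, xq, side='right') - 1, 0, len(funcs) - 1)
        for k in range(len(funcs)):
            sel = idx == k
            if sel.any():
                out[sel] = funcs[k](xq[sel])
        return out

    return evaluate

The overlap keeps block boundaries away from the edge effects of each cubic fit, as suggested above.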