GradientOfUnstructuredDataSet VS PythonCalculator's gradient - paraview

I am trying to get the gradient of a vector field in ParaView. However, the GradientOfUnstructuredDataSet filter generates different results than the Python Calculator's gradient.
Here is my expression input for the Python Calculator: gradient(inputs[0].PointData['Txj'])

Thanks to Kenneth Moreland here
Try turning on the Faster Approximation option for the Gradient Of Unstructured Data Set filter. When I tried that, the results seemed to match more closely.
When computing gradients on a finite mesh, you can only estimate them using finite differences, and depending on how those finite differences are defined, you can get different answers. By default, the Gradient Of Unstructured Data Set filter does some extra work to produce a more accurate gradient.

Related

Is there an effective way to fit the following two datasets with lsqcurvefit?

I have two complex datasets for which I intend to find a suitable fitting function. The first dataset is presented as follows:
As you can see, although complicated, it seems that this dataset is a combination of rectangle functions. These data describe the relation of 'Amplitude' of complex numbers with time. The second picture looks like this:
This relation describes the 'Phase' of the same complex numbers over time, and it also seems to be a combination of rectangle functions. At first I wanted to use combinations of Fourier cosine and sine series to fit the amplitude and phase with lsqcurvefit in MATLAB, but the fitted parameters fail to converge to the correct values (I have tried a number of options, like adjusting FiniteDifferenceStepSize, FiniteDifferenceType, StepTolerance and so on). Despite many failures, I saw someone suggest that a normal cumulative distribution function (CDF) can be used to fit a step function, and I thought it might be possible to achieve a successful fit using combinations of a parameterized CDF and y = erfc(x). So, could anyone suggest solutions or ways to fit the above two relations? Any valuable ideas would also be very helpful to me.
PS: For now I don't care about any hidden physics inside these data; all I want is a mathematical way to fit the above two relations in MATLAB.
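For concreteness, this is roughly the kind of erfc-based model I have in mind; the number of steps, the parameter layout and the starting values below are placeholders, not values taken from my data:
smoothstep = @(h, c, w, t) h * 0.5 * erfc((t - c) / w);   % one smoothed step: height h, located at c, sharpness w
model = @(p, t) smoothstep(p(1), p(2), p(3), t) + smoothstep(p(4), p(5), p(6), t);   % two-step model, p = [h1 c1 w1 h2 c2 w2]
p0 = [1, 2, 0.1, -0.5, 5, 0.1];                            % rough initial guesses
pfit = lsqcurvefit(model, p0, tdata, ydata);               % tdata/ydata stand in for my time and amplitude vectors
plot(tdata, ydata, '.', tdata, model(pfit, tdata), '-')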
Thanks!

Neural Networks: A step-by-step breakdown of the Backpropagation phase?

I have to design an animated visual representation of a neural network that is functional (i.e. with a UI that allows you to tweak values, etc.). The primary goal is to help people visualize how and when the different math operations are performed in a slow-motion, real-time animation. I have the visuals set up, along with the UI for tweaking values and changing the layout of the neurons, as well as the visualization for the feed-forward stage. But since I don't actually specialize in neural networks, I'm having trouble figuring out the best way to visualize the backpropagation phase, mainly because I've had trouble pinning down the exact order of operations during this stage.
The visualization starts by firing neurons forward, and then, after that chain of fired neurons reaches an output, an animation shows the difference between the actual and predicted values. From that point I want to visualize the network firing backwards while demonstrating the math that is taking place, but this is where I really am unsure about what exactly is supposed to happen.
So my questions are:
Which weights are actually adjusted in the backpropagation phase? Are all of the weights adjusted throughout the entire neural network, or just the ones that fired during the forward pass?
Are all of the weights in each hidden layer adjusted by the same amount during this phase, or are they adjusted by a value that is offset by their current weight, or some other value? It didn't really make sense to me that they would all be adjusted by the same amount, without being offset by a curve or something of the sort.
I’ve found a lot of great information about the feed forward phase online, but when it comes to the backpropagation phase I’ve had a lot of trouble finding any good visualizations/explanations about what is actually happening during this phase.
Which weights are actually adjusted in the back-propagation phase? Are all of the weights adjusted throughout the entire neural network, or just the ones that fired during the forward pass?
It depends on how you build the neural network. Typically you forward-propagate through the network first and then back-propagate; in the back-propagation phase the weights are adjusted based on the error and the Sigmoid derivative. It is up to you to choose which weights are adjusted, and that depends on the type of structure you have. For a simple Perceptron network (based on what I know), every weight would be adjusted.
Are all of the weights in each hidden layer adjusted by the same amount during this phase, or are they adjusted by a value that is offset by their current weight, or some other value? It didn't really make sense to me that they would all be adjusted by the same amount, without being offset by a curve or something of the sort.
Back-propagation depends slightly on the type of structure you are using. You usually use some kind of algorithm, usually gradient descent or stochastic gradient descent, to control how much a weight is adjusted. From what I know, in a Perceptron network every weight is adjusted by its own value.
In conclusion, back-propagation is just a way to adjust the weights so that the output values are closer to the desired result. It might also help you to look into gradient descent, or to watch a network being built from scratch (I learned how to build neural networks by breaking them down step by step).
Here is my own version of a step-by-step breakdown of back-propagation:
1. The error is calculated from the difference between the actual outputs and the expected outputs.
2. The adjustments matrix/vector is calculated by taking the dot product of the training inputs (transposed) with the error multiplied element-wise by the Sigmoid derivative of the outputs.
3. The adjustments are applied to the weights.
4. Steps 1 - 3 are iterated many times, until the actual outputs are close to the expected outputs.
EXT. In a more complicated neural network you might use stochastic gradient descent or gradient descent to find the best adjustments for the weights. (A small code sketch of these steps follows below.)
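To make steps 1 - 4 concrete, here is a minimal single-layer (Perceptron-style) sketch in MATLAB; the training data, layer size and iteration count are made up purely for illustration:
X = [0 0 1; 1 1 1; 1 0 1; 0 1 1];      % training inputs (4 examples, 3 features)
y = [0; 1; 1; 0];                       % expected outputs (here: a copy of the first input)
W = 2 * rand(3, 1) - 1;                 % random initial weights in [-1, 1]
sigmoid = @(z) 1 ./ (1 + exp(-z));
for iter = 1:10000
    output = sigmoid(X * W);                             % forward pass
    err = y - output;                                    % step 1: error
    adjustments = X' * (err .* output .* (1 - output));  % step 2: error times Sigmoid derivative
    W = W + adjustments;                                 % step 3: apply adjustments
end                                                      % step 4: repeat until outputs are close
disp(sigmoid(X * W))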
Edit on Gradient Descent:
Gradient descent is a method of finding a good adjustment value to change your weights during back-propagation.
The derivative that appears in the weight update is the Sigmoid derivative: f(X) = X * (1 - X)
Sigmoid derivative (programmatic):
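(The snippet that belongs here is not shown above; a minimal MATLAB form of the same formula would be:)
sigmoid_derivative = @(X) X .* (1 - X);   % X is the Sigmoid output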
Gradient Descent Explanation:
Gradient descent is a way of finding the best adjustment for a weight, which is necessary so that good weight values can be found. During the back-propagation iterations, the further the actual output is from the expected output, the bigger the change made to the weights. You can imagine it as an inverted hill: in each iteration, the ball rolling down the hill moves faster at first and then slower as it reaches the bottom.
Credit to Clairvoyant.
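To make the rolling-ball picture concrete, here is a tiny sketch of gradient descent on a one-dimensional quadratic; the function, starting point and learning rate are arbitrary choices for illustration:
w = 10;                        % arbitrary starting point
lr = 0.1;                      % learning rate (step size)
for iter = 1:25
    grad = 2 * (w - 3);        % derivative of f(w) = (w - 3)^2
    w = w - lr * grad;         % step downhill; steps shrink as the gradient shrinks
end
disp(w)                        % ends up close to the minimum at w = 3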
Stochastic gradient descent is a more advanced method, used when the best weight value is harder to find than in a standard gradient descent setting. This might not be the best explanation, so for a much clearer explanation of gradient descent, refer to this video; for a clear explanation of stochastic gradient descent, refer to this video.

Guided Grad-CAM visualization, weighting of gradients

I implemented Grad-CAM and Guided Backprop as presented in the paper and everything is working as expected. The next step is to combine the class activation map and the gradient map to get the final weighted gradients. In the paper this is done by point-wise multiplication:
In order to combine the best aspects of both, we fuse Guided Backpropagation and Grad-CAM visualizations via pointwise multiplication (Grad-CAM is first up-sampled to the input image resolution using bi-linear interpolation)
The corresponding figure (cropped) is:
My problem is as follows: The class activation map contains mostly 0's, i.e. the blue regions, which will produce 0's when multiplied with the gradients. However, in the image the guided grad-cam map is mostly grey.
I'm aware that the grey area in the gradient map is due to the gradients being 0 in most places and normalization to the range [0,1] will put them somewhere around 0.5 (assuming that we have both positive and negative gradients with a similar magnitude). Still, multiplication with 0 will result in 0, which should be displayed as black.
For comparison my maps look like this:
Can anyone explain what operation is used to combine both maps? Or am I missing something else?
Thanks in advance.
All assumptions are correct. The thing I was missing is that in the case of guided Grad-CAM the weighting of the gradients is done before the normalization to the range [0,1].
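For reference, the order of operations, sketched in MATLAB for concreteness (the variable names are mine: cam is the low-resolution class activation map, gb a single-channel Guided Backpropagation map):
cam_up = imresize(cam, [size(gb, 1) size(gb, 2)], 'bilinear');              % up-sample the CAM to the input resolution
guided = gb .* cam_up;                                                       % pointwise multiplication (weighting first)
guided = (guided - min(guided(:))) ./ (max(guided(:)) - min(guided(:)));     % normalise to [0, 1] only afterwards
imshow(guided)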

Why do I get disjoint data when I try to extrapolate after using polynomial regression

I wanted to extrapolate some of the data I had, as shown in the plot below. The blue line is the original data and the red line is the extrapolation that I wanted.
To use regression analysis, I used the function polyfit:
sizespecial = size(i_C);
endgoal = sizespecial(2);
plothelp = 1:endgoal;
reg1 = polyfit(plothelp,i_C,2);
reg2 = polyfit(plothelp,i_D,2);
Where i_C and i_D are the vectors that represent the original data. I extended the data by using this code:
plothelp=1:endgoal+11;
for in = endgoal+1:endgoal+11
i_C(in) = (reg1(1)*(in^2))+(reg1(2)*in)+reg1(3);
i_D(in) = (reg2(1)*(in^2))+(reg2(2)*in)+reg2(3);
end
However, the graph I output now is:
I do not understand why the extra notch is introduced (circled in red). Do not hesitate to ask me to clarify any details of this question, and thank you for all your answers.
What I imagine is happening is that you are trying to fit a second order polynomial over all your data. My guess is that this polynomial will look a lot like the curve I have drawn in orange. If you follow Matt's advice from his comment and plot your regressed polynomial over your original data as well (not just the extrapolated part), you should be able to confirm this.
You might get better results by fitting a higher order polynomial. Your data have two points of inflection, so a 3rd order polynomial will probably work quite well. One danger of extrapolating higher order polynomials, however, is that they can have fairly dramatic inflections outside the domain of your data and produce unexpected and wild results.
One way to mitigate against this is to instead perform a linear regression over the final x data points of your series. These are the points highlighted in yellow in the figure. You can tune x as a parameter so that it covers as much of the approximately linear final portion of your curve as makes sense. The red line I have drawn in would be the result of a linear regression performed on only those points (as opposed to the entire data set), as sketched below.
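A sketch of that tail-only linear fit, reusing the variables from the question (the tail length x = 20 is just a placeholder you would tune):
x = 20;                                     % number of trailing points to regress over
tail = endgoal-x+1:endgoal;                 % indices of the approximately linear final portion
regLin = polyfit(tail, i_C(tail), 1);       % 1st order fit on the tail only
ext = endgoal+1:endgoal+11;                 % points to extrapolate
i_C(ext) = polyval(regLin, ext);            % extend i_C along the linear trend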
Another option might be to fit a spline curve and extrapolate on that. You can use the interp1 function, specifying 'spline' or 'pchip', for that.
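The spline route might look like this (again reusing the question's variables):
ext = endgoal+1:endgoal+11;                                               % points beyond the data
i_C(ext) = interp1(1:endgoal, i_C(1:endgoal), ext, 'pchip', 'extrap');    % or 'spline'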
However, which of these is the best choice will depend largely on the nature of the problem you are trying to solve.

limiting imregister/imregtform to whole-pixel translations (i.e. no subpixels)

I am registering multi-modal MRI slices that are 512x512 greyscale (each normalised to 0..1 range). The slices are of the same object but taken with different sequences and have very different intensities. I am currently finding the translation-only transformation between the two slices using imregister(moving,fixed,'translation',optimizer,metric) where optimizer and metric are from imregconfig('multimodal').
However, the transformation it finds (inspecting tform) is something like 2.283 in x and -0.019 in y, whereas I only want whole-pixel translations, i.e. 2 and 0 in this case.
How can I modify imregister (or a similar function) to check only whole-pixel translations? This would save a lot of computation and suits my needs better.
Without modifying imregister, I assume the easiest solution is to just round the x and y translations?
I'm not sure how imregister is implemented for the 'multimodal' case, but pure translation estimation for conventional image registration is done using image gradients and a Taylor approximation, and it gives sub-pixel accuracy at the same cost as pixel-level accuracy.
So, in that case, limiting yourself to pixel-wise translation does not seem to benefit you in any way.
If you do not want to bother with sub-pixel shifts, I suppose rounding would be the simplest approach.
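For the rounding route, something along these lines should work; this assumes imregtform returns an affine2d object whose last row holds the x/y translation, so adjust the indexing if your release returns a different transform type:
tform = imregtform(moving, fixed, 'translation', optimizer, metric);
tform.T(3, 1:2) = round(tform.T(3, 1:2));                                 % snap the translation to whole pixels
registered = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));   % apply the rounded transform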