Fit of data as a cos^2 in MATLAB

I have collected this data of angles and intensities in the lab to demonstrate Malus' law, so I have to fit the intensity as I = I0*cos^2(theta). I can't get it to work with cftool, because it shows a curve totally different from my data, and I can't write code that works either. This is the data I got:
theta = [90, 110, 130, 135, 150, 170, 180, 190, 210, 225, 230, 250, 270, 290, 310, 315, 330, 350, 365, 370, 390]
I = [0.0030, 0.6240, 1.3060, 1.3320, 0.9610, 0.1900, 0.0160, 0.1970, 1.1250, 1.3480, 1.2900, 0.5660, 0.0030, 0.5750, 1.6170, 1.6760, 1.0850, 0.1380, 0.0940, 0.2250, 1.2340]
Thank you in advance for your help.

Well, I tried to write some code, but I couldn't get anywhere near a good match. I assumed I0 = 1:
figure;
plot(theta, I)                 % measured data
hold on;
f = @(theta) cosd(theta).^2;   % Malus' law with I0 = 1 (element-wise square)
fplot(f, [0, 400])
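In the posted data the minima repeat roughly every 90 degrees (near 90, 180, 270 and 360), while I0*cosd(theta)^2 repeats every 180 degrees, which would explain why cftool draws a curve that looks nothing like the data. Below is a minimal fitting sketch (not from the original post) that lets the amplitude, period and phase all float, assuming the model I = I0*cosd(w*(theta - phi))^2 and using only base MATLAB (fminsearch), no toolboxes:
theta = [90, 110, 130, 135, 150, 170, 180, 190, 210, 225, 230, 250, 270, 290, 310, 315, 330, 350, 365, 370, 390];
I = [0.0030, 0.6240, 1.3060, 1.3320, 0.9610, 0.1900, 0.0160, 0.1970, 1.1250, 1.3480, 1.2900, 0.5660, 0.0030, 0.5750, 1.6170, 1.6760, 1.0850, 0.1380, 0.0940, 0.2250, 1.2340];
model = @(p, t) p(1) * cosd(p(2) * (t - p(3))).^2;   % p = [I0, w, phi]
sse   = @(p) sum((model(p, theta) - I).^2);          % sum of squared residuals
p0 = [max(I), 2, 45];   % start values read off the plot: peak ~1.7 near 135 deg, ~90-degree repetition
p  = fminsearch(sse, p0);
fprintf('I0 = %.3f   w = %.3f   phi = %.1f deg\n', p(1), p(2), p(3));
tt = 0:0.5:400;
figure;
plot(theta, I, 'o', tt, model(p, tt), '-')
legend('measured data', 'fitted I0*cos^2(w(\theta - \phi))')
If the physics requires the strict Malus form (w = 1), fix p(2) = 1 and fit only I0 and phi; the same custom equation can also be typed into cftool.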

Why does my YOLOv4 model (Darknet) draw lots of bounding boxes around an object in one image?

I'm using yolov4.cfg to train on my dataset (I'm using this GitHub repository: https://github.com/Abhi-899/YOLOV4-Custom-Object-Detection).
After training for 300 iterations, at first I didn't get any bounding boxes for my images.
After searching for this problem, I found that I should decrease the threshold, so I changed my three [yolo] layers like this:
[yolo]
mask = 6,7,8
anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
classes=3
num=9
jitter=.3
ignore_thresh = 0.07   # was .7
truth_thresh = 1       # was 1
random=1
scale_x_y = 1.05
iou_thresh= 0.213      # was 0.213
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
nms_kind=greedynms
beta_nms=0.6
max_delta=5
Now I get lots of bounding boxes!
What should I do? Can anyone help me?
My output image is at the link below:
https://drive.google.com/file/d/1Jm7pAk8a89JgtPPeXLCLhRW_6hpV68l1/view?usp=sharing

Moving longitudinal window (moving average?)

I have 12 data points: y values at x = -180:30:179. After I plot my data, it looks like a zig-zag pattern and is not smooth. To smooth it out, I want to apply a moving longitudinal window of 30 degrees (i.e., +/-15 deg.). How can I move it forward by one degree at a time, so that the longitudinal window changes like [-15,15], [-14,16], [-13,17], ...?
Here is my code so far:
% y = data, 12 data points
y = [90, 65, 60, 53, 70, 82, 65, 38, 44, 71, 77, 64];
sum = 0;
for x = -180:30:179
    for k = 1:30
        sum = sum + y(x - 15 + k);
    end
    avg(x) = sum / 30;
    sum = 0;
end
I might be trying to read between the lines too much, but it sounds like you are not really asking for a moving average. It kind of sounds like you want your "zig-zag" line smoothed out or interpolated. If this is correct, you could do something like this:
y = [90, 65, 60, 53, 70, 82, 65, 38, 44, 71, 77, 64];
x = -180:30:179;
newX = -180:1:179;  % every degree
y_spline = interp1(x, y, newX, 'spline');
y_pchip = interp1(x, y, newX, 'pchip');
l(1) = plot(x, y, 'Color', [0 0 1], 'Marker', 's'); hold on
l(2) = plot(newX, y_spline, 'r');
l(3) = plot(newX, y_pchip, 'g');
grid on; legend(l, {'Orig', 'spline', 'pchip'});
Take your pick of interpolation methods ... or I could be completely misreading your question.
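If the literal sliding window stepping one degree at a time is what is wanted, here is a minimal sketch (not from the original answer), assuming the 12 values are samples at x = -180:30:150 and that the data is periodic over 360 degrees so the window can wrap around the ends:
y = [90, 65, 60, 53, 70, 82, 65, 38, 44, 71, 77, 64];
x = -180:30:179;                       % 12 samples, 30 degrees apart
xq = -180:179;                         % window centre at every degree
% Periodic extension so the window can wrap past +/-180 degrees
xPer = [x - 360, x, x + 360];
yPer = [y, y, y];
% Resample onto a 1-degree grid, padded by 15 degrees on each side
xPad = -195:194;
yPad = interp1(xPer, yPer, xPad, 'linear');
% 31-sample centred window = [-15, 15] degrees, moving 1 degree per step
avgPad = movmean(yPad, 31);
avg = avgPad(16:end-15);               % crop back to -180:179
plot(x, y, 's', xq, avg, '-')
legend('30-degree samples', '1-degree sliding mean')
movmean needs R2016a or newer; on older versions, conv(yPad, ones(1, 31)/31, 'same') gives the same values in the cropped range.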

Why does the integral image contain an extra row and column of zeros?

I am learning how to use integral images with OpenCV's Java API, and I created a test that displays the grayscale image before and after computing the integral image. The grayscale image is 10 x 10, but when I converted it to the integral image I found it is 11 x 11, with an extra row and an extra column of zeros, as shown in the output below.
Please let me know why the integral image contains an extra row and column of zeros.
Code:
public static void main(String[] args) {
    MatFactory matFactory = new MatFactory();
    FilePathUtils.addInputPath(path_Obj);
    Mat bgrMat = matFactory.newMat(FilePathUtils.getInputFileFullPathList().get(0));
    Mat gsImg = SysUtils.rgbToGrayScaleMat(bgrMat);
    Log.D(TAG, "MainClas", "gsImg.dump(): " + gsImg.dump());

    Mat integralMat = new Mat();
    Imgproc.integral(gsImg, integralMat, CvType.CV_32F);
    Log.D(TAG, "MainClas", "sumMat.dump(): " + integralMat.dump());
}
Output:
1: Debug: MainClass -> MainClas: gsImg.dump(): [2, 1, 7, 5, 1, 11, 2, 7, 9, 11;
1, 2, 0, 0, 3, 20, 17, 5, 7, 8;
4, 8, 0, 2, 6, 30, 31, 5, 2, 2;
39, 43, 47, 44, 38, 62, 60, 37, 37, 39;
27, 29, 52, 52, 47, 75, 67, 59, 58, 63;
25, 21, 49, 51, 51, 78, 64, 66, 76, 80;
40, 36, 50, 46, 41, 56, 42, 45, 47, 49;
13, 17, 20, 15, 9, 20, 15, 19, 12, 11;
17, 13, 8, 5, 4, 7, 13, 20, 17, 17;
2, 4, 7, 9, 8, 6, 6, 7, 7, 8]
2: Debug: MainClass -> MainClas: sumMat.dump(): [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0;
0, 2, 3, 10, 15, 16, 27, 29, 36, 45, 56;
0, 3, 6, 13, 18, 22, 53, 72, 84, 100, 119;
0, 7, 18, 25, 32, 42, 103, 153, 170, 188, 209;
0, 46, 100, 154, 205, 253, 376, 486, 540, 595, 655;
0, 73, 156, 262, 365, 460, 658, 835, 948, 1061, 1184;
0, 98, 202, 357, 511, 657, 933, 1174, 1353, 1542, 1745;
0, 138, 278, 483, 683, 870, 1202, 1485, 1709, 1945, 2197;
0, 151, 308, 533, 748, 944, 1296, 1594, 1837, 2085, 2348;
0, 168, 338, 571, 791, 991, 1350, 1661, 1924, 2189, 2469;
0, 170, 344, 584, 813, 1021, 1386, 1703, 1973, 2245, 2533]
That's the intended behavior. In OpenCV, the integral image sum(X, Y) is defined (see the documentation) as the sum of the pixels in the original image whose indices are strictly less than those of the integral image, i.e. sum_(x < X, y < Y), not less than or equal. Thus sum(0, 0), for example, is a sum over zero pixels and is defined to be 0. This is also why the resulting sum image has one more row and column than the original.
The reason for this is that it makes it easier to compute sums etc. over blocks of the image and handle them in a uniform way when they include the top and/or left borders.
There are 2 reasons.
The first one is purely mathematical. Say you have a row of 3 numbers (pixels). How many possible cumulative sums does it generate? The answer is 4: you can take the sum of the first 0 pixels, 1 pixel, 2 pixels, or all 3 pixels, so there are 4 different sums, and 4 is exactly 3 + 1. Equivalently, an image of width 10 yields 11 sums per row, and a 10x10 image yields 11x11 sums.
The second reason is programming simplicity. The integral image is used to calculate the sum of any rectangle in the image with just 4 operations (two corners added, the two other corners subtracted). The distance between the corners is exactly the size of the rectangle you want to sum. For example, if your rectangle is 5 pixels wide, you access the integral image at indices im[i][j] and im[i][j+5]. However, if your rectangle covers the entire image width or height, this may produce an index that falls outside the array by 1. That is why the integral image is stored with one extra row and column.
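For example, with the (N+1)x(N+1) matrix produced by Imgproc.integral(), the sum over any block of the original image needs no bounds checks at all. A small sketch in the Java API (the helper name rectSum is mine, not part of OpenCV):
// Sum of the h x w block whose top-left pixel is (row r, col c) in the
// ORIGINAL image, read from the (N+1)x(N+1) integral image.
static double rectSum(Mat integral, int r, int c, int h, int w) {
    double topLeft     = integral.get(r,     c    )[0];
    double topRight    = integral.get(r,     c + w)[0];
    double bottomLeft  = integral.get(r + h, c    )[0];
    double bottomRight = integral.get(r + h, c + w)[0];
    return bottomRight - topRight - bottomLeft + topLeft;
}
With the dump above, rectSum(integralMat, 0, 0, 10, 10) returns 2533 (the bottom-right entry), and any block touching the top or left border simply reads the extra row or column of zeros.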
Note: it is possible to store the integral image in an array of the same size as the image, but then access to the array would be much slower, because one would need to test the indices for out-of-bounds access: accessing the integral image at index [-1] would have to be detected and produce a sum of 0, and accessing an index greater than the width would have to return the sum of the entire width.
OpenCV implemented the larger integral image mainly for speed. Calculating the sum of a rectangle requires only 4 additions/subtractions and 4 pointer dereferences, and there is no need to test that the pointers fall inside the image as long as the requested rectangle has legal coordinates inside the image.
There are architectures that allow accessing an array out of bounds (at illegal indices), for example GPU shaders. On those architectures the integral image can be implemented differently (size NxN instead of (N+1)x(N+1), or even as a pyramid of sums).
Can you manually remove the extra column from the integral image in OpenCV?
I strongly advise against doing so! OpenCV has built-in code that accesses the integral image in a specific way; if you remove the first column you will probably cause unpredictable calculations.
Moreover, as I explained, this additional row and column makes the computation up to ~10x faster (additions and subtractions are performed by the CPU much faster than if() conditions).
Please note that the integral image is a completely different representation of the original image. Not only does it have a different size, (N+1)x(N+1), but also a different depth: the original image can be grayscale (one byte per pixel), while the integral image typically needs 4 bytes per pixel (because summing many pixels requires much larger numbers). So the integral image will take roughly 4 times more memory than the original anyway. You cannot fit the integral image into the original image because of the different BPP (bits per pixel), so why be bothered by a different width and height?

Plotting x-axis and y-axis scales with graphael

I am working on a graphael chart for a bar graph, as shown below. I am having difficulty getting the x-axis and y-axis for the code below.
<script type="text/javascript" charset="utf-8">
    window.onload = function () {
        var r = Raphael("holder");
        // r.barchart(x, y, width, height, values (array), opts);
        var barChart = r.barchart(100, 100, 300, 200, [100, 50, 50, 50, 50, 50, 20], {
            stacked: true,
            width: 50,
            // gutter is the space between the bars; the bar size is reduced accordingly
            "gutter": "150%",
            colors: [
                "000-#d00-#900",
                "000-#f64-#c31",
                "000-#fc0-#c90",
                "000-#3a3-#070",
                "000-#2bc-#089",
                "000-#00d-#00a",
                "000-#808-#404"
            ]
        });
    };
</script>
The chart for the above code is shown below.
Now I want to add X and Y scales, i.e. x-axis and y-axis values, which are currently empty.
I have tried a few techniques that haven't worked out: I added axis: "0 0 1 1" to the options and tried a few other workarounds, but in vain.
Thanks for replying.

Cairo Radial Gradient

I'm using a radial gradient in Cairo, but I'm not getting the expected results. The radial gradient I get is much less fuzzy than I'd expect, and I can't seem to fiddle with the color stops to get the desired result. Here is the code:
cairo_pattern_t *pat;
pat = cairo_pattern_create_radial(100.0, 100.0, 0.0, 100.0, 100.0, 20.0);
cairo_pattern_add_color_stop_rgba(pat, 0, 0, 0, 0, 1);
cairo_pattern_add_color_stop_rgba(pat, 1, 0, 0, 0, 0);
Here is an image of what I'm talking about.
The #cairo IRC channel suggested (thanks, Company!) using cairo_mask() instead of cairo_paint() to draw the gradient. That results in a squared rather than linear progression.
I did the following in Lua; sorry for the language, but it's easier to prototype in. It maps 1:1 to the C API and shouldn't be hard to translate:
cairo = require("lgi").cairo

s = cairo.ImageSurface(cairo.Format.ARGB32, 200, 100)
c = cairo.Context(s)

-- white background
c:set_source_rgb(1, 1, 1)
c:paint()

-- left half: gradient painted directly (linear alpha falloff)
p = cairo.Pattern.create_radial(50, 50, 0, 50, 50, 20)
p:add_color_stop_rgba(0, 0, 0, 0, 1)
p:add_color_stop_rgba(1, 0, 0, 0, 0)
c:save()
c:rectangle(0, 0, 100, 100)
c:clip()
c.source = p
c:paint()
c:restore()

-- right half: gradient used as both source and mask (squared falloff)
p = cairo.Pattern.create_radial(50, 50, 2, 50, 50, 25)
p:add_color_stop_rgba(0, 0, 0, 0, 1)
p:add_color_stop_rgba(1, 0, 0, 0, 0)
c:translate(100, 0)
c:save()
c:rectangle(0, 0, 100, 100)
c:clip()
c.source = p
c:mask(p)
c:restore()

s:write_to_png("test.png")
To me, the second circle (the one that was cairo_mask()'d with a black source) looks a lot more like what you want:
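For reference, here is a rough C translation of the cairo_mask() variant above (a sketch, not part of the original answer; the surface size and file name are arbitrary):
#include <cairo.h>

int main(void) {
    cairo_surface_t *s = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 100, 100);
    cairo_t *c = cairo_create(s);

    /* white background */
    cairo_set_source_rgb(c, 1, 1, 1);
    cairo_paint(c);

    /* black radial gradient: opaque at the centre, transparent at the edge */
    cairo_pattern_t *p = cairo_pattern_create_radial(50, 50, 2, 50, 50, 25);
    cairo_pattern_add_color_stop_rgba(p, 0, 0, 0, 0, 1);
    cairo_pattern_add_color_stop_rgba(p, 1, 0, 0, 0, 0);

    /* use the gradient as source AND mask: the alpha ramp is applied twice,
       giving the softer, squared falloff described above */
    cairo_set_source(c, p);
    cairo_mask(c, p);

    cairo_pattern_destroy(p);
    cairo_surface_write_to_png(s, "test.png");
    cairo_destroy(c);
    cairo_surface_destroy(s);
    return 0;
}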