Linear & rotational extrude at the same time - openscad

I'm trying to draw this (hollow) shape:
The circles are actually different diameters, and I want to neck down the middle of the connecting tube like that (but it's not a requirement). I can fake the shape by drawing it segment by segment, but I have problems necking it down, and it doesn't feel like how OpenSCAD wants it done (namely the hour-long CSG generation). Is there a better way to do this than the following?
for(i = [0:180]) {
    rotate([0,i,0])
        translate([26,0,0])
            difference() {
                cylinder(r=10 + (0.083 * i), h=.1);
                cylinder(r=8 + (0.083 * i), h=.1);
            }
}

Here is a pure OpenSCAD version. See scad-utils and list-comprehension-demos for details about the modules used and their usage.
use <scad-utils/transformations.scad>
use <scad-utils/shapes.scad>
use <skin.scad>
fn=32;
$fn=60;
r1 = 25;
r2 = 10;
R = 40;
th = 2;
module tube()
{
    difference()
    {
        skin([for(i=[0:fn])
            transform(rotation([0,180/fn*i,0])*translation([-R,0,0]),
                      circle(r1+(r1-r2)/fn*i))]);
        assign(r1 = r1-th, r2 = r2-th)
        skin([for(i=[0:fn])
            transform(rotation([0,180/fn*i,0])*translation([-R,0,0]),
                      circle(r1+(r1-r2)/fn*i))]);
    }
}
tube();

First, the result:
It is created by the difference of two polyhedra. The 3D object is valid; see "simple: yes" in the console.
To get the points needed for polyhedron(), Python 3 is used:
First, an arbitrary number of points on the base circle is constructed (the number of points defines the resolution of the 3D object). The points are rotated and scaled in n steps up to 180 degrees and the final scale factor (n controls the resolution too). Then the points are used to define the faces of the shape. Finally, everything is written in OpenSCAD syntax to a file.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import math
# define parameters
rr = 26        # radius of rotation
r1 = 10        # outer radius of tube
r2 = 8         # inner radius of tube
scale = 2      # scale factor over rotation
segments = 90  # segments of base circle -> resolution
rAngle = 180   # angle of rotation
rSegments = 45 # segments of rotation -> resolution
def polyhedron(rr, radius, scale, segments, rAngle, rSegments):
    stepAngle = 360/segments
    rotAngle = rAngle/rSegments
    points = '[' # string with the vector of point-vectors
    faces = '['  # string with the vector of face-vectors
    sprs = (scale-1)/rSegments # scale per rSegment
    # construct all points
    for j in range(0, rSegments+1):
        angle = j*rotAngle
        for i in range(0, segments):
            xflat = (math.sin(math.radians(i*stepAngle))*radius) # x on base circle
            xscaled = xflat*(1 + sprs*j) + rr # x scaled (+ rr -> correction of center point)
            xrot = math.cos(math.radians(angle))*xscaled # x rotated
            yflat = (math.cos(math.radians(-i*stepAngle))*radius) # y on base circle
            yscaled = yflat*(1 + sprs*j) # y scaled
            z = math.sin(math.radians(angle))*xscaled # z rotated
            string = '[{},{},{}],'.format(xrot, yscaled, z)
            points += string
    points += ']'
    # construct all faces
    # bottom
    f = '['
    for i in range(segments-1, -1, -1):
        f += '{},'.format(i)
    f += '],'
    faces += f # add bottom to faces
    # all faces on the side of the tube
    for p in range(0, segments*rSegments):
        p1 = p
        p2 = p + 1 - segments if p % segments == segments-1 else p + 1
        p3 = p + segments
        p4 = p3 + 1 - segments if p % segments == segments-1 else p3 + 1
        f = '[{},{},{}],'.format(p1, p4, p3)
        faces += f
        f = '[{},{},{}],'.format(p1, p2, p4)
        faces += f
    # top
    f = '['
    for i in range(segments*rSegments, segments*(rSegments+1)):
        f += '{},'.format(i)
    f += ']'
    faces += f # add top to faces
    faces += ']'
    string = 'polyhedron( points = {}, faces = {});'.format(points, faces)
    return string
# output in openscad-file
wobj = open('horn.scad', 'w') # open openscad-file for writing
wobj.write('difference() {\n')
string = ' '
string += polyhedron(rr, r1, scale, segments, rAngle, rSegments)
string += '\n'
wobj.write(string)
string = ' '
string += polyhedron(rr, r2, scale, segments, rAngle, rSegments)
string += '\n'
wobj.write(string)
wobj.write('}')
wobj.close() # close openscad-file
# finally open openscad-file in openscad
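# e.g. (my addition, assuming the 'openscad' binary is on the PATH):
#   import subprocess
#   subprocess.run(['openscad', 'horn.scad'])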
It renders in 34 seconds.

Related

Active contours with open ends

Does anyone know how to make an open active contour? I'm familiar with closed contours and I have several MATLAB programs describing them. I've tried to change the matrix P, but that was not enough. My MATLAB code is as follows:
% Read in the test image
GrayLevelClump=imread('cortex.png');
cortex_grad=imread('cortex_grad.png');
rect=[120 32 340 340];
img=imcrop(GrayLevelClump,rect);
grad_mag=imcrop(cortex_grad(:,:,3),rect);
% Draw the initial snake as a line
a=300;b=21;c=21;d=307;
snake = brlinexya(a,b,c,d); % startpoints,endpoints
x0=snake(:,1);
y0=snake(:,2);
N=length(x0);
x0=x0(1:N-1)';
y0=y0(1:N-1)';
% parameters for the active contour
alpha = 5;
beta = 10;
gamma = 1;
kappa=1;
iterations = 50;
% Make a force field
[u,v] = GVF(-grad_mag, 0.2, 80);
%disp(' Normalizing the GVF external force ...');
mag = sqrt(u.*u+v.*v);
px = u./(mag+(1e-10)); py = v./(mag+(1e-10));
% Make the coefficient matrix P
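% Note: the matrix built below is used in the semi-implicit snake update
%   x <- P*(x + gamma*fx),  with P = inv(I + gamma*A),
% where A is the pentadiagonal internal-energy matrix formed from alpha
% (elasticity) and beta (rigidity); the wrap-around corner terms added via
% diag(b, -N+1) etc. make A periodic, i.e. they assume a closed contour.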
x=x0;
y=y0;
N = length(x0);
a = gamma*(2*alpha+6*beta)+1;
b = gamma*(-alpha-4*beta);
c = gamma*beta;
P = diag(repmat(a,1,N));
P = P + diag(repmat(b,1,N-1), 1) + diag( b, -N+1);
P = P + diag(repmat(b,1,N-1),-1) + diag( b, N-1);
P = P + diag(repmat(c,1,N-2), 2) + diag([c,c],-N+2);
P = P + diag(repmat(c,1,N-2),-2) + diag([c,c], N-2);
P = inv(P);
d = gamma * (-alpha);
e = gamma * (2*alpha);
% Do the modifications to the matrix in order to handle OAC
P(1:2,:) = 0;
P(1,1) = -gamma;
P(2,1) = d;
P(2,2) = e;
P(2,3) = d;
P(N-1:N,:) = 0;
P(N-1,N-2) = d;
P(N-1,N-1) = e;
P(N-1,N) = d;
P(N,N) = -gamma;
x=x';
y=y';
figure,imshow(grad_mag,[]);
hold on, plot([x;x(1)],[y;y(1)],'r');
plot(x([1 end]),y(([1 end])), 'b.','MarkerSize', 20);
for ii = 1:50
% Calculate external force
vfx = interp2(px,x,y,'*linear');
vfy = interp2(py,x,y,'*linear');
tf=isnan(vfx);
ind=find(tf==1);
if ~isempty(ind)
vfx(ind)=0;
end
tf=isnan(vfy);
ind=find(tf==1);
if ~isempty(ind)
vfy(ind)=0;
end
% Move control points
x = P*(x+gamma*vfx);
y = P*(y+gamma*vfy);
%x = P * (gamma* x + gamma*vfx);
%y = P * (gamma* y + gamma*vfy);
if mod(ii,5)==0
plot([x;x(1)],[y;y(1)],'b')
end
end
plot([x;x(1)],[y;y(1)],'r')
I've modified the coefficient matrix P in order to handle cases with open ended active contours, but that is clearly not enough.
My image:

Replicate Photoshop sRGB to LAB Conversion

The task I want to achieve is to replicate Photoshop RGB to LAB conversion.
For simplicity, I will describe what I did to extract only the L Channel.
Extracting Photoshop's L Channel
Here is an RGB image which includes all RGB colors (please click and download):
In order to extract Photoshop's L channel, what I did is the following:
Loaded the image into Photoshop.
Set Mode to LAB.
Selected the L Channel in the Channel Panel.
Set Mode to Grayscale.
Set mode to RGB.
Saved as PNG.
This is the L Channel of Photoshop (this is exactly what is seen on screen when the L Channel is selected in LAB Mode):
sRGB to LAB Conversion
My main reference is Bruce Lindbloom's great site.
It is also known that Photoshop uses the D50 white point in its LAB mode (see also Wikipedia's LAB Color Space page).
Assuming the RGB image is in sRGB format the conversion is given by:
sRGB -> XYZ (White Point D65) -> XYZ (White Point D50) -> LAB
Assuming data is in Float within the [0, 1] range the stages are given by:
Transform sRGB into XYZ.
The conversion Matrix is given by RGB -> XYZ Matrix (See sRGB D65).
Converting from XYZ D65 to XYZ D50
The conversion is done using a chromatic adaptation matrix. Since the previous step and this one are both matrix multiplications, they can be combined into one matrix which goes straight from sRGB -> XYZ D50 (see the bottom of the RGB to XYZ Matrix page). Note that Photoshop uses the Bradford adaptation method.
Convert from XYZ D50 to LAB
The conversion is done using the XYZ to LAB Steps.
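For reference, here is a compact sketch of that pipeline (my own illustration, not Photoshop's CMM), computing only the L channel from sRGB values in [0, 1]; the 3x3 matrix is Lindbloom's Bradford-adapted sRGB (D65) -> XYZ (D50) matrix, and the function name is mine:
import numpy as np

M_SRGB_TO_XYZ_D50 = np.array([
    [0.4360747, 0.3855850, 0.1430804],
    [0.2225045, 0.7168786, 0.0606169],
    [0.0139322, 0.0971045, 0.7141733],
])

def srgb_to_L(rgb):
    rgb = np.asarray(rgb, dtype=float)
    # sRGB -> linear RGB
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # only Y (second matrix row) is needed for L
    Y = lin @ M_SRGB_TO_XYZ_D50[1]
    fy = np.where(Y > 0.008856, np.cbrt(Y), (903.3 * Y + 16.0) / 116.0)
    return 116.0 * fy - 16.0   # L* in [0, 100]

print(srgb_to_L([1.0, 1.0, 1.0]))  # ~100 for white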
MATLAB Code
Since, for a start, I'm only after the L channel, things are a bit simpler. The images are loaded into MATLAB and converted into the Float [0, 1] range.
This is the code:
%% Setting Environment Parameters
INPUT_IMAGE_RGB = 'RgbColors.png';
INPUT_IMAGE_L_PHOTOSHOP = 'RgbColorsL.png';
%% Loading Data
mImageRgb = im2double(imread(INPUT_IMAGE_RGB));
mImageLPhotoshop = im2double(imread(INPUT_IMAGE_L_PHOTOSHOP));
mImageLPhotoshop = mImageLPhotoshop(:, :, 1); %<! All channels are identical
%% Convert to L Channel
mImageLMatlab = ConvertRgbToL(mImageRgb, 1);
%% Display Results
figure();
imshow(mImageLPhotoshop);
title('L Channel - Photoshop');
figure();
imshow(mImageLMatlab);
title('L Channel - MATLAB');
Where the function ConvertRgbToL() is given by:
function [ mLChannel ] = ConvertRgbToL( mRgbImage, sRgbMode )
OFF = 0;
ON = 1;
RED_CHANNEL_IDX = 1;
GREEN_CHANNEL_IDX = 2;
BLUE_CHANNEL_IDX = 3;
RGB_TO_Y_MAT = [0.2225045, 0.7168786, 0.0606169]; %<! D50
Y_CHANNEL_THR = 0.008856;
% sRGB Compensation
if(sRgbMode == ON)
vLinIdx = mRgbImage < 0.04045;
mRgbImage(vLinIdx) = mRgbImage(vLinIdx) ./ 12.92;
mRgbImage(~vLinIdx) = ((mRgbImage(~vLinIdx) + 0.055) ./ 1.055) .^ 2.4;
end
% RGB to XYZ (D50)
mY = (RGB_TO_Y_MAT(1) .* mRgbImage(:, :, RED_CHANNEL_IDX)) + (RGB_TO_Y_MAT(2) .* mRgbImage(:, :, GREEN_CHANNEL_IDX)) + (RGB_TO_Y_MAT(3) .* mRgbImage(:, :, BLUE_CHANNEL_IDX));
vYThrIdx = mY > Y_CHANNEL_THR;
mY3 = mY .^ (1 / 3);
mLChannel = ((vYThrIdx .* (116 * mY3 - 16.0)) + ((~vYThrIdx) .* (903.3 * mY))) ./ 100;
end
As one can see, the results are different.
Photoshop is much darker for most colors.
Does anyone know how to replicate Photoshop's LAB conversion?
Can anyone spot an issue in this code?
Thank you.
Latest answer (we know that it is wrong now, waiting for a proper answer)
Photoshop is very old and messy software. There's no clear documentation as to why this or that happens to the pixel values when you perform conversions from one mode to another.
Your problem happens because when you convert the selected L* channel to Greyscale in Adobe Photoshop, there's a change in gamma. Natively, the conversion uses a gamma of 1.74 for single-channel to greyscale conversion. Don't ask me why; I would guess this is related to old laser printers (?).
Anyway, this is the best way I found to do it:
Open your file, turn it to LAB mode, select the L channel only
Then go to:
Edit > Convert to profile
You will select "custom gamma" and enter the value 2.0 (don't ask me why 2.0 works better, I have no idea what's in the mind of Adobe's software makers...)
This operation will turn your picture into a greyscale one with only one channel
Then you can convert it to RGB mode.
If you compare the result with your result, you will see differences of up to 4-point-something percent, all located in the darkest areas.
I suspect this is because the gamma curve application does not apply to LAB mode in the dark values (cf. as you know, all XYZ values below 0.008856 are linear in LAB).
CONCLUSION:
As far as I know, there is no properly implemented way in Adobe Photoshop to extract the L channel from LAB mode to grey mode!
Previous answer
this is the result I get with my own method:
It seems to be exactly the same result as the Adobe Photoshop one.
I am not sure what went wrong on your side, since the steps you describe are exactly the ones I followed and would have advised you to follow. I don't have MATLAB, so I used Python:
import cv2, Syn
# your file
fn = "EASA2.png"
#reading the file
im = cv2.imread(fn,-1)
#openCV works in BGR, i'm switching to RGB
im = im[:,:,::-1]
#conversion to XYZ
XYZ = Syn.sRGB2XYZ(im)
#white points D65 and D50
WP_D65 = Syn.Yxy2XYZ((100,0.31271, 0.32902))
WP_D50 = Syn.Yxy2XYZ((100,0.34567, 0.35850))
#bradford
XYZ2 = Syn.bradford_adaptation(XYZ, WP_D65, WP_D50)
#conversion to L*a*b*
LAB = Syn.XYZ2Lab(XYZ2, WP_D50)
#picking the L channel only
L = LAB[:,:,0] /100. * 255.
#image output
cv2.imwrite("result.png", L)
the Syn library is my own stuff, here are the functions (sorry for the mess):
import numpy as np  # needed by all the functions below (missing in the original snippet)

def sRGB2XYZ(sRGB):
    sRGB = np.array(sRGB)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = sRGB.shape
    if sRGB.shape == aShape:
        sRGB = np.reshape(sRGB, (1,1,3))
    elif len(sRGB.shape) == len(anotherShape):
        h,d = sRGB.shape
        sRGB = np.reshape(sRGB, (1,h,d))
    w,h,d = sRGB.shape
    sRGB = np.reshape(sRGB, (w*h,d)).astype("float") / 255.
    m1 = sRGB[:,0] > 0.04045
    m1b = sRGB[:,0] <= 0.04045
    m2 = sRGB[:,1] > 0.04045
    m2b = sRGB[:,1] <= 0.04045
    m3 = sRGB[:,2] > 0.04045
    m3b = sRGB[:,2] <= 0.04045
    sRGB[:,0][m1] = ((sRGB[:,0][m1] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,0][m1b] = sRGB[:,0][m1b] / 12.92
    sRGB[:,1][m2] = ((sRGB[:,1][m2] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,1][m2b] = sRGB[:,1][m2b] / 12.92
    sRGB[:,2][m3] = ((sRGB[:,2][m3] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,2][m3b] = sRGB[:,2][m3b] / 12.92
    sRGB *= 100.
    X = sRGB[:,0] * 0.4124 + sRGB[:,1] * 0.3576 + sRGB[:,2] * 0.1805
    Y = sRGB[:,0] * 0.2126 + sRGB[:,1] * 0.7152 + sRGB[:,2] * 0.0722
    Z = sRGB[:,0] * 0.0193 + sRGB[:,1] * 0.1192 + sRGB[:,2] * 0.9505
    XYZ = np.zeros_like(sRGB)
    XYZ[:,0] = X
    XYZ[:,1] = Y
    XYZ[:,2] = Z
    XYZ = np.reshape(XYZ, origShape)
    return XYZ
def Yxy2XYZ(Yxy):
    Yxy = np.array(Yxy)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = Yxy.shape
    if Yxy.shape == aShape:
        Yxy = np.reshape(Yxy, (1,1,3))
    elif len(Yxy.shape) == len(anotherShape):
        h,d = Yxy.shape
        Yxy = np.reshape(Yxy, (1,h,d))
    w,h,d = Yxy.shape
    Yxy = np.reshape(Yxy, (w*h,d)).astype("float")
    XYZ = np.zeros_like(Yxy)
    XYZ[:,0] = Yxy[:,1] * ( Yxy[:,0] / Yxy[:,2] )
    XYZ[:,1] = Yxy[:,0]
    XYZ[:,2] = ( 1 - Yxy[:,1] - Yxy[:,2] ) * ( Yxy[:,0] / Yxy[:,2] )
    return np.reshape(XYZ, origShape)
def bradford_adaptation(XYZ, Neutral_source, Neutral_destination):
    """should be checked if it works properly, but it seems OK"""
    XYZ = np.array(XYZ)
    ashape = np.array([1,1,1]).shape
    siVal = False
    if XYZ.shape == ashape:
        XYZ = np.reshape(XYZ, (1,1,3))
        siVal = True
    bradford = np.array(((0.8951000, 0.2664000, -0.1614000),
                         (-0.750200, 1.7135000, 0.0367000),
                         (0.0389000, -0.068500, 1.0296000)))
    inv_bradford = np.array(((0.9869929, -0.1470543, 0.1599627),
                             (0.4323053, 0.5183603, 0.0492912),
                             (-.0085287, 0.0400428, 0.9684867)))
    Xs,Ys,Zs = Neutral_source
    s = np.array(((Xs),
                  (Ys),
                  (Zs)))
    Xd,Yd,Zd = Neutral_destination
    d = np.array(((Xd),
                  (Yd),
                  (Zd)))
    source = np.dot(bradford, s)
    Us,Vs,Ws = source[0], source[1], source[2]
    destination = np.dot(bradford, d)
    Ud,Vd,Wd = destination[0], destination[1], destination[2]
    transformation = np.array(((Ud/Us, 0, 0),
                               (0, Vd/Vs, 0),
                               (0, 0, Wd/Ws)))
    M = np.mat(inv_bradford)*np.mat(transformation)*np.mat(bradford)
    w,h,d = XYZ.shape
    result = np.dot(M,np.rot90(np.reshape(XYZ, (w*h,d)),-1))
    result = np.rot90(result, 1)
    result = np.reshape(np.array(result), (w,h,d))
    if siVal == False:
        return result
    else:
        return result[0,0]
def XYZ2Lab(XYZ, neutral):
    """transforms XYZ to CIE Lab
    Neutral should be normalized to Y = 100"""
    XYZ = np.array(XYZ)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = XYZ.shape
    if XYZ.shape == aShape:
        XYZ = np.reshape(XYZ, (1,1,3))
    elif len(XYZ.shape) == len(anotherShape):
        h,d = XYZ.shape
        XYZ = np.reshape(XYZ, (1,h,d))
    N_x, N_y, N_z = neutral
    w,h,d = XYZ.shape
    XYZ = np.reshape(XYZ, (w*h,d)).astype("float")
    XYZ[:,0] = XYZ[:,0]/N_x
    XYZ[:,1] = XYZ[:,1]/N_y
    XYZ[:,2] = XYZ[:,2]/N_z
    m1 = XYZ[:,0] > 0.008856
    m1b = XYZ[:,0] <= 0.008856
    m2 = XYZ[:,1] > 0.008856
    m2b = XYZ[:,1] <= 0.008856
    m3 = XYZ[:,2] > 0.008856
    m3b = XYZ[:,2] <= 0.008856
    XYZ[:,0][m1] = XYZ[:,0][XYZ[:,0] > 0.008856] ** (1/3.0)
    XYZ[:,0][m1b] = ( 7.787 * XYZ[:,0][m1b] ) + ( 16 / 116.0 )
    XYZ[:,1][m2] = XYZ[:,1][XYZ[:,1] > 0.008856] ** (1/3.0)
    XYZ[:,1][m2b] = ( 7.787 * XYZ[:,1][m2b] ) + ( 16 / 116.0 )
    XYZ[:,2][m3] = XYZ[:,2][XYZ[:,2] > 0.008856] ** (1/3.0)
    XYZ[:,2][m3b] = ( 7.787 * XYZ[:,2][m3b] ) + ( 16 / 116.0 )
    Lab = np.zeros_like(XYZ)
    Lab[:,0] = (116. * XYZ[:,1] ) - 16.
    Lab[:,1] = 500. * ( XYZ[:,0] - XYZ[:,1] )
    Lab[:,2] = 200. * ( XYZ[:,1] - XYZ[:,2] )
    return np.reshape(Lab, origShape)
All conversions between colour spaces in Photoshop go through the CMM, which was sufficiently fast on circa-2000 hardware but is not quite accurate. You can get a lot of 4-bit errors and some 7-bit errors with the Adobe CMM if you check the "round robin" RGB -> Lab -> RGB; that may cause posterisation. I always base my conversions on formulae, not on CMMs. However, the average deltaE of the error with the Adobe CMM and the Argyll CMM is quite acceptable.
Lab conversions are quite similar to RGB, only the non-linearity (gamma) is applied at the first step; something like this:
normalize XYZ to white point
bring the result to gamma 3 (keeping shadow portion linear, depends on implementation)
multiply the result by [0 116 0 -16; 500 -500 0 0; 0 200 -200 0]'
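As a small illustration (mine, not Photoshop's), the matrix form above is just the usual Lab formulas written as a single multiplication applied to the white-point-normalised, cube-rooted vector f = (fx, fy, fz):
import numpy as np

# rows give L = 116*fy - 16, a = 500*(fx - fy), b = 200*(fy - fz)
M = np.array([[  0,  116,    0, -16],
              [500, -500,    0,   0],
              [  0,  200, -200,   0]], dtype=float)

def xyz_f_to_lab(fx, fy, fz):
    return M @ np.array([fx, fy, fz, 1.0])

# e.g. a mid-grey patch with fx = fy = fz = 0.5 gives L = 42, a = b = 0
print(xyz_f_to_lab(0.5, 0.5, 0.5))   # -> [42.  0.  0.]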

Estimate the amount of the area of the hemisphere’s surface which can be seen by the eye?

We suppose that there are one hemisphere and three triangles in a 3D space. The center point of the hemisphere’s bottom is denoted by C. The radius of the hemisphere’s bottom is represented by R. The normal vector to hemisphere’s bottom is denoted by n.
The first triangle is given by the three points V1, V2 and V3. The second triangle is given by the three points V4, V5 and V6. The third triangle is given by the three points V7, V8 and V9. The positions of the points V1, V2, …, V9 are arbitrary. Now, we further assume that an eye is located at the point E. Note that the triangles may block the line of sight from the eye to the hemisphere’s surface; hence some region of the hemisphere’s surface may not be seen by the eye.
Please develop a method to estimate the amount of the area of the hemisphere’s surface which can be seen by the eye?
Here is code for a rectangle rather than the hemisphere:
function r = month_1()
P1 = [0.918559, 0.750000, -0.918559];
P2 = [0.653394, 0.649519, 1.183724];
P3 = [-0.918559, -0.750000, 0.918559];
P4 = [-0.653394, -0.649519, -1.183724];
V1 = [0.989609, -1.125000, 0.071051];
V2 = [1.377838, -0.808013, -0.317178];
V3 = [1.265766, -0.850481, 0.571351];
V4 = [0.989609, -1.125000, 0.071051];
V5 = [1.265766, -0.850481, 0.571351];
V6 = [0.601381, -1.441987, 0.459279];
V7 = [0.989609, -1.125000, 0.071051];
V8 = [1.377838, -0.808013, -0.317178];
V9 = [0.713453, -1.399519, -0.429250];
E = [1.714054, -1.948557, 0.123064];
C = [0,1,0];
Radius = 2;
n = [0,1,0];
%hold on
patches.vertices(1,:)= P1;
patches.vertices(2,:)= P2;
patches.vertices(3,:)= P3;
patches.vertices(4,:)= P4;
patches.vertices(5,:)= V1;
patches.vertices(6,:)= V2;
patches.vertices(7,:)= V3;
patches.vertices(8,:)= V4;
patches.vertices(9,:)= V5;
patches.vertices(10,:)= V6;
patches.vertices(11,:)= V7;
patches.vertices(12,:)= V8;
patches.vertices(13,:)= V9;
patches.faces(1,:)= [5,6,7];
patches.faces(2,:)= [8,9,10];
patches.faces(3,:)= [11,12,13];
patches.faces(4,:)= [1,2,3];
patches.faces(5,:)= [1,3,4];
patches.facevertexcdata = 1;
patch(patches);
shading faceted; view (3);
% dispatch([1,1,1])
hold on
Num = 20;
Sum = 0;
%Srec = norm(cross(P1 - P4, P3 - P4));
for i = 1:Num
x1 = rand;
x2 = rand;
v1 = P1-P4;
v2 = P3-P4;
Samp = P4+v1*x1+v2*x2;
ABC = fun_f(E, Samp, V1,V2,V3)*fun_f(E,Samp, V4, V5, V6)*fun_f(E,Samp, V7,V8,V9);
Sum = Sum + ABC;
% if ABC ==1
% plot3(Samp(1), Samp(2), Samp(3),'r +'), hold on
% else
% plot3(Samp(1), Samp(2), Samp(3),'b +'), hold on
% end
%............................
[x, y, z]= sphere;
patches = surf2patch(x,y,z,z);
patches.vertices(:,3) = abs(patches.vertices(:,3));
patches.facevertexcdata = 1;
patch(patches)
shading faceted; view(3)
daspect([1 1 1])
%............................
end
%r = Sum/Num;
%view(31, 15)
%end
r = Sum/Num*norm(cross (P1-P4,P3-P4));
disp(sprintf('the integration is: %.5f',r));
disp(sprintf('the accurate result is: %.5f',norm(cross(P1-P4,P3-P4)/4)));
function res = fun_f(LineB, LineE, V1, V2, V3)
O = LineB;
Len = norm(LineE-LineB);
v = (LineE-LineB)/Len;
N = cross(V2-V1, V3-V1)/norm(cross(V2-V1, V3-V1));
if dot(N,v)~=0
tp = dot(N, V1-O)/ dot(N,v);
% if tp >=0 && tp <= (1:3);
P = LineB+tp*v(1:3);
A = V1 - P;
B = V2 - P;
Stri1 = norm(cross(A,B))/2;
A = V1 - P;
B = V3 - P;
Stri2 = norm(cross(A,B))/2;
A = V3 - P;
B = V2 - P;
Stri3 = norm(cross(A,B))/2;
A = V1 - V2;
B = V3 - V2;
Stotal = norm(cross(A,B))/2;
if (Stri1 + Stri2 + Stri3)> (Stotal + 1e-8)
res = 1;
else
res = 0;
end
else
res =1;
end
end
end
Take a small element of surface area centered on (θ, φ), with dimensions (dθ, dφ). The area element is given by
dA = R^2 * sin(θ) * dθ * dφ
The idea is to loop over these elements on the hemisphere, calculate the center point of each element, and work out whether the line segment between this point and the camera intersects a triangle. More here: https://en.wikipedia.org/wiki/M%C3%B6ller%E2%80%93Trumbore_intersection_algorithm.
Now we need to find the points; trivially this means incrementing θ and φ in fixed steps over the whole hemisphere. But this would make the sampling resolution uneven: because of the sin(θ) factor, a uniform grid in (θ, φ) produces elements of very different area near the apex of the hemisphere and near its edge.
Instead:
Set a fixed number N of rings to loop around, i.e. the number of iterations of θ, so that dθ = (π/2)/N.
Set a minimum iteration area dA_min. The number of iterations of φ for a ring, M, is given by
M = ceil(A_ring / dA_min)
where A_ring is the total area of the ring at θ:
A_ring = 2π * R^2 * sin(θ) * dθ
And of course from the above the φ increment is given by dφ = 2π / M.
Loop over all the rings, being careful that θ gives the middle line of each ring (so start at θ = dθ/2); the same concern need not apply to φ due to the symmetry. For each ring loop over each φ, and do the line-intersection test mentioned above.
The above method reduces the amount of bias in the area sampling resolution at small θ.
An even better way would be Fibonacci lattices, but they are more complicated. See this paper: http://geonaut.eu/published/016_Fibonacci_Lattice.pdf
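For what it's worth, here is a minimal Python sketch of the ring-based scheme described above (the function names and the visible() callback are placeholders of mine; visible() would wrap the Möller-Trumbore occlusion test against the three triangles):
import math

def visible_area(R, N, dA_min, visible):
    d_theta = (math.pi / 2) / N
    seen = 0.0
    for i in range(N):
        theta = (i + 0.5) * d_theta            # middle line of the ring
        ring_area = 2 * math.pi * R**2 * math.sin(theta) * d_theta
        M = max(1, math.ceil(ring_area / dA_min))
        dA = ring_area / M                     # equal-area patches on this ring
        for j in range(M):
            phi = (j + 0.5) * (2 * math.pi / M)
            if visible(theta, phi):            # line-of-sight test goes here
                seen += dA
    return seen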
clc
%% Declaration of Initial Variables
C = [0.918559, 0.750000, -0.918559];
R = 10;
n = [1, 2, 1.5];
V1 = [0.989609, -1.125000, 0.071051];
V2 = [1.377838, -0.808013, -0.317178];
V3 = [1.265766, -0.850481, 0.571351];
V4 = [0.989609, -1.125000, 0.071051];
V5 = [1.265766, -0.850481, 0.571351];
V6 = [0.601381, -1.441987, 0.459279];
V7 = [0.989609, -1.125000, 0.071051];
V8 = [1.377838, -0.808013, -0.317178];
V9 = [0.713453, -1.399519, -0.429250];
E = [1.714054, -1.948557, 0.123064];
Num = 10000;
count1 = 0; count2 = 0; count3 = 0;
%% Calculating the triangles Normal and Area
N1 = cross((V2-V1),(V3-V1));
N2 = cross((V5-V4),(V6-V4));
N3 = cross((V8-V7),(V9-V7));
Area1 = norm(N1)/2;
Area2 = norm(N2)/2;
Area3 = norm(N3)/2;
%% Plotting the triangle
patch([V1(1),V2(1),V3(1),V1(1)],[V1(2),V2(2),V3(2),V1(2)],[V1(3),V2(3),V3(3),V1(3)], 'green');
hold on
patch([V4(1),V5(1),V6(1),V4(1)],[V4(2),V5(2),V6(2),V4(2)],[V4(3),V5(3),V6(3),V4(3)],'green');
hold on
patch([V7(1),V8(1),V9(1),V7(1)],[V7(2),V8(2),V9(2),V7(2)],[V7(3),V8(3),V9(3),V7(3)], 'green');
plot3(E(1),E(2),E(3),'ro')
hold on
%% The Loop Section
for i=1:Num
x1 = rand;
x2 = rand;
Px = R*sqrt(x1*(2-x1))*cos(2*pi*x2)+C(1);
Py = R*sqrt(x1*(2-x1))*sin(2*pi*x2)+C(2);
Pz = R*(1 - x1)+C(3);
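% Note: x1 uniform in [0,1] makes Pz - C(3) uniform over [0, R]; by
% Archimedes' hat-box theorem the sampled points are then uniformly
% distributed by area over the hemisphere of radius R.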
z = [0,0,1];
if norm(cross(n,z)) ~= 0
v = cross(n,z)/norm(cross(n,z));
u = cross(v,n)/norm(cross(v,n));
w = n/norm(n);
else
u = (dot(n,z)/abs(dot(n,z))).*[1,0,0];
v = (dot(n,z)/abs(dot(n,z))).*[0,1,0];
w = (dot(n,z)/abs(dot(n,z))).*[0,0,1];
end
M = [u(1),u(2),u(3),0;v(1),v(2),v(3),0;w(1),w(2),w(3),0;0,0,0,1]*...
[1,0,0,-C(1);0,1,0,-C(2);0,0,1,-C(3);0,0,0,1];
L = [Px,Py,Pz,1]*M;
Points = [L(1),L(2),L(3)];
Len = norm(E - Points);
plot3(L(1),L(2),L(3),'b.'),hold on
dv = (E - Points)/norm(E - Points);
%% Triangle 1
tp1 = dot(N1,(V1-Points))/dot(N1,dv);
if tp1>=0 && tp1<=Len
R1 = Points + tp1*dv;
A1 = norm(cross((V1-R1),(V2-R1)))/2;
A2 = norm(cross((V1-R1),(V3-R1)))/2;
A3 = norm(cross((V2-R1),(V3-R1)))/2;
if (A1+A2+A3) <= Area1
count1 = count1 + 1;
plot3(L(1),L(2),L(3),'r*')
end
end
%% Triangle 2
tp2 = dot(N2,(V4-Points))/dot(N2,dv);
if tp2>=0 && tp2<=Len
R2 = Points + tp2*dv;
A4 = norm(cross((V4-R2),(V5-R2)))/2;
A5 = norm(cross((V4-R2),(V6-R2)))/2;
A6 = norm(cross((V5-R2),(V6-R2)))/2;
if (A4+A5+A6) <= Area2
count2 = count2 + 1;
plot3(L(1),L(2),L(3),'r*')
end
end
%% Triangle 3
tp3 = dot(N3,(V7-Points))/dot(N3,dv);
if tp3>=0 && tp3<=Len
R3 = Points + tp3*dv;
A7 = norm(cross((V7-R3),(V8-R3)))/2;
A8 = norm(cross((V7-R3),(V9-R3)))/2;
A9 = norm(cross((V8-R3),(V9-R3)))/2;
if (A7+A8+A9) <= Area3
count3 = count3 + 1;
plot3(L(1),L(2),L(3),'r*')
end
end
end
%% Final Solution
AreaofHemi = 2*pi*R^2;
Totalcount = count1 + count2 + count3;
Areaseen=((Num-Totalcount)/Num)*AreaofHemi;
disp(fprintf('AreaofHemi %f, AreaSeen %f ',AreaofHemi,Areaseen))

Different intensity values for same image in OpenCV and MATLAB

I'm using Python 2.7 and OpenCV 3.x for my OMR sheet evaluation project using a web camera.
While finding the number of white pixels around the center of a circle, I noticed that the intensity values are wrong, yet MATLAB shows the correct values when I use imtool('a1.png').
I'm using a .png image (datatype uint8).
Just run the code, go to the coordinates [360:370, 162:172] in the image, and look at the intensity values; they should not be 0.
Find the images here -> a1.png a2.png
Why is this happening?
import numpy as np
import cv2
from matplotlib import pyplot as plt

#select radius of circle
radius = 10;

#function for finding white pixels
def thresh_circle(img,ptx,pty):
    centerX = ptx;
    centerY = pty;
    cntOfWhite = 0;
    for i in range((centerX - radius),(centerX + radius)):
        for j in range((centerY - radius), (centerY + radius)):
            if(j < img.shape[0] and i < img.shape[1]):
                val = img[i][j]
                if (val == 255):
                    cntOfWhite = cntOfWhite + 1;
    return cntOfWhite

MIN_MATCH_COUNT = 10
img1 = cv2.imread('a1.png',0) # queryImage
img2 = cv2.imread('a2.png',0) # trainImage
sift = cv2.SIFT()# Initiate SIFT detector
kp1, des1 = sift.detectAndCompute(img1,None)# find the keypoints and descriptors with SIFT
kp2, des2 = sift.detectAndCompute(img2,None)
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1,des2,k=2)
good = []# store all the good matches as per Lowe's ratio test.
for m,n in matches:
    if m.distance < 0.7*n.distance:
        good.append(m)
if len(good)>MIN_MATCH_COUNT:
    src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
    dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.LMEDS,5.0)
    #print M
    matchesMask = mask.ravel().tolist()
    h,w = img1.shape
else:
    print "Not enough matches are found - %d/%d" % (len(good),MIN_MATCH_COUNT)
    matchesMask = None
img3 = cv2.warpPerspective(img1, M, (img2.shape[1],img2.shape[0]))
blur = cv2.GaussianBlur(img3,(5,5),0)
ret3,th3 = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
ret,th2 = cv2.threshold(blur,ret3,255,cv2.THRESH_BINARY_INV)
print th2[360:370,162:172]#print a block of image
plt.imshow(th2, 'gray'),plt.show()
cv2.waitKey(0)
cv2.imwrite('th2.png',th2)
ptyc = np.array([170,200,230,260]);#y coordinates of circle center
ptxc = np.array([110,145,180,215,335,370,405,440])#x coordinates of circle center
pts_src = np.zeros(shape = (32,2),dtype=np.int);#x,y coordinates of circle center
ct = 0;
for i in range(0,4):
    for j in range(0,8):
        pts_src[ct][1] = ptyc[i];
        pts_src[ct][0] = ptxc[j];
        ct = ct+1;
boolval = np.zeros(shape=(8,4),dtype=np.bool)
ct = 0;
for j in range(0,8):
    for i in range(0,4):
        a1 = thresh_circle(th2,pts_src[ct][0],pts_src[ct][1])
        ct = ct+1;
        if(a1 > 50):
            boolval[j][i] = 1
        else:
            boolval[j][i] = 0

Matlab FFT and home brewed FFT

I'm trying to verify an FFT algorithm I should use for a project against the same thing in MATLAB.
The point is that with my own C FFT function I always get the right-hand (second) part of the double-sided FFT spectrum evaluated in MATLAB, and not the first one as "expected".
For instance, if my third bin is of the form a+i*b, the third bin of MATLAB's FFT is a-i*b. The a and b values are the same, but I always get the complex conjugate of MATLAB's.
I know that in terms of amplitudes and power there's no trouble (because of the absolute value), but I wonder whether in terms of phases I'm always going to read wrong angles.
I'm not skilled enough in MATLAB to know (and I have not found useful information on the web) whether MATLAB's FFT perhaps returns the spectrum with negative frequencies first and then positive ones, or whether I have to fix my FFT algorithm, or whether it is all OK because the phases are unchanged regardless of which part of the FFT we choose as the single-sided spectrum (but I doubt this last option).
Example:
If S is the sample array with N=512 samples, Y = fft(S) in MATLAB returns the FFT as follows (the signs of the imaginary parts in the first half of the array are arbitrary, just to show the complex-conjugate relationship with the second part):
1 A1 + i*B1 (DC, B1 is always zero)
2 A2 + i*B2
3 A3 - i*B3
4 A4 + i*B4
5 A5 + i*B5
...
253 A253 - i*B253
254 A254 + i*B254
255 A255 + i*B255
256 A256 + i*B256
257 A257 - i*B257 (Nyquist, B257 is always zero)
258 A256 - i*B256
259 A255 - i*B255
260 A254 - i*B254
261 A253 + i*B253
...
509 A5 - i*B5
510 A4 - i*B4
511 A3 + i*B3
512 A2 - i*B2
My FFT implementation returns only 256 values (and that's OK) in the Y array as:
1 1 A1 + i*B1 (A1 is the DC, B1 is Nyquist, both are pure Real numbers)
2 512 A2 - i*B2
3 511 A3 + i*B3
4 510 A4 - i*B4
5 509 A5 + i*B5
...
253 261 A253 + i*B253
254 260 A254 - i*B254
255 259 A255 - i*B255
256 258 A256 - i*B256
Where the first column is the index in my Y array and the second is just a reference to the corresponding row in the MATLAB FFT output.
As you can see, my FFT implementation (DC apart) returns the FFT like the second half of MATLAB's FFT (in reverse order).
To summarize: even if I use fftshift as suggested, it seems that my implementation always returns what in the MATLAB FFT should be considered the negative part of the spectrum.
Where is the error?
This is the code I use:
Note 1: the FFT array is not declared here and it is modified inside the function. Initially it holds the N samples (real values) and at the end it contains the N/2+1 bins of the single-sided FFT spectrum.
Note 2: the N/2+1 bins are stored in only N/2 complex slots, because the DC component is always real (it is stored in FFT[0]) and so is the Nyquist component (stored in FFT[1]); this exception apart, each even element K holds a real part and the following odd element K+1 holds the imaginary part.
void Fft::FastFourierTransform( bool inverseFft ) {
double twr, twi, twpr, twpi, twtemp, ttheta;
int i, i1, i2, i3, i4;
double c1, c2, h1r, h1i, h2r, h2i, wrs, wis;
int nn, ii, jj, n, mmax, m, j, istep, isign;
double wtemp, wr, wpr, wpi, wi;
double theta, tempr, tempi;
// NS is the number of samples and it must be a power of two
if( NS == 1 )
return;
if( !inverseFft ) {
ttheta = 2.0 * PI / NS;
c1 = 0.5;
c2 = -0.5;
}
else {
ttheta = 2.0 * PI / NS;
c1 = 0.5;
c2 = 0.5;
ttheta = -ttheta;
twpr = -2.0 * Pow( Sin( 0.5 * ttheta ), 2 );
twpi = Sin(ttheta);
twr = 1.0+twpr;
twi = twpi;
for( i = 2; i <= NS/4+1; i++ ) {
i1 = i+i-2;
i2 = i1+1;
i3 = NS+1-i2;
i4 = i3+1;
wrs = twr;
wis = twi;
h1r = c1*(FFT[i1]+FFT[i3]);
h1i = c1*(FFT[i2]-FFT[i4]);
h2r = -c2*(FFT[i2]+FFT[i4]);
h2i = c2*(FFT[i1]-FFT[i3]);
FFT[i1] = h1r+wrs*h2r-wis*h2i;
FFT[i2] = h1i+wrs*h2i+wis*h2r;
FFT[i3] = h1r-wrs*h2r+wis*h2i;
FFT[i4] = -h1i+wrs*h2i+wis*h2r;
twtemp = twr;
twr = twr*twpr-twi*twpi+twr;
twi = twi*twpr+twtemp*twpi+twi;
}
h1r = FFT[0];
FFT[0] = c1*(h1r+FFT[1]);
FFT[1] = c1*(h1r-FFT[1]);
}
if( inverseFft )
isign = -1;
else
isign = 1;
n = NS;
nn = NS/2;
j = 1;
for(ii = 1; ii <= nn; ii++) {
i = 2*ii-1;
if( j>i ) {
tempr = FFT[j-1];
tempi = FFT[j];
FFT[j-1] = FFT[i-1];
FFT[j] = FFT[i];
FFT[i-1] = tempr;
FFT[i] = tempi;
}
m = n/2;
while( m>=2 && j>m ) {
j = j-m;
m = m/2;
}
j = j+m;
}
mmax = 2;
while(n>mmax) {
istep = 2*mmax;
theta = 2.0 * PI /(isign*mmax);
wpr = -2.0 * Pow( Sin( 0.5 * theta ), 2 );
wpi = Sin(theta);
wr = 1.0;
wi = 0.0;
for(ii = 1; ii <= mmax/2; ii++) {
m = 2*ii-1;
for(jj = 0; jj <= (n-m)/istep; jj++) {
i = m+jj*istep;
j = i+mmax;
tempr = wr*FFT[j-1]-wi*FFT[j];
tempi = wr*FFT[j]+wi*FFT[j-1];
FFT[j-1] = FFT[i-1]-tempr;
FFT[j] = FFT[i]-tempi;
FFT[i-1] = FFT[i-1]+tempr;
FFT[i] = FFT[i]+tempi;
}
wtemp = wr;
wr = wr*wpr-wi*wpi+wr;
wi = wi*wpr+wtemp*wpi+wi;
}
mmax = istep;
}
if( inverseFft )
for(i = 1; i <= 2*nn; i++)
FFT[i-1] = FFT[i-1]/nn;
if( !inverseFft ) {
twpr = -2.0 * Pow( Sin( 0.5 * ttheta ), 2 );
twpi = Sin(ttheta);
twr = 1.0+twpr;
twi = twpi;
for(i = 2; i <= NS/4+1; i++) {
i1 = i+i-2;
i2 = i1+1;
i3 = NS+1-i2;
i4 = i3+1;
wrs = twr;
wis = twi;
h1r = c1*(FFT[i1]+FFT[i3]);
h1i = c1*(FFT[i2]-FFT[i4]);
h2r = -c2*(FFT[i2]+FFT[i4]);
h2i = c2*(FFT[i1]-FFT[i3]);
FFT[i1] = h1r+wrs*h2r-wis*h2i;
FFT[i2] = h1i+wrs*h2i+wis*h2r;
FFT[i3] = h1r-wrs*h2r+wis*h2i;
FFT[i4] = -h1i+wrs*h2i+wis*h2r;
twtemp = twr;
twr = twr*twpr-twi*twpi+twr;
twi = twi*twpr+twtemp*twpi+twi;
}
h1r = FFT[0];
FFT[0] = h1r+FFT[1]; // DC
FFT[1] = h1r-FFT[1]; // FS/2 (NYQUIST)
}
return;
}
In MATLAB, try using fftshift(fft(...)). MATLAB doesn't automatically shift the spectrum after the FFT is called, which is why they implemented the fftshift() function.
It is simply a MATLAB formatting thing. Basically, MATLAB arranges the Fourier transform in the following order:
DC, DC+1, ..., Nyquist-1, -Nyquist, -Nyquist+1, ..., DC-1
Let's say you have a 8 point sequence: [1 2 3 1 4 5 1 3]
In your signal processing class, your professor probably draws the Fourier spectrum on a Cartesian system (negative -> positive for the x axis), so your DC should be located at 0 on the x axis (the 4th position in your fft sequence, assuming the position index here is 0-based).
In MATLAB, the DC is the very first element in the fft sequence, so you need fftshift() to swap the first and second halves of the fft sequence such that DC ends up at the 4th position (0-based index).
I am attaching a graph here so you may have a visual:
where a is the original 8-point sequence; FT(a) is the Fourier transform of a.
The matlab code is here:
a = [1 2 3 1 4 5 1 3];
A = fft(a);
N = length(a);
x = -N/2:N/2-1;
figure
subplot(3,1,1), stem(x, a,'o'); title('a'); xlabel('time')
subplot(3,1,2), stem(x, fftshift(abs(A),2),'o'); title('FT(a) in signal processing'); xlabel('frequency')
subplot(3,1,3), stem(x, abs(A),'o'); title('FT(a) in matlab'); xlabel('frequency')
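For reference, the same ordering and conjugate symmetry can be checked quickly in numpy (this snippet is mine, not part of the answer above):
import numpy as np

N = 8
a = np.array([1, 2, 3, 1, 4, 5, 1, 3], dtype=float)
A = np.fft.fft(a)                        # same ordering as MATLAB's fft: DC first
print(np.fft.fftfreq(N))                 # bin frequencies: 0, +1/8, ..., -1/8
print(np.allclose(A[1:N//2], np.conj(A[-1:-(N//2):-1])))  # real input -> conjugate symmetry
print(np.fft.fftshift(A))                # DC moved to the middle, as fftshift does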