LiDAR to camera image fusion - matlab

I want to fuse LiDAR points {X, Y, Z, 1} onto the camera image {u, v}. I have the LiDAR points, the camera matrix (K), the distortion coefficients (D), the positions of the camera and the LiDAR (x, y, z), and their rotations as quaternions (w + xi + yj + zk). Three coordinate systems are involved: the vehicle axle coordinate system (X: forward, Y: left, Z: up), the LiDAR coordinate system (X: right, Y: forward, Z: up), and the camera coordinate system (X: right, Y: down, Z: forward). I tried the approach below, but the points do not fuse properly; all of them are plotted in the wrong places.
Coordinate system:
For the given rotation and position of the camera and the LiDAR, we compute the translations using the equations below.
t_lidar = R_lidar * Position_lidar^T
t_camera = R_camera * Position_camera^T
Then the relative rotation and translation are computed as follows:
R_relative = R_camera^T * R_lidar
t_relative = t_lidar - t_camera
The final transformation matrix and the mapping from LiDAR points [X,Y,Z,1] to the image frame [u,v,1] are then given by:
T = [ R_relative | t_relative ]
[u,v,1]^T = K * T * [X,Y,Z,1]^T
Is there anything I am missing?

Use OpenCV's projectPoints directly:
https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#projectpoints
C++: void projectPoints(InputArray objectPoints, InputArray rvec, InputArray tvec, InputArray cameraMatrix, InputArray distCoeffs, OutputArray imagePoints, OutputArray jacobian=noArray(), double aspectRatio=0 )
objectPoints – Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or vector<Point3f>), where N is the number of points in the view.
rvec – Rotation vector. See Rodrigues() for details.
tvec – Translation vector.
cameraMatrix – Camera matrix
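If you would rather stay in MATLAB, here is a minimal sketch of the same chain with the projection done by hand. It assumes the quaternions are ordered [w x y z], that both poses are expressed in the vehicle axle frame, and that the camera quaternion already describes the optical frame (X right, Y down, Z forward); variable names are placeholders.

% Build the LiDAR->vehicle and camera->vehicle poses.
% quat2rotm needs the Robotics System or Navigation Toolbox; q_* are [w x y z], p_* are 3x1.
R_lidar  = quat2rotm(q_lidar);                 % LiDAR -> vehicle rotation
R_camera = quat2rotm(q_camera);                % camera -> vehicle rotation
T_lidar  = [R_lidar,  p_lidar(:);  0 0 0 1];   % LiDAR -> vehicle
T_camera = [R_camera, p_camera(:); 0 0 0 1];   % camera -> vehicle
% If the camera quaternion describes a body frame (X forward, Y left, Z up) rather than
% the optical frame, insert the fixed body-to-optical rotation here before composing.
T_lidar_to_cam = T_camera \ T_lidar;           % LiDAR -> camera, i.e. inv(T_camera) * T_lidar

P     = [X; Y; Z; ones(1, numel(X))];          % 4xN homogeneous LiDAR points
P_cam = T_lidar_to_cam * P;                    % points in the camera frame
uvw   = K * P_cam(1:3, :);                     % pinhole projection, no distortion applied
u = uvw(1, :) ./ uvw(3, :);                    % the division by depth is easy to forget
v = uvw(2, :) ./ uvw(3, :);
% Keep only points with P_cam(3, :) > 0 (in front of the camera).

If you have a cameraParameters or cameraIntrinsics object, worldToImage from the Computer Vision Toolbox plays the same role as projectPoints and also applies the distortion coefficients; note that it expects MATLAB's row-vector convention for R and t.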

Related

Depth to world registration hololens2 unity

I'm working on a program that uses HoloLens 2 research mode in Unity. The HoloLens gives us a depth image that contains, for every pixel, the distance from the depth sensor to the object in front of it.
What I do is, for every pixel, project the pixel to the image plane, then back-project it according to the depth captured by the depth sensor, which gives me the xyz in the depth sensor's coordinate frame. Now I need to transform this coordinate into the global coordinate system. To do so, I get the camera pose from Unity via cam_pose = Camera.main.transform, and I have also saved the depth sensor's extrinsic matrix.
From these two matrices I create depth_to_world = cam_pose @ inv(extrinsic). Then for every xyz from the depth image I compute global_xyz = depth_to_world @ xyz to get the point in the real world. The problem is that the result is off by 10-15 cm. What am I doing wrong? (The code is in Python.)
x = self.us[Depth_i, Depth_j]  # projection from pixels to image plane
y = self.vs[Depth_i, Depth_j]  # projection from pixels to image plane
D = distance_img[Depth_i, Depth_j]  # distance_img is the depth image
distance = 1000 * float(D) / np.sqrt(x * x + y * y + 1)  # distance according to spherical image plane; D is in millimeters
depth_to_world = cam_pose @ np.linalg.inv(Constants.camera_extrinsic)
X = (np.array([x * distance, y * distance, 1.0 * distance, 1])).reshape(4, 1)
point = (depth_to_world @ X)[0:3, 0]
I got it! According to https://github.com/petergu684/HoloLens2-ResearchMode-Unity, I first had to pass the Unity world origin to a WinRT plugin, and depth_to_world should be depth_to_world = inv(extrinsic) * cam_pose, where cam_pose is given by TryLocateAtTimeStamp. The other point is that Unity's coordinate system is left-handed (surprisingly!), so we should multiply z by -1 (z <- -z).
My depth_to_world transformation was close, but not correct.

Camera Intrinsics Resolution vs Real Screen Resolution

I am writing an ARKit app where I need to use camera poses and intrinsics for 3D reconstruction.
The camera intrinsics matrix returned by ARKit seems to use a different image resolution than the mobile screen resolution. Below is one example of this issue.
Intrinsics matrix returned by ARKit is :
[[1569.249512, 0, 931.3638306],[0, 1569.249512, 723.3305664],[0, 0, 1]]
whereas the input image resolution is 750 (width) x 1182 (height). In this case, the principal point seems to lie outside the image, which cannot be right; ideally it should be close to the image center. So the intrinsic matrix above might correspond to an image resolution of 1920 (width) x 1440 (height), which is completely different from the original image resolution.
The questions are:
Whether the returned camera intrinsics belong to 1920x1440 image resolution?
If yes, how can I get the intrinsics matrix representing original image resolution i.e. 750x1182?
Intrinsics 3x3 matrix
The intrinsic camera matrix converts between 3D points expressed in the camera coordinate space and the 2D image plane. Here's the decomposition of an intrinsic matrix:
K = | fx  s   xO |
    | 0   fy  yO |
    | 0   0   1  |
where:
fx and fy are the focal lengths in pixels
xO and yO are the principal point offsets in pixels
s is the axis skew
According to Apple Documentation:
The values fx and fy are the pixel focal length, and are identical for square pixels. The values ox and oy are the offsets of the principal point from the top-left corner of the image frame. All values are expressed in pixels.
So let's examine your data:
[1569, 0, 931]
[ 0, 1569, 723]
[ 0, 0, 1]
fx=1569, fy=1569
xO=931, yO=723
s=0
To convert a known focal length in pixels to mm use the following expression:
F(mm) = F(pixels) * SensorWidth(mm) / ImageWidth(pixels)
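As a rough illustration of both the mm conversion and how intrinsics change with image size, here is a sketch assuming a hypothetical sensor width and a uniform resize with no cropping. Note that 750x1182 is not a uniform resize of 1920x1440 (the aspect ratios differ), so ARKit's display transform also involves a crop that this sketch ignores.

% Placeholders: sensorWidthMM and targetWidthPx must come from your device/setup.
F_pixels     = 1569.25;                          % fx from the ARKit intrinsics above
imageWidthPx = 1920;                             % width of the capturedImage buffer
F_mm = F_pixels * sensorWidthMM / imageWidthPx;  % focal length in millimeters

s   = targetWidthPx / imageWidthPx;              % uniform scale factor (no crop assumed)
fx2 = 1569.25 * s;   fy2 = 1569.25 * s;          % scaled focal lengths
ox2 =  931.36 * s;   oy2 =  723.33 * s;          % scaled principal point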
Points Resolution vs Pixels Resolution
Look at this post to find out what a point resolution and a pixel resolution are.
Let's explore what is what using iPhone X data.
@IBOutlet var arView: ARSCNView!

DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
    let imageRez = (self.arView.session.currentFrame?.camera.imageResolution)!
    let intrinsics = (self.arView.session.currentFrame?.camera.intrinsics)!
    let viewportSize = self.arView.frame.size
    let screenSize = self.arView.snapshot().size
    print(imageRez as Any)
    print(intrinsics as Any)
    print(viewportSize as Any)
    print(screenSize as Any)
}
Apple Documentation:
imageResolution instance property describes the image in the capturedImage buffer, which contains image data in the camera device's native sensor orientation. To convert image coordinates to match a specific display orientation of that image, use the viewMatrix(for:) or projectPoint(_:orientation:viewportSize:) method.
iPhone X imageRez (aspect ratio is 4:3). These values correspond to the camera sensor resolution:
(1920.0, 1440.0)
iPhone X intrinsics:
simd_float3x3([[1665.0, 0.0, 0.0], // first column
[0.0, 1665.0, 0.0], // second column
[963.8, 718.3, 1.0]]) // third column
iPhone X viewportSize (one ninth of screenSize by area, i.e. one third in each dimension):
(375.0, 812.0)
iPhone X screenSize (resolution declared in tech spec):
(1125.0, 2436.0)
Note that RealityKit's ARView has no snapshot() method.

Camera Calibration to get a perpendicular plane

Calibration:
I have calibrated the camera using this vision toolbox in MATLAB, with checkerboard images. After calibration I get the following:
>> cameraParams
cameraParams =
cameraParameters with properties:
Camera Intrinsics
IntrinsicMatrix: [3x3 double]
FocalLength: [1.0446e+03 1.0428e+03]
PrincipalPoint: [604.1474 359.7477]
Skew: 3.5436
Lens Distortion
RadialDistortion: [0.0397 0.0798 -0.2034]
TangentialDistortion: [-0.0063 -0.0165]
Camera Extrinsics
RotationMatrices: [3x3x18 double]
TranslationVectors: [18x3 double]
Accuracy of Estimation
MeanReprojectionError: 0.1269
ReprojectionErrors: [48x2x18 double]
ReprojectedPoints: [48x2x18 double]
Calibration Settings
NumPatterns: 18
WorldPoints: [48x2 double]
WorldUnits: 'mm'
EstimateSkew: 1
NumRadialDistortionCoefficients: 3
EstimateTangentialDistortion: 1
I know the transformation from the camera's coordinates to the checkerboard coordinates: R1, t1. How can I figure out the transformation between the checkerboard and a perpendicular plane, R2, t2, given that this plane is parallel to the ground and at a height of 193.040 cm above it?
Note:
This question is a subpart of Calibration of images to obtain a top-view for points that lie on a same plane. I posted it to ask a more general question.
So, IIRC, the view coordinate system in the toolbox is defined with the origin at the top-left corner of the checkerboard, the x axis pointing right and the y axis pointing down (and of course the z axis is the cross product of x and y). This is easy to verify: just back-project the points [0; 0; 0], [10; 0; 0] and [0; 10; 0] onto one of the calibration images and see where they fall.
Let's call this the "calibration view" frame. Let's also call "floor" the second plane you are interested in.
Now let's assume (big assumption) that you carefully placed the checkerboard in that view so that it was orthogonal to the floor, and with its horizontal edge parallel to the floor. This means that the x axis of the calibration view frame is parallel to the floor, and the y axis is orthogonal to the floor.
The floor is therefore parallel to the (x, z) plane of the calibration view frame. So, if
Rc = [x y z]
is the rotation of the calibration view w.r.t the camera, then the floor has rotation
Rf = [x z y]
(assuming the normal vector of the floor goes into it. If you prefer that it goes up from it, then it would be Rf = [z x -y]).
Further, let's call H the distance (height) of the origin of the calibration view frame from the floor. Remembering that the y axis of that frame is pointing toward the floor, we see that the point F = [0; H; 0] (in view frame coordinates) is on the floor, and we can use it as the origin of the floor frame.
In camera coordinates, vector F is represented by:
Fc = Rc * F = Rc * [0; H; 0]
and if Tc is the (calibrated) translation w.r.t. the camera of the calibration view frame, then that same point on the floor is, in camera coordinates:
F = Tc + Fc
So the 3x4 coordinate transform matrix from the floor to the camera is
Q = [Rf, F]
This should give you a decent estimate, provided that your assumptions hold.
Of course, a much better way to proceed would be to take an image of the checkerboard on the floor...
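A direct transcription of this construction into MATLAB might look like the sketch below. It is written in the column-vector convention used in this answer; if Rc and Tc come straight from MATLAB's extrinsics function, which uses the row-vector convention, transpose them first. H is the height of the board origin above the floor, in the same units as the calibration.

% Rc (3x3), Tc (3x1): rotation and translation of the calibration view w.r.t. the camera.
x = Rc(:, 1);  y = Rc(:, 2);  z = Rc(:, 3);
Rf = [x, z, y];              % floor rotation, normal pointing into the floor
% Rf = [z, x, -y];           % variant with the floor normal pointing up
F  = Tc + Rc * [0; H; 0];    % floor-frame origin, expressed in camera coordinates
Q  = [Rf, F];                % 3x4 floor-to-camera coordinate transform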

Calibration of images to obtain a top-view for points that lie on a same plane

Calibration:
I have calibrated the camera using this vision toolbox in MATLAB, with checkerboard images. After calibration I get cameraParams, which contains:
Camera Extrinsics
RotationMatrices: [3x3x18 double]
TranslationVectors: [18x3 double]
and
Camera Intrinsics
IntrinsicMatrix: [3x3 double]
FocalLength: [1.0446e+03 1.0428e+03]
PrincipalPoint: [604.1474 359.7477]
Skew: 3.5436
Aim:
I have recorded trajectories of some objects in motion using this camera. Each object corresponds to a single point in a frame. Now, I want to project the points such that I get a top-view.
Note that all the points I wish to transform lie on the same plane.
ex: [xcor_i,ycor_i ]
-101.7000 -77.4040
-102.4200 -77.4040
KEYPOINT: This plane is perpendicular to one of the checkerboard images used for calibration. For that image (below), I know the height of the checkerboard's origin from the ground (193.040 cm). The plane to project the points onto is parallel to the ground and perpendicular to that checkerboard.
Code
(Ref: https://stackoverflow.com/a/27260492/3646408 and the answer by @Dima below):
function generate_homographic_matrix()
%% Calibrate camera
% Define images to process
path=['.' filesep 'Images' filesep];
list_imgs=dir([path '*.jpg']);
list_imgs_path=strcat(path,{list_imgs.name});
% Detect checkerboards in images
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(list_imgs_path);
imageFileNames = list_imgs_path(imagesUsed);
% Generate world coordinates of the corners of the squares
squareSize = 27; % in units of 'mm'
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Calibrate the camera
[cameraParams, imagesUsed, estimationErrors] = estimateCameraParameters(imagePoints, worldPoints, ...
'EstimateSkew', true, 'EstimateTangentialDistortion', true, ...
'NumRadialDistortionCoefficients', 3, 'WorldUnits', 'mm');
%% Compute homography for the plane perpendicular to the checkerboard
% Detect the checkerboard
im=imread(['.' filesep 'Images' filesep 'exp_19.jpg']); %exp_19.jpg is the checkerboard orthogonal to the floor
[imagePoints, boardSize] = detectCheckerboardPoints(im);
% Compute rotation and translation of the camera.
[Rc, Tc] = extrinsics(imagePoints, worldPoints, cameraParams);
% Rc (rotation of the calibration view w.r.t. the camera) = [x y z],
% so the floor has rotation Rf = [z x -y] (normal vector of the floor points up).
Rf=[Rc(:,3),Rc(:,1),Rc(:,2)*-1];
% Translate it to the floor
H=452; % distance between the checkerboard origin and the floor (calibration units)
Fc = Rc * [0; H; 0];
Tc = Tc + Fc';
% Combine rotation and translation into one matrix (points on the floor plane
% have Z = 0, so the third row is replaced by the translation):
Rf(3, :) = Tc;
% Compute the homography between the floor plane and the image plane:
H = Rf * cameraParams.IntrinsicMatrix;
save('homographic_matrix.mat','H')
end
%% Transform points
function [x_transf,y_transf] =transform_points(xcor_i,ycor_i)
% creates a projective2D object and then transforms the points forward to
% get a top-view
% xcor_i and ycor_i are 1-D vectors comprising the x-coordinates and
% y-coordinates of the trajectories.
data=load('homographic_matrix.mat');
homo_matrix=data.H;
tform=projective2d(inv(homo_matrix));
[x_transf,y_transf] = transformPointsForward(tform,xcor_i,ycor_i);
end
Quoting from O'Reilly's Learning OpenCV, p. 412:
"Once we have the homography matrix and the height parameter set as we wish, we could
then remove the chessboard and drive the cart around, making a bird’s-eye view video
of the path..."
This is essentially what I wish to achieve.
Abhishek,
I don't entirely understand what you are trying to do. Are your points on a plane, and are you trying to create a bird's eye view of that plane?
If so, then you need to know the extrinsics, R and t, describing the relationship between that plane and the camera. One way to get R and t is to place a checkerboard on the plane, and then use the extrinsics function.
After that, you can follow the directions in the question you cited to get the homography. Once you have the homography, you can create a projective2D object, and use its transformPointsForward method to transform your points.
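A minimal sketch of that pipeline, assuming a checkerboard image taken on the plane of interest, squareSize in the calibration units, and the cameraParams from the calibration above (the file name is hypothetical):

im = imread('board_on_plane.jpg');                       % checkerboard lying on the plane
[imagePoints, boardSize] = detectCheckerboardPoints(im);
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% For best accuracy, undistort the detected points first with undistortPoints.
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);
% Homography from the plane (Z = 0) to the image, in MATLAB's row-vector
% convention: w * [u v 1] = [X Y 1] * Hplane
Hplane = [R(1, :); R(2, :); t] * cameraParams.IntrinsicMatrix;
tform  = projective2d(Hplane);
% Map image trajectory points back onto the plane to get the top view:
[x_top, y_top] = transformPointsInverse(tform, xcor_i, ycor_i);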
Since you have the size of the squares on the grid, given 2 points that you know are connected by an edge of length E (in real-world units), you can calculate their 3D positions.
Taking the camera intrinsic matrix K and the 3D position C and the camera orientation matrix R, you can calculate a ray to each of the points p by doing:
D = R^T * K^-1 * p
Each 3D point is defined as:
P = C + t*D
and you have the constraint that ||P1-P2|| = E
then it's a matter of solving for t1,t2 and finding the 3D position of the two points.
In order to create a top view, you can take the 3D points and project them using a camera model for that top view to generate a new image.
If all your points are on a single plane, it's enough to calculate the position of 3 points, and you can extrapolate the rest.
If your points are located on a plane that you know one coordinate of, you can do it simply for each point. For example, if you know that your camera is located at height h=C.z, and you want to find the 3D location of points in the frame, given that they are on the floor (z=0), then all you have to do is calculate the direction D as above, and then:
t=abs( (h-0)/D.z )
The 0 represent the height of the plane. Substitute for any other value for other planes.
Now that you have the value of t, you can calculate the 3D position of each point: P=C+t*D.
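In MATLAB-style notation, that floor-plane case is just a few lines (K, R and C as defined above; p is a homogeneous pixel [u; v; 1]):

D = R' * (K \ p);             % viewing ray direction in world coordinates (R^T * K^-1 * p)
t = abs((C(3) - 0) / D(3));   % h = C(3) is the camera height; the plane is z = 0
P = C + t * D;                % 3D point on the floor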
Then, to create a top view, create a new camera position and rotation to match your required projection, and you can project each point onto this camera's image plane.
If you want a full image, you can interpolate positions and fill in the blanks where no feature point was present.
For more details, you can always read: http://www.robots.ox.ac.uk/~vgg/hzbook/index.html

Transform the points in the world to the camera coordinate frame

I have X, a 3-by-N matrix containing the point coordinates in 3-dimensional world coordinates; two intrinsic parameters: cam.f, the focal length (a scalar), and cam.c, the image center (principal point, a 2-by-1 vector); and two extrinsic parameters: cam.R, the camera rotation matrix (3-by-3), and cam.t, the camera translation vector (3-by-1).
My question is: how do I transform the world points to camera coordinates in MATLAB?
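Under the common convention that cam.R and cam.t map world coordinates into the camera frame (Xc = R*X + t), a minimal sketch looks like the following; if cam.t is instead the camera center in world coordinates, use cam.R * (X - cam.t) instead.

Xc = cam.R * X + cam.t;                          % 3xN points in the camera frame (implicit expansion, R2016b+)
u  = cam.f * Xc(1, :) ./ Xc(3, :) + cam.c(1);    % pinhole projection with a single focal length
v  = cam.f * Xc(2, :) ./ Xc(3, :) + cam.c(2);
imagePoints = [u; v];                            % 2xN pixel coordinates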