Rotated constellation - simulink

I would like to know how to implement a rotated 64-QAM constellation in Simulink. I have already started my transmission chain, which should contain QAM mapping followed by constellation rotation and then a cell interleaver, but I don't have any idea how to implement the rotated-constellation block.
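For reference, the rotation step itself is just a multiplication of the mapped symbols by a fixed complex exponential, so in Simulink one way might be to place a Gain (or Product) block with a complex gain exp(1i*alpha) between the QAM modulator block and the cell interleaver. A minimal MATLAB sketch of the operation (assuming the Communications Toolbox; the 8.6 degree angle is, I believe, the DVB-T2 value for 64-QAM and may need adjusting for your standard):

M = 64;                                        % 64-QAM
alpha = 8.6;                                   % rotation angle in degrees (assumed DVB-T2 value for 64-QAM)
data = randi([0 M-1], 1000, 1);                % dummy symbol indices
mapped = qammod(data, M);                      % QAM mapping
rotated = mapped .* exp(1i * alpha * pi/180);  % rotated constellation
% In Simulink the same operation can be a Gain block with gain exp(1i*alpha*pi/180)
% placed after the Rectangular QAM Modulator Baseband block.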

Related

How can I rotate connected components so that they are upright in Matlab?

Currently I am working with a sudoku grid and I have the binary image. I am using regionprops to get the area of the connected components and then turn the rest of the image black. After this I call the OCR method to try and read the sudoku numbers. The problem is that this only works if the sudoku grid in the image is straight and upright. If it is rotated even a little bit, I am not able to pull the numbers. This is the code I have so far:
% get grid connected parts
conn_part = bwconncomp(im_binary);
% blacken area outside
stats = regionprops(conn_part,'Area');
im_out = im_binary; % Make mask
im_out(vertcat(conn_part.PixelIdxList{[stats.Area] < 825 | [stats.Area] > 2500})) = 0;
imagesc(im_out);
title("Numbers pulled");
sudokuNum = ocr(im_out,'TextLayout','Block','CharacterSet','0123456789');
sudokuNum.Text;
where im_binary is the binary image, im_out is the output image, and stats is the struct array returned by regionprops containing the area of each connected component.
I know I can rotate the image before getting the OCR results by doing:
im_out = imrotate(im_out, angle)
However I don't know what angle the grid is at since this is part of a function that loops through for multiple images. I looked into the regionprops method because there is an attribute 'Orientation' which I can pull from there but I don't understand how I would actually use it. It also states that regionprops will return a value between -90 and 90, but my image could be rotated by more than 90 degrees.
Don't rotate the connected component or the binary image. First use the binary image to determine the rotation, then rotate the original grey-scale or color input image, and then binarize the rotated image. You'll be able to transform with interpolation, which will improve your results greatly. It does require doing the binarization step twice, but that step usually isn't too expensive.
The regionprops orientation feature is computed by "fitting" an ellipse to the shape. This is meaningful only for elongated objects. For a square sudoku grid this will not yield any valuable information.
Instead, look at the angle at which the smallest Feret diameter was obtained. The Feret diameters are the lengths of the projections at arbitrary angles. At one angle, this projection is smallest. By necessity it will be at an angle corresponding to one of the principal axes of the square. Here is more information about how to compute Feret diameters in MATLAB.
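As a rough, toolbox-free illustration of that idea (im_gray below stands for the original grey-scale image, which is not part of the question's code), the angle of the smallest projection can be found by brute force:

[y, x] = find(im_binary);                        % coordinates of foreground pixels
angles = 0:0.5:179.5;                            % candidate projection directions in degrees
width = zeros(size(angles));
for k = 1:numel(angles)
    d = x*cosd(angles(k)) + y*sind(angles(k));   % project pixels onto this direction
    width(k) = max(d) - min(d);                  % Feret diameter at this angle
end
[~, idx] = min(width);                           % direction of the smallest Feret diameter
gridAngle = mod(angles(idx) + 45, 90) - 45;      % fold into (-45, 45], see the modulo note below
im_upright = imrotate(im_gray, gridAngle, 'bilinear');  % sign convention may need flipping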
A different alternative is e.g. to use the Hough transform to detect the lines of the grid.
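A sketch of that alternative (assuming the Image Processing Toolbox):

[H, theta, rho] = hough(im_binary);            % standard Hough transform of the binary grid
peaks = houghpeaks(H, 10);                     % indices of the 10 strongest lines
lineAngles = theta(peaks(:, 2));               % their angles in degrees
folded = mod(lineAngles + 45, 90) - 45;        % grid lines come in two families 90 degrees apart
gridAngle = median(folded);                    % robust estimate of the grid rotation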
Do note that the geometry of the puzzle will never tell you which side is up. The angle you get here should be taken modulo π/2 (i.e. constrained to the range -π/4 to π/4).
To find out which direction is up, you might try reading the text; if that fails, rotate by 90 degrees and try again.
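A small sketch of that retry idea, reusing im_out and the ocr settings from the question (keep the orientation that yields the most digits):

bestText = '';
for k = 0:3
    res = ocr(imrotate(im_out, 90*k), 'TextLayout', 'Block', ...
              'CharacterSet', '0123456789');
    digits = regexprep(res.Text, '\s', '');    % strip whitespace, keep recognised digits
    if numel(digits) > numel(bestText)
        bestText = digits;                     % keep the orientation that reads the most digits
    end
end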

Getting a trajectory from accelerometer and gyroscope (IMU)

I am well aware of the existence of this question but mine will differ. I also know that there could be significant errors with this approach but I want to understand the configuration also theoretically.
I have some basic questions which I find hard to answer for myself clearly. There is a lot of information about accelerometers and gyroscopes but I still haven't found an explanation "from first principles" of some basic properties.
So I have a plate sensor that contains an accelerometer and a gyroscope. There is also a magnetometer, which I skip for now.
The accelerometer gives, at each time t, the instantaneous acceleration vector a = (ax, ay, az) in m/s^2, expressed in the coordinate system fixed to the sensor.
The gyroscope gives a 3D vector in deg/s describing the instantaneous rate of rotation about the three axes (Ox, Oy and Oz). From this information one can get a rotation matrix that corresponds to an infinitesimal rotation of the coordinate system (relative to the previous moment). Here is some explanation of how to obtain a quaternion that represents R.
So we know that the infinitesimal movement can be calculated using the fact that acceleration is the second derivative of position.
Imagine that the sensor is attached to your hand or leg. At the first moment we can take its position in 3D space to be (0,0,0), with the initial coordinate system attached at this physical point. So for the very first time step we will have
r(1) = 0.5a(0)dt^2
where r is the infinitesimal movement vector, a(0) is the acceleration vector.
In each of the following steps we will use the calculations
r(t+1) = 0.5a(t)dt^2 + v(t)dt + r(t)
where v(t) is the velocity vector, which will be estimated in some way, for example as (r(t) - r(t-1)) / dt.
Also, after each infinitesimal movement we will have to take into account the data from the gyroscope. We will use the rotation matrix to rotate the vector r(t+1).
In this way, probably with tremendous error, I will get some trajectory relative to the initial coordinate system.
My queries are:
Is this algorithm correct in principle? If not, where am I going wrong?
I would very much appreciate some resources with a working example where the first principles are not skipped.
How should I proceed with using a Kalman filter to obtain a better trajectory? How exactly do I pass all the IMU data (accelerometer, gyroscope and magnetometer) to the Kalman filter?
Your conceptual framework is correct, but the equations need some work. The acceleration is measured in the platform frame, which can rotate very quickly, so it is not advisable to integrate acceleration in the platform frame and then rotate the position change. Rather, the accelerations are transformed into a relatively slowly rotating frame and the integration to velocity change and position change is done there, typically a locally-level frame (e.g. North-East-Down or Wander Azimuth) or an Earth-centered frame (ECEF or ECI). Gravity and the Coriolis force must be included in the acceleration.
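To make the idea concrete, here is a minimal dead-reckoning sketch in MATLAB. It is not the full navigation equations from the references below: it assumes a North-East-Down frame, a known initial attitude R and zero initial velocity, and it ignores Earth rate and Coriolis terms. acc is assumed to be N-by-3 specific force in m/s^2, gyro N-by-3 angular rate in rad/s, and dt the sample period.

g = [0; 0; 9.81];                              % gravity in the NED navigation frame
R = eye(3);                                    % initial body-to-nav rotation (assumed known)
v = zeros(3, 1);  p = zeros(3, 1);             % initial velocity and position
for k = 1:size(acc, 1)
    w  = gyro(k, :).' * dt;                    % incremental rotation vector for this step
    Wx = [  0   -w(3)  w(2);
           w(3)   0   -w(1);
          -w(2)  w(1)   0  ];                  % its skew-symmetric matrix
    R  = R * expm(Wx);                         % propagate attitude
    a_nav = R * acc(k, :).' + g;               % specific force into nav frame, add gravity
    p = p + v * dt + 0.5 * a_nav * dt^2;       % integrate position
    v = v + a_nav * dt;                        % integrate velocity
end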
Derivations from first principles can be found in many references, one of my favorites is Strapdown Inertial Navigation Technology by Titterton and Weston. Derivations of the inertial navigation equations in locally-level and Earth-fixed frames are given in Chapter 3.
As you've recognized in your question, the initial velocity is an unknown constant of integration. Without some estimate of the initial velocity, the trajectory resulting from integrating the inertial data can be wildly wrong.

Verify that camera calibration is still valid

How do you determine that the intrinsic and extrinsic parameters you have calculated for a camera at time X are still valid at time Y?
My idea would be:
1. Use a known calibration object (a chessboard) and place it in the camera's field of view at time Y.
2. Calculate the chessboard corner points in the camera's image (at time Y).
3. Define one of the chessboard corner points as the world origin and calculate the world coordinates of all remaining chessboard corners based on that origin.
4. Relate the coordinates from step 3 to the camera coordinate system.
5. Use the parameters calculated at time X to calculate the image points of the points from step 4.
6. Calculate the distances between the points from step 2 and the points from step 5.
Is that a clever way to go about it? I'd eventually like to implement it in MATLAB and later possibly OpenCV. I think I'd know how to do steps 1-2 and step 6. Maybe someone can give a rough implementation for steps 2-5. In particular, I'm unsure how to relate the "chessboard world coordinate system" to the "camera world coordinate system", which I believe I would have to do.
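To make the steps concrete, here is my tentative sketch of steps 2-6 (I'm not at all sure these are the right functions; it assumes the Computer Vision Toolbox, squareSize is the known checkerboard square size, paramsX is the cameraParameters object saved from the calibration at time X, and the file name is just a placeholder):

I = imread('checkerboard_at_time_Y.png');                         % image taken at time Y
[imagePoints, boardSize] = detectCheckerboardPoints(I);           % step 2
worldPoints = generateCheckerboardPoints(boardSize, squareSize);  % step 3: first corner = world origin
undistorted = undistortPoints(imagePoints, paramsX);              % remove lens distortion first
[Rmat, t] = extrinsics(undistorted, worldPoints, paramsX);        % step 4: board pose in camera CS
projected = worldToImage(paramsX, Rmat, t, ...
                         [worldPoints, zeros(size(worldPoints, 1), 1)]);   % step 5
errPx = sqrt(mean(sum((projected - undistorted).^2, 2)));         % step 6: mean reprojection error in pixels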
Thanks!
If you have a single camera you can easily follow the steps from this article:
Evaluating the Accuracy of Single Camera Calibration
For achieving step 2, you can easily use the detectCheckerboardPoints function from MATLAB:
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames);
Assuming that you are talking about stereo cameras: for stereo pairs, imagePoints(:,:,:,1) are the points from the first set of images and imagePoints(:,:,:,2) are the points from the second set of images. The output contains M [x y] coordinates; each coordinate represents a point where square corners are detected on the checkerboard. The number of points the function returns depends on the value of boardSize, which indicates the number of squares detected. The function detects the points with sub-pixel accuracy.
As you can see in the following image, the points are given relative to the first point, which covers your third step.
[The image is from this page at MATHWORKS.]
You can consider point 1 as the origin of your coordinate system (0,0). The directions of the axes are shown in the image, and you know the distance between the points (in world coordinates), so it is just a matter of depth estimation.
To find a transformation matrix between the points in the world CS and the points in the camera CS, you should collect a set of points and perform an SVD to estimate the transformation matrix.
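A minimal sketch of that SVD alignment (the Kabsch method), assuming Pw and Pc are matched N-by-3 sets of the same points expressed in the world and camera coordinate systems:

cw = mean(Pw, 1);  cc = mean(Pc, 1);           % centroids of each point set
H  = (Pw - cw)' * (Pc - cc);                   % 3-by-3 cross-covariance matrix
[U, ~, V] = svd(H);
R = V * diag([1 1 sign(det(V * U'))]) * U';    % rotation (forces det(R) = +1)
t = cc' - R * cw';                             % translation
% a world point Xw maps into the camera CS as Xc = R * Xw + t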
But I would rather estimate the camera parameters again and compare them with the initial parameters from time X. This is easier if you have saved the images that were used when calibrating the camera at time X. By repeating the calibration process using those images, you should get very similar results if the camera calibration is still valid.
Edit: why do you need the set of images used in the calibration process at time X?
You had a set of images to do the calibration for the first time, right? To recalibrate the camera you need to use a new set of images, but for checking the previous calibration you can use the previous images. If the parameters of the camera have changed, there will be an error between the re-estimation and the first estimation. This can be used for evaluating the validity of the calibration, not for recalibrating the camera.

Understanding of openCV undistortion

I'm receiving depth images from a ToF camera via MATLAB. The camera's drivers, which compute x, y, z coordinates from the depth image, use OpenCV functions that are implemented in MATLAB via MEX files.
But later on I can't use those drivers any more, nor OpenCV functions, so I need to implement the 2D-to-3D mapping on my own, including the compensation of radial distortion. I have already obtained the camera parameters, and the computation of the x, y, z coordinates for each pixel of the depth image is working. Until now I have been solving the implicit equations of the undistortion with Newton's method (which isn't really fast...), but I want to implement the undistortion the way the OpenCV function does it.
... and there is my problem: I don't really understand it, and I hope you can help me out. How does it actually work? I tried to search through the forum but haven't found any useful threads concerning this case.
Greetings!
The equations of the projection of a 3D point [X; Y; Z] to a 2D image point [u; v] are provided on the documentation page related to camera calibration (source: opencv.org). In short, with normalized coordinates x' = X/Z, y' = Y/Z and r^2 = x'^2 + y'^2, the distorted normalized coordinates are
x'' = x' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y' * (1 + k1*r^2 + k2*r^4 + k3*r^6) / (1 + k4*r^2 + k5*r^4 + k6*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
and the pixel coordinates are u = fx*x'' + cx and v = fy*y'' + cy.
In the case of lens distortion, the equations are non-linear and depend on 3 to 8 parameters (k1 to k6, p1 and p2). Hence, it would normally require a non-linear solving algorithm (e.g. Newton's method, the Levenberg-Marquardt algorithm, etc.) to invert such a model and estimate the undistorted coordinates from the distorted ones. And this is what is used behind the function undistortPoints, with tuned parameters making the optimization fast but a little inaccurate.
However, in the particular case of image lens correction (as opposed to point correction), there is a much more efficient approach based on a well-known image re-sampling trick. This trick is that, in order to obtain a valid intensity for each pixel of your destination image, you have to transform coordinates in the destination image into coordinates in the source image, and not the opposite as one would intuitively expect. In the case of lens distortion correction, this means that you actually do not have to inverse the non-linear model, but just apply it.
Basically, the algorithm behind function undistort is the following. For each pixel of the destination lens-corrected image do:
Convert the pixel coordinates (u_dst, v_dst) to normalized coordinates (x', y') using the inverse of the calibration matrix K,
Apply the lens-distortion model, as displayed above, to obtain the distorted normalized coordinates (x'', y''),
Convert (x'', y'') to distorted pixel coordinates (u_src, v_src) using the calibration matrix K,
Use the interpolation method of your choice to find the intensity/depth associated with the pixel coordinates (u_src, v_src) in the source image, and assign this intensity/depth to the current destination pixel.
Note that if you are interested in undistorting the depthmap image, you should use a nearest-neighbor interpolation, otherwise you will almost certainly interpolate depth values at object boundaries, resulting in unwanted artifacts.
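Since the question mentions re-implementing this in MATLAB without OpenCV, here is a sketch of the four steps above for the simple (k1, k2, k3, p1, p2) case; depth_src, the intrinsics fx, fy, cx, cy and the distortion coefficients are assumed to come from your calibration data:

[h, w] = size(depth_src);                       % distorted source depth image
[u_grid, v_grid] = meshgrid(0:w-1, 0:h-1);      % destination (undistorted) pixel grid
x = (u_grid - cx) / fx;                         % step 1: pixels -> normalized coords (inverse of K)
y = (v_grid - cy) / fy;
r2 = x.^2 + y.^2;                               % step 2: forward distortion model
radial = 1 + k1*r2 + k2*r2.^2 + k3*r2.^3;
x_d = x.*radial + 2*p1*x.*y + p2*(r2 + 2*x.^2);
y_d = y.*radial + p1*(r2 + 2*y.^2) + 2*p2*x.*y;
u_src = fx*x_d + cx;                            % step 3: distorted normalized coords -> source pixels
v_src = fy*y_d + cy;
depth_dst = interp2(u_grid, v_grid, depth_src, ...
                    u_src, v_src, 'nearest');   % step 4: nearest-neighbour lookup (good for depth maps)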
The above answer is correct, but do note that the UV coordinates here are in screen space and centered around (0,0), rather than "real" UV coordinates.
Source: own re-implementation using Python/OpenGL. Code:
import numpy as np

def correct_pt(uv, K, Kinv, ds):
    # homogeneous pixel coordinates: [u, v, 1]
    uv_3 = np.stack((uv[:, 0], uv[:, 1], np.ones(uv.shape[0])), axis=-1)
    xy_ = uv_3 @ Kinv.T                      # pixel -> normalized coordinates
    # note: the norm runs over all three components, including the homogeneous 1
    r = np.linalg.norm(xy_, axis=-1)
    # radial distortion factor (k1, k2, k3 stored in ds[0], ds[1], ds[4])
    coeff = 1 + ds[0] * r**2 + ds[1] * r**4 + ds[4] * r**6
    xy__ = xy_ * coeff[:, np.newaxis]
    return (xy__ @ K.T)[:, 0:2]              # normalized -> pixel coordinates

pca in matlab - 2D curve stretching

I have N 3D observations taken from an optical motion capture system in XYZ form.
The motion that was captured was just a simple circle arc, derived from a rigid body with fixed axis of rotation.
I used the princomp function in MATLAB to get all marker points onto the same plane, i.e. the plane in which the motion took place.
(See a pic representing 3D data on the plane that was found, below)
What I want to do after the previous step is to look at the fitted data on the plane that was found and get the curve of the captured motion in 2D.
In the princomp how-to it is said that "The first two coordinates of the principal component scores give the projection of each point onto the plane, in the coordinate system of the plane." (from the "Fitting an Orthogonal Regression Using Principal Components Analysis" article on the MathWorks help site)
So I thought that if I just plot those PC scores, plot(score(:,1), score(:,2)), I'd get the motion curve. Instead, what I got is this.
(See a pic representing curve data in 2D derived from pc scores, below)
The 2D curve seems stretched and nonlinear (different y values for the same x values) when it shouldn't be. The curve I am looking for should be fittable with just a simple polynomial (polyfit) or a circle fit in MATLAB.
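For reference, the projection and plotting step boils down to something like this (data stands for my N-by-3 matrix of marker coordinates; the axis equal at the end is just to rule out a stretch caused purely by unequal plot scaling):

[coeff, score] = princomp(data);            % columns of coeff span the best-fit plane (pca in newer MATLAB)
plot(score(:, 1), score(:, 2), '.');        % motion curve in the plane's own coordinate system
axis equal                                  % equal scaling on both axes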
Is this happening because the plane that was found looks like a rhombus relative to the original coordinate system, and the PC axes are rotated with respect to the basis of the plane in a way that produces this stretch?
Then I thought this might be happening because of the different coordinate systems of the optical system and MATLAB. The optical system's (i.e. the cameras') coordinate system is XZY-oriented, and MATLAB's default (I think) is XYZ-oriented. I transformed my data to MATLAB's coordinate system through a rotation matrix and ran princomp again, but I got the same stretch in the 2D curve (the new curve just had a different orientation).
Somewhere else I read that "Principal Components Analysis chooses the first PCA axis as that line that goes through the centroid, but also minimizes the square of the distance of each point to that line. Thus, in some sense, the line is as close to all of the data as possible. Equivalently, the line goes through the maximum variation in the data. The second PCA axis also must go through the centroid, and also goes through the maximum variation in the data, but with a certain constraint: It must be completely uncorrelated (i.e. at right angles, or "orthogonal") to PCA axis 1."
I know that I am missing something, but I have a problem understanding why I get a stretched curve. What do I have to do to get the curve right?
Thanks in advance.
EDIT: Here is a sample data file (3 columns XYZ coords for 2 markers)
www.sendspace.com/file/2hiezc