PCL: merging two sets of points into one cloud and visualizing with PCL CloudViewer - visualization

I am trying to merge two sets of points from two different views into a single point cloud and visualize it with the PCL cloud viewer.
mPtrPointCloud->points.clear();
mPtrPointCloud->points.resize(mFrameSize * 2);
auto it = mPtrPointCloud->points.begin();

received = PopReceived();
if (received != nullptr)
{
    // p_data_cloud = (float*)received->mTransformedPC.data;
    p_data_cloud = (float*)received->mCVPointCloud.data;
    index = 0;
    for (size_t i = 0; i < mFrameSize; ++i)
    {
        float X = p_data_cloud[index];
        if (!isValidMeasure(X)) // Checking if it's a valid point
        {
            it->x = it->y = it->z = it->rgb = 0;
        }
        else
        {
            it->x = X;
            it->y = p_data_cloud[index + 1];
            it->z = p_data_cloud[index + 2];
            it->rgb = convertColor(p_data_cloud[index + 3]); // Convert a 32-bit float into the PCL rgb format
        }
        index += 4;
        ++it;
    }
}

frame = PopFrame();
if (frame != nullptr)
{
    // p_data_cloud = frame->mSLPointCloud.getPtr<float>();
    p_data_cloud = (float*)frame->mCVPointCloud.data;
    index = 0;
    for (size_t i = 0; i < mFrameSize; ++i)
    {
        float X = p_data_cloud[index];
        if (!isValidMeasure(X)) // Checking if it's a valid point
        {
            it->x = it->y = it->z = it->rgb = 0;
        }
        else
        {
            it->x = X;
            it->y = p_data_cloud[index + 1];
            it->z = p_data_cloud[index + 2];
            it->rgb = convertColor(p_data_cloud[index + 3]); // Convert a 32-bit float into the PCL rgb format
        }
        index += 4;
        ++it;
    }
}

mPtrPCViewer->showCloud(mPtrPointCloud);
What I want is for the two sets of points to be "fused" into a single frame. However, the two sets still seem to be shown separately, one after the other.
Could anyone explain how to really merge two sets of points into one cloud? Thanks.

(1) Create a new, empty point cloud, which will hold the merged cloud at the end:
pcl::PointCloud<pcl::PointXYZ> mergedCloud;
(2) Transform both point clouds to the origin (sensor_origin_ is an Eigen::Vector4f and sensor_orientation_ an Eigen::Quaternionf, so the transform is built from a translation and a rotation):
pcl::PointCloud<pcl::PointXYZ> received_transformed;
Eigen::Affine3f received_transformation = Eigen::Translation3f(received.sensor_origin_.head<3>()) * received.sensor_orientation_;
pcl::transformPointCloud(received, received_transformed, received_transformation);

pcl::PointCloud<pcl::PointXYZ> frame_transformed;
Eigen::Affine3f frame_transformation = Eigen::Translation3f(frame.sensor_origin_.head<3>()) * frame.sensor_orientation_;
pcl::transformPointCloud(frame, frame_transformed, frame_transformation);
(3) Use the += operator to concatenate the clouds:
mergedCloud += received_transformed;
mergedCloud += frame_transformed;
(4) Visualize the merged point cloud (showCloud expects a shared pointer):
mPtrPCViewer->showCloud(mergedCloud.makeShared());
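For reference, here is a minimal, self-contained sketch of just the concatenation step; the cloud names and points are made up for illustration:
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

int main()
{
    // Two dummy clouds standing in for the "received" and "frame" views
    pcl::PointCloud<pcl::PointXYZ> cloudA, cloudB;
    pcl::PointXYZ p;
    p.x = 0.0f; p.y = 0.0f; p.z = 0.0f;
    cloudA.push_back(p);
    p.x = 1.0f;
    cloudB.push_back(p);

    // operator+= appends one cloud's points to another, so 'merged'
    // really contains both point sets in a single cloud
    pcl::PointCloud<pcl::PointXYZ> merged;
    merged += cloudA;
    merged += cloudB;
    return 0; // merged.size() == cloudA.size() + cloudB.size()
}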
That's it. See also these examples:
http://pointclouds.org/documentation/tutorials/concatenate_clouds.php
http://pointclouds.org/documentation/tutorials/matrix_transform.php


Why does my buffer geometry fail when I try to load in vertices, faces and normals from a .mat file?

I want to load my MATLAB geometry into my three.js scene. My 3D data is saved in a struct .mat file which contains .vertices, .faces, .VertexNormals and .VertexColorData arrays. I am able to load it into JavaScript, create a BufferGeometry, and use set attributes to store the data in a mesh geometry.
var keyName = keysArray[0];
meshGeometry = new THREE.BufferGeometry();

var index = 0;
var positions = new Float32Array(bfjson.data[keyName].vertices.length * 3);
for (let i = 0; i < bfjson.data[keyName].vertices.length; i++) {
    positions[index++] = bfjson.data[keyName].vertices[i][0];
    positions[index++] = bfjson.data[keyName].vertices[i][1];
    positions[index++] = bfjson.data[keyName].vertices[i][2];
}
meshGeometry.setAttribute(
    'position',
    new THREE.BufferAttribute(positions, 3));

index = 0;
var vectornormals = new Float32Array(bfjson.data[keyName].VertexNormals.length * 3);
for (let i = 0; i < bfjson.data[keyName].VertexNormals.length; i++) {
    vectornormals[index++] = bfjson.data[keyName].VertexNormals[i][0];
    vectornormals[index++] = bfjson.data[keyName].VertexNormals[i][1];
    vectornormals[index++] = bfjson.data[keyName].VertexNormals[i][2];
}
meshGeometry.setAttribute(
    'normal',
    new THREE.BufferAttribute(vectornormals, 3));

index = 0;
//var faces = new Uint16Array(bfjson.data[keyName].faces.length * 3);
var faces = [];
for (let i = 0; i < bfjson.data[keyName].faces.length; i++) {
    faces[index++] = bfjson.data[keyName].faces[i][0];
    faces[index++] = bfjson.data[keyName].faces[i][1];
    faces[index++] = bfjson.data[keyName].faces[i][2];
}
meshGeometry.setIndex(faces);

// default color attribute
const colors = [];
for (let i = 0, n = meshGeometry.attributes.position.count; i < n; ++i) {
    colors.push(1, 1, 1);
}
meshGeometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));

var CData = [];
for (let i = 0; i < bfjson.data[keyName].CData.length; i++) {
    CData[i] = bfjson.data[keyName].CData[i];
}
meshGeometry.setAttribute('perfusion', new THREE.Float32BufferAttribute(CData, 1));

mesh.geometry = meshGeometry;
updateColors();
updateColors();
The vertex coloring works fine. However, I end up with a mesh whose indexed faces and normals do not connect into a proper surface.
I am not sure what I am doing wrong. I would be extremely grateful for any help provided.
Edit--
I have made a jsfiddle to help.
https://jsfiddle.net/marieO/5zdhsk78/68/
But you need to download the .mat file first then upload it to the scene. (as I was unable to add it to the jsfiddle)
https://dev.kingsvirtualanatomyandhistology.kcl.ac.uk//models/mat/p21_newmod.mat
Thanks for posting a working example with the steps needed to reproduce the error. It makes it much easier to help.
You have 652 vertices in your Matlab geometry. When using .setIndex(), these indices have to be in the [0, 651] range, because JavaScript arrays start at index 0. However, your faces data ranges from [1, 652], which means all your triangles are off by 1 vertex.
This is easily solvable by adding a -1 when assigning the index:
var index = 0;
var faces = [];
for (let i = 0; i < bfjson.data[keyName].faces.length; i++) {
    faces[index++] = bfjson.data[keyName].faces[i][0] - 1;
    faces[index++] = bfjson.data[keyName].faces[i][1] - 1;
    faces[index++] = bfjson.data[keyName].faces[i][2] - 1;
}
meshGeometry.setIndex(faces);
Result:

How to generate a honeycomb field in Unity?

I need to generate such a field:
Photo
But I don't know how to do it. Here is what I got instead:
My result
My code:
[ContextMenu("Generate grid")]
public void GenerateGrid()
{
for(int x = 0; x < _gridSize.x; x++)
{
for (int z = 0; z < _gridSize.z; z++)
{
var meshSize = _cell.GetComponent<MeshRenderer>().bounds.size;
var position = new Vector3(x * (meshSize.x + _offset), 0, z * (meshSize.z + _offset));
var cell = Instantiate(_cell, position, Quaternion.Euler(_rotationOffset), _parent.transform);
cell.GridActions = GridActions;
cell.Position = new Vector2(x, z);
cell.name = $"Cell: x:{x}, z:{z}";
GridActions.AllCell.Add(cell);
}
}
}
Simply, for every odd z value, shift the cell sideways by half a cell size, and pull the rows in toward the previous row so the cells nest (for regular hexagons, the usual row spacing is 3/4 of the cell height). I didn't test it, but here is code that might do that:
[ContextMenu("Generate grid")]
public void GenerateGrid()
{
for(int x = 0; x < _gridSize.x; x++)
{
for (int z = 0; z < _gridSize.z; z++)
{
int xResize = 0;
int zResize = 0;
if (z % 2 == 1) {
xResize = meshSize.x / 2;
zResize = meshSize.z / 2;
}
var meshSize = _cell.GetComponent<MeshRenderer>().bounds.size;
var position = new Vector3(x * (meshSize.x + _offset - xResize), 0, z * (meshSize.z + _offset - zResize));
var cell = Instantiate(_cell, position, Quaternion.Euler(_rotationOffset), _parent.transform);
cell.GridActions = GridActions;
cell.Position = new Vector2(x, z);
cell.name = $"Cell: x:{x}, z:{z}";
GridActions.AllCell.Add(cell);
}
}
}

Mono for Android application

I am working on a gaming application in Mono for Android. I want sample code for a background image scrolling vertically from top to bottom. I have code, but it is not working properly, so please can somebody help me.
mBGFarMoveY = mBGFarMoveY + 3;
int newFarY = mBackgroundImageFar.Height + mBGFarMoveY;
if (newFarY <= 0)
{
    mBGFarMoveY = 0;
    canvas.DrawBitmap(mBackgroundImageFar, 0, mBGFarMoveY, null);
}
else
{
    canvas.DrawBitmap(mBackgroundImageFar, 0, mBGFarMoveY, null);
    canvas.DrawBitmap(mBackgroundImageFar, 0, newFarY, null);
}
Thanks & Regards,
Chakradhar.
What do you see and what do you expect? There are several issues with your code as far as I can see:
- The position is not calculated based upon time, so scrolling will be jumpy.
- The overlap code doesn't look too great and blits a lot out of bounds. I'm not sure what 'canvas' is, but if it is the one from android.graphics, you can specify the source and destination rectangles to blit rather than just the 'y' position.
So, something like the following (untested, and I've not written code for this platform before, but you should get the idea):
// position derived from elapsed time, so the scroll speed is frame-rate independent
y = (time_seconds * pixels_per_second);
y = y % image.Height; // wrap once a full image height has scrolled past

// source rectangle: the part of the image from y downwards
src_rect.left = 0;
src_rect.right = image.Width - 1;
src_rect.top = y;
src_rect.bottom = image.Height - 1;
// destination rectangle: initially the full drawing area
dst_rect.left = 0;
dst_rect.right = image.Width - 1;
dst_rect.top = 0;
dst_rect.bottom = image.Height - 1;

if (y == 0) {
    // no wrap-around needed: a single blit covers the screen
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
}
else {
    // draw the lower part of the image at the top of the screen...
    dst_rect.bottom = src_rect.height() - 1;
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
    // ...and the upper part of the image directly below it
    src_rect.top = 0;
    src_rect.bottom = y - 1;
    dst_rect.top = dst_rect.bottom + 1;
    dst_rect.bottom = image.Height - 1;
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
}

imregionalmax MATLAB function's equivalent in OpenCV

I have an image of connected components (filled circles). If I want to segment them, I can use the watershed algorithm. I prefer writing my own watershed function rather than using the one built into OpenCV. How do I find the regional maxima of objects using OpenCV?
I wrote a function myself. My results were quite similar to MATLAB's, although not exact. This function is implemented for CV_32F, but it can easily be modified for other types.
I mark all the points that are not part of a minimum region by checking all the neighbors. The remaining regions are either minima, maxima or areas of inflection.
I use connected components to label each region.
I check each region for any point belonging to a maximum; if there is one, I push that label into a vector.
Finally, I sort the bad labels, erase all duplicates, and then mark all the points in the output as not minima.
All that remains are the regions of minima.
Here is the code:
// output is a binary image
// 1: not a min region
// 0: part of a min region
// 2: not sure if min or not
// 3: uninitialized
void imregionalmin(cv::Mat& img, cv::Mat& out_img)
{
    // pad the border of img with 1 and copy to img_pad
    cv::Mat img_pad;
    cv::copyMakeBorder(img, img_pad, 1, 1, 1, 1, cv::BORDER_CONSTANT, 1);
    // initialize the output to 3 (uninitialized)
    out_img = cv::Mat::ones(img.rows, img.cols, CV_8U) + 2;
    // pointers into the matrices
    float* in = (float *)(img_pad.data);
    uchar* out = (uchar *)(out_img.data);
    // sizes of the matrices
    int in_size = img_pad.cols * img_pad.rows;
    int out_size = img.cols * img.rows;
    int x, y;
    for (int i = 0; i < out_size; i++) {
        // find x, y indexes
        y = i % img.cols;
        x = i / img.cols;
        neighborCheck(in, out, i, x, y, img_pad.cols); // all regions are either min or max
    }
    cv::Mat label;
    cv::connectedComponents(out_img, label);
    int* lab = (int *)(label.data);
    in = (float *)(img.data);
    in_size = img.cols * img.rows;
    std::vector<int> bad_labels;
    for (int i = 0; i < out_size; i++) {
        // find x, y indexes
        y = i % img.cols;
        x = i / img.cols;
        if (lab[i] != 0) {
            if (neighborCleanup(in, out, i, x, y, img.rows, img.cols) == 1) {
                bad_labels.push_back(lab[i]);
            }
        }
    }
    std::sort(bad_labels.begin(), bad_labels.end());
    bad_labels.erase(std::unique(bad_labels.begin(), bad_labels.end()), bad_labels.end());
    for (int i = 0; i < out_size; ++i) {
        if (lab[i] != 0) {
            if (std::find(bad_labels.begin(), bad_labels.end(), lab[i]) != bad_labels.end()) {
                out[i] = 0;
            }
        }
    }
}

int inline neighborCleanup(float* in, uchar* out, int i, int x, int y, int x_lim, int y_lim)
{
    int index;
    for (int xx = x - 1; xx < x + 2; ++xx) {
        for (int yy = y - 1; yy < y + 2; ++yy) {
            if (((xx == x) && (yy == y)) || xx < 0 || yy < 0 || xx >= x_lim || yy >= y_lim)
                continue;
            index = xx * y_lim + yy;
            if ((in[i] == in[index]) && (out[index] == 0))
                return 1;
        }
    }
    return 0;
}

void inline neighborCheck(float* in, uchar* out, int i, int x, int y, int x_lim)
{
    int indexes[8], cur_index;
    // the 8 neighbors of pixel (x+1, y+1) in the padded image
    indexes[0] = x * x_lim + y;
    indexes[1] = x * x_lim + y + 1;
    indexes[2] = x * x_lim + y + 2;
    indexes[3] = (x + 1) * x_lim + y + 2;
    indexes[4] = (x + 2) * x_lim + y + 2;
    indexes[5] = (x + 2) * x_lim + y + 1;
    indexes[6] = (x + 2) * x_lim + y;
    indexes[7] = (x + 1) * x_lim + y;
    cur_index = (x + 1) * x_lim + y + 1;
    for (int t = 0; t < 8; t++) {
        if (in[indexes[t]] < in[cur_index]) {
            out[i] = 0;
            break;
        }
    }
    if (out[i] == 3)
        out[i] = 1;
}
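For illustration, a hypothetical call site for the function above might look like this (the file name is made up; the input must be single-channel CV_32F, e.g. a distance transform as in the question):
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // A binary image of filled circles, as in the question (hypothetical file)
    cv::Mat bin = cv::imread("blobs.png", cv::IMREAD_GRAYSCALE);
    cv::threshold(bin, bin, 127, 255, cv::THRESH_BINARY);

    // The distance transform yields a CV_32F image whose regional
    // minima/maxima are useful seeds for watershed
    cv::Mat dist;
    cv::distanceTransform(bin, dist, cv::DIST_L2, 3);

    cv::Mat regions;
    imregionalmin(dist, regions); // the function defined above
    return 0;
}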
The following listing is a function similar to MATLAB's "imregionalmax". It looks for at most nLocMax local maxima above threshold, where the found local maxima are at least minDistBtwLocMax pixels apart. It returns the actual number of local maxima found. Notice that it uses OpenCV's minMaxLoc to find global maxima. It is "opencv-self-contained" except for the (easy to implement) function vdist, which computes the (Euclidean) distance between points (r,c) and (row,col); a sketch of it is given after the listing.
input is a one-channel CV_32F matrix, and locations is an nLocMax (rows) by 2 (columns) CV_32S matrix.
int imregionalmax(Mat input, int nLocMax, float threshold, float minDistBtwLocMax, Mat locations)
{
    Mat scratch = input.clone();
    int nFoundLocMax = 0;
    for (int i = 0; i < nLocMax; i++) {
        Point location;
        double maxVal;
        minMaxLoc(scratch, NULL, &maxVal, NULL, &location);
        if (maxVal > threshold) {
            nFoundLocMax += 1;
            int row = location.y;
            int col = location.x;
            locations.at<int>(i, 0) = row;
            locations.at<int>(i, 1) = col;
            // suppress a window around the found maximum so it is not found again
            int r0 = (row - minDistBtwLocMax > -1 ? row - minDistBtwLocMax : 0);
            int r1 = (row + minDistBtwLocMax < scratch.rows ? row + minDistBtwLocMax : scratch.rows - 1);
            int c0 = (col - minDistBtwLocMax > -1 ? col - minDistBtwLocMax : 0);
            int c1 = (col + minDistBtwLocMax < scratch.cols ? col + minDistBtwLocMax : scratch.cols - 1);
            for (int r = r0; r <= r1; r++) {
                for (int c = c0; c <= c1; c++) {
                    if (vdist(Point2DMake(r, c), Point2DMake(row, col)) <= minDistBtwLocMax) {
                        scratch.at<float>(r, c) = 0.0;
                    }
                }
            }
        } else {
            break;
        }
    }
    return nFoundLocMax;
}
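The listing leaves vdist (and the small Point2DMake helper it calls) to the reader; a minimal sketch, assuming points are held as cv::Point2f, might be:
#include <opencv2/core.hpp>
#include <cmath>

// Hypothetical helper: packs (r, c) into a cv::Point2f
static cv::Point2f Point2DMake(float r, float c) {
    return cv::Point2f(r, c);
}

// Euclidean distance between two points
static float vdist(const cv::Point2f& a, const cv::Point2f& b) {
    float dr = a.x - b.x;
    float dc = a.y - b.y;
    return std::sqrt(dr * dr + dc * dc);
}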
I do not know if it is what you want, but in my answer to this post, I gave some code to find local maxima (peaks) in a grayscale image (resulting from a distance transform).
The approach relies on subtracting the original image from the dilated image and finding the zero pixels; in outline it looks like the sketch below.
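A minimal version of that idea, assuming a single-channel image (comparing against the dilated image is the same test as subtracting and looking for zeros):
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// A pixel that equals the maximum over its neighborhood (i.e. the dilated
// image) is a local maximum.
cv::Mat localMaxima(const cv::Mat& img)
{
    cv::Mat dilated, peaks;
    cv::dilate(img, dilated, cv::Mat());          // 3x3 max filter by default
    cv::compare(img, dilated, peaks, cv::CMP_GE); // 255 where img == dilated
    return peaks;
}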
I hope it helps,
Good luck
I had the same problem some time ago, and the solution was to reimplement the imregionalmax algorithm in OpenCV/C++. It is not that complicated, because you can find the source code of the function in the MATLAB distribution (somewhere in the toolbox). All you have to do is read it carefully and understand the algorithm described there. Then rewrite it, dropping the MATLAB-specific checks, and you'll have it.

Teaching a Neural Net: Bipolar XOR

I'm trying to teach a neural net with 2 inputs, 4 hidden nodes (all in the same layer) and 1 output node. The binary representation works fine, but I have problems with the bipolar one. I can't figure out why, but the total error will sometimes converge to the same number around 2.xx. My sigmoid is 2/(1 + exp(-x)) - 1. Perhaps I'm sigmoiding in the wrong place. For example, to calculate the output error, should I be comparing the sigmoided output with the expected value or with the sigmoided expected value?
I was following this website: http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html , but they use different functions than I was instructed to use. Even when I did try to implement their functions, I still ran into the same problem. Either way, I get stuck about half the time at the same number (a different number for different implementations). Please tell me if I have made a mistake in my code somewhere, or if this is normal (I don't see how it could be). Momentum is set to 0. Is this a common zero-momentum problem? The error functions we are supposed to be using are:
if ui is an output unit:
    Error(i) = (Ci - ui) * f'(Si)
if ui is a hidden unit:
    Error(i) = Error(Output) * weight(i to output) * f'(Si)
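(For reference, the derivative below is consistent with that sigmoid: writing f(x) = 2/(1 + exp(-x)) - 1 and differentiating gives f'(x) = 0.5 * (1 - f(x)^2), which is the fBipolar branch of fPrime evaluated at the already-sigmoided activation u = f(S). So fPrime must be fed activations, not weighted sums, and the output error compares the sigmoided output with the raw target Ci.)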
public double sigmoid( double x ) {
    double fBipolar, fBinary, temp;
    temp = (1 + Math.exp(-x));
    fBipolar = (2 / temp) - 1;
    fBinary = 1 / temp;
    if(bipolar){
        return fBipolar;
    }else{
        return fBinary;
    }
}

// Initialize the weights to random values.
private void initializeWeights(double neg, double pos) {
    for(int i = 0; i < numInputs + 1; i++){
        for(int j = 0; j < numHiddenNeurons; j++){
            inputWeights[i][j] = Math.random() - pos;
            if(inputWeights[i][j] < neg || inputWeights[i][j] > pos){
                print("ERROR ");
                print(inputWeights[i][j]);
            }
        }
    }
    for(int i = 0; i < numHiddenNeurons + 1; i++){
        hiddenWeights[i] = Math.random() - pos;
        if(hiddenWeights[i] < neg || hiddenWeights[i] > pos){
            print("ERROR ");
            print(hiddenWeights[i]);
        }
    }
}

// Computes output of the NN without training, i.e. a forward pass
public double outputFor ( double[] argInputVector ) {
    for(int i = 0; i < numInputs; i++){
        inputs[i] = argInputVector[i];
    }
    double weightedSum = 0;
    for(int i = 0; i < numHiddenNeurons; i++){
        weightedSum = 0;
        for(int j = 0; j < numInputs + 1; j++){
            weightedSum += inputWeights[j][i] * inputs[j];
        }
        hiddenActivation[i] = sigmoid(weightedSum);
    }
    weightedSum = 0;
    for(int j = 0; j < numHiddenNeurons + 1; j++){
        weightedSum += (hiddenActivation[j] * hiddenWeights[j]);
    }
    return sigmoid(weightedSum);
}

// Computes the derivative of f
public static double fPrime(double u){
    double fBipolar, fBinary;
    fBipolar = 0.5 * (1 - Math.pow(u,2));
    fBinary = u * (1 - u);
    if(bipolar){
        return fBipolar;
    }else{
        return fBinary;
    }
}

// This method is used to update the weights of the neural net.
public double train ( double [] argInputVector, double argTargetOutput ){
    double output = outputFor(argInputVector);
    double lastDelta;
    double outputError = (argTargetOutput - output) * fPrime(output);
    if(outputError != 0){
        for(int i = 0; i < numHiddenNeurons + 1; i++){
            hiddenError[i] = hiddenWeights[i] * outputError * fPrime(hiddenActivation[i]);
            deltaHiddenWeights[i] = learningRate * outputError * hiddenActivation[i] + (momentum * lastDelta);
            hiddenWeights[i] += deltaHiddenWeights[i];
        }
        for(int in = 0; in < numInputs + 1; in++){
            for(int hid = 0; hid < numHiddenNeurons; hid++){
                lastDelta = deltaInputWeights[in][hid];
                deltaInputWeights[in][hid] = learningRate * hiddenError[hid] * inputs[in] + (momentum * lastDelta);
                inputWeights[in][hid] += deltaInputWeights[in][hid];
            }
        }
    }
    return 0.5 * (argTargetOutput - output) * (argTargetOutput - output);
}
General coding comments:
initializeWeights(-1.0, 1.0);
may not actually get the initial values you were expecting.
initializeWeights should probably have:
inputWeights[i][j] = Math.random() * (pos - neg) + neg;
// ...
hiddenWeights[i] = (Math.random() * (pos - neg)) + neg;
instead of:
Math.random() - pos;
so that this works:
initializeWeights(0.0, 1.0);
and gives you initial values between 0.0 and 1.0 rather than between -1.0 and 0.0.
lastDelta is used before it has been assigned a value:
deltaHiddenWeights[i] = learningRate * outputError * hiddenActivation[i] + (momentum * lastDelta);
I'm not sure if the + 1 in numInputs + 1 and numHiddenNeurons + 1 is necessary.
Remember to watch out for integer division: 5/2 = 2, not 2.5!
Use 5.0/2.0 instead. In general, add the .0 in your code when the output should be a double.
Most importantly, have you trained the NeuralNet long enough?
Try running it with numInputs = 2, numHiddenNeurons = 4, learningRate = 0.9, and train for 1,000 or 10,000 times.
Using numHiddenNeurons = 2, it sometimes gets "stuck" when trying to solve the XOR problem.
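For reference, a minimal training loop over the four bipolar XOR patterns, using the train method from the question, might look like this (the epoch count is just a starting point):
// Bipolar XOR training set: inputs and targets in {-1, +1}
double[][] patterns = { {-1, -1}, {-1, 1}, {1, -1}, {1, 1} };
double[] targets = { -1, 1, 1, -1 };

for (int epoch = 0; epoch < 10000; epoch++) {
    double totalError = 0;
    for (int p = 0; p < patterns.length; p++) {
        totalError += train(patterns[p], targets[p]);
    }
    // totalError should approach 0 as training converges
}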
See also XOR problem - simulation