Why does my buffer geometry fail when I try to load vertices, faces and normals from a .mat file? - matlab

I want to load my MATLAB geometry into my three.js scene. My 3D data is saved as a struct in a .mat file, which contains .vertices, .faces, .VertexNormals and .VertexColorData arrays. I am able to load it into JavaScript and use BufferGeometry with setAttribute to store the data in a mesh geometry.
var keyName = keysArray[0];
meshGeometry = new THREE.BufferGeometry();

// Flatten the n x 3 vertex array into the position attribute
var index = 0;
var positions = new Float32Array(bfjson.data[keyName].vertices.length * 3);
for (let i = 0; i < bfjson.data[keyName].vertices.length; i++) {
    positions[index++] = bfjson.data[keyName].vertices[i][0];
    positions[index++] = bfjson.data[keyName].vertices[i][1];
    positions[index++] = bfjson.data[keyName].vertices[i][2];
}
meshGeometry.setAttribute(
    'position',
    new THREE.BufferAttribute(positions, 3));

// Flatten the n x 3 vertex normals into the normal attribute
index = 0;
var vectornormals = new Float32Array(bfjson.data[keyName].VertexNormals.length * 3);
for (let i = 0; i < bfjson.data[keyName].VertexNormals.length; i++) {
    vectornormals[index++] = bfjson.data[keyName].VertexNormals[i][0];
    vectornormals[index++] = bfjson.data[keyName].VertexNormals[i][1];
    vectornormals[index++] = bfjson.data[keyName].VertexNormals[i][2];
}
meshGeometry.setAttribute(
    'normal',
    new THREE.BufferAttribute(vectornormals, 3));

// Flatten the m x 3 face list into the geometry index
index = 0;
//var faces = new Uint16Array(bfjson.data[keyName].faces.length * 3);
var faces = [];
for (let i = 0; i < bfjson.data[keyName].faces.length; i++) {
    faces[index++] = bfjson.data[keyName].faces[i][0];
    faces[index++] = bfjson.data[keyName].faces[i][1];
    faces[index++] = bfjson.data[keyName].faces[i][2];
}
meshGeometry.setIndex(faces);

// default color attribute
const colors = [];
for (let i = 0, n = meshGeometry.attributes.position.count; i < n; ++i) {
    colors.push(1, 1, 1);
}
meshGeometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));

// per-vertex scalar data used by updateColors()
const CData = [];
for (let i = 0; i < bfjson.data[keyName].CData.length; i++) {
    CData[i] = bfjson.data[keyName].CData[i];
}
meshGeometry.setAttribute('perfusion', new THREE.Float32BufferAttribute(CData, 1));

mesh.geometry = meshGeometry;
updateColors();
The vertex coloring works fine. However, I end up with a mesh whose indexed faces and normals do not connect up into a proper surface.
I am not sure what I am doing wrong. I will be extremely grateful for any help provided.
Edit:
I have made a JSFiddle to help: https://jsfiddle.net/marieO/5zdhsk78/68/
You will need to download the .mat file first and then upload it to the scene (I was unable to attach it to the JSFiddle):
https://dev.kingsvirtualanatomyandhistology.kcl.ac.uk//models/mat/p21_newmod.mat

Thanks for posting a working example with the steps needed to reproduce the error. It makes it much easier to help.
You have 652 vertices in your MATLAB geometry. When using .setIndex(), these indices have to be in the [0, 651] range, because JavaScript arrays start at index 0. However, your faces data ranges over [1, 652], since MATLAB indexing is 1-based, which means all your triangles are off by one vertex.
This is easily solved by subtracting 1 when assigning each index:
var index = 0;
var faces = [];
for (let i = 0; i < bfjson.data[keyName].faces.length; i++) {
    faces[index++] = bfjson.data[keyName].faces[i][0] - 1;
    faces[index++] = bfjson.data[keyName].faces[i][1] - 1;
    faces[index++] = bfjson.data[keyName].faces[i][2] - 1;
}
meshGeometry.setIndex(faces);
Result:

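As a side note, since the faces arrive as nested arrays after the JSON conversion shown above, the same 1-based to 0-based shift can be written more compactly. This is only a sketch, and it assumes Array.prototype.flat() is available in your target browsers:
// Flatten the m x 3 MATLAB face list and shift every index from 1-based to 0-based
const flatFaces = bfjson.data[keyName].faces.flat().map(v => v - 1);
meshGeometry.setIndex(flatFaces);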
Related

How to generate a honeycomb field in Unity?

I need to generate such a field:
Photo
But I don't know how to do it. Here is what I got instead:
My result
My code:
[ContextMenu("Generate grid")]
public void GenerateGrid()
{
    for (int x = 0; x < _gridSize.x; x++)
    {
        for (int z = 0; z < _gridSize.z; z++)
        {
            var meshSize = _cell.GetComponent<MeshRenderer>().bounds.size;
            var position = new Vector3(x * (meshSize.x + _offset), 0, z * (meshSize.z + _offset));
            var cell = Instantiate(_cell, position, Quaternion.Euler(_rotationOffset), _parent.transform);
            cell.GridActions = GridActions;
            cell.Position = new Vector2(x, z);
            cell.name = $"Cell: x:{x}, z:{z}";
            GridActions.AllCell.Add(cell);
        }
    }
}
Simply, for every odd z value, move the cell up/down by half a cell size and move it inward toward the previous cell by half a cell size. I didn't test it, but here is code that might do that; treat it as an untested sketch.
[ContextMenu("Generate grid")]
public void GenerateGrid()
{
    for (int x = 0; x < _gridSize.x; x++)
    {
        for (int z = 0; z < _gridSize.z; z++)
        {
            // Read the cell size first; it is needed for the odd-row offsets below.
            var meshSize = _cell.GetComponent<MeshRenderer>().bounds.size;

            // Shift every odd row by half a cell so the rows interlock.
            float xResize = 0f;
            float zResize = 0f;
            if (z % 2 == 1)
            {
                xResize = meshSize.x / 2f;
                zResize = meshSize.z / 2f;
            }

            var position = new Vector3(x * (meshSize.x + _offset) + xResize, 0, z * (meshSize.z + _offset) - zResize);
            var cell = Instantiate(_cell, position, Quaternion.Euler(_rotationOffset), _parent.transform);
            cell.GridActions = GridActions;
            cell.Position = new Vector2(x, z);
            cell.name = $"Cell: x:{x}, z:{z}";
            GridActions.AllCell.Add(cell);
        }
    }
}

PCL: merging two sets of points into one cloud and visualizing with the PCL cloud viewer

I am trying to merge two sets of points from two different views into a single point cloud and visualize it with the PCL cloud viewer.
mPtrPointCloud->points.clear();
mPtrPointCloud->points.resize(mFrameSize * 2);
auto it = mPtrPointCloud->points.begin();

received = PopReceived();
if (received != nullptr)
{
    // p_data_cloud = (float*)received->mTransformedPC.data;
    p_data_cloud = (float*)received->mCVPointCloud.data;
    index = 0;
    for (size_t i = 0; i < mFrameSize; ++i)
    {
        float X = p_data_cloud[index];
        if (!isValidMeasure(X)) // Checking if it's a valid point
        {
            it->x = it->y = it->z = it->rgb = 0;
        }
        else
        {
            it->x = X;
            it->y = p_data_cloud[index + 1];
            it->z = p_data_cloud[index + 2];
            it->rgb = convertColor(p_data_cloud[index + 3]); // Convert a 32-bit float into the pcl rgb format
        }
        index += 4;
        ++it;
    }
}

frame = PopFrame();
if (frame != nullptr)
{
    // p_data_cloud = frame->mSLPointCloud.getPtr<float>();
    p_data_cloud = (float*)frame->mCVPointCloud.data;
    index = 0;
    for (size_t i = 0; i < mFrameSize; ++i)
    {
        float X = p_data_cloud[index];
        if (!isValidMeasure(X)) // Checking if it's a valid point
        {
            it->x = it->y = it->z = it->rgb = 0;
        }
        else
        {
            it->x = X;
            it->y = p_data_cloud[index + 1];
            it->z = p_data_cloud[index + 2];
            it->rgb = convertColor(p_data_cloud[index + 3]); // Convert a 32-bit float into the pcl rgb format
        }
        index += 4;
        ++it;
    }
}

mPtrPCViewer->showCloud(mPtrPointCloud);
What I want is for the two sets of points to be "fused" into one frame. However, it seems the two sets of points are still shown separately, one after the other.
Could anyone explain how to really merge two sets of points into one cloud? Thanks.
(1) Create a new, empty point cloud that will hold the merged result at the end:
pcl::PointCloud<pcl::PointXYZ>::Ptr mPtrPointCloud(new pcl::PointCloud<pcl::PointXYZ>);
(2) Transform both point clouds into a common (origin) frame using their sensor poses:
pcl::PointCloud<pcl::PointXYZ> received_transformed;
Eigen::Affine3f received_transform = Eigen::Translation3f(received.sensor_origin_.head<3>()) * received.sensor_orientation_;
pcl::transformPointCloud(received, received_transformed, received_transform);
pcl::PointCloud<pcl::PointXYZ> frame_transformed;
Eigen::Affine3f frame_transform = Eigen::Translation3f(frame.sensor_origin_.head<3>()) * frame.sensor_orientation_;
pcl::transformPointCloud(frame, frame_transformed, frame_transform);
(3) Use the += operator to concatenate them:
*mPtrPointCloud += received_transformed;
*mPtrPointCloud += frame_transformed;
(4) Visualize the merged point cloud:
mPtrPCViewer->showCloud(mPtrPointCloud);
That's it. See also these examples:
http://pointclouds.org/documentation/tutorials/concatenate_clouds.php
http://pointclouds.org/documentation/tutorials/matrix_transform.php

How can I get a reValue in a specific position of the matrix?

I'm new to SpriteKit. I'm taking the value at a specific position of a matrix, but I have a problem when I print that value. The simulator prints optional(nil value) instead of printing just the value.
So, how can I get the reValue in a specific position of the matrix?
Code:
NumColumns = 4
NumRows = 4

func matrix() {
    var valor = "0"
    var principal = "0"
    for var column = 0; column < NumColumns; column++ {
        for var j = 0; j < NumRows; j++ {
            valor = "\(numbers[column, j])"
            cont++
            principal = "\(cont)"
            if valor != "0" {
                numbers[column, j] = valor + principal
                println("\(numbers[column, j])") // This prints: optional(nil value)
            }
            else {
                numbers[column, j] = principal
            }
        }
    }
}
You can create a two-dimensional matrix of strings with the following:
var numColumns = 4
var numRows = 4
// Create a 4x4 matrix of Strings
var array = [[String]](count: numColumns, repeatedValue:[String](count: numRows, repeatedValue:String()))
// Assign a string to a matrix element
array[1][2] = "element at (1,2)"
println (array[1][2])

Matlab to openCV code conversion

I am trying to convert some MATLAB code to OpenCV, but I am stuck on the following lines as I don't know much about programming.
MatLab code:
[indx_row, indx_col] = find(mask == 1);
Indx_Row = indx_row;
Indx_Col = indx_col;
for ib = 1:nB;
    istart = (ib-1)*n + 1;
    iend = min(ib*n, N);
    indx_row = Indx_Row(istart:iend);
    indx_col = Indx_Col(istart:iend);
OpenCV code:
vector<Point> index_rowCol;
for (int i = 0; i < mask.rows; i++)
{
    for (int j = 0; j < mask.cols; j++)
    {
        if (mask.at<float>(i, j) == 1)
        {
            Point pixel;
            pixel.x = j;
            pixel.y = i;
            index_rowCol.push_back(pixel);
        }
    }
}
// Code for the "for loop" in the MATLAB code
for (int ib = 0; ib < nB; ib++)
{
    int istart = (ib-1)*n;
    int iend = std::min(ib*n, N);
    index_rowCol.clear(); // Clearing "index_rowCol" so that we can fill it again from "istart" to "iend"
    for (int j = istart; j < iend; j++)
    {
        index_rowCol.push_back(Index_RowCol[j]);
    }
}
I am unable to tell whether this is OK or not.
I think there is a mistake in the usage of the min function.
Here:
for ib = 1:nB;
    istart = (ib-1)*n + 1;
    iend = min(ib*n, N);
ib is an array [1, 2, 3, ..., nB] and you compare each value with N. As a result you also get an array.
So, as a result, ib is an array, istart is an array and iend is also an array.
In the C++ implementation
for (int ib = 0; ib < nB; ib++)
{
    int istart = (ib-1)*n;
    int iend = std::min(ib*n, N);
you work with scalars (here ib, istart and iend are scalars).
To better understand how the code above works, use step-by-step debugging (set a breakpoint, run the code, then step with the F10 key in MATLAB).

How to make double[,] x_List in C#3.0?

I need to implement multi-linear regression in C# (3.0) by using the LinEst function of Excel.
Basically I am trying to achieve:
=LINEST(ACL_returns!I2:I10,ACL_returns!J2:K10,FALSE,TRUE)
So I have the data as below
double[] x1 = new double[] { 0.0330, -0.6463, 0.1226, -0.3304, 0.4764, -0.4159, 0.4209, -0.4070, -0.2090 };
double[] x2 = new double[] { -0.2718, -0.2240, -0.1275, -0.0810, 0.0349, -0.5067, 0.0094, -0.4404, -0.1212 };
double[] y = new double[] { 0.4807, -3.7070, -4.5582, -11.2126, -0.7733, 3.7269, 2.7672, 8.3333, 4.7023 };
I have to write a function whose signature will be:
Compute(double[,] x_List, double[] y_List)
{
    LinEst(x_List, y_List, true, true); // <- This is the Excel function that I will call.
}
My question is: how, using double[] x1 and double[] x2, do I build the double[,] x_List?
I am using C#3.0 and framework 3.5.
Thanks in advance
double[,] xValues = new double[x1.Length, x2.Length];
for (int i = 0; i < x1.Length; i++)
{
    xValues[i, 0] = x1[i];
    xValues[i, 1] = x2[i];
}
Actually it should be:
double[,] xValues = new double[x1.Length, x2.Length];
int max = (new int[] { x1.Length, x2.Length }).Max();
for (int i = 0; i < max; i++)
{
    xValues[0, i] = x1.Length > i ? x1[i] : 0;
    xValues[1, i] = x2.Length > i ? x2[i] : 0;
}
Samir is right that I should call Max() once outside the loop rather than on each iteration; I've amended this.
The size of the multi-dimensional array is incorrect in both your answer and bablo's. In addition, the call to Max on every iteration in bablo's answer seems really slow, especially with large numbers of elements.
int max = (new int[] { x1.Length, x2.Length }).Max();
double[,] xValues = new double[2, max];
for (int i = 0; i < max; i++)
{
    xValues[0, i] = x1.Length > i ? x1[i] : 0;
    xValues[1, i] = x2.Length > i ? x2[i] : 0;
}