MeshLab offset mesh produces no vertices or faces

I cannot get this Filter to produce any vertices or faces :-(
I have followed this tutorial: http://www.soliforum.com/topic/13709/how-to-hollow-out-a-model/
I have tried various values for Precision and for Offset.
Screenshot showing the model and the resulting offset attempts,
and a screenshot of my Filter settings.

Looking at your screenshot, your offset value is way too large for the size of your mesh: you are trying to offset by -78 units, but your mesh is only 133 and 62 units along its two smaller axes. An inward offset larger than about half the smallest dimension leaves nothing for the offset surface to be built from, which is why no vertices or faces are produced; for a mesh this size, an offset of a few units is more typical.

Related

MATLAB ifanbeam() reconstructs blurry image

We have been using the ifanbeam function to produce an image of a simulated phantom for medical X-ray imaging. The phantom consists of a water disc with three smaller discs inserted, made of bone, fat, and air. The phantom is positioned midway between a detector and an X-ray source (the distance between detector and source is 5 cm). The X-ray beam is defined as a fan beam with a 56-degree opening angle. The phantom rotates around its axis.
Our issue is that the reconstructed image looks blurry inside, and it is difficult to see the smaller discs (image reconstructed using ifanbeam()).
I've attached the ground-truth image, which I obtained from a different simulation using a parallel beam rather than a fan beam (ground-truth image reconstructed using iradon()).
The MATLAB code is below. After preprocessing the raw data, we create a 3D array of size 180x240x20, which corresponds to a stack of single projection images of size 180x240. Note that the raw data only consists of 10 projections; we faced some issues with the FanCoverage parameter, so we padded the sinogram with zeros to artificially add another 10 projections and set FanCoverage to "cycle".
Has anyone had a similar problem before, or does anyone know how to help?
n = max(size(indices_time)); % indices_time corresponds to the number of events in the simulation
images = zeros(180,240,nrOfProjections);
% Bin each detected event into its projection image
for m = 1:n
    images(indices_y(m),indices_x(m),indices_time(m)) = images(indices_y(m),indices_x(m),indices_time(m)) + 1;
end
% Build the sinogram from the two central detector rows; only the first
% nrOfProjections columns are filled, the remaining columns stay zero (padding)
sinogram = zeros(240,nrOfProjections*2);
for m = 1:nrOfProjections
    sinogram(:,m) = sum(images(89:90,:,m));
end
theta = 0:18:342; % projection angles in degrees (20 views, 18 deg apart; not passed to ifanbeam)
% Show the (padded) sinogram
figure(1)
colormap(gray)
imagesc(sinogram)
movegui('northwest')
% Fan-beam reconstruction; 113.5 is the distance from the fan vertex to the
% centre of rotation, in pixels
rec_fanbeam = ifanbeam(sinogram,113.5, ...
    "FanCoverage","cycle", ...
    "FanRotationIncrement",18, ...
    "FanSensorGeometry","line", ...
    "FanSensorSpacing",0.25, ...
    "OutputSize",100);
% Display the reconstruction
figure(2)
colormap(gray)
imagesc(rec_fanbeam)
xlabel('xPos')
ylabel('yPos')
title('Reconstructed image')
movegui('northeast')
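One thing we may still try, as a rough sketch, is converting the padded sinogram to parallel-beam data with fan2para and reconstructing it with iradon, so the result can be compared directly with the parallel-beam ground truth; the geometry values below simply mirror the ifanbeam call above and may need adjusting.
% Sketch: fan-beam sinogram -> parallel-beam data -> filtered back-projection.
% Geometry values are taken from the ifanbeam call above (assumed, not verified).
[P,~,pangles] = fan2para(sinogram,113.5, ...
    "FanCoverage","cycle", ...
    "FanSensorGeometry","line", ...
    "FanSensorSpacing",0.25);
rec_parallel = iradon(P,pangles,'linear','Hann',1,100);
figure(3)
colormap(gray)
imagesc(rec_parallel)
title('Reconstruction via fan2para + iradon (sketch)')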

Rotate vertices selected using weight map on UVs in Unity3D's Shader Graph around pivot point

TL;DR: I can't figure out the correct Shader Graph setup for using UVs and vertex displacement to cheaply animate an (unrigged) mesh.
I am trying to rotate a part of the mesh based on its UV coordinates, e.g. from X 0 to X 0.4 and from Y 0 to Y 0.6. The mesh is created and UV-mapped with this in mind.
I have no problem selecting the affected vertices in this area. The problem is that I want to rotate these vertices around a customizable axis, e.g. axis (X:1, Y:0, Z:1), using a weight, so that the rotation takes place around a pivot point. I want the bottom of the selection to stay connected to the rest of the mesh while the other affected vertices rotate neatly around this point.
The weight can be painted by using split UV channels, as seen in the picture:
I multiply the weighted area with a rotation node to rotate it.
And I add that to the position multiplied by the inverted mask (the rest of the vertices, excluding the rotated area) to get the final output displacement.
But the rotated mesh is bent. I need it to stay rigid, as if the whole part were rotated with weight = 1, except for the pivot vertex itself.
I can get that behaviour using a rotation with weight = 1 everywhere, but then the pivot point becomes the centre of the mesh, not the desired point.
How can I do this correctly?
Been at it for days, please help :')
I started using Unity about a month ago, and this is one of the first issues I faced.
The node you are using will always transform the vertices around the origin.
I think you have two options available:
Translate the vertices by the offset of where you want to rotate the wings. This would require storing the pivot point of the wings in the mesh somehow; this could be done by utilizing a spare UV channel, or by using the vertex color channel (see the numeric sketch after this answer).
Use bones and paint the weights in your chosen 3D package. This way, you can record the animation, and use Unity's skinned mesh shader to render it.
Hope that helps.
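Here is a minimal numeric sketch of option 1 (in MATLAB rather than Shader Graph, with made-up vertex positions, pivot, mask, and angle): subtract the pivot, rotate, add the pivot back, then blend by the mask. In Shader Graph this roughly corresponds to a Subtract, Rotate About Axis, Add, and Lerp chain.
% Rigid rotation of a masked part of a mesh around a pivot point.
% Vertex positions, mask, pivot and angle below are made-up example values.
verts = [0 0 0; 0 1 0; 0 2 0; 1 0 0];   % vertex positions, one per row
mask  = [0; 1; 1; 0];                    % 1 = vertex belongs to the rotated part (from the UV/weight map)
pivot = [0 1 0];                         % pivot point (e.g. stored in a spare UV channel or vertex color)
ang   = deg2rad(30);                     % rotation amount
R = [cos(ang) -sin(ang) 0;               % rotation about the Z axis, as an example
     sin(ang)  cos(ang) 0;
     0         0        1];
rotated = (R*(verts - pivot).').' + pivot;      % translate to pivot, rotate, translate back
result  = verts.*(1 - mask) + rotated.*mask;    % masked vertices rotate rigidly, the rest stay put
Because every masked vertex uses the same rotation and the same pivot, the part stays stiff; the bending in the current graph likely comes from scaling the rotation amount per vertex by the weight.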
Try this:
I've used the UV ranges from your example, applied to a sphere of unit size. The sphere's original pivot is in the centre, and its adjusted pivot is shifted 0.5 on the Y axis.
The only variable the shader doesn't know is the adjusted pivot position, so I pass this in through the material.
I've not implemented your weight in the graph, as I just wanted to show you the process. You can easily plug that in.
The color output is just being used for debug purposes.
The first image is with the default object pivot.
The second image is with the adjusted pivot.
The final image is the graph. (Note the logic group is driving the vertex rotation based on the UV mask).

How to understand the RPN phase in Faster R-CNN

Sorry, this is all one question, but it relates to many small sub-questions; I can't split them into separate questions.
1. For example, the input picture size is 960x640.
2. Through VGG16 up to layer 13 (Conv5_3), we get a feature map of 60x40x512.
3. Do a 3x3 convolution.
    3.1 How does the 3x3 convolution compress the output above to 1x512?
    3.2 I read some articles saying that the RPN randomly selects 512 samples from roughly 2000 anchors. If the 1x512 matrix means this, then what is the 3x3 convolution doing?
4. Loop over the feature map with stride 16 and scale 16 to find, for the current feature-map point, the corresponding centre in the original image; cut out 9 anchors there, and label anchors with IoU < 0.3 as negative samples and IoU > 0.7 as positive samples (a small numeric sketch of the anchors and IoU labelling appears after this question).
    4.1 If there are several points on the feature map, how is the GT covered? I mean, since a positive sample needs IoU > 0.7, does the IoU here refer to [the intersection of the area (mapped from this point back to the original image) with the GT], or [the whole area of the GT]? I think it should be the former.
5. After all the loops are over, filter the positive and negative samples by NMS. Is it possible to have multiple anchors at a single point, or is NMS sure to filter this out?
6. Pass to softmax.
    6.1 My problem is that, in many cases, the positions of the points on the feature map labelled positive and negative are different each time. Since each parameter is also fixed at a specific position on the feature map, how are proposals found from an image in the detection phase?
    6.2 Is the random selection of anchors done here???
7. RoI pooling merges the feature map and the proposals. (1. The RoI (is an RoI a group of anchors?) is located in the feature map to get a patch of the feature map. 2. Something like an SPP layer (7x7 down-sampling) is applied to that feature-map patch, transforming it into fixed-size features to fit the fully connected layer.)
8. Another softmax. (The training phase uses backpropagation to tune the parameters.) My problem is that, in many cases, the positions of the points on the feature map labelled positive and negative are different each time. Since each parameter is also fixed at a specific position on the feature map, how are proposals found from the image in the detection phase?
9. Compare the RoIs to the GT and do regression.
After finishing the questions above and re-thinking, I found my understanding of anchors and proposals a bit confused. Do many anchors compose a proposal?
If so, then step 6 above becomes:
Select 512 anchors, pass their parameters into softmax, and the output shows whether each one is part of the target object. So this layer is the detection phase; when detecting, just loop over all the anchors to get the possible ones.
    6.1 But in this case, how does the RPN output the bbox (x, y, w, h)? I think it needs to merge the selected anchors and then scale to the size of the original image to get the bbox.
    6.2 If the operation is a merge, then randomly selecting 512 out of 2000 is likely to miss some areas, isn't it?
My questions are mainly about 3 and 6, and I think they are all highly related and cannot be separated. Some just need a yes-or-no confirmation. Thanks.
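To make the anchor/IoU terminology in steps 4 and 5 concrete, here is a small MATLAB sketch (the stride, scales, aspect ratios, feature-map location and ground-truth box are made-up illustrative values) that generates the 9 anchors for one feature-map location and labels each one against a ground-truth box by IoU, i.e. intersection over union between the anchor box and the GT box, using the 0.3/0.7 thresholds from the question.
% Sketch: 9 anchors at one feature-map location, labelled by IoU against a GT box.
stride = 16;                 % one feature-map cell spans 16x16 pixels of the input image
fx = 10; fy = 8;             % an arbitrary feature-map location (made up)
cx = (fx - 0.5)*stride;      % centre of that cell in input-image coordinates
cy = (fy - 0.5)*stride;
scales = [128 256 512];      % anchor sizes in pixels (illustrative)
ratios = [0.5 1 2];          % height/width aspect ratios (illustrative)
gt = [120 60 220 200];       % ground-truth box [x1 y1 x2 y2] (made up)

anchors = zeros(9,4); k = 0;
for s = scales
    for r = ratios
        w = s/sqrt(r);       % width and height giving area s^2 and aspect ratio r
        h = s*sqrt(r);
        k = k + 1;
        anchors(k,:) = [cx - w/2, cy - h/2, cx + w/2, cy + h/2];
    end
end

for k = 1:9
    iw = max(0, min(anchors(k,3),gt(3)) - max(anchors(k,1),gt(1)));
    ih = max(0, min(anchors(k,4),gt(4)) - max(anchors(k,2),gt(2)));
    inter = iw*ih;
    union = (anchors(k,3)-anchors(k,1))*(anchors(k,4)-anchors(k,2)) ...
          + (gt(3)-gt(1))*(gt(4)-gt(2)) - inter;
    iou = inter/union;
    if iou > 0.7
        label = 'positive';
    elseif iou < 0.3
        label = 'negative';
    else
        label = 'ignored';   % neither positive nor negative; not used for training
    end
    fprintf('anchor %d: IoU = %.2f -> %s\n', k, iou, label);
end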

Depth Image in Matlab

I have a depth image taken from a Kinect V2, which is given below. I want to extract the pixel value x at a specific coordinate, and its depth value, in MATLAB. I used the following code in MATLAB, but it gives me a 16-bit value. However, I'm not sure whether it is the pixel value or the depth value of pixel x.
im = imread('image_depth.png');   % 16-bit PNG
val = im(88,116);                 % value at row 88, column 116
display(val);
Result
val= (uint16) 2977
Would someone please help me with how to extract both the pixel value and the depth value in MATLAB?
The image name hints that it is a depth map. The color image is usually stored in a separate file, usually at a different resolution and with some offset present if not already aligned. To align RGB and depth images, see:
Align already captured rgb and depth images
and the sub-link too...
The image you provided gives, with a color picker, the same 0x000B0B0B color everywhere inside the silhouette. That hints that it is either not a depth map, or it has too low a bit-width, or the SO + browser conversion loses precision. If any pixel inside returns the same number for you too, then the depth map is unusable.
If your reads return 16-bit values, that hints at raw Kinect depth values. If that is the case, see:
Kinect raw depth to distance in meters
Otherwise it could just be a scaled depth, so you can convert your x value to depth like:
depth = a0 + x*(a1-a0)
where <a0,a1> is the depth range of the image (with x normalized to <0,1>), which should be stated somewhere in your dataset source...
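As a minimal MATLAB sketch, assuming the 16-bit values are raw Kinect V2 depths in millimetres (the first case above); the a0/a1 values in the alternative scaled-depth branch are made up:
% Case 1 (assumed): 16-bit raw Kinect V2 depth stored in millimetres
im  = imread('image_depth.png');
raw = double(im(88,116));      % e.g. 2977
depth_m = raw/1000;            % 2977 -> 2.977 m from the sensor
% Case 2: depth scaled into the 16-bit range <0,65535>
a0 = 0.5; a1 = 4.5;            % made-up depth range <a0,a1> in metres
x = raw/65535;                 % normalize to <0,1>
depth_scaled = a0 + x*(a1 - a0);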
From your description, and the filename, the values at each location in your image are depth values.
If you need actual color values for each pixel, they would likely be stored in a separate image file, hopefully of the same dimensions.
What you are seeing here is likely the depth values normalized to a displayable range by MATLAB or your picture-viewing software.
You will need to look at the specs to see how a value like 2977 converts to the physical world (e.g. cm). Hopefully it is just a scalar value you can multiply to get that answer.

Finding the length/area of an object using a 2D web cam

I have to calculate the area or length of the objects present in the frame.
As I use a 2D camera, the distance from the camera can't be found.
In this case, I am planning to draw a constant line (X cm) in the background whose length is known in cm/m.
Please find the attached sample input image (the yellow line is the constant line).
Consider that a person or an object stands in front of a wall where the constant line is drawn.
Is there any way to calculate the dimensions of other objects with reference to the constant line?
First, it isn't a line; it is a parcel of pixels, and a line is non-physical. The parcel of pixels has both area and length. The natural unit of measurement in images is the pixel; units of length are not intrinsic to the image and require assumptions.
Second, you can do a thresholded 2D convolution. PIV-sleuth uses 2D convolution; it can allow faster, more accurate measurement in images. Peak intensity will tell you something about the length or area. You can also use row sums and column sums very quickly to get estimates of lengths. It helps if the objects are aligned to the pixel axes in your image; affine transformations can help you test various rotations for suitability.
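As a minimal sketch of the reference-line idea, assuming the object lies roughly in the same plane as the drawn line (the filename, threshold, known line length and measured pixel length below are made-up values): estimate a cm-per-pixel scale from the line, then apply it to the pixel extents of a segmented object.
% Sketch: measure an object in cm using a reference line of known length.
img = imread('frame.png');               % hypothetical input frame
known_line_cm  = 100;                    % known physical length of the drawn line (assumption)
line_length_px = 820;                    % its length in pixels, e.g. measured from its endpoints (assumption)
cm_per_px = known_line_cm/line_length_px;

bw = im2gray(img) > 128;                 % crude segmentation; threshold is illustrative only
bw = bwareafilt(bw,1);                   % keep the largest connected component
stats = regionprops(bw,'BoundingBox','Area');

width_cm  = stats.BoundingBox(3)*cm_per_px;
height_cm = stats.BoundingBox(4)*cm_per_px;
area_cm2  = stats.Area*cm_per_px^2;
fprintf('approx. %.1f x %.1f cm, area %.1f cm^2\n', width_cm, height_cm, area_cm2)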