How do I rotate a 3D spectrum image in DigitalMicrograph by scripting? - image-rotation

I want to find the equivalent of the rotate(image, degree) script command to rotate around the x or y axis (I only need 90º rotations). I know I can do it using the tool menu but it would be much faster if I could find a command or a function to do it using a script.
Thank you in advance!

Using the Slice commands can be confusing at first, so here is a detailed explanation of using them for a rotation around the X-axis.
This example shows how one would rotate 3D data clockwise around its X-axis (viewing along X) using the Slice3 command.
The Slice3 command specifies a new view onto an existing data array.
It first specifies the origin pixel, i.e. the coordinates (in the original data) which will be represented by (0,0,0) in the new view.
It then specifies the sampling direction, length and step size for each of the three new axes. The first triplet specifies how the coordinates (in the original data) change along the new image's x-direction, the second triplet along the new image's y-direction, and the last triplet along the new image's z-direction.
So a rotation around the X-axis can be imagined as follows for the "resampled" data:
The new (rotated) data has its origin at the original data's point (0,0,SZ-1).
Its X-direction remains the same, i.e. one step in X in the new data increments the coordinate triplet in the original data by (1,0,0). One goes SX steps with a step-size of 1.
Its Y-direction is essentially the negative Z-direction from before, i.e. one step in Y in the new data increments the coordinate triplet in the original data by (0,0,-1). So one goes SZ steps with a step-size of -1.
Its Z-direction is essentially the Y-direction from before, i.e. one step in Z in the new data increments the coordinate triplet in the original data by (0,1,0). So one goes SY steps with a step-size of 1.
So, for a clockwise rotation around the X-axis, the command is:
img.Slice3( 0,0,SZ-1, 0,SX,1, 2,SZ,-1, 1,SY,1 )
This command will just create a new view onto the same data (i.e. no additional memory is used). So to get the rotated image as a new image (with data values aligned as they should be in memory), one would clone this view into a new image using ImageClone().
In total, the following script shows this as an example:
// Demo of rotating 3D data orthogonally around the X axis
// This is done by resampling the data using the Slice3 command
// Creation of test image with recognizable pattern
number SX = 100
number SY = 30
number SZ = 50
image img := RealImage("Test",4, SX,SY,SZ)
// trig. modulated linear increase in X
img = icol/iwidth* sin( icol/(iwidth-1) * 5 * Pi() ) **2
// Simple linear increase in Y
img += (irow/iheight) * 2
// Modulation of values in Z
// (doubling values for plane index 0, 1, 4, 9, 16, 25, 36, 49)
img *= (SQRT(iplane) == trunc(SQRT(iplane)) ? 2 : 1 )
img.ShowImage()
// Show captions. Image coordinate system is
// Origin (0,0,0) in top-left-front most pixel
// X axis goes left to right
// Y axis goes top to down
// Z axis goes front to back
img.ImageSetDimensionCalibration(0,0,1,"orig X",0)
img.ImageSetDimensionCalibration(1,0,1,"orig Y",0)
img.ImageSetDimensionCalibration(2,0,1,"orig Z",0)
img.ImageGetImageDisplay(0).ImageDisplaySetCaptionOn(1)
// Rotation around X axis, clockwise looking along X
// X --> X' (unchanged)
// Y --> Z'
// Z --> -Y'
// old origin moves to bottom-left-front most
// This means for "new" sampling:
// Specify sampling starting point:
// New origin (0,0,0)' will be value which was at (0,0,SZ-1)
// Going one step in X' in the new data, will be like going one step in X
// Going one step in Y' in the new data, will be like going one step backwards in Z
// Going one step in Z' in the new data, will be like going one step in Y
image rotXCW := img.Slice3( 0,0,SZ-1, 0,SX,1, 2,SZ,-1, 1,SY,1 ).ImageClone()
rotXCW.SetName("rotated X, CW")
rotXCW.ShowImage()
rotXCW.ImageGetImageDisplay(0).ImageDisplaySetCaptionOn(1)
The following methods perform 90-degree rotations:
// Functions for 90-degree rotations of data
image RotateXCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( 0,0,SZ-1, 0,SX,1, 2,SZ,-1, 1,SY,1 ).ImageClone()
}
image RotateXCCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( 0,SY-1,0, 0,SX,1, 2,SZ,1, 1,SY,-1 ).ImageClone()
}
image RotateYCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( SX-1,0,0, 2,SZ,1, 1,SY,1, 0,SX,-1 ).ImageClone()
}
image RotateYCCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( 0,0,SZ-1, 2,SZ,-1, 1,SY,1, 0,SX,1 ).ImageClone()
}
image RotateZCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( 0,SY-1,0, 1,SY,-1, 0,SX,1, 2,SZ,1 ).ImageClone()
}
image RotateZCCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( SX-1,0,0, 1,SY,1, 0,SX,-1, 2,SZ,1 ).ImageClone()
}
Rotations around the z-axis could also be done with RotateRight() and RotateLeft(). Note, however, that these commands will not adapt the image's dimension calibrations, while the Slice3 command will.

For pure orthogonal rotation the easiest (and fastest) way is to use the 'slice' commands, i.e. 'slice3' for 3D images.
It turns out that the latest version of GMS has an example of it in the help documentation, so I'm just copy-pasting the code here:
number sx = 10
number sy = 10
number sz = 10
number csx, csy, csz
image img3D := RealImage( "3D", 4, sx, sy, sz )
img3D = 1000 + sin( 2*PI() * iplane/(idepth-1) ) * 100 + icol * 10 + irow
img3D.ShowImage()
// Rotate existing image
if ( OKCancelDialog( "Rotate clockwise (each plane)\n= Rotate block around z-axis" ) )
img3D.RotateRight()
if ( OKCancelDialog( "Rotate counter-clockwise (each plane)\n= Rotate block around z-axis" ) )
img3D.RotateLeft()
if ( OKCancelDialog( "Rotate block counter-clockwise around X-axis" ) )
{
// Equivalent of sampling the data anew
// x-axis remains
// y- and z-axis change their role
img3D.Get3DSize( csx, csy, csz ) // current size along axes
img3D = img3D.Slice3( 0,0,0, 0,csx,1, 2,csz,1, 1,csy,1 )
}
if ( OKCancelDialog( "Rotate block clockwise around X-axis" ) )
{
// Equivalent of sampling the data anew
// x-axis remains
// y- and z-axis change their role
img3D.Get3DSize( csx, csy, csz ) // current size along axes
img3D = img3D.Slice3( 0,csy-1,csz-1, 0,csx,1, 2,csz,-1, 1,csy,-1 )
}
if ( OKCancelDialog( "Rotate 30 degree (each plane)\n= Rotate block around z-axis" ) )
{
number aDeg = 30
number interpolMeth = 2
number keepSize = 1
image rotImg := img3D.Rotate( 2*Pi()/360 * aDeg, interpolMeth, keepSize )
rotImg.ShowImage()
}
You may also want to look at this answer for some more info on subsampling and creating different views on data.

Related

How to perform an orthographic projection on a z-Buffer image in Matlab?

I am facing the same problem as mentioned in this post; however, I am not facing it with OpenGL, but simply with MATLAB: Depth as distance to camera plane in GLSL.
I have a depth image rendered from the Z-Buffer from 3ds max. I was not able to get an orthographic representation of the z-buffer. For a better understanding, I will use the same sketch as made by the previous post:
  *                |--*
 /                 |
/                  |
C-----*            C-----*
\                  |
 \                 |
  *                |--*
The 3 asterisks are pixels and the C is the camera. The lines from the
asterisks are the "depth". In the first case, I get the distance from the pixel to the camera. In the second, I wish to get the distance from each pixel to the plane.
The settings of my camera are the following:
WIDTH = 512;
HEIGHT = 424;
FOV = 89.971;
aspect_ratio = WIDTH/HEIGHT;
%clipping planes
near = 500;
far = 5000;
I calculate the frustum settings as follows:
%calculate frustums settings
top = tan((FOV/2)*5000)
bottom = -top
right = top*aspect_ratio
left = -top*aspect_ratio
And set the projection matrix like this:
%Generate matrix
O_p = [2/(right-left) 0 0 -((right+left)/(right-left)); ...
0 2/(top-bottom) 0 -((top+bottom)/(top-bottom));...
0 0 -2/(far-near) -(far+near)/(far-near);...
0 0 0 1];
After this I read in the depth image, which was saved as a 48-bit RGB image where each channel is the same, so only one channel has to be used.
%Read in image
img = imread('KinectImage.png');
%Throw away, except one channel (all hold the same information)
c1 = img(:,:,1);
The pixel values have to be inverted, since the closer the values are to the camera, the brighter they are. If a pixel is 0 (no object to render available) it is set to 2^16, so that after the bit complement the value is still 0.
%Inverse bits that are not zero, so that the z-image has the correct values
c1(c1 == 0) = 2^16
c1_cmp = bitcmp(c1);
To apply the matrix to each z-buffer value, I lay the data out one-dimensionally and build up a vector like [0 0 z 1] for every element.
c1_cmp1d = squeeze(reshape(c1_cmp,[512*424,1]));
converted = double([zeros(WIDTH*HEIGHT,1) zeros(WIDTH*HEIGHT,1) c1_cmp1d zeros(WIDTH*HEIGHT,1)]) * double(O_p);
After that, I pick out the 4th element of the result vector and reshape it to an image.
img_con = converted(:,4);
img_con = reshape(img_con,[424,512]);
However, the effect that the z-buffer is not orthographic is still there. Did I get something wrong? Is my calculation flawed, or did I make a mistake here?
Depth image coming from 3ds max
After the computation (the white is still "0", but the color axis has changed)
It would be great to achieve this with 3ds max, which would resolve this issue; however, I was not able to find this setting for the z-buffer. Thus, I want to solve this using MATLAB.

How do I create a rectangular mask at known angles?

I have created a synthetic image that consists of a circle at the centre of a box with the code below.
%# Create a logical image of a circle with image size specified as follows:
imageSizeY = 400;
imageSizeX = 300;
[ygv, xgv] = meshgrid(1:imageSizeY, 1:imageSizeX);
%# Next create a logical mask for the circle with specified radius and center
centerY = imageSizeY/2;
centerX = imageSizeX/2;
radius = 100;
Img = double( (ygv - centerY).^2 + (xgv - centerX).^2 <= radius.^2 );
%# change image labels from double to numeric
for ii = 1:numel(Img)
if Img(ii) == 0
Img(ii) = 2; %change label from 0 to 2
end
end
%# plot image
RI = imref2d(size(Img),[0 size(Img, 2)],[0 size(Img, 1)]);
figure, imshow(Img, RI, [], 'InitialMagnification','fit');
Now, I need to create a rectangular mask (with label == 3, and row/col dimensions 1 by imageSizeX) across the image from top to bottom and at known angles with the edges of the circle (see attached figure). Also, how can I make the rectangle thicker than 1 by imageSizeX? As another option, I would love to try having the rectangle stop at, say, column 350. Lastly, any ideas how I can improve the resolution? I mean, is it possible to keep the image size the same while increasing/decreasing the resolution?
I have no idea how to go about this. Any help/advice/suggestions would be appreciated. Many thanks!
You can use the cos function to find the x coordinate with the correct angle phi.
First notice that the radius that intersects the vertex of phi makes an angle with the x-axis given by theta = 180° - phi,
and the x coordinate of that vertex is given by x = centerX + radius*cos(theta),
so the mask simply needs to set that row to 3.
Example:
phi = 45; % Desired angle in degrees
width = 350; % Desired width in pixels
height = 50; % Desired height of bar in pixels
theta = pi-phi*pi/180; % The radius angle
x = centerX + round(radius*cos(theta)); % Find the nearest row
x0 = max(1, x-height); % Find where to start the bar
Img(x0:x,1:width)=3;
The resulting image looks like:
Note that the max function is used to deal with the case where the bar thickness would extend beyond the top of the image.
Regarding resolution, the image resolution is determined by the size of the matrix you create. In your example that is (400,300). If you want higher resolution, simply increase those numbers. However, if you would like to link the resolution to a higher DPI (dots per inch), so that there are more pixels in each physical inch, you can use the "Export Setup" window in the figure's File menu (shown in the screenshot in the original answer).
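As a rough sketch of the first point, here is a minimal example (the scale factor and the reuse of the original variable names are my own illustration, not part of the original code) that regenerates the same labelled circle at twice the pixel count:
% Sketch only: regenerate the synthetic image at twice the resolution.
% The scale factor is a hypothetical illustration, not from the original post.
scaleFactor = 2;
imageSizeY = 400 * scaleFactor;
imageSizeX = 300 * scaleFactor;
radius = 100 * scaleFactor;
[ygv, xgv] = meshgrid(1:imageSizeY, 1:imageSizeX);
centerY = imageSizeY/2;
centerX = imageSizeX/2;
Img = double( (ygv - centerY).^2 + (xgv - centerX).^2 <= radius.^2 );
Img(Img == 0) = 2; % background label, as in the original code
figure, imshow(Img, [], 'InitialMagnification', 'fit');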

How to get pixel color from Matlab imaq.VideoDevice step() output

I'm using step() in imaq.VideoDevice, but can't find a description of the format of the step() output. I am using a thermal infrared camera and want to filter for a specific temperature range.
So, I want to use step() on each frame, and then search the frame for pixels within specific thermal range. And obviously need to know the X,Y of each pixel, too.
My goal is to filter pixels from a frame and leave only pixels within desired temperature.
You probably need to get the information on temperature and color from your IR camera. Look up the documentation; it probably says which temperatures correspond to which pixel values. At that point you just create a mask for each frame, something like this (assuming the values from the IR camera are "grayscale", meaning there is only one channel):
highest_temp = 200; %just a random number
lowest_temp = 50;
my_mask = (im <= highest_temp) & (im >= lowest_temp);
my_mask is a logical array with a 0 (false) when the pixel is outside the range, and a 1 (true) when the pixel is inside the range. If you want to apply the mask to the image, just multiply them together (and take care of units; here I assume the IR camera data fits in 16 bits):
masked_im = uint16( double(im) .* my_mask );
I would also use the trigger function rather than step. If I'm not mistaken, the trigger action takes only one image/frame by default. So make a loop: grab a frame, do your processing, then go to the next loop iteration, over and over. Hope that helps.
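A minimal sketch of such a loop using the Image Acquisition Toolbox videoinput interface is below; the adaptor name 'winvideo', the device ID, and the frame count are placeholders for whatever your thermal camera actually uses, and the temperature limits are the ones assumed above:
% Sketch, not tested against a real IR camera: manual-trigger acquisition loop.
vid = videoinput('winvideo', 1); % adaptor name and device ID are placeholders
triggerconfig(vid, 'manual'); % acquire a frame only when trigger() is called
vid.FramesPerTrigger = 1;
start(vid);
for k = 1:10 % number of frames to process, arbitrary here
    trigger(vid);
    im = getdata(vid, 1); % one frame per trigger
    my_mask = (im <= highest_temp) & (im >= lowest_temp);
    masked_im = uint16( double(im) .* my_mask );
    % ... display or store masked_im here ...
end
stop(vid);
delete(vid);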
Answer:
step() outputs an array indexed as ROW x COLUMN x pixel_color,
where pixel_color index 1 is the amount of red in the pixel,
pixel_color index 2 is the amount of green in the pixel,
and pixel_color index 3 is the amount of blue in the pixel.
For example, for the color of the pixel at X,Y = 5,10 (row 5, column 10):
amount of red = (5, 10, 1)
amount of green = (5, 10, 2)
amount of blue = (5, 10, 3)
Example usage that displays a frame with a red column and a green row:
% Get a video frame:
load('handshakeStereoParams.mat');
videoFileLeft = 'handshake_left.avi';
readerLeft = vision.VideoFileReader(videoFileLeft, 'VideoOutputDataType', 'uint8');
frameLeft = readerLeft.step();
live_scene_player = vision.VideoPlayer('Position', [20, 600, 850, 500], 'Name','LEFT');
% Make green horizontal stripe at row 10 on the image:
frameLeft(10,:,1)=0; % remove red from stripe
frameLeft(10,:,2)=255; % turn on all green
frameLeft(10,:,3)=0; % remove blue from stripe
% Make red vertical stripe at column 10 on the image:
frameLeft(:,10,1)=255; % turn on all red
frameLeft(:,10,2)=0; % remove green from stripe
frameLeft(:,10,3)=0; % remove blue from stripe
% display it:
step( live_scene_player, frameLeft); % originally from frameLeftRect

Get pixel values in local area under center-pixel's gradient direction

How can I get pixel values in a local area along the center pixel's gradient direction using MATLAB?
I already found the function imgradient(), which is good, but how do I transform the angle into a line along this angle?
So you want to know how to define a line given a point (x0,y0) and an angle theta? Something like this perhaps:
% T determines the length of the line. I am using a step size
% of 0.5 since it should get each pixel. You could always go finer
% and let the call to unique get rid of the duplicates.
t = 0:0.5:T;
p = unique( round( [x0+t(:)*cos(theta), y0+t(:)*sin(theta) ] ), 'rows' );
p in the above will be an Nx2 array of pixel coordinates that lie under (technically within half a pixel of) the line that starts at (x0,y0) and extends out at angle theta.
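If you then want the pixel values themselves, a small sketch (assuming img is a 2-D image; the clamping to the image bounds is my addition, not part of the original answer) could be:
% Sketch: read the image values under the line defined by the rows p = [x y].
p(:,1) = min(max(p(:,1), 1), size(img, 2)); % clamp x (columns) to the image
p(:,2) = min(max(p(:,2), 1), size(img, 1)); % clamp y (rows) to the image
vals = img( sub2ind(size(img), p(:,2), p(:,1)) ); % rows are y, columns are x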

Plot depth image in matlab

I have an RGB-D image and am trying to get a 3D visualization in matlab. Currently I am doing:
depth = imread('img_031_depth.png');
depth = double(depth);
img = imread('img_031.png');
surf(depth, img, 'FaceColor', 'texturemap', 'EdgeColor', 'none' )
view(158, 38)
Which gives me an image like:
I have two questions:
1) How can I save the image without it blurring as above?
2) As you can see, some edges show lines going to zero (e.g. the top of the coffee cup); I would like to remove these.
What I'm trying to produce is a 3D-looking point cloud; as these are only 2.5D, I must show them from the right angle.
Any help is appreciated
EDIT: added images (note depth image needs to be normalized for visualization)
If you are only interested in a point cloud, you might want to consider scatter3.
You can select which points to plot (discard those with depth == 0).
You need to have explicit x-y coordinates though.
[y x] = ndgrid( 1:size(img,1), 1:size(img,2) );
sel = depth > 0 ; % which points to plot
% "flatten" the matrices for scatter plot
x = x(:);
y = y(:);
img = reshape( img, [], 3 );
depth = depth(:);
scatter3( x(sel), y(sel), depth(sel), 20, img( sel, : ), 'filled' );
view(158, 38)
Edit: sampled version
[y x] = ndgrid( 1:2:size(img,1), 1:2:size(img,2) );
sel = depth( 1:2:end, 1:2:end ) > 0;
x = x(:);
y = y(:);
img = reshape( img( 1:2:end, 1:2:end, : ), [], 3 );
depth = depth( 1:2:end, 1:2:end );
scatter3( x(sel), y(sel), depth(sel), 20, img( sel, : ), 'filled' );
view( 158, 38 );
Alternatively, you can directly manipulate sel mask.
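For instance, reusing sel and the flattened variables from the first snippet (the step of 2 here is just an illustration, not from the original answer):
% Sketch: thin the existing 2-D mask instead of subsampling the arrays.
sel(2:2:end, :) = false; % drop every other row of points
sel(:, 2:2:end) = false; % drop every other column of points
scatter3( x(sel), y(sel), depth(sel), 20, img( sel, : ), 'filled' );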
I suggest you first restore x = z*u/f and y = z*v/f to obtain x, y, z, where f is your camera focal length;
then apply whatever rotation and translation you want before displaying them: [x',y',z'] = R[x, y, z] + t;
then project them back using col = x*f/z + w/2, row = h/2 - y*f/z to get a simple image that you can display quickly. You can add a depth buffer to the last operation to guarantee
proper occlusions, by writing depth at each pixel and overwriting only if the new z is smaller (that is, the new point is closer to the viewer). The resulting image will still have holes due to the nature of point clouds. You can interpolate in those holes, but this means you have to trace rays from every pixel in the image to your point cloud and find the closest neighbor to the ray, which probably takes forever in MATLAB.
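A minimal MATLAB sketch of that pipeline follows; the focal length, the rotation angle, and all the variable names here are assumptions for illustration, not values from the question:
% Sketch of the back-project / transform / re-project idea described above.
[h, w] = size(depth); % depth is the h-by-w depth map
f = 525; % focal length in pixels -- an assumed value, use your camera's
[u, v] = meshgrid( (1:w) - w/2, h/2 - (1:h) );
z = double(depth(:)');
x = z .* u(:)' / f; % back-project: x = z*u/f
y = z .* v(:)' / f; % back-project: y = z*v/f
theta = 10*pi/180; % example rotation about the vertical axis
R = [cos(theta) 0 sin(theta); 0 1 0; -sin(theta) 0 cos(theta)];
P = R * [x; y; z]; % add a translation t here if needed
% Project back, writing a depth buffer that keeps the nearest point per pixel.
col = round( P(1,:)*f ./ P(3,:) + w/2 );
row = round( h/2 - P(2,:)*f ./ P(3,:) );
out = inf(h, w);
ok = find( P(3,:) > 0 & col >= 1 & col <= w & row >= 1 & row <= h );
for k = ok
    if P(3,k) < out(row(k), col(k))
        out(row(k), col(k)) = P(3,k); % write only if closer to the viewer
    end
end
out(isinf(out)) = 0; % remaining holes from the point-cloud nature of the data
figure, imshow(out, []);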
I am also doing some 3D image restoring and reconstructing. The first question is easy: your photo is taken by a camera, so you need to transform the positions into the camera coordinate system. In other words, you need to know some intrinsic values of your camera, or you can never recover them from a single image. Google 'kinect intrinsic value' and you can get the focal length etc.
Also, change your view.
Try this! And if it's not working, ask again.