DICOM to NIfTI metadata not transferring

I am trying to take a number of DICOM stacks and convert them to NIfTI files. When I do the conversion and open the new NIfTI file in a 3D viewer, the volume is squashed flat in the z direction: the NIfTI file does not know the spacing between slices. To my understanding, imageio.volread() does not read the metadata. I tried pydicom.filereader.dcmread(), but that only reads one file at a time. How can I copy the metadata from the DICOM stack to the NIfTI file when converting formats?
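For reference, the spacing does live in each slice's header, and pydicom can show it for a single file (file name hypothetical):
import pydicom
ds = pydicom.dcmread('slice_0001.dcm')   # reads just this one slice
print(ds.PixelSpacing)     # in-plane spacing, e.g. [0.09, 0.09]
print(ds.SliceThickness)   # nominal thickness; actual spacing between slices may differ
My conversion code so far: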
import nibabel as nib
import imageio
import numpy as np
import os, sys
DIR = '\\all scans\\'
savefold = '\\nifti\\'
for root, dirs, files in os.walk(DIR):
    for directory in dirs:
        vol = imageio.volread(DIR + directory).astype(int)
        vol = np.transpose(vol, (2, 1, 0)).astype(int)
        niftisave = nib.Nifti1Image(vol, affine=np.eye(4))
        nib.save(niftisave, os.path.join(savefold + directory) + '.nii')
UPDATE:
I am using Nifti1Header and setting my voxel spacing, but the spacing is still 1x1x1 when I save the file and open it in other programs. When I print the header right before saving, pixdim shows [1. 0.09 0.09 0.09 1. 1. 1. 1. ].
header = nib.Nifti1Header()
OM = np.eye(4)
header.set_data_shape((224,352,224))
voxel_spacing = ((.09,.09,.09))
header.set_zooms(voxel_spacing)
header.set_sform(OM)
header.set_dim_info(slice = 2)
vol = imageio.volread(source)
ROI_save = nib.Nifti1Image(vol, OM, header=header)
print(ROI_save.header)
HEADER:
<class 'nibabel.nifti1.Nifti1Header'> object, endian='<'
sizeof_hdr : 348
data_type : b''
db_name : b''
extents : 0
session_error : 0
regular : b''
dim_info : 48
dim : [ 3 224 352 224 1 1 1 1]
intent_p1 : 0.0
intent_p2 : 0.0
intent_p3 : 0.0
intent_code : none
datatype : float32
bitpix : 32
slice_start : 0
pixdim : [1. 0.09 0.09 0.09 1. 1. 1. 1. ]
vox_offset : 0.0
scl_slope : nan
scl_inter : nan
slice_end : 0
slice_code : unknown
xyzt_units : 0
cal_max : 0.0
cal_min : 0.0
slice_duration : 0.0
toffset : 0.0
glmax : 0
glmin : 0
descrip : b''
aux_file : b''
qform_code : unknown
sform_code : aligned
quatern_b : 0.0
quatern_c : 0.0
quatern_d : 0.0
qoffset_x : 0.0
qoffset_y : 0.0
qoffset_z : 0.0
srow_x : [1. 0. 0. 0.]
srow_y : [0. 1. 0. 0.]
srow_z : [0. 0. 1. 0.]
intent_name : b''
magic : b'n+1'
AFFINE:
np.eye(4) --->
[[1. 0. 0. 0.]
 [0. 1. 0. 0.]
 [0. 0. 1. 0.]
 [0. 0. 0. 1.]]
DESIRED AFFINE:
[[-0.09 0. 0. -0. ]
[ 0. -0.09 0. -0. ]
[ 0. 0. 0.09 0. ]
[ 0. 0. 0. 1. ]]

You need to specify the pixel spacing and array shape directly. Suppose you have a 512x512x128 3D volume with 0.5 x 0.5 x 2.5 mm voxel spacing and an identity orientation matrix; see the example below:
import numpy as np
from nibabel import Nifti1Header, Nifti1Image

img_array = np.zeros((512, 512, 128))
voxel_spacing = [0.5, 0.5, 2.5, 1]
OM = np.eye(4)
OM = OM * np.diag(voxel_spacing)  # puts the spacing on the affine's diagonal
header = Nifti1Header()
header.set_data_shape((512, 512, 128))
header.set_dim_info(slice=2)
header.set_xyzt_units('mm')
nifti = Nifti1Image(img_array, OM, header=header)
Update: save the file using nibabel.save (or img.to_filename) and open it in MRIcron (https://people.cas.sc.edu/rorden/mricron/index.html); the volume displays with the correct 0.5 x 0.5 x 2.5 mm voxel spacing.
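For completeness, a minimal save-and-check step (the file name is an assumption):
import nibabel as nib
nib.save(nifti, 'volume.nii')
print(nib.load('volume.nii').header.get_zooms())   # expect (0.5, 0.5, 2.5)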

If you use SimpleITK to read the DICOM series, it will properly read the DICOM metadata.
Here's an example of how to read a DICOM image series:
https://simpleitk.readthedocs.io/en/master/link_DicomSeriesReader_docs.html
If the output file name has a '.nii' suffix, it will write out the volume as a NIfTI file.
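A minimal sketch of that approach (the directory and output names are assumptions, and the folder is assumed to hold a single series):
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
# GetGDCMSeriesFileNames returns the slice files sorted into spatial order
reader.SetFileNames(reader.GetGDCMSeriesFileNames('all scans/scan001'))
image = reader.Execute()
print(image.GetSpacing())                    # spacing recovered from the DICOM tags
sitk.WriteImage(image, 'nifti/scan001.nii')  # '.nii' suffix selects the NIfTI writer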

Related

Cluster analysis of a RasterLayer

Is there a way I can run a cluster analysis on a RasterLayer directly? If I convert my raster into a matrix, it does not work. I have used kmeans so far, after turning my raster into a matrix, but it still does not work. I also used r <- getValues(r) to turn my raster into a matrix, but that does not work either. Another problem is that all my values are NA once I turn my raster into a matrix, so I do not know how to handle this problem.
my Raster looks like this:
class : RasterLayer
dimensions : 23320, 37199, 867480680 (nrow, ncol, ncell)
resolution : 0.02, 0.02 (x, y)
extent : 341668.9, 342412.9, 5879602, 5880069 (xmin, xmax, ymin, ymax)
crs : +proj=utm +zone=33 +ellps=WGS84 +units=m +no_defs
source : r_tmp_2022-07-13_141214_9150_15152.grd
names : layer
values : 2.220446e-16, 1 (min, max)

as.polygons(SpatRaster, values=FALSE) seems to dissolve cells when it should not

Maybe there is something I do not understand. According to the help page, as.polygons() applied to a SpatRaster with the option values = FALSE should not dissolve cells. But:
library(terra)
# terra 1.5.21
r <- rast(ncols=2, nrows=2, vals=1)
as.polygons(r) # correctly gives a dissolved 1x1 polygon:
# class : SpatVector
# geometry : polygons
# dimensions : 1, 1 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
# names : lyr.1
# type : <int>
# values : 1
as.polygons(r, values=FALSE) # improperly (?) gives a dissolved 1x1 polygon:
# class : SpatVector
# geometry : polygons
# dimensions : 1, 0 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
whereas it should give an undissolved polygon, such as the one obtained with dissolve=FALSE (but without the values):
as.polygons(r,dissolve=FALSE)
# class : SpatVector
# geometry : polygons
# dimensions : 4, 1 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
As you noted, the documentation is incorrect. If you do not want the cells to be dissolved, you need to use dissolve=FALSE.
If you do not want to dissolve, and do not want the values, you can do
library(terra)
r <- rast(ncols=2, nrows=2, vals=1)
p <- as.polygons(r, dissolve=FALSE, values=FALSE)
# or
p <- as.polygons(rast(r))
p
# class : SpatVector
# geometry : polygons
# dimensions : 4, 0 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
The latter works the way it does, despite the default dissolve=TRUE, because there is nothing to dissolve with: rast(r) has no values. If you want just the extent, you can do
as.polygons(r, extent=TRUE)
# class : SpatVector
# geometry : polygons
# dimensions : 1, 0 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
That is a (much more) efficient approach that is otherwise equivalent to dissolving (aggregating) all cells.

Why does the turn not work when I use movement and turn together?

Why does rotation work if I use this code:
p_partnew.Position = Vector3.new (i,p_coord_y, p_coord_z)
p_partnew.CFrame = p_partnew.CFrame*CFrame.Angles(p_angles_x,p_angles_y, p_angles_z)
But rotation does NOT work if I use this code:
p_partnew.CFrame = CFrame.new (i,p_coord_y, p_coord_z)
p_partnew.CFrame = p_partnew.CFrame*CFrame.Angles(p_angles_x,p_angles_y, p_angles_z)
In the first example, only the position of the part is being modified and then the rotation is applied. The second example sets the whole CFrame to the position which will override the original rotation of the object, and then applies the rotation.
Simply put, #1 adds p_angles to the rotation, while #2 sets the rotation to p_angles.
To understand what's going on, take a look at Understanding CFrames.
A CFrame is a 4x3 matrix with components corresponding to the Part's Position and Orientation. When you get or set a Part's Position property, it is just reading and writing to that specific section of the CFrame's values.
Let's look at some example CFrames. Components are listed as the position x y z, followed by the three rows of the 3x3 rotation matrix; A = -4.3711388286738e-08 and B = 1.9106854651647e-15 are floating-point round-off for values that should be 0.

Part.CFrame = CFrame.new(0,0,0)
-- located at (0, 0, 0), no rotation
-- components: 0 0 0 | 1 0 0 | 0 1 0 | 0 0 1

Part.CFrame = CFrame.new(1,2,3)
-- located at (1, 2, 3), no rotation
-- components: 1 2 3 | 1 0 0 | 0 1 0 | 0 0 1

Part.CFrame = CFrame.new(0,0,0) * CFrame.Angles(math.rad(90), 0, 0)
-- located at (0, 0, 0), (90, 0, 0) rotation
-- components: 0 0 0 | 1 0 0 | 0 A -1 | 0 1 A

Part.CFrame = CFrame.new(0,0,0) * CFrame.Angles(0, math.rad(90), 0)
-- located at (0, 0, 0), (0, 90, 0) rotation
-- components: 0 0 0 | A 0 1 | 0 1 0 | -1 0 A

Part.CFrame = CFrame.new(0,0,0) * CFrame.Angles(0, 0, math.rad(90))
-- located at (0, 0, 0), (0, 0, 90) rotation
-- components: 0 0 0 | A -1 0 | 1 A 0 | 0 0 1

Part.CFrame = CFrame.new(1,2,3) * CFrame.Angles(math.rad(90), math.rad(90), math.rad(90))
-- located at (1, 2, 3), (90, 90, 90) rotation
-- components: 1 2 3 | 1 0 A | A B -1 | 0 1 B
In your first code sample, you are setting the Position first. This preserves the original CFrame, and updates just the values for Position.
-- imagine that p_partnew.CFrame looks like this :
-- ? ? ? | ? ? ? | ? ? ? | ? ? ?
-- set just the position values in the CFrame, keep everything else
p_partnew.Position = Vector3.new(i, p_coord_y, p_coord_z)
-- p_partnew.CFrame now looks like this :
-- i p_coord_y p_coord_z | ? ? ? | ? ? ? | ? ? ?
-- apply a transformation of angles
p_partnew.CFrame = p_partnew.CFrame * CFrame.Angles(p_angles_x, p_angles_y, p_angles_z)
In the second code sample, you are setting the entire CFrame first with just the position values. This wipes out all the other data that existed in that CFrame before.
-- set the entire CFrame
p_partnew.CFrame = CFrame.new(i, p_coord_y, p_coord_z)
-- p_partnew.CFrame now looks like this (identity rotation) :
-- i p_coord_y p_coord_z | 1 0 0 | 0 1 0 | 0 0 1
-- apply a transformation of angles
p_partnew.CFrame = p_partnew.CFrame * CFrame.Angles(p_angles_x, p_angles_y, p_angles_z)
So if the first example works with rotation, but the second doesn't, then the answer is that the original rotation information is getting lost when you set the CFrame. You could try saving that information first, then applying it to the new position, and then applying your changes (assuming that your changes are small increments). That would look something like this :
-- store the previous orientation
local o = p_partnew.Orientation
-- create a set of changes based on new angles
local angles = CFrame.Angles(math.rad(o.X) + p_angles_x, math.rad(o.Y) + p_angles_y, math.rad(o.Z) + p_angles_z)
-- set the new CFrame
p_partnew.CFrame = CFrame.new(i, p_coord_y, p_coord_z):ToWorldSpace(angles)

Keras back propagation

Suppose I have defined a network using Keras as follows:
model = Sequential()
model.add(Dense(10, input_shape=(10,), activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(9, activation='sigmoid'))
It has some weights:
[array([[ 0.33494413, -0.34308964, 0.12796348, 0.17187083, -0.40254939,
-0.06909397, -0.30374748, 0.14217842, 0.41163749, -0.15252003],
[-0.07053435, 0.53712451, -0.43015254, -0.28653857, 0.53299475, ...
When I give it some input:
[[ 0. 0.5 0. 0.5 1. 1. 0. 0.5 0.5 0.5]]
It produces some output:
[0.5476531982421875, 0.5172237753868103, 0.5247090458869934, 0.49434927105903625, 0.4599153697490692, 0.44612908363342285, 0.4727349579334259, 0.5116984844207764, 0.49565717577934265]
Whereas the desired output is:
[0.6776225034927386, 0.0, 0.5247090458869934, 0.0, 0.0, 0.0, 0.4727349579334259, 0.5116984844207764, 0.49565717577934265]
Making the Error Value:
[0.12996930525055106, -0.5172237753868103, 0.0, -0.49434927105903625, -0.4599153697490692, -0.44612908363342285, 0.0, 0.0, 0.0]
I can then calculate the evaluated gradients as follows:
from keras import backend as k
import tensorflow as tf

outputTensor = model.output
listOfVariableTensors = model.trainable_weights
gradients = k.gradients(outputTensor, listOfVariableTensors)
trainingInputs = inputs
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
evaluated_gradients = sess.run(gradients, feed_dict={model.input: trainingInputs})
Which yields the evaluated gradients:
[array([[ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ],
[ 0.01015381, 0. , 0. , 0.03375177, -0.05576257,
0.03318337, -0.02608909, -0.06644543, -0.03461133, 0. ],
[ 0.02030762, 0. , 0. , 0.06750354, -0.11152515,
0.06636675, -0.05217818, -0.13289087, -0.06922265, 0. ],...
I would like to use these gradients to adjust my model, but I am losing track of the math & theory of backpropagation. Am I on the right track?
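For reference, a plain gradient-descent update applies w ← w − η·∇L. A minimal sketch of that step (the learning rate is an assumption, and for training the gradients would be taken of a loss, e.g. the squared error against the desired output, rather than of the raw model output):
learning_rate = 0.01  # assumed value
# each gradient array has the same shape as the corresponding weight array
new_weights = [w - learning_rate * g
               for w, g in zip(model.get_weights(), evaluated_gradients)]
model.set_weights(new_weights)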

Flip face in obj file

I'm dynamically creating a 3D model and writing an .obj file. I'm having a problem with flipping the visible side of faces.
I've made a simple example:
v 0.0 0.0 0.0
v 0.0 1.0 0.0
v 1.0 0.0 0.0
v 1.0 1.0 0.0
vn 0.0 0.0 -1.0
f 1//1 4//1 3//1
f 1//1 2//1 4//1
The above is a square divided into two triangles. The vn line is the face normal (the vector that is perpendicular to the face). I've read online that to flip the face, you can negate the normal vector. However, if I multiply the normal vector by -1 and try the following...
v 0.0 0.0 0.0
v 0.0 1.0 0.0
v 1.0 0.0 0.0
v 1.0 1.0 0.0
vn 0.0 0.0 1.0
f 1//1 4//1 3//1
f 1//1 2//1 4//1
It doesn't actually flip the visible side of the face when I import it into Unity. The lighting changes a little bit, but the same side is still visible and the other side is still invisible, even when I orbit to the opposite side.
The normal only influences the lighting. To flip a face, you need to reverse the index order of the triangle, like below.
f 3//1 4//1 1//1
f 4//1 2//1 1//1
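A minimal sketch of automating this in Python (file names are hypothetical), reversing the vertex order of every face line in the OBJ file:
with open('square.obj') as f:
    lines = f.readlines()

flipped = []
for line in lines:
    if line.startswith('f '):
        head, *verts = line.split()
        # reversing the winding order flips which side of the face is rendered
        flipped.append(' '.join([head] + verts[::-1]) + '\n')
    else:
        flipped.append(line)

with open('square_flipped.obj', 'w') as f:
    f.writelines(flipped)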