SimpleITK GetDirection() explained

Could someone explain the output of SimpleITK's GetDirection() and how it relates to the DICOM Image Orientation (Patient) header and to NIfTI? Ultimately, I would like to get the right Image Orientation (Patient) for a given cut, e.g. axial, sagittal, or coronal.
I am aware of the example at https://simpleitk.readthedocs.io/en/next/Examples/DicomSeriesFromArray/Documentation.html. However, it's not clear to me why Image Orientation (Patient) is set the way it is there.

In the xdrt library that I use, they deal with it like this when writing a DICOM:
direction = sitk_image.GetDirection()  # 3x3 direction matrix, flattened row-major
# First two columns of the matrix: row cosines, then column cosines
_direction = (direction[0], direction[3], direction[6], direction[1], direction[4], direction[7])
...
image_slice.SetMetaData("0020|0037", "\\".join(map(str, _direction)))  # Image Orientation (Patient)
You can see their full code here: https://github.com/NKI-AI/xdrt/blob/ca3e83459dd76521bac597465c815cf6a3da35ad/xdrt/cli/utils.py#L157
Maybe it can help you.
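To unpack that reordering: GetDirection() returns the 3x3 direction-cosine matrix flattened in row-major order, so its columns are the physical directions of the image axes. DICOM's Image Orientation (Patient) (0020,0037) stores six values: the direction cosines of an image row followed by those of an image column, i.e. exactly the first two columns of that matrix, which is why the snippet picks indices 0, 3, 6 and then 1, 4, 7. Here is a minimal sketch of the round trip (the per-cut tuples at the end are the usual conventions in the LPS patient coordinate system; sign choices can vary with acquisition):
import numpy as np
import SimpleITK as sitk

def iop_from_image(image: sitk.Image) -> str:
    """Build the DICOM Image Orientation (Patient) value from a 3D image."""
    d = np.asarray(image.GetDirection()).reshape(3, 3)  # row-major 3x3
    iop = np.concatenate([d[:, 0], d[:, 1]])  # row cosines, then column cosines
    return "\\".join(str(v) for v in iop)

# Identity direction -> axial orientation: rows along +x, columns along +y (LPS)
img = sitk.Image(64, 64, 16, sitk.sitkInt16)
print(iop_from_image(img))  # 1.0\0.0\0.0\0.0\1.0\0.0

# Common conventions for the three standard cuts:
#   axial:    1\0\0\0\1\0
#   coronal:  1\0\0\0\0\-1
#   sagittal: 0\1\0\0\0\-1
For a coronal or sagittal stack you would set the corresponding direction matrix on the image first; the function then produces the matching IOP automatically.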

Related

Change brightness of markers in Flutter

I made PNGs for custom markers on my GoogleMap view. Using, e.g.:
BitmapDescriptor bikeBlack = await BitmapDescriptor.fromAsset(const ImageConfiguration(), "assets/images/bike_black.png");
I obtain an object that I can use as a marker directly. However, I also need to be able to change the brightness of about 50 markers of 4 different types at runtime. The only solution I have come up with so far is creating 1024 different PNGs. That would increase the app size by about 2 MB, but it might be a lot of work.
I can't really afford await statements, since they slow the app down considerably. But if I have to, I can force myself to live with that.
As far as I can tell, a marker icon has to be a BitmapDescriptor, and I cannot find a way to change the brightness of a BitmapDescriptor.
I'm close to giving up and just writing a Python script that will generate the 1,024 PNGs for me. But there must be a nicer, more efficient solution. If you have one, please let me know.
[EDIT]:
I went with creating 1024 images. For anybody in the same situation, this is the script I used:
from PIL import Image, ImageEnhance

img = Image.open("../img.png")
enhancer = ImageEnhance.Brightness(img)
for i in range(256):
    # factor 0.0 is black, 1.0 is the original image
    img_output = enhancer.enhance(i / 255)
    img_output.save("img_{}.png".format(i), format="png")
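If anyone needs the four-types version, the same idea extends to a small batch loop. A sketch assuming the four source PNGs live in assets/images/ (the file names here are made up):
import os
from PIL import Image, ImageEnhance

# Hypothetical file names; substitute your four marker PNGs
sources = ["bike_black.png", "bike_white.png", "car_black.png", "car_white.png"]

os.makedirs("out", exist_ok=True)
for name in sources:
    stem = os.path.splitext(name)[0]
    enhancer = ImageEnhance.Brightness(Image.open(os.path.join("assets/images", name)))
    for i in range(256):  # 4 types x 256 levels = 1024 PNGs
        enhancer.enhance(i / 255).save(os.path.join("out", "{}_{}.png".format(stem, i)))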

How to persist markers using ADF in TangoARPoseController

I am trying to persist markers in an augmented reality game. Here is the gist of what I am doing:
I have my users record and save an area to an ADF. Then they drop markers into the scene and save their position data, in Unity world coordinates, to a text file. I then restart the app, load and localize to the ADF, and load the markers.
To get this working, I've modified the ARPoseController.cs file in the Unity demo package to use the Area Description as its base frame. In the _UpdateTransformation method I've swapped out the frame pair
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
for
pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
I've also added some code confirming that I'm successfully localizing to the ADF, but my markers' positions in Unity world space don't line up properly with the real environment.
I can confirm that my markers save and load properly relative to the START_OF_SERVICE origin, so I assume they serialize and deserialize correctly. What could be causing this? Am I wrong in assuming this should just work by switching the base frame pair to AREA_DESCRIPTION instead of START_OF_SERVICE?
I had a similar problem getting the AR and ADF integrated; I had to modify the TangoPointCloud to check whether you're using an Area Description in OnTangoDepthAvailable() and adjust the baseFrame as required, i.e.:
if (m_tangoDeltaPoseController.m_useAreaDescriptionPose)
{
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
}
else
{
    pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_START_OF_SERVICE;
    pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
}
That way, the geometry of the point cloud adjusts itself based on the ADF offset instead of the device-start origin.
After that change, the AR sample code for dropping markers registers the surface properly, so I'm placing the markers in the correct spots and orientation. I'm still encountering some flakiness with the markers not adjusting when relocalized, though; I have to look into the AreaLearningInGameController for loop-closure events.
Hope that helps!

Colorizing an image in Swift

I am trying to figure out some basic operations for working with images (PNG and JPG) in Swift.
I have gotten to the point where I can successfully load a given image, but am unsure how to properly apply image adjustments that will stick.
Specifically I am trying to be able to trigger the following:
colorize (HSB adjustment)
invert colors
Most of the code samples I could find online are for Objective-C, and I've been unable to get anything working in my current playground. From the documentation it seems I should be able to use CoreImage filters, but that is where I get lost.
Can anyone point me to or show me a valid (simple) approach that accomplishes this in Swift?
Many thanks in advance!
*** EDIT ***
Here's the code I've got so far, working a bit better thanks to that link. I was still running into a crash when trying to output the result; the culprit and the fix are marked in the comments below. So far, all the filtering examples I could find are Objective-C based.
import UIKit

var img = UIImage(named: "background.png")
var context = CIContext(options: nil)
var filter = CIFilter(name: "CIColorInvert")
// The crash: CIFilter expects a CIImage, not a UIImage
filter.setValue(CIImage(image: img!), forKey: kCIInputImageKey)
// Render the output through the context to get a displayable UIImage
let newImg = UIImage(CGImage: context.createCGImage(filter.outputImage, fromRect: filter.outputImage.extent))
Have you tried Google? "coreimage swift" gave me: http://www.raywenderlich.com/76285/beginning-core-image-swift
If this doesn't help, please post the code you've tried that didn't work.
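Not Swift, but as a sanity check of what the two filters should produce, both operations are one-liners in Python with Pillow; a minimal sketch (the file name is assumed):
from PIL import Image, ImageEnhance, ImageOps

img = Image.open("background.png").convert("RGB")  # invert() needs a mode without alpha

inverted = ImageOps.invert(img)  # same idea as CIColorInvert

# A rough HSB-style "colorize": scale saturation, then brightness
colorized = ImageEnhance.Color(img).enhance(1.5)  # +50% saturation
colorized = ImageEnhance.Brightness(colorized).enhance(0.9)  # slightly darker

inverted.save("background_inverted.png")
colorized.save("background_colorized.png")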

CCBezierTo easeout

I'm working in Objective-C at the moment.
I am drawing a path for my sprite to follow, and it all seems to be working fine, but I have one question that didn't seem to be answered anywhere.
My first two Bezier points are rather close together relative to the third point, and when my sprite animates along this path it looks like it is eased in to the animation, with an abrupt stop at the end.
Is there a way to control this? I'd like the animation to run at one consistent speed, or possibly be eased out.
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
[turkey runAction:bezierForward];
Give this a try:
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
id easeBezierForward = [CCEaseOut actionWithAction:bezierForward rate:2.0f];
[turkey runAction:easeBezierForward];
You will want to play with the rate value to see what ends up looking best to you. You may also want to try some of the other ease-out actions, like CCEaseSineOut.
Link: Cocos2d Ease Actions Guide
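If it helps to see what rate actually does: as I read the cocos2d source, CCEaseOut remaps the action's normalized time t to t^(1/rate), so higher rates start faster and decelerate harder into the end. A quick Python sketch of that curve (the formula is my reading of the source, not from the guide):
# CCEaseOut (as I understand it) remaps normalized time t to t ** (1 / rate).
def ease_out(t, rate):
    return t ** (1.0 / rate)

for rate in (1.0, 2.0, 4.0):
    curve = " ".join("{:.2f}".format(ease_out(i / 10.0, rate)) for i in range(11))
    print("rate={}: {}".format(rate, curve))
# rate=1.0 is linear; higher rates cover most of the path early, then ease in to the stop.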
According to the docs it should be something like this; note that CCEaseOut wraps an existing action rather than taking the bezier parameters itself:
id bezierForward = [CCBezierTo actionWithDuration:totalDistance/300.f bezier:bezier];
id easedForward = [CCEaseOut actionWithAction:bezierForward rate:2.0f];
[turkey runAction:easedForward];
As stated in the docs:
Variations
CCEaseIn: acceleration at the beginning
CCEaseOut: acceleration at the end
CCEaseInOut: acceleration at the beginning / end

Texture feature extraction using Gray Level Cooccurence Matrix

I'm doing a project on liver tumor classification. I used this code and it gave some output; I don't know whether it's correct.
I initially used a region-growing method for liver segmentation, and from that I segmented the tumor using FCM. So I gave the tumor-segmented image as input to this GLCM program. Was that correct? If so, I think my output will also be correct.
I gave the parameters exactly as in the example. What do they actually mean? Do I need to change them for different images, and if so, how should I choose them? I'm completely new to this, so kindly guide me.
I got this output. Am I correct?
stats =
autoc: [1.857855266614132e+000 1.857955341199538e+000]
contr: [5.103143332457753e-002 5.030548650257343e-002]
corrm: [9.512661919561399e-001 9.519459060378332e-001]
corrp: [9.512661919561385e-001 9.519459060378338e-001]
cprom: [7.885631654779597e+001 7.905268525471267e+001]
cshad: [1.219440700252286e+001 1.220659371449108e+001]
dissi: [2.037387269065756e-002 1.935418927908687e-002]
energ: [8.987753042491253e-001 8.988459843719526e-001]
entro: [2.759187341212805e-001 2.743152140681436e-001]
homom: [9.930016927881388e-001 9.935307908219834e-001]
homop: [9.925660617240367e-001 9.930960070222014e-001]
maxpr: [9.474275457490587e-001 9.474466930429607e-001]
sosvh: [1.847174384255155e+000 1.846913030238459e+000]
savgh: [2.332207337361002e+000 2.332108469591401e+000]
svarh: [6.311174784234007e+000 6.314794324825067e+000]
senth: [2.663144677055123e-001 2.653725436772341e-001]
dvarh: [5.103143332457753e-002 5.030548650257344e-002]
denth: [7.573115918713391e-002 7.073380266499811e-002]
inf1h: [-8.199645492654247e-001 -8.265514568489666e-001]
inf2h: [5.643539051044213e-001 5.661543271625117e-001]
indnc: [9.980238521073823e-001 9.981394883569174e-001]
idmnc: [9.993275086521848e-001 9.993404634013308e-001]
Kindly guide me. Thank you
It's OK, but I don't think you usually need all that extra information. I usually prefer the following code:
GLCM2 = graycomatrix(img, 'Offset', [1 1]); % one GLCM, distance-1 diagonal neighbour
stats = graycoprops(GLCM2);                 % contrast, correlation, energy, homogeneity
I hope it will help you.
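For completeness, the same two-step pattern exists outside MATLAB. A rough scikit-image equivalent (recent versions; older releases spell it greycomatrix), which also makes the parameters explicit: distances are pixel offsets and angles are directions, together playing the role of graycomatrix's 'Offset':
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in for the segmented tumor ROI; use your own uint8 image here
img = (np.random.rand(64, 64) * 255).astype(np.uint8)

# Distance 1 pixel in four directions (0, 45, 90, 135 degrees)
glcm = graycomatrix(img, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

for prop in ("contrast", "dissimilarity", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).ravel())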