Vega-Lite - Error in the scales of circles and lines

I'm trying to use Vega-Lite to draw something like a graph, where nodes are connected to other nodes by edges. The nodes vary in size according to a variable, and the widths of the edges vary as well.
The information for the nodes is stored in a different dataset than the information for the edges.
The problem is that once I try to plot everything together ("nodes" + "edges"), Vega-Lite renders the nodes very small (almost invisible). If I don't use "size" for my edges plot, everything comes out normal.
Here is an example of what I'm doing (note that the notation is a bit different from native Vega-Lite, because I'm programming in Julia):
v1 = @vlplot(
    mark={:circle, opacity=1},
    x={μ[:,1], type="quantitative", axis=nothing},
    y={μ[:,2], type="quantitative", axis=nothing},
    width=600,
    height=400,
    size={μ_n, legend=nothing, type="quantitative"})
v2 = @vlplot(
    mark={"type"=:circle, color="red", opacity=1},
    x={ν[:,1], type="quantitative", axis=nothing},
    y={ν[:,2], type="quantitative", axis=nothing},
    width=600,
    height=400,
    size={ν_m, legend=nothing, type="quantitative"})
Then, when I create the visualization for the edges, and plot everything together:
v3 = @vlplot(
    mark={"type"=:line, color="black", clip=false},
    data=df,
    encoding={
        x={"edges_x:q", axis=nothing},
        y={"edges_y:q", axis=nothing},
        opacity={"ew:q", legend=nothing},
        size={"ew:q", scale={range=[0,2]}, legend=nothing},
        detail={"pe:o"}},
    width=600,
    height=400
)
@vlplot(view={stroke=nothing}) + v3 + v2 + v1
Any ideas on why this is happening and how to fix it?
(Note that the "scale" attribute is not the cause; even if I remove it, the problem persists.)

When rendering compound charts, Vega-Lite uses shared scales by default (see https://vega.github.io/vega-lite/docs/resolve.html): it looks like sharing the size scale between the line and circle plots is what leads to the poor results.
I'm not familiar with the VegaLite.jl syntax, but the JSON specification you'll want on the top-level chart is:
"resolve": {"scale": {"size": "independent"}}

Related

Interpolation between two images with different pixel sizes

For my application, I want to interpolate between two images (CT to PET).
So I map between them like this:
% Build a query grid that stretches the CT volume onto the PET matrix size
[X,Y,Z] = ndgrid(linspace(1, size(imagedata_ct,1), size_pet(1)), ...
                 linspace(1, size(imagedata_ct,2), size_pet(2)), ...
                 linspace(1, size(imagedata_ct,3), size_pet(3)));
new_imageData_CT = interp3(imagedata_ct, X, Y, Z, 'nearest', -1024);
The size of my new image new_imageData_CT matches the PET image. The problem is that the data in my new image is not scaled correctly, so it appears compressed. I think the reason is that the voxel sizes of the two images differ and are not taken into account by the interpolation. For example:
CT image size : 512x512x1027
CT voxel size[mm] : 1.5x1.5x0.6
PET image size : 192x126x128
PET voxel size[mm] : 2.6x2.6x3.12
So how can I take the voxel size into account in the interpolation?
You need to perform a matching in the patient coordinate system, but there is more to consider than just the resolution and the voxel size. You need to synchronize the positions (and maybe the orientations also, but this is unlikely) of the two volumes.
You may find this thread helpful for finding out which DICOM tags describe the volume and how to calculate the transformation matrices for converting between patient coordinates (x, y, z in millimeters) and volume coordinates (column, row, slice number).
You have to make sure that the volume positions are comparable, as the positions of the slices in the CT and PET do not necessarily refer to the same origin. The easy way to do this is to compare the DICOM attribute Frame Of Reference UID (0020,0052) of the CT and PET slices. For all slices that share the same Frame Of Reference UID, the position of the slice in the DICOM header refers to the same origin.
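In MATLAB you can check this with dicominfo (a minimal sketch; the file names are placeholders):

ct  = dicominfo('ct_slice.dcm');    % placeholder file names
pet = dicominfo('pet_slice.dcm');

% Slice positions are only directly comparable if they share a frame of reference
if isfield(ct, 'FrameOfReferenceUID') && isfield(pet, 'FrameOfReferenceUID') && ...
        strcmp(ct.FrameOfReferenceUID, pet.FrameOfReferenceUID)
    disp('Slice positions refer to the same origin.');
else
    disp('No shared frame of reference; see below.');
end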
If the datasets do not contain this tag, it is going to be much more difficult, unless you simply take it as an assumption. There are methods to deduce the matching slices of two different volumes from the contents of the pixel data, referred to as "registration", but this is a science of its own. See the link from Hugues Fontenelle.
BTW: in your example, you are not going to find a matching voxel in both volumes for each position, as the volumes have different physical extents. E.g. for the x-direction:
CT: 512 * 1.5 = 768 millimeters
PET: 192 * 2.6 = 499.2 millimeters
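To illustrate how the voxel size enters the interpolation, here is a hedged MATLAB sketch that resamples the CT onto the PET grid in millimeter coordinates. It assumes both volumes share the same origin and orientation, which, as explained above, you must verify first; the spacing values are taken from the question:

% Voxel sizes in millimeters (from the question)
ct_spacing  = [1.5 1.5 0.6];
pet_spacing = [2.6 2.6 3.12];

% Physical (mm) coordinates of the CT voxel centers along each axis,
% assuming a shared origin and identical orientation (a strong assumption)
ct_x = (0:size(imagedata_ct,1)-1) * ct_spacing(1);
ct_y = (0:size(imagedata_ct,2)-1) * ct_spacing(2);
ct_z = (0:size(imagedata_ct,3)-1) * ct_spacing(3);

% Physical coordinates of the PET voxel centers
[PX,PY,PZ] = ndgrid((0:size_pet(1)-1) * pet_spacing(1), ...
                    (0:size_pet(2)-1) * pet_spacing(2), ...
                    (0:size_pet(3)-1) * pet_spacing(3));

% Sample the CT at the PET voxel positions; -1024 (air) outside the volume
new_imageData_CT = interpn(ct_x, ct_y, ct_z, double(imagedata_ct), ...
                           PX, PY, PZ, 'nearest', -1024);

Any PET position that falls outside the CT's physical extent gets the -1024 fill value, which is exactly the size mismatch computed above.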
I'll leave answering the question to someone else, but I think you're asking the wrong one. I lack context of course, but at first glance Matlab isn't the right tool for the job.
Have a look at ITK (a C++ library with Python wrappers) and the "Multi-modal 3D image registration" article.
Try 3D Slicer (it has a GUI on top of the previous tool).
Try FreeSurfer (similar, focused on brain scans).
After you've done that registration step, you could export the resulting images (now of identical size and spacing), and continue with your interpolation in Matlab if you wish (or with the same tools).
There is a module in Slicer called PETCTFUSION which aligns the PET scan to the CT image. You can install it in the new version of Slicer.
In the module's Display panel, options to select a colorizing scheme for the PET dataset are provided:
Grey will provide white-to-black colorization, with black indicating the highest count values.
Heat will provide a warm color scale, with dark red at the lowest and white at the highest count values.
Spectrum will provide a color scale that goes from cool (dark blue) at the low-count end to white at the highest.
This panel also provides a means to adjust the window and level of both PET and CT volumes.
I normally use the resampleinplace tool after the registration; you can find it in the Registration package, under Resample Image.
If you would like to know more about PETCTFUSION, there is a link below:
https://www.slicer.org/wiki/Modules:PETCTFusion-Documentation-3.6
Since Slicer is scriptable with Python, you can use the Python interactor to run your own code too.
Let me know if you face any problems.

Color nodes in Networkx Graph based on specific values

I have looked a bit into the node_color keyword parameter of the nx.draw() function. Here are two different graphs colored using node_color:
node_color = [.5, .5, 0., 1.]: colors appear as expected.
node_color = [.9, 1., 1., 1.]: colors do not appear as expected.
In the second image, I would expect the color of node 1 to be almost as dark. I assume what is happening is that the colormap is being scaled from the minimum value present to the maximum value present. For the first example that's fine, but how can I make the colormap always scale as 0 = white, 1 = blue?
You are correct about the cause of the problem. To fix it, you need to define vmin and vmax.
I believe
nx.draw(G, node_color=[0.9,1.,1.,1.], vmin=0, vmax=1)
will do what you're after (I would need to know what colormap you're using to be sure).
For edges, there are similar parameters: edge_vmin and edge_vmax.
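For completeness, a minimal runnable sketch (the four-node graph and the Blues colormap are placeholder assumptions; the question doesn't say which colormap is in use):

import matplotlib.pyplot as plt
import networkx as nx

G = nx.path_graph(4)  # placeholder graph with 4 nodes

# Pinning vmin/vmax fixes the colormap normalization to [0, 1], so 0 always
# maps to the low end (white) and 1 to the high end (blue), regardless of
# the values actually present in node_color.
nx.draw(G, node_color=[0.9, 1.0, 1.0, 1.0], cmap=plt.cm.Blues,
        vmin=0, vmax=1, with_labels=True)
plt.show()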

Create label for conical surface

Is it possible to create labels for conical surfaces in MS Word?
I have a label ready, but it needs to be adapted so that the printed label can be pasted onto a conical surface (e.g. a coffee cup).
I doubt it. You could try using WordArt: these are predefined shapes, maybe one of those matches what you want to do.
I suspect you'd be better off using a program like Adobe Illustrator, which can convert text to vector images which you can distort any way you like.
The bigger problem is that the label won't fit properly on a shape that is curved in two dimensions: you'll always have folds somewhere.

How to apply horizontal break to a d3.js bar chart

I am using Rickshaw (based on d3.js) to plot stacked bar charts. The problem is that the first bar is usually much taller than the others, ruining the visual comparison.
Using a logarithmic scale is (I guess) not an option here, because then the proportions between the stacks in a bar would be broken. I wanted to introduce a horizontal break like in the following image:
However, I cannot find any out-of-the box feature of Rickshaw or d3.js to do something like this. Any suggestions on how to make one?
This would require quite a bit of additional work. Here's an outline of what you would have to do.
Create two scales, one for the lower part and one for the upper. Set domains and ranges accordingly.
Pass values to the lower scale, capping them at the maximum domain value such that bars that are longer will reach the maximum.
Pass values to the upper scale, filtering those that are lower than the minimum.
You basically need to create two graphs that are aligned with each other to give the impression that there's just one. If you keep that in mind, doing it shouldn't be too difficult.
Here's a quick and dirty proof of concept that uses the linear scale's .clamp(true) to prevent the bars from becoming too long for values outside the domain.
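The heart of such a proof of concept looks roughly like this (a hedged sketch, not the original snippet; the break point at 100, the resume point at 500, and the pixel ranges are made-up numbers):

// Lower panel: values from 0 up to the break point at 100.
// clamp(true) pins bars for larger values to the top of this range.
var lowerScale = d3.scaleLinear()
    .domain([0, 100])
    .range([0, 150])
    .clamp(true);

// Upper panel: only the part of a value above 500 is drawn here.
var upperScale = d3.scaleLinear()
    .domain([500, 1000])
    .range([0, 80])
    .clamp(true);

// Each bar is drawn twice, once per panel, so the two aligned charts
// read as a single chart with a break between 100 and 500.
var value = 800;                       // example data value
var lowerHeight = lowerScale(value);   // clamped to 150
var upperHeight = value > 500 ? upperScale(value) : 0;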
The d3fc-discontinuous-scale component adapts any other scale (for example a d3 linear scale), adding the concept of discontinuities. These discontinuities are determined via a 'discontinuity provider', which can be used to create one or more 'gaps' in a scale.
For example, to remove a range, you can construct a scale as follows:
var scale = scaleDiscontinuous(scaleLinear())
.discontinuityProvider(discontinuityRange([50, 75]))
Here is a complete example that shows how to use this to create a 'break' in a scale in order to render values that have large gaps in their overall range.
https://bl.ocks.org/ColinEberhardt/b60919a17c0b14d745c881f48effe681

Location based segmentation of objects in an image (in Matlab)

I've been working on an image segmentation problem and can't seem to get a good idea for my most recent problem.
This is what I have at the moment:
(Image omitted: a generic example showing four squares stacked roughly on top of each other, plus one square off to the right.)
Is there a robust algorithm that can automatically discard the right square as not belonging to the group of the other four squares (which I know should always be stacked more or less on top of each other)?
It can sometimes happen that one of the stacked boxes is not found, so there's a gap, or that the bogus box is on the left side.
Your input is greatly appreciated.
If you have a way of producing BW images like your example:
s = regionprops(BW, 'centroid');   % centroid of each connected component
centroids = cat(1, s.Centroid);    % N-by-2 matrix of [x y] positions
xpos = centroids(:,1);             % x-positions of the boxes
From here you have multiple ways to go, depending on whether you always have just one separated box and one set of grouped boxes or not. For the "one bogus box far away, rest closely grouped" case (away from Matlab, so this is unchecked) you could even do something as simple as:
d = abs(xpos - median(xpos));            % distance from the median x-position
bogusbox = centroids(d == max(d), :);    % the box furthest from the rest
imshow(BW);
hold on;
plot(bogusbox(1), bogusbox(2), 'r*');    % mark the outlier with a red star
Making something that's robust for your actual use case, which I assume doesn't consist of neat boxes, is another matter; as suggested in the comments, you need some idea of how closely your good boxes are positioned and how separate the bogus box(es) will be.
For example, you could use other regionprops measurements such as 'BoundingBox' or 'Extrema', define some measure of how much the boxes overlap in x relative to each other, and then group using that (this could be made to work even if you have multiple stacks in an image); a sketch of this follows.
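A hedged sketch of that overlap idea using 'BoundingBox' (the 0.5 overlap ratio and the minimum of two overlapping neighbours are made-up thresholds you would need to tune):

s  = regionprops(BW, 'BoundingBox');
bb = cat(1, s.BoundingBox);          % each row: [x y width height]
x1 = bb(:,1);                        % left edge of each box
x2 = bb(:,1) + bb(:,3);              % right edge of each box

n = numel(x1);
overlap = zeros(n);
for i = 1:n
    for j = 1:n
        % length of the intersection of the two x-intervals,
        % normalized by the width of the narrower box
        inter = max(0, min(x2(i), x2(j)) - max(x1(i), x1(j)));
        overlap(i,j) = inter / min(bb(i,3), bb(j,3));
    end
end

% A box whose x-extent overlaps substantially with fewer than two other
% boxes does not belong to the stack (subtract 1 for self-overlap)
isBogus = (sum(overlap > 0.5, 2) - 1) < 2;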