Broken visualisation of images in the Cityscapes test set - python-imaging-library

I am working on the Cityscapes dataset, but I have a problem visualising the RGB masks in the test set. What I am doing is as follows.
Reading the masks from the root:
import os
import matplotlib.pyplot as plt
from PIL import Image

root_path = './Baysian_Seg/Inputs/Cityscapes/test_labels'
mask_root = os.path.join(root_path, 'ex_test_labels')
berlin_mask_list = sorted(os.listdir(os.path.join(mask_root, 'berlin')))
mask_lists = ['berlin_000010_000019_gtFine_color.png',
              'berlin_000010_000019_gtFine_instanceIds.png',
              'berlin_000010_000019_gtFine_labelIds.png',
              'berlin_000010_000019_gtFine_polygons.json']
mask1 = Image.open(os.path.join(mask_root, 'berlin', mask_lists[0]))
plt.imshow(mask1)
But I get only a black image. I do not know where I am making a mistake. I did the same for the training and validation sets and could visualise those masks correctly. I would appreciate any help.
P.S. I also tried converting the PIL mask with mask1.convert('RGB'), but that did not help either.

You can't test on Cityscapes: the test dataset's ground-truth labels are not public, so the label images in the test set are all black.
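A quick way to confirm the test labels are empty placeholders rather than a plotting bug (a minimal sketch, assuming NumPy is available):
import numpy as np
from PIL import Image

# A placeholder label image is all zeros, so its maximum pixel value is 0.
mask = Image.open('berlin_000010_000019_gtFine_color.png')
print(np.array(mask).max())  # prints 0 for an all-black test label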

Related

Problem with creating a 3D mask of DICOM-RT contour data using MATLAB

I am having trouble extracting a tumor from a DICOM image using an RT mask. Due to GDPR I am not allowed to share the DICOM files, even though they are anonymized; however, I am allowed to share the rendered images themselves. I want to extract the drawn tumor from the CT images using the drawn GTV stored as an RT structure, using MATLAB.
Let's say that the directory where my CT images are stored is called DicomCT and that the RT struct DICOM file is called rtStruct.dcm.
I can read and visualize my CT images as follows:
V = dicomreadVolume("DicomCT");
V = squeeze(V);
volshow(V)
[Image: volume V, the 3-D CT image]
I can load my rt structure using:
Info = dicominfo("rtStruct.dcm");
rtContours = dicomContours(Info);
I get a plot showing the different contours:
plotContour(rtContours)
[Image: contours for the GTV of the CT image]
I used this link for the information on how to create the mask such that I can apply it to the 3D CT image: https://nl.mathworks.com/help/images/create-and-display-3-d-mask-of-dicom-rt-contour-data.html#d124e5762
The DICOM information tells me the image should have 3 mm slices, hence I took 3x3x3 for the referenceInfo.
referenceInfo = imref3d(size(V),3,3,3);
rtMask = createMask(rtContours, 1, referenceInfo)
When I plot my rtMask, I get a grey screen without any trace of the mask. I think something is wrong with the way I define the referenceInfo, but I have no idea what is wrong or how to fix it.
volshow(rtMask)
[Image: volume plot of the RT mask]
What would be the best way forward?
I was actually having a similar problem a couple of days ago. I think you might have two possible problems (neither of them your fault).
Your grey screen might be a rendering error that isn't being reported, because of how volshow() works internally. It does some things I don't fully understand with graphics memory and with how it represents numeric volumes vs. logical volumes. I found this out the hard way on my work PC, which only has Intel HD graphics. Using
iptsetpref('VolumeViewerUseHardware',true)
for logical volumes worked fine for me. You can also test this by replotting the mask as double instead of logical:
rtMask = double(rtMask)
volshow(rtMask)
If it's not a rendering error caused by the interaction between your system and volshow(), it might be a genuine confusion about createMask and the reference info it needs (encouraged by a poor explanation in the tutorial you linked). Using pixel-size info instead of the actual axes limits can produce a partial segmentation visualization, or miss it entirely because of scale. This nice person explained it more elegantly in the post below, using the actual geometric extent of the DICOM contours as the limits:
https://es.mathworks.com/support/search.html/answers/1630195-how-to-convert-dicom-rt-structure-to-binary-mask.html?fq%5B%5D=asset_type_name:answer&fq%5B%5D=category:images/basic-import-and-export&page=1
Basically, use
plotContour(rtContours);
ax = gca;
referenceInfo = imref3d(size(V),ax.XLim,ax.YLim,ax.ZLim);
rtMask = createMask(rtContours, 1, referenceInfo)
in addition to your code, and it might work.
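Once the mask comes out right, the extraction step the question asks about is just element-wise masking of the volume; here is a minimal sketch of the idea with NumPy stand-ins (in MATLAB the same step is V .* cast(rtMask, 'like', V)):
import numpy as np

# Stand-in arrays with hypothetical shapes; the real data lives in MATLAB.
V = np.random.rand(512, 512, 100)        # stand-in for the CT volume
rtMask = np.zeros(V.shape, dtype=bool)   # stand-in for createMask's output
tumor = np.where(rtMask, V, 0.0)         # keep intensities inside the contour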
I hope this could be of help to you.

Difference between image superimposition achieved with the MATLAB function imfuse and with an ImageJ composite image

I have two time-lapse images of a membrane surface. Both images were supposed to show the same region, but while adjusting focus the captured field might have drifted a bit. I used two routes to visualize the amount of drift: MATLAB v2021a and ImageJ. With MATLAB, I first tried superimposing the two images using imshowpair. The original grayscale images are fix and mov; imshowpair(mov,fix) yields imshowpair_comp, which clearly shows a possible drift. Then I tried the imfuse function as follows:
RF = imref2d(size(fix));
RM = imref2d(size(mov));
RM.XWorldLimits = RF.XWorldLimits;
RM.YWorldLimits = RF.YWorldLimits;
comp = imfuse(fix,RF,mov,RM,'falsecolor','Scaling','joint','ColorChannels',[1 2 0]);
It gave the composite image imfuse_comp. Next, I carried out image registration and I got imregcorr_comp.
tForm = imregcorr(mov,fix,"similarity");
movTransform = imwarp(mov,tForm,"OutputView",RF);
imshowpair(movTransform,fix)
This shows a properly aligned composite image. I tried doing the same using ImageJ:
Open fix.tiff in ImageJ. Image->Colors->Channels Tool->Composite->Red.
Open mov.tiff in ImageJ. Image->Colors->Channels Tool->Composite->Green.
Image->Colors->Merge Channels
This gave the composite image imagej_comp. The composite image obtained from ImageJ clearly shows that there was no misalignment between the two images to begin with!
I am unable to figure out where I went wrong, and I am now confused about which of the two routes to believe. Can someone please help me out?
Thanks!
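One way to arbitrate between the two routes is an independent, third estimate of the shift, e.g. phase correlation; here is a minimal sketch in Python with scikit-image (it assumes the two TIFFs are single-channel files on disk):
from skimage import io
from skimage.registration import phase_cross_correlation

# Phase correlation returns the (row, col) translation aligning mov to fix;
# a near-zero shift would support the ImageJ impression.
fix = io.imread('fix.tiff')
mov = io.imread('mov.tiff')
shift, error, phasediff = phase_cross_correlation(fix, mov)
print('estimated drift (rows, cols):', shift)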

The mask codes appear different for the same label in different images in the fastai lesson 3-camvid.ipynb example. Is this an issue?

I'm trying the fastai example, lesson 3-camvid.ipynb, and there is a verification at the beginning of the example of the images and labels, where we can see the original image and a mask (ground-truth semantic segmentation) for that original image.
For example, image 150 from the CamVid dataset:
img_f = fnames[150]
img = open_image(img_f)
img.show(figsize=(5,5))
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
mask = open_mask(get_y_fn(img_f))
mask.show(figsize=(5,5), alpha=1)
But if I change the image, for example to image 250 from the CamVid dataset:
The mask labels change, e.g. the road label has a different color than in the previous image:
Apparently, the order in which each label occurs in each image matters.
So, is this an issue? Is it something I should fix somehow?
Thanks in advance!
According to the official CamVid labels, Road has to be the color shown in image 250.
[Image: CamVid class labels]
You can leave the dataset as it is, but if you are looking to increase model accuracy you can change the labels of the corresponding pixels. The model is capable of identifying the road from the other examples in the dataset.
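If you do decide to remap pixels, here is a minimal sketch (the file name and id values are hypothetical placeholders, not official CamVid codes; check the dataset's label codes for the real ones):
import numpy as np
from PIL import Image

OLD_ID, NEW_ID = 17, 26  # hypothetical class ids, not official codes

mask = np.array(Image.open('0016E5_05160_P.png'))  # hypothetical label file
mask[mask == OLD_ID] = NEW_ID
# Saved as plain grayscale; reattach the palette if your pipeline needs it.
Image.fromarray(mask).save('0016E5_05160_P_fixed.png')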

Counting the number of objects in an image with MATLAB

I need to count the number of pieces of chalk in an image with MATLAB. I tried converting my image to grayscale and then locating borders. I also tried converting my image to a binary image and doing various morphological operations on it, but I didn't get the desired result. Maybe I did something wrong. Please help me!
My image:
You can use the fact that chalk is colorful and the separators are gray. Use rgb2hsv to convert the image to HSV color space, and take the saturation component. Threshold that, and then try using morphology to separate the chalk pieces.
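A minimal sketch of that idea (shown in Python with scikit-image; MATLAB's rgb2hsv plus a comparison does the same job, and the 0.3 cutoff is a guess to tune):
from skimage import io, color

# Colorful chalk has high saturation, gray separators have low saturation.
img = io.imread('http://i.stack.imgur.com/RWBDS.jpg')
hsv = color.rgb2hsv(img)
chalk_mask = hsv[:, :, 1] > 0.3  # channel 1 is saturation; tune the cutoff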
This is also not a full solution, but hopefully it can provide a starting point for you or someone else.
Like Dima, I noticed the chalk is brightly colored while the dividers are almost gray. I thought you could try to isolate the gray pixels (where a gray pixel has red = green = blue) and go from there. I tried applying filters and doing morphological operations but couldn't find anything satisfactory. Still, I hope this helps.
mim = imread('http://i.stack.imgur.com/RWBDS.jpg');
%we average all 3 color channels (note this isn't exactly equivalent to
%rgb2gray)
grayscale = uint8(mean(mim,3));
%now we check whether all channels (r,g,b) are within some threshold of
%one another; imabsdiff is used here because plain uint8 subtraction
%saturates at 0, which would silently discard pixels darker than the mean
my_gray_thresh = 25;
graymask = (imabsdiff(mim(:,:,1), grayscale) < my_gray_thresh)...
    & (imabsdiff(mim(:,:,2), grayscale) < my_gray_thresh)...
    & (imabsdiff(mim(:,:,3), grayscale) < my_gray_thresh);
figure(1)
imshow(graymask);
OK, so I spent a little time working on this, but unfortunately I'm out of time today, so I apologize for the incomplete answer. Maybe this will get you started (if you need more help, I'll edit this post over the weekend to give you a more complete answer :)).
Here's the code:
% Load the image referenced above (same imgur file).
RWBDS = imread('http://i.stack.imgur.com/RWBDS.jpg');

fgm = cell(1,3);
for i = 1:3
    I = RWBDS(:,:,i);                   % process one color channel at a time
    se = strel('rectangle', [265,50]);  % element roughly matching chalk shape
    Io = imopen(I, se);
    Ie = imerode(I, se);
    Iobr = imreconstruct(Ie, I);        % opening-by-reconstruction
    Iobrd = imdilate(Iobr, se);
    Iobrcbr = imreconstruct(imcomplement(Iobrd), imcomplement(Iobr));
    Iobrcbr = imcomplement(Iobrcbr);    % closing-by-reconstruction
    Iobrcbrm = imregionalmax(Iobrcbr);  % regional maxima as foreground markers
    se2 = strel('rectangle', [150,50]);
    Io2 = imerode(Iobrcbrm, se2);       % erode then dilate to clean up markers
    Ie2 = imdilate(Io2, se2);
    fgm{i} = Ie2;
end
fgm_final = fgm{1} + fgm{2} + fgm{3};   % combine the per-channel markers
figure, imagesc(fgm_final);
It does still pick up the edges at the sides of the image, but from here you can use bwconncomp to get the connected components, take the lengths of their major and minor axes, and filter objects by that ratio to get rid of the edge artifacts; see the sketch below.
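A minimal sketch of that filtering step (in Python with scikit-image; bwconncomp and regionprops are the MATLAB equivalents, and the 2.5 ratio cutoff is a guess to tune):
import numpy as np
from skimage import measure

# Stand-in marker image with one elongated, chalk-like blob.
mask = np.zeros((100, 100), dtype=bool)
mask[10:90, 20:30] = True

# Keep only components whose major/minor axis ratio looks chalk-shaped.
labels = measure.label(mask)
chalk_count = 0
for region in measure.regionprops(labels):
    ratio = region.major_axis_length / max(region.minor_axis_length, 1e-6)
    if ratio > 2.5:
        chalk_count += 1
print('chalk pieces found:', chalk_count)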
Anyways good luck!
EDIT:
I played with the code a tiny bit more and updated the code above with the new results. In cases where I was able to get rid of the side "noise", it also got rid of the side chalks, so I figured I'd just leave both in.
What I did: in most cases a conversion to HSV color space is the way to go, but as shown by @rayryeng this is not the way to go here. Hue works really well when there is one type of color, for example if all the chalks were red. (Logically you would think that going with the color channel would be better, but this is not the case.) In this case, the only thing all the chalks have in common is their relative shape. My solution basically used this concept by setting the structuring element se to something with the basic shape and aspect ratio of a chalk and performing morphological operations, which is what you originally guessed was the way to go.
For more details, I suggest you read MATLAB's documentation on these specific functions.
And I'll let you figure out how to get the last chalk based on what I've given you :)

How to fix the edges in the image

I have got a result as shown in the following image. As you can see, some edges are not straight. I want this image to be similar to this one (I'm not sure why the grey shade appears; maybe because I manually extracted it?). The main thing is for the white edges to be similar. I tried using morphological operations, but without much improvement.
Any ideas how to fix this issue?
Thanks.
I loaded your data into a variable called "toBeSolved."
rawData1 = importdata('to be solved.JPG');
[~,name] = fileparts('to be solved.JPG');
newData1.(genvarname(name)) = rawData1;
% Create new variables in the base workspace from those fields.
vars = fieldnames(newData1);
for i = 1:length(vars)
assignin('base', vars{i}, newData1.(vars{i}));
end
Now this image has 3 channels, as can be seen from:
>> size(toBeSolved)
ans =
452 440 3
The data in each channel appears to be identical, so maybe all you care about is the grayscale information from one channel? If that's the case, let's just take the first one:
data1 = im2double(toBeSolved(:,:,1));
And then normalize the data to the max value in the image:
data1 = data1 / max(data1(:));
Now take a look at a mesh view and we see that, as expected, there is significant noise and corruption around the edges:
The appearance of the edges suggests trying a thresholding operation on the data. I experimented with the threshold value and found that 0.13 produces some improvement:
data2 = double(data1 > 0.13);
which gives:
or the grayscale, imshow(data2):
I don't know if this is acceptable for your application, and the edges are not perfect, but it does seem improved over what you started with.
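For what it's worth, the hand-tuned 0.13 cutoff could also be chosen automatically, e.g. with Otsu's method; here is a minimal sketch in Python with scikit-image (MATLAB's graythresh is the equivalent):
from skimage import io, filters

# Otsu picks the threshold from the image histogram instead of hand-tuning.
data1 = io.imread('to be solved.JPG', as_gray=True).astype(float)
data1 /= data1.max()              # normalize to the max, as above
t = filters.threshold_otsu(data1)
data2 = (data1 > t).astype(float)
print('Otsu threshold:', t)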
By the way, I checked out your "solved" data as well, and it appears to have the same underlying level of noise and edge defects as the "toBeSolved" file; it's just that, at least visually, the corruption in that image is harder to see due to the gray-scale values around the edges.