Could MATLAB use system time as the X-axis value?

I have a 3D camera that can give MATLAB the distance of an object in real time via the command distance=function1(Device) (I have omitted this function since it is not important here).
I would like to make a program that shows the object moving as time changes.
I have already succeeded in using:
t1 = clock;                      % reference time at the start
while ...                        % (in a loop)
    distance(i) = step(Device);  % current distance from the camera
    t2 = clock;
    times(i) = etime(t2,t1);     % elapsed seconds since t1
    plot(times,distance);
end
to show the object moving. However, the X axis in this figure is relative time: it begins at 0 seconds and ends at (t2-t1) seconds.
Now I am trying to find a way to change the X axis to absolute (system) time instead.
[Figure: a plot whose relative-time tick labels (in black) I would like replaced by absolute system times (in red).]
I have tried datetick but it doesn't work properly.
Is there any way to do this?
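(One hedged sketch of an alternative: on releases with datetime support, R2014b or later, you can plot against datetime values and the axis shows absolute times automatically. rand is a stand-in for step(Device) here.)
times = datetime.empty;           % absolute timestamps collected so far
i = 0;
while i < 10                      % stand-in for the real acquisition loop
    i = i + 1;
    distance(i) = rand;           % stand-in for step(Device)
    times(i) = datetime('now');   % absolute system time
    plot(times, distance);        % X axis is labelled with clock times
    drawnow;
end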

The following creates a string from the clock, plots the distance, then updates the tick labels from those strings on each iteration. I tested a simple instance of it in MATLAB.
while ...                        % (in a loop)
    distance(i) = step(Device);
    t2 = clock;
    tint = int8(t2(4:6));        % hour, minute, second (rounded)
    stamp{i} = strcat(int2str(tint(1)),':',int2str(tint(2)),':',int2str(tint(3)));
    plot(distance)
    set(gca,'XTick',1:i,'XTickLabel',stamp)
end
As far as getting the AM and PM to show, you'll have to do some extra witchcraft, but it should be possible based on the clock values. Then just append AM or PM in the stamp{i} assignment.
Hint: if t2(4) >= 12 you are looking at PM; otherwise, AM.
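A minimal sketch of that hint, assuming t2 comes from clock as above (the 12-hour conversion is my addition):
h = t2(4);                                % hour, 0-23
if h >= 12, suffix = ' PM'; else, suffix = ' AM'; end
h12 = mod(h - 1, 12) + 1;                 % map 0-23 onto 1-12
stamp{i} = sprintf('%d:%02d:%02d%s', h12, fix(t2(5)), fix(t2(6)), suffix);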
One limitation of this is that the more iterations you do, the more crowded your X axis is going to become.

Related

Vertical synchronized bar on multiple plots, following user cursor

I'm trying to create an app for data analysis.
I have multiple channels of acquisition of the same length, including a time vector.
The vectors are therefore synchronized (the same index corresponds to the same time instant in every array).
I display data on different figures (UIAxes) in my app figure.
To make the application easier to use, I would like the following: whenever the user hovers the cursor over any plot, a vertical bar is displayed at the corresponding x position on all the figures. All the figures display the same range of x values.
I also have a scatter figure with points from GPS, again with the same length as the other arrays.
On that figure I'd really like to have a crosshair instead of the vertical line.
I've found this piece of code here, but I can't adapt it to App Designer and it's also pretty old.
Any kind of help is very welcome.
EDIT: MATLAB version R2020b
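A minimal sketch of one way to do this (a plain figure for brevity; in App Designer the same idea applies via app.UIFigure.WindowButtonMotionFcn and your UIAxes handles, and all names here are illustrative):
f = figure;
ax1 = subplot(2,1,1); plot(ax1, 1:100, rand(1,100));
ax2 = subplot(2,1,2); plot(ax2, 1:100, rand(1,100));
bars = [xline(ax1, 1), xline(ax2, 1)];           % one vertical bar per axes
f.WindowButtonMotionFcn = @(~,~) moveBars(ax1, bars);

function moveBars(ax, bars)
    x = ax.CurrentPoint(1,1);   % cursor x in data units (axes share x-range)
    for b = bars
        b.Value = x;            % move every bar to the same x position
    end
end
For the crosshair on the scatter figure, add a yline to that axes as well and update its Value from that axes' CurrentPoint(1,2) in the same callback.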

How to open a txt file of IR temperatures as an image in MATLAB or other analysis software

I am using a therm-app camera to take infra-red photos of bats. I would like to draw around parts of the bat and find the hottest, coldest and average temperature and do further analysis.
The software that comes with the camera doesn't let me draw polygons so I would like to load the image in another program such as MATLAB or maybe imageJ (also happy to use Python or other if that would work).
The camera creates 4 files total:
I have a .jpg file; however, when I open it in MATLAB it just appears as a normal image, and I'm not sure how to accurately get the temperatures from it. I used the following to open it:
im=imread('C:\18. Bats\20190321_064039.jpg');
imshow(im);
I also have three other files: two are metadata (e.g. date-time, emissivity settings, etc.) and one is a text file.
The text file appears to give the temperature of every pixel in the image,
e.g. for a photo with a minimum temperature of 15 deg and a maximum of 20 deg, it would be a text file with a minimum value of 1500 and a maximum value of 2000:
1516 1530 1530 1540 1600 1600 1600 1600 1536 1536 ........
This file looks very useful. I'm just wondering if there is some way I can open it as an image, probably in a program like MATLAB, which I think has image analysis, so that I could draw around certain parts of the image (e.g. the wing of the bat) and find the average, max, min, etc.
Has anyone had experience with this type of thing? Can I just assign colours to numbers somehow? Or maybe other people have done it already and there is a much easier way. I will keep searching on the internet as well and try to find out.
Alternatively, maybe I need to open the .jpg image, draw around different parts, write a program to find out which pixels I drew around, find these in the txt file, and then do the averaging etc.? Or somehow link the values in the text file to the .jpg file.
Sorry if this is the wrong place to ask; I can't find an image-processing site on Stack Exchange.
All help is really appreciated; I will continue searching on the internet in the meantime.
The following worked in the end; it was much, much easier than I thought it would be. Now a big fan of MATLAB; I thought it could take days to do this.
Just pasting it here in case it is useful to someone else. I'm sure there is a more elegant way to write the code; however, this is the first time I've used MATLAB in 20 years :p Use at your own risk; I haven't double-checked that I'm getting the correct results yet (though I will before I use it for anything important).
Edit: since writing this I've found that the output .txt file of temperatures actually contains sensor temperatures, which need to be corrected for emissivity and background temperature to obtain the target temperatures. (One way to do this is to use the software that comes free with the camera to create new output .csv files of temperatures and use those instead.)
Thanks to bla who put me on the right track with dlmread.
M = dlmread('C:\18. Bats\20190321_064039\20190321_064039_temps.txt'); % read the text file into a matrix M
% note that the file seems to be a list of temperature values for each pixel
% e.g. 1934 1935 1935 1960 2000 2199...
M = rot90(M, 1);  % rotate M anti-clockwise by 90 degrees (the pictures were saved sideways, so rotate for easier viewing)
a = min(M(:));    % minimum temperature in the image
b = max(M(:));    % maximum temperature in the image
M = imresize(M, 1.64); % resize the image to fit the screen (the result must be assigned back to M, otherwise the resize has no effect)
imshow(M, [a b]); % show the image, scaling the colours so white is the highest temperature (b) and black the lowest (a)
h = drawpolygon('FaceAlpha', 0); % let the user draw a polygon around the region of interest (ROI)
% (this pauses the code until the polygon is drawn)
maskOfROI = h.createMask();      % binary mask: pixels inside the polygon are 1, outside are 0
selectedValues = M(maskOfROI);   % image values of all pixels inside the polygon
averageTemperature = mean(selectedValues); % mean of the temperatures inside the polygon
maxTemperature = max(selectedValues);      % max of selectedValues
minTemperature = min(selectedValues);      % min of selectedValues
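One small addition (my sketch, based on the scaling described in the question, where 15 deg is stored as 1500): divide by 100 before the statistics if you want results in degrees.
M = M / 100;  % raw values appear to be degrees x 100 (e.g. 1516 -> 15.16 deg)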

Using the Overlay feature in panel but showing only an averaged line

I have created an overlay of 100 curves. I thought the image looked impressive but one of my reviewers stated that I should only show the averaged line. Meaning, only show a line that is the average of the 100 curves. Is this possible using the xtline feature, or do I need to get deeper into programming code to produce the graphic? Alternatively, it would be great if I could show both (the 100 curves and the averaged curve) in the same graphic.
You don't need anything complicated here. Just calculate the mean across the panel and show it directly. Here is some technique. I am guessing that in your real example with 100 curves a legend is pointless. Note the c(L) and read the help to find what it does.
webuse grunfeld, clear
set scheme s1color
gen log_invest = log(invest)
egen mean_log_invest = mean(log_invest), by(year)
line log_invest year, legend(off) lc(gs12) c(L) || line mean_log_invest year, c(L) scheme(s1color) ytitle(something sensible)

Matlab imshow update image at old position and magnification

I currently use Matlab's imshow to output an image at every iteration of a diffusion filter process, i.e. multiple times per second.
Sometimes during filtering I want a closer look at specific image parts.
However, when using the ('Parent', handle) name-value pair for imshow, the magnification and position get reset.
Is there a way to update the underlying image but having the magnification and position intact?
You can update the CData of the axes' child image to your new data matrix, which will keep all other settings the same. If this is in a loop, you probably need to call drawnow. E.g.:
x=randn(100);
figure;imagesc(x);
Now zoom / pan / do whatever manipulations you want.
f = gca;                 % handle to the current axes
x = randn(100);
f.Children.CData = x;    % update the image data in place; zoom/pan persist
This method of updating the child data is recommended by MATLAB as more efficient than destroying the axes' child Image and recreating it each frame (I can't remember the source; it was in one of the help files).
Edit: just remembered that this syntax won't work on older versions of MATLAB (pre-2015 or so). In that case, use the get/set syntax:
set(get(gca,'Children'),'CData',x);
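A minimal sketch of the in-loop update pattern (random data stands in for the diffusion-filter iterations):
hImg = imagesc(randn(100));   % create the image object once
for iter = 1:200
    x = randn(100);           % stand-in for one diffusion-filter step
    set(hImg, 'CData', x);    % update pixels in place; zoom/pan are kept
    drawnow limitrate         % R2014b+; use plain drawnow on older versions
end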

Artifacts in processed images

This question is related to my previous post Image Processing Algorithm in Matlab on Stack Overflow, where I already got the results I wanted.
But now I am facing another problem: I am getting some artefacts in the processed images. In my original images (a stack of 600 images) I can't see any artefacts; please see the original image of a fingernail:
But in my 10 processed results I can see these lines:
I really don't know where they come from.
Also, if they come from the camera's sensor, why can't I see them in my original images? Any ideas?
Edit:
I have added the following code suggested by @Jonas. It reduces the artefacts, but does not completely remove them.
% averaging of images
im = D{1}(:,:);
for i = 2:100
    im = imadd(im, D{i}(:,:));
end
im = im/100;
imshow(im, []);
for i = 1:100
    SD{i}(:,:) = imsubtract(D{i}(:,:), im(:,:));
end
@belisarius has asked for more images, so I am going to upload four images of my finger with the speckle pattern and four images of the black background (size 1280x1024):
And here is the black background:
Your artifacts are in fact present in your original image, although not visible.
Code in Mathematica:
i = Import@"http://i.stack.imgur.com/5hM3u.png"
EntropyFilter[i, 1]
The lines are faint, but you can see them by binarization with a very low level threshold:
Binarize[i, .001]
As for what is causing them, I can only speculate. I would start tracing from the camera output itself. Also, you may post two or three images "as they come straight from the camera" to allow us some experimenting.
The camera you're using most likely has a CMOS chip. Since these have independent column (and possibly row) amplifiers, which may have slightly different electronic properties, the signal from one column can be amplified more than that from another.
Depending on the camera, this variability in column intensity can be stable. In that case, you're in luck: take ~100 dark images (tape something over the lens), average them, and then subtract the average from each image before running the analysis. This should make the lines disappear. If the lines do not disappear (or if there are additional lines), use the post-processing scheme proposed by Amro to remove the lines after binarization.
EDIT
Here's how you'd do the background subtraction, assuming that you have taken 100 dark images and stored them in a cell array D with 100 elements:
% take the mean; convert to double for safety reasons
meanImg = mean( double( cat(3,D{:}) ), 3);
% then you can subtract the mean from the original (non-dark-frame) image
correctedImage = double(rawImage) - meanImg; % cast rawImage to double first to avoid unsigned-integer clipping
Here is an answer that, in my opinion, will remove the lines more gently than the above-mentioned methods:
im = imread('image.png');    % original image
imFiltered = im;             % the filtered image will end up here
imChanged = false(size(im)); % to document the filter performance

% 1)
% Compute the histograms for each column in the lower part of the image
% (where the columns are most clear) and compute the mean and std of each
% bin in the histogram.
histograms = hist(double(im(501:520,:)), 0:255);
colMean = mean(histograms, 2);
colStd = std(histograms, 0, 2);

% 2)
% Now loop through each gray level above zero and...
for grayLevel = 1:255
    % Find the columns where the number of 'grayLevel' pixels is larger than
    % mean_n_graylevel + 3*std_n_graylevel, i.e. columns that contain
    % statistically 'many' pixels with the current 'grayLevel'.
    lineColumns = find(histograms(grayLevel+1,:) > colMean(grayLevel+1) + 3*colStd(grayLevel+1));
    % Now remove all 'grayLevel' pixels in lineColumns in the original image
    if ~isempty(lineColumns)
        for col = lineColumns
            imFiltered(:,col) = im(:,col) .* uint8(~(im(:,col)==grayLevel));
            imChanged(:,col) = im(:,col)==grayLevel;
        end
    end
end
imshow(imChanged)
figure, imshow(imFiltered)
Here is the image after filtering
And this shows the pixels affected by the filter
You could use some sort of morphological opening to remove the thin vertical lines:
img = imread('image.png');
SE = strel('line',2,0);
img2 = imdilate(imerode(img,SE),SE);
subplot(121), imshow(img)
subplot(122), imshow(img2)
The structuring element used was:
>> SE.getnhood
ans =
1 1 1
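Note that erosion followed by dilation with the same structuring element is, by definition, a morphological opening, so the imdilate/imerode pair above can also be written as:
img2 = imopen(img, SE);  % equivalent to imdilate(imerode(img,SE),SE)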
Without really digging into your image processing, I can think of two reasons for this to happen:
1) The processing introduced these artifacts. This is unlikely, but it's an option. Check your algorithm and your code.
2) This is a side effect of your processing reducing the dynamic range of the picture, much like quantization. So in fact these artifacts may already have been in the picture prior to processing, but they couldn't be noticed because their level was very close to the background level.
As for the source of these artifacts, it might even be the camera itself.
This is a VERY interesting question. I used to deal with this type of problem with live IR imagers (video systems). We actually had algorithms built into the cameras to deal with it before the user ever saw or got their hands on the image. A couple of questions:
1) Are you dealing with RAW images, or with already pre-processed grayscale (or RGB) images?
2) What is your ultimate goal with these images? Is it simply to get rid of the lines regardless of the resulting quality in the rest of the image, or is the point to preserve the absolute best image quality? Are you going to perform other processing afterwards?
I agree that those lines are most likely in ALL of your images. There are two reasons for such lines ever showing up in an image: one is a bright scene where the op-amps for the columns get saturated, causing whole columns of your image to take the brightest value the camera can output; the other is bad op-amps or ADCs (analog-to-digital converters) themselves (most likely not an ADC, as normally there is essentially one ADC for the whole sensor, which would make the whole image bad, not your case). The saturation case is actually much more difficult to deal with (and I don't think it is your problem). Note: too much saturation on a sensor can cause bad pixels and columns to arise (which is why they say never to point your camera at the sun).

The bad-column problem can be dealt with. In another answer above, someone had you averaging images. While this may be good for finding out where the bad columns (or bad single pixels, or the noise matrix of your sensor) are (you would have to average while pointing the camera at black, white, and essentially solid colors), it isn't the correct way to get rid of them. By the way, what I am describing with the black and white images, the averaging, the finding of bad pixels, and so on is called calibrating your sensor.
OK, so assuming you are able to get this calibration data, you WILL be able to find out which columns are bad, even single pixels.
If you have this data, one way you could erase the columns is:
for each bad column
    for each pixel (x, y) in the bad column
        pixel(x, y) = average( pixel(x+1,y), pixel(x+1,y-1), pixel(x+1,y+1),
                               pixel(x-1,y), pixel(x-1,y-1), pixel(x-1,y+1) )
What this essentially does is replace the bad pixel with a new pixel that is the average of the 6 remaining good pixels around it. The above is an over-simplified version of the algorithm. There are certainly cases where a single bad pixel could be right next to the bad column and shouldn't be used for averaging, or where two or three bad columns sit right next to each other. One could imagine calculating the values for a bad column, then considering that column good in order to move on to the next bad column, and so on.
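A minimal MATLAB sketch of that idea under simplifying assumptions (grayscale image img as a double matrix, known bad column indices badCols, isolated bad columns so the immediate neighbours are good, and the 6-pixel average simplified to the two horizontal neighbours; all names are illustrative):
repaired = img;
for c = badCols
    left  = img(:, max(c-1, 1));             % clamp at the image borders
    right = img(:, min(c+1, size(img,2)));
    repaired(:, c) = (left + right) / 2;     % replace with the neighbour average
end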
Now, the reason I asked about RAW versus B/W or RGB: if you were processing RAW, then depending on the build of the sensor itself, it could be that only one sub-pixel (if you will) of the Bayer-filtered image sensor has the bad op-amp. If you could detect this, then you wouldn't necessarily have to throw out the other good sub-pixels' data. Secondly, if you are using an RGB sensor to take a grayscale photo, and you shot it in RAW, then you may be able to calculate your own grayscale pixels. Many sensors, when giving back a grayscale image from an RGB sensor, simply pass back the green pixel as the overall pixel, since it effectively serves as the luminance of the image. (This is why most image sensors implement two green sub-pixels for every red or blue sub-pixel.) If this is what your sensor does (not ALL sensors do), then you may have better luck getting rid of just the bad channel's column and performing your own grayscale conversion using:
gray = (0.299*r + 0.587*g + 0.114*b)
Apologies for the long-winded answer, but I hope it is still informative to someone :-)
Since you cannot see the lines in the original image, they are either present with a low intensity difference relative to the original range of the image, or they were added by your processing algorithm.
The shape of the disturbance hints at the first option... (unless you have an algorithm that processes each row separately).
It seems your sensor's columns are not uniform. Try taking a picture without the finger (background only) using the same exposure (and other) settings, then subtract it from the photo of the finger (prior to other processing). Make sure the background is uniform before taking both images.
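A minimal sketch of that suggestion (the file names are illustrative; both shots are assumed to use identical camera settings):
bg     = double(imread('background.png'));   % background-only shot
finger = double(imread('finger.png'));       % shot of the finger
corrected = finger - bg;                     % removes the fixed column pattern
imshow(corrected, []);                       % [] rescales for display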