I am trying to save temperature data from an InfiRay T2L thermal camera. The frame delivered by the camera includes 4 additional rows (i.e. 4x256 pixels) on top of the sensor's native resolution of 192x256, so I receive 196x256. I suspect that some data is stored in the additional rows, but the camera does not report per-pixel temperature values directly.
How can I decode the 4 additional rows, get access to the temperature of each pixel in MATLAB, and save it to a CSV file?
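A minimal MATLAB sketch of one common approach, assuming the frame arrives as raw 16-bit values, that the 4 extra rows are telemetry that can be set aside for the temperature map, and that the widely reported InfiRay scaling T(°C) = raw/64 − 273.15 applies to your firmware. The file name, layout, and formula are all assumptions — verify them against the vendor SDK or documentation:

```matlab
% Read one raw 16-bit frame of 196x256 (192 image rows + 4 telemetry rows).
% File name, row layout, and the raw->temperature formula are assumptions.
fid = fopen('frame.raw', 'r');
raw = fread(fid, [256, 196], 'uint16=>double')';  % 196x256 after transpose
fclose(fid);

imgRaw    = raw(1:192, :);    % per-pixel radiometric values
telemetry = raw(193:196, :);  % extra rows: calibration/metadata (not decoded here)

% Common InfiRay scaling: value/64 gives Kelvin; subtract 273.15 for Celsius.
tempC = imgRaw / 64 - 273.15;

writematrix(tempC, 'temperature.csv');  % one CSV cell per pixel, in degC
```

If the decoded temperatures look wrong by a constant offset or factor, the telemetry rows likely hold the calibration parameters needed for your particular firmware.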
Gnuplot is a very powerful plotting program that supports numerous scientific operations. In my case, I want to read a single-channel grayscale image, just as one would with imread in MATLAB or Python, and store it in a 2D data grid using gnuplot.
Basically I want to draw contours of the grayscale intensities. To do that, I currently export the single-channel luminance data of the image to a .dat file using MATLAB, and once it is exported I splot it using:
set contour base
splot 'greyScaleImagePixelByPixelData.dat' matrix
This works fine, but if I don't want to use MATLAB to export the pixel-by-pixel data, what is the way around it?
The example below has been tested with 8-bit and 16-bit grayscale png images (no alpha channel). If your particular images do not match this, please provide a more complete description of how they are encoded.
You haven't said exactly what you want to do with the pixel data after reading it in, so I show the obvious example of displaying it as an image (e.g. a regular array of pixels). If you want to do further manipulation of the values before plotting, please expand the question to give additional details.
[~/temp] file galaxy-16bitgrayscale.png
galaxy-16bitgrayscale.png: PNG image data, 580 x 363, 16-bit grayscale, non-interlaced
[~/temp] gnuplot
set autoscale noextend
plot 'galaxy-16bitgrayscale.png' binary filetype=png with rgbimage
Note that gnuplot does not have any separate storage mode for grayscale vs. RGB image data. In this case it loads 3 copies of each 16-bit grayscale value into parallel storage as if it were separate R/G/B pixel data.
[2nd edit: show both grayscale image and contour levels]
set autoscale noextend
set view map
set contour surface
set cntrparam levels discrete 100, 200
set cntrparam firstlinetype 1
set key outside title "Contour levels"
splot 'galaxy16bit.png' binary filetype=png with rgbimage notitle, \
'' binary filetype=png with lines nosurface title ""
I have a depth image taken from a Kinect v2, shown below. I want to extract the value of a pixel x at a specific coordinate, and its depth value, in MATLAB. I used the following code in MATLAB, but it gives me a 16-bit value, and I'm not sure whether it is the pixel value or the depth value of pixel x.
im=imread('image_depth.png');
val=im(88,116);
display(val);
Result
val= (uint16) 2977
Would someone please explain how to extract both the pixel value and the depth value in MATLAB?
The image name hints that it is a depth map. The color image is usually stored in a separate file, often at a different resolution and with some offset if the two are not already aligned. To align RGB and depth images see:
Align already captured rgb and depth images
and the sub-link too...
Probing the image you provided with a color picker returns the same value, 0x000B0B0B, everywhere inside the silhouette. That hints it is either not a depth map, or it has too low a bit width, or the SO + browser conversion lost precision. If every pixel inside returns the same number for you too, then the depth map is unusable.
If your probe returns a 16-bit value, that hints at RAW Kinect depth values. If that is the case, see:
Kinect raw depth to distance in meters
Otherwise it could be just linearly scaled depth, so you can convert your (normalized) value x to depth like:
depth = a0 + x*(a1-a0)
where <a0,a1> is the depth range of the image, which should be stated somewhere in your dataset's source ...
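In MATLAB the linear rescaling above looks like the sketch below. The range <a0,a1> here is made up for illustration — take the real one from your dataset's description:

```matlab
% Hypothetical example: map a uint16 depth image to metric depth, assuming
% the values are linearly scaled over a known range <a0,a1>.
im = imread('image_depth.png');    % uint16, values 0..65535
a0 = 0.5;  a1 = 4.5;               % depth range in meters (assumed)
x  = double(im) / 65535;           % normalize raw values to 0..1
depth = a0 + x * (a1 - a0);        % per-pixel depth in meters
val = depth(88, 116)               % depth at the queried coordinate
```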
From your description, and the filename, the values at each location in your image are depth values.
If you need actual color values for each pixel, they would likely be stored in a separate image file, hopefully of the same dimension.
What you are seeing here is likely the depth values normalized to a displayable range by MATLAB or your picture-viewing software.
You will need to look at the specs to see how a value like 2977 converts to the physical world (e.g. cm). Hopefully it is just a scalar value you can multiply to get that answer.
I have collected data using Kinect v2 sensor and I have a depth map together with its corresponding RGB image. I also calibrated the sensor and obtained the rotation and translation matrix between the Depth camera and RGB camera.
So I was able to reproject the depth values on the RGB image and they match. However, since the RGB image and the depth image are of different resolutions, there are a lot of holes in the resulting image.
So I am trying to move the other way, i.e. mapping the color onto the depth instead of depth to color.
So the first problem I am having is that the RGB image has 3 channels, so I have to convert it to grayscale to do the mapping, and I am not getting the correct results.
Can this be done?
Has anyone tried this before?
Why can't you fit the Z-depth to the RGB?
To fit the low-res image to the high-res one should be easy, as long as both cover the same extent (i.e. the corners of both images are the same physical point).
It should be as easy as:
Z_interp = imresize(Zimg, [size(RGB,1) size(RGB,2)]);
Now Z_interp has the same number of pixels as RGB.
If you still want to do it the other way around, use the same approach:
RGB_interp = imresize(RGB, [size(Zimg,1) size(Zimg,2)]);
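As a quick sanity check of the resize-based approach, the upsampled depth can be blended over the RGB image (imfuse and mat2gray are from the Image Processing Toolbox; the file names are placeholders):

```matlab
% Assumed inputs: a color image and a smaller uint16 depth map.
RGB  = imread('color.png');
Zimg = imread('depth.png');

% Upsample the depth to the RGB resolution, then blend the two to inspect
% the alignment visually.
Z_interp = imresize(Zimg, [size(RGB,1) size(RGB,2)]);
overlay  = imfuse(RGB, mat2gray(Z_interp), 'blend');
imshow(overlay);
```

Note that imresize only compensates for resolution, not for the offset between the two cameras; misalignment left over after resizing is what the calibration matrices are for.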
The Image Acquisition Toolbox now officially supports Kinect v2 for Windows. You can get a point cloud out from Kinect using pcfromkinect function in the Computer Vision System Toolbox.
I am trying to map the depth map onto the RGB image of the Kinect using MATLAB.
So here are the steps that I took:
(1) Obtain the images using a C++ program.
(2) Using the depth value of each pixel in MATLAB, I was able to obtain the XYZ distances of all the pixels in mm.
(3) Then using some equations, I was able to obtain the XY pixel coordinates of those depth pixels on the RGB image.
So I am left with a huge cell containing all the locations of the depth map w.r.t the color image.
So my question is now if I want to overlay the depth image on the color image, how can I do that?
Can anyone help me?
Thanks.
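A hedged sketch of the overlay step, assuming step (3) produced, for each depth pixel, its (u,v) location on the color image. The variable names here (uv, d, RGB) are placeholders for whatever the question's cell array actually holds:

```matlab
% uv : Nx2 matrix of color-image coordinates for each depth pixel (step 3)
% d  : Nx1 vector of the corresponding depth values in mm
% RGB: the color image

% Reproject depth onto the RGB pixel grid; unmapped pixels stay at 0.
Zmap = zeros(size(RGB,1), size(RGB,2));
u = round(uv(:,1));  v = round(uv(:,2));
ok = u >= 1 & u <= size(RGB,2) & v >= 1 & v <= size(RGB,1);  % clip to image
Zmap(sub2ind(size(Zmap), v(ok), u(ok))) = d(ok);

% Blend the normalized depth layer over the color image for display.
imshow(imfuse(RGB, mat2gray(Zmap), 'blend'));
```

Because the depth image has fewer pixels than the RGB image, Zmap will contain holes; filling them (e.g. with interpolation) is the separate problem described in the earlier question.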
So I used Kinect to obtain some depth images and they are now saved. So if I want to process this depth image to get the Z value (i.e. the distance from the object to the Kinect) how should I do that?
I have been doing some research online and found out that I need to save the image as a 16-bit depth image for the depth values to be stored, instead of an 8-bit image which can only store up to 256 values, based on: Save Kinect depth image in Matlab?
But I still do not quite understand the image I am getting. When I use imread and use the data cursor to look at individual pixels, I only obtain the XY coordinates and an Index value. The index value does not seem to represent the distance in mm.
Can anyone clear this part up for me please?
Thanks.
It looks like you are reading an indexed image with imread().
try this:
[idx, map] = imread('yourImage');  % idx = index matrix, map = colormap
RGB = ind2rgb(idx, map);
and see if RGB contains the correct values.
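If the data cursor shows an "Index" value, the image may genuinely be indexed, but a 16-bit Kinect depth PNG usually is not; imfinfo tells you which case you have (the file name is a placeholder):

```matlab
info = imfinfo('image_depth.png');
disp(info.ColorType)    % 'indexed' vs 'grayscale'

[im, map] = imread('image_depth.png');
if isempty(map)
    % Plain 16-bit grayscale: the pixel values themselves are the raw
    % depths (for Kinect v2 they are typically already in millimetres).
    depth_mm = double(im);
else
    % Truly indexed image: recover displayable colors from the colormap.
    RGB = ind2rgb(im, map);
end
```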