How to smooth and extract an object from a depth image - MATLAB

I am using a dataset that provides depth images of humans. I need to extract the person from each image, or at least remove the distortions that do not belong to the human body, in MATLAB.
A sample image is shown below:
This is the output I get when I use:
I = imread('39.jpg');
human = sum(I,3) > 10+10;
human
Is there any way to do that, please?
Thanks in advance

For the image you show, where everything is grayscale except for something red, just do:
so=imread('https://i.stack.imgur.com/hZOQv.jpg');
human=sum(abs(diff(single(so),1,3)),3)>20;
This essentially takes the difference between the RGB channels of each pixel and keeps the pixels where it exceeds a threshold. If you have proper PNGs, the threshold can just be 1; with JPEG artifacts you may need a higher value. For this image, 20 does the job.
There are some tiny artifacts in the result image, very likely due to JPEG compression. When you do science, you need to store your images as PNG. If you have absolutely no choice other than JPEG, you will have to live with some artifacts.
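If those specks bother you, a small morphological cleanup usually removes them. A minimal sketch, assuming the Image Processing Toolbox is available (the 50-pixel area threshold is only a starting point to tune):
human = sum(abs(diff(single(so),1,3)),3) > 20;  % the mask from above
human = bwareaopen(human, 50);                  % drop specks under 50 pixels
human = imfill(human, 'holes');                 % fill holes inside the silhouette
masked = so .* uint8(repmat(human, [1 1 3]));   % zero out non-human pixels
imshow(masked)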

Related

Rectified images of the same size as the initial ones

I want to rectify a stereo image pair in MATLAB. To rectify, I use the following call:
[J1,J2] = rectifyStereoImages(I1,I2, cameraParamsStereo);
If I do this, then I only get the so-called valid part of each image, which is smaller than the initial image size. If I specify the OutputView argument as 'full', then I get rectified images that are larger than the original ones.
Is there a way to get rectified images that have the same size as the original ones?
It is possible in principle, but rectifyStereoImages does not support this.
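One possible workaround, sketched below, is to rectify with OutputView set to 'full' and then crop a centered window back to the original size. This is only a sketch: it assumes the full view is at least as large as the originals in both dimensions, and the crop may discard content near the borders.
[J1f, J2f] = rectifyStereoImages(I1, I2, cameraParamsStereo, 'OutputView', 'full');
[h, w, ~] = size(I1);
r = floor((size(J1f,1) - h)/2) + (1:h);  % centered row window
c = floor((size(J1f,2) - w)/2) + (1:w);  % centered column window
J1 = J1f(r, c, :);
J2 = J2f(r, c, :);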

MATLAB: display RGB values of a fits image

I want to read a .fits image of a wide-field sky and display the RGB values contained in a star. Can you please suggest a method to do so?
I have used fitsread to read in the image, but I am not able to show the RGB values at specific locations (stars).
In order to do this, you'll need a proper RGB FITS file. The only .fits viewer I know of, ds9, does not support saving RGB FITS files; it saves them as three separate (R, G, B) FITS images. You can use "getpix" from wcstools (http://tdc-www.harvard.edu/wcstools/) or scisoft (http://www.eso.org/sci/software/scisoft/) on the individual frames. Note that "getpix" returns the pixel value at an image (x,y) location, whereas ds9 reports WCS coordinates rather than the physical image location, so you may have to convert to image coordinates before calling getpix.
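If you already have the three separate frames, the lookup itself is simple in MATLAB. A sketch, with hypothetical filenames and a hypothetical star location:
R = fitsread('star_r.fits');    % hypothetical filenames for the three frames
G = fitsread('star_g.fits');
B = fitsread('star_b.fits');
x = 512; y = 384;               % hypothetical star location in image coordinates
rgb = [R(y,x), G(y,x), B(y,x)]  % FITS data are indexed as (row, column)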

Why a smaller PNG image takes up more space than the original after being resized by GraphicsMagick

The original PNG image is 800x1200 and takes up about 34 K. After the image is resized by GraphicsMagick to 320x480, the resulting image takes up approximately 37 K. (For comparison, if the image is resized with Paint on Windows 7, the resulting image is 40 K.) What gives? The whole point of resizing an image was to save space. How should GraphicsMagick be used to shrink the image size?
PNG is a lossless format that compresses the image data by first performing a step called prediction and then applying the same algorithm used in zlib. The prediction step is crucial for compressing the file effectively, and it is based on the values of previously seen neighboring pixels.
So, suppose you have a large PNG in black & white (by which I really mean only black and white, not grayscale, which it is sometimes confused with). Also suppose it is not a tiny checkerboard pattern. In many regions of this image you will have a relatively large white region, then a relatively large black region, and so on. When the predictor is inside one of these large regions, it has no trouble correctly predicting that the current pixel intensity is exactly equal to the previous one. This makes the data describing your image much easier to compress.
Now, let us downscale this black & white image using a resampling filter other than nearest neighbor (say, Lanczos). This is very likely to turn your black & white image into a grayscale one, which has a much greater intensity range. That potentially makes the predictor's job much harder, so the final file size can end up larger.
For instance, here is a black & white 256x256 PNG image that takes 5440 bytes; a resizing of it (using 3-lobed Lanczos) to 120x120, which now takes 7658 bytes; and another resizing (using nearest neighbor) to 120x120, which occupies 2467 bytes.
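You can reproduce the effect yourself. A minimal MATLAB sketch (imresize assumes the Image Processing Toolbox; your exact byte counts will differ):
bw = zeros(256); bw(:, 129:end) = 1;   % large black and white regions
imwrite(bw, 'orig.png');
imwrite(imresize(bw, [120 120], 'lanczos3'), 'lanczos.png');  % becomes grayscale
imwrite(imresize(bw, [120 120], 'nearest'), 'nearest.png');   % stays pure black & white
d = @(f) getfield(dir(f), 'bytes');
fprintf('orig %d, lanczos %d, nearest %d bytes\n', d('orig.png'), d('lanczos.png'), d('nearest.png'))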
PNG is a compressed format. Sometimes trying to compress an already maximally compressed item actually results in a larger item. So if the 800x1200 image is resized to a smaller size, but the result retains everything that was in the original and the original was already as small as possible, you can see this happen. To demonstrate, try using 7zip to compress some data with ultra compression, then try compressing the compressed file: the second compressed file will often be larger than the first.

Transparency with JPEGs

JPEG files are smaller than PNGs. So I thought that if I could make a specific region in a JPEG file transparent with some code, maybe I could save some bytes.
So does anyone know how to achieve this with, for example, PHP or JavaScript?
No, you can't do this. JPEGs do not support alpha channels, nor can they designate certain colors as transparent (GIF-style).
There are several issues with this, all of which come down to JPEG being a lossy compression format. The JPEG format is optimized for natural images, and sharp edges get blurred. If you intend a specific pixel to have the value #d67fff, there is no guarantee that after color conversion, FDCT, quantization, IDCT, and the color conversion back, the pixel will still have that value. There is also a strong possibility that that pixel value will occur in areas where you don't want it.
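The round trip is easy to demonstrate in MATLAB (a sketch; the patch size and sample coordinates are arbitrary):
px = reshape(uint8([214 127 255]), 1, 1, 3);  % the color #d67fff
img = repmat(px, 64, 64);                     % 64x64 patch of that exact color
img(1:32, :, :) = 0;                          % sharp edge to stress the DCT
imwrite(img, 'test.jpg');                     % lossy JPEG round trip
back = imread('test.jpg');
squeeze(back(48, 48, :))'                     % typically no longer [214 127 255]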
No. JPEG does not support transparency and is not likely to do so any time soon. http://www.faqs.org/faqs/jpeg-faq/part1/section-12.html
You cannot do that: the client renders the image and doesn't know that you want it to treat that color as transparent (and the various compression methods JPEG uses wouldn't work well with transparency anyway).
I believe you can go with an 8-bit custom-palette PNG, which should save you a lot of space. Otherwise, 24-bit PNG is your only high-color option.
You can convert your image to an SVG containing the color information as a JPEG and the alpha channel as a grayscale mask. Here is a tool I wrote to do it: https://github.com/igrmk/transpeg

Histogram equalization with color correction (iPhone/Objective-C)

I am trying to implement a histogram equalization method (HE) for a UIImage in my iPhone app.
I read the following:
http://en.wikipedia.org/wiki/Histogram_equalization
But it says:
Still, it should be noted that applying the same method on the Red, Green, and Blue components of an RGB image may yield dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm. However, if the image is first converted to another color space, Lab color space, or HSL/HSV color space in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.
So would this be a feasible approach?
Grab UIImage data and convert from RGB to HSL
Apply HE on luminance channel
Convert data back to RGB
Create new UIImage from data
Will this be slow, I wonder? Also, will I have to deal with 8/16/24-bit data differently? I have no idea what kind of image will be used with my app; or can I assume 24-bit for images on the iPhone?
I would appreciate any pointers to objective-C code that does color corrected histogram equalization.
I have looked at the library below, but it does not do any color correction for HE:
http://code.google.com/p/simple-iphone-image-processing/source/browse/#svn/trunk/Classes%3Fstate%3Dclosed
Thanks!
Yes, you can do it this way; that will work. Yes, it will "cost more", since you have to do the conversion back and forth, but that's the price you have to pay if you don't want to affect the hue and saturation. Whether that is worth it for the images you're correcting depends on your application: are you OK with a hit in performance in exchange for the best quality? You will likely only have to deal with 8-bit color components; you can assume "24-bit" images, but that is 3 x 8-bit components. The only way to know your answers, though, is to try.
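To sanity-check the pipeline before porting it to Objective-C, here is a minimal MATLAB sketch of the same steps using HSV (histeq needs the Image Processing Toolbox; the filename is hypothetical):
rgb = imread('photo.jpg');        % hypothetical input image
hsv = rgb2hsv(rgb);               % channels become doubles in [0,1]
hsv(:,:,3) = histeq(hsv(:,:,3));  % equalize the value channel only
out = hsv2rgb(hsv);               % hue and saturation are untouched
imshowpair(rgb, out, 'montage')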
I recommend using the YUV color space, both for accuracy and for computational simplicity (the RGB-to-YUV conversion is a linear combination).
One method would be to apply histogram equalization to the RGB image (call the result Image2).
Then let the user choose what he wants: apply it only to the luminosity, or to all 3 channels.
For the first choice, take the U and V channels of the original image together with the Y channel of the equalized image and convert back to RGB.
For the second choice, just give the user Image2.
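The same idea can be sketched in MATLAB, with rgb2ycbcr standing in for YUV (both separate luma from chroma via a linear transform; the filename is hypothetical):
ycc = rgb2ycbcr(imread('photo.jpg'));  % Y, Cb, Cr
ycc(:,:,1) = histeq(ycc(:,:,1));       % equalize luma only; uint8 Y nominally spans [16,235]
out = ycbcr2rgb(ycc);                  % chroma comes from the original image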
Since, after the transformation, you treat I/V as continuous values, you will have to apply some binning strategy, which results in a stepped histogram for the quantity you wish to equalize. You might therefore be able to speed this up by reducing the number of bins.
I wrote code applying HE to each of the RGB components. Although this means computing over all 3 components, the speed is OK. In most cases the contrast is improved, but the "look" of the image is changed, so I agree that it is better to transform the RGB data into another color space and then apply HE there. I am still looking for the formula and the right color space for HE; which color space is easiest?
I wrote the HE code for the iPad platform, but I find that after opening a big image taken with my Canon, the whole program crashes after the UIPopoverController and UIImagePickerController calls. I think this may be because I am pushing the OS too hard: iOS allocates only a limited amount of memory to each app, and if an app uses more than that, iOS kills it right away. So take care with the size of the input image, clean up unused memory, and watch for leaks; checking for leaks with Xcode's Instruments tool is a must.