How to convert grayscale image to rgb with full colors? [closed] - matlab

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I'm trying to convert a grayscale image to an RGB image. I searched the net and found this code:
rgbImage = repmat(grayImage, [1 1 3]);                % replicate the single channel three times
rgbImage = cat(3, grayImage, grayImage, grayImage);   % equivalent alternative
but all this does is produce a 3-D matrix that still displays as grayscale.
Is there a way to convert it into a true-colour image?

It's impossible to directly recover a colour image from its grayscale version. As @Luis Mendo correctly said in the comments, the needed information is simply not stored there.
What you can do is come up with a mapping between intensity level and colour, then play around with interpolation. This will not reconstruct the original colours, though; it only produces a pseudocolour rendering that may be very far from what you want.
If you have another colour image and you want to fit its colours to your grayscale image, you may want to have a look at: http://blogs.mathworks.com/pick/2012/11/25/converting-images-from-grayscale-to-color/ .
In particular, the function cited there can be found here: http://www.mathworks.com/matlabcentral/fileexchange/8214-gray-image-to-color-image-conversion#comments.
Bear in mind that this will be slow and will not produce a great result; however, I don't think you can do much better.
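To make the intensity-to-colour mapping concrete, here is a minimal pseudocolour sketch. It is written in Python/NumPy purely for illustration (in MATLAB, `ind2rgb` with a colormap does the same job), and the blue-green-red anchor colours are an arbitrary choice, not something from the original answer:

```python
import numpy as np

def pseudocolor(gray, lut):
    """Map an 8-bit grayscale image through a 256x3 RGB lookup table."""
    return lut[gray]                      # each intensity indexes an RGB triple

# Crude blue -> green -> red map built by linear interpolation between anchors.
x = np.linspace(0.0, 1.0, 256)
anchors_x = [0.0, 0.5, 1.0]
anchors = np.array([[0.0, 0.0, 1.0],      # intensity 0   -> blue
                    [0.0, 1.0, 0.0],      # intensity 128 -> green
                    [1.0, 0.0, 0.0]])     # intensity 255 -> red
lut = np.stack([np.interp(x, anchors_x, anchors[:, c]) for c in range(3)],
               axis=1)                    # shape (256, 3)

gray = np.array([[0, 128, 255]], dtype=np.uint8)   # a tiny 1x3 "image"
rgb = pseudocolor(gray, lut)                       # shape (1, 3, 3)
```

The result is a colour rendering, but note it carries no more information than the grayscale input did; it is only a visualisation aid.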

Yes, it's impossible to directly convert a grayscale image to RGB. But it is possible to overlay grayscale-derived features, such as a detected edge map, onto an RGB image by adding them to one of its channels. If you want, you can use this:
rgb = zeros([size(I_temp) 3]);                     % preallocate an RGB image the size of I_temp
rgb(:,:,1) = im2double(rr);
rgb(:,:,2) = im2double(rg) + 0.05*double(I_temp);  % add the grayscale detail to the green channel
rgb(:,:,3) = im2double(rb);
where rr, rg, rb are the red, green and blue channels of an RGB base image.

Related

Is it possible to tell the human readable color (ex. pink, white) from RGB/Hex code in Swift? [duplicate]

This question already has answers here:
Check if color is blue(ish), red(ish), green(ish),
(3 answers)
Closed 1 year ago.
I'm trying to build an app for a class project that finds the most dominant colour of an image. While it's not hard to extract the dominant colour's RGB code, I was wondering if there is a way to turn that code into a colour name, like "red" or "blue".
I understand this is technically tricky since there are so many distinct RGB values, but I was wondering whether it has been done before. I'm using Swift to develop this app.
This question is not Swift-related but a general programming problem, and there are multiple ways to solve it.
The first approach is to create (or obtain) a list of colours you want to distinguish, then write a function that maps an arbitrary RGB value onto the nearest entry in that list (using least squares or any other definition of "nearest").
Another solution would be to again use a mapping but based on the angle of the rgb values when mapped into an rgb color wheel (https://www.pikpng.com/pngl/b/113-1130205_alt-text-rgb-led-color-mixing-chart-clipart.png, http://www.procato.com/rgb+index/).
Anyway, there are multiple solutions online and also on stack overflow (RGB color space to raw color name mapping, https://github.com/ayushoriginal/Optimized-RGB-To-ColorName)
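The first approach above (nearest palette entry by least-squares distance) can be sketched in a few lines. Python is used here for illustration since the idea is language-agnostic; the five-entry `PALETTE` is a hypothetical stand-in for a real named-colour list:

```python
# Hypothetical mini-palette; a real app would load a much larger
# named-colour list (e.g. the CSS or X11 colour names).
PALETTE = {
    "red":   (255, 0, 0),
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_color_name(rgb):
    """Return the palette name with the smallest squared RGB distance."""
    return min(PALETTE,
               key=lambda name: sum((p - q) ** 2
                                    for p, q in zip(PALETTE[name], rgb)))

print(nearest_color_name((250, 10, 10)))   # red
```

Squared Euclidean distance in RGB is the simplest choice; perceptually better results come from doing the same nearest-neighbour search in a space like CIELAB.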

How to remove grainy details from an image [duplicate]

This question already has answers here:
Best method to find edge of noise image
(2 answers)
Closed 6 years ago.
I have used adapthisteq to improve the visibility of the foreground objects. However, this seems to have created grainy, noisy details. How can I remove them? I have tried Gaussian blurring with imgaussfilt, and while it removes some of the grain, the shapes of the cells in the image become less defined. The second image shows the binary version of the first.
You can use a filter that takes edge information into consideration, such as the bilateral filter: https://en.wikipedia.org/wiki/Bilateral_filter
The bilateral filter weights each neighbouring pixel not only by its spatial distance (as regular Gaussian blurring does) but also by its difference in intensity from the centre pixel, so edges are preserved while flat regions are smoothed.
(Illustration taken from: http://www.slideshare.net/yuhuang/fast-edge-preservingaware-high-dimensional-filters-for-image-video-processing )
A MATLAB implementation can be found here:
https://www.mathworks.com/matlabcentral/fileexchange/12191-bilateral-filtering
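The weighting described above can be sketched brute-force in a few lines. This is a Python/NumPy illustration of the idea, not the linked MATLAB implementation; `sigma_s` and `sigma_r` are just the spatial and range (intensity) standard deviations:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a float image in roughly [0, 1].
    Each output pixel is a weighted mean of its neighbourhood, where the
    weight combines spatial distance and intensity difference."""
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            patch = img[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            w_spatial = np.exp(-((yy - i) ** 2 + (xx - j) ** 2)
                               / (2 * sigma_s ** 2))
            w_range = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = w_spatial * w_range
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# A noisy step edge: the filter smooths within each side but, because the
# range weight collapses across the intensity jump, the edge survives.
step = np.concatenate([np.zeros((8, 8)), np.ones((8, 8))], axis=1)
noisy = step + 0.05 * np.random.default_rng(0).standard_normal(step.shape)
smoothed = bilateral(noisy)
```

For real images you would use an optimised implementation (such as the File Exchange one above, or `imbilatfilt` in newer MATLAB releases); the double loop here is only to make the weighting explicit.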

Matlab - Hide a 1MB file in an Image's invaluable bits (Watermarking) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have to store a 1 MByte Word file inside a 512x512-pixel image using MATLAB and extract it again. The only thing I know is that we have to replace the least valuable bits of the image (the ones that are mostly noise) and store our file there.
Unfortunately I know nothing about either MATLAB or image processing.
Thanks all.
Given the numbers provided, you can't. A 512x512 image at 24 bits per pixel holds about 6.3 Mbit (0.75 MB) in total, and that is every bit, not just the noisy ones. Your 1 MB (8 Mbit) document is therefore larger than the entire image you are hiding it in.
If we ignore the above, then this is what you have to do:
1. Load the image and convert it to uints.
2. Mask out a number of LSB bits in each pixel.
3. Load the doc as binary and fill those bits in where you masked the others out.
Now, going from the above to actual code is a bit of work, and if you have no experience with MATLAB it won't be easy. Try reading up on imread() and bit operations in MATLAB. When you have some code up and running, post it here for help.
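The mask-and-fill steps above can be sketched as follows. Python/NumPy is used for illustration only; `N_BITS = 2` is an arbitrary choice of how many low bits per byte to sacrifice, and the 4x4 `cover` array stands in for real image pixels:

```python
import numpy as np

N_BITS = 2  # least-significant bits sacrificed per byte (arbitrary choice)

def embed(cover_img, bits):
    """Overwrite the N_BITS lowest bits of the leading cover bytes."""
    assert len(bits) % N_BITS == 0 and len(bits) // N_BITS <= cover_img.size
    flat = cover_img.flatten()             # flatten() returns a copy
    mask = 0xFF ^ ((1 << N_BITS) - 1)      # e.g. 0b11111100 for N_BITS = 2
    for i in range(len(bits) // N_BITS):
        chunk = bits[i * N_BITS:(i + 1) * N_BITS]
        flat[i] = (int(flat[i]) & mask) | int("".join(map(str, chunk)), 2)
    return flat.reshape(cover_img.shape)

def extract(stego_img, n_bits_total):
    """Read the payload bits back out of the low bits."""
    flat = stego_img.flatten()
    bits = []
    for i in range(n_bits_total // N_BITS):
        bits += [int(b) for b in
                 format(int(flat[i]) & ((1 << N_BITS) - 1), f"0{N_BITS}b")]
    return bits

cover = np.arange(16, dtype=np.uint8).reshape(4, 4)   # stand-in for pixels
payload = [1, 0, 1, 1, 0, 0, 1, 0]                    # stand-in for file bits
stego = embed(cover, payload)
assert extract(stego, len(payload)) == payload        # round-trip works
```

Note this only survives lossless formats such as PNG or BMP; JPEG compression would destroy the low bits.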
In MATLAB you can read images with imread()
(details on: http://de.mathworks.com/help/matlab/ref/imread.html?s_tid=gn_loc_drop )
Image = imread('Filename.jpg');   % note single quotes; double quotes only work in newer releases
figure()
imshow(Image)
This code displays the image in a window.
I think what you're looking for is steganography instead of watermarking.
Steganography:
https://en.wikipedia.org/wiki/Steganography
Here is an example of an image with a file inside it:
http://marvinproject.sourceforge.net/en/plugins/steganography.html
Related topic:
Image Steganography

How to extract a contour using Freeman chain code in MATLAB? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
My project deals with writer recognition from handwritten Arabic documents. To identify the writer, I use an image database.
My problem is how to extract features from these images. I'm new to MATLAB and I do not have much knowledge of image processing.
Please help me: I need to extract the contour from an image and then encode it using Freeman chain codes.
The following link contains Freeman chain code in MATLAB, but I do not know how to use it.
I welcome your suggestions and thank you in advance.
You can use the imcontour function.
For instance, if you load this sample image
Img = imread('test.png');
You can get the contour with the command:
C = imcontour(Img, 1);
Then you can use the freeman function you cite with C as the first input.
Another approach is to use bwperim. This essentially looks at all of the distinct binary objects in an image and extracts the perimeter of each object. It only works on objects that are white, so using @Crazy rat's example, we can do:
im  = ~im2bw(imread('http://i.stack.imgur.com/p9BZl.png'));  % binarise and invert: text becomes white
out = ~bwperim(im);                                          % perimeters, re-inverted so the text is black
The above reads in the image and converts it to binary/logical, inverting it so that the object/text is white and the background black. bwperim then extracts the perimeter of each object, and the final inversion makes the perimeter text black again.
The output I get is:
The distinct advantage of bwperim over imcontour is that bwperim returns an actual output image, whereas imcontour only draws a figure. You can certainly extract image data from a figure, e.g. with the f = getframe(gca); out = f.cdata; idiom, but this will include some of the figure background in the result. I suspect you want the raw image instead, so I would recommend bwperim.
How do we use this with the Freeman code you linked?
If you look at the source code, it takes two inputs:
b, an N x 2 matrix of coordinates tracing the boundary of the shape you want to encode
unwrap, an optional parameter
To use the linked function, simply extract the row and column coordinates of the pixels along the boundary of your image. This is another limitation of imcontour: you cannot determine these locations without the raw contour image itself. With bwperim's output, all you really have to do is:
[y,x] = find(out == 0);    % boundary pixels are the black (0) ones in out
cc = chaincode([y x]);     % Freeman chain code of the boundary coordinates

How to enlarge a small image without losing resolution in Photoshop? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am using Photoshop CS6.
I have small images (3.5mm x 3.5mm). When I enlarge them to 10cm x 8cm, the image quality drops.
How can I enlarge the images without degrading the resolution? Bicubic Smoother does not satisfy me.
Is there any way to resize images to high resolutions without losing quality?
If you enlarge an image by a factor of 6:1 (as in this case), 5/6 of the result carries no source information and needs to be "filled" with values constructed by mathematical means. In most cases interpolation (bicubic or otherwise) is used.
Unfortunately this will never result in anything sharp and high-quality, due to the nature of interpolation (essentially averaging constructed colour points between the actual pixels). The picture will appear blurry no matter what you do in a case like this.
You can always apply a sharpening convolution afterwards, but the result will never be ideal.
For example, let's say I have a 2x1-pixel image: one black pixel next to one white pixel.
If I now enlarge this image using interpolation, two new points between the black and the white pixel need to be reconstructed. As there is no way of knowing what these points should look like (they never existed in the image in the first place), we have to guess by averaging the black and white values.
This produces a gray ramp, which makes the image look blurry.
More complex interpolation algorithms can make a better guess by using more surrounding points, taking a Bezier-style approach for the non-existent values and so forth, but it will always be a guess at best.
Now, this example uses 2:1 enlarging. You can probably by now imagine then how 6:1 scale will appear.
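The averaging described above is easy to demonstrate numerically. A small NumPy sketch (for illustration only; the 2:1 ratio matches the example):

```python
import numpy as np

# Two original pixels: black (0) and white (255).
row = np.array([0.0, 255.0])

# Upsample 2:1 by linear interpolation: the two new sample positions fall
# between the originals, so their values must be invented by averaging.
positions = np.linspace(0.0, 1.0, 4)        # 4 output pixels for 2 inputs
upsampled = np.interp(positions, [0.0, 1.0], row)
print(upsampled)                            # [  0.  85. 170. 255.]
```

The two middle values (85 and 170) are pure inventions of the interpolator, which is exactly why the enlarged image looks soft: no algorithm can restore detail that was never captured.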
It is impossible this way: you will lose quality because your image, and Photoshop itself, are pixel-based.
You can convert your picture to vector graphics using software such as CorelDRAW.