I am using CGColorSpaceCreateDeviceRGB(), which gives me a pixel-format (PNG) image. Is there a way to save this image as a vector image? Basically, I need to save the pictures drawn by the GLPaint application as a vector image.
I don't know anything about this function or about GLPaint, but you can't take a pixelized image and turn it into a vector image. Only humans and highly clever algorithms can do that (see http://vectormagic.com/).
If you have access to the input (gestures?) of GLPaint, you should convert them to SVG directly instead of going through an RGB image, along the lines of the sketch below.
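A minimal, untested sketch of that idea in Swift, assuming you can record each stroke as an array of CGPoint values (buildSVG and strokes are hypothetical names, not part of GLPaint):

import CoreGraphics
import Foundation

// Convert recorded strokes (arrays of touch points) into a minimal SVG document.
func buildSVG(strokes: [[CGPoint]], width: Int, height: Int) -> String {
    var svg = "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"\(width)\" height=\"\(height)\">\n"
    for stroke in strokes where !stroke.isEmpty {
        // Move to the first point, then draw line segments through the rest.
        var d = "M \(stroke[0].x) \(stroke[0].y)"
        for p in stroke.dropFirst() {
            d += " L \(p.x) \(p.y)"
        }
        svg += "  <path d=\"\(d)\" fill=\"none\" stroke=\"black\" stroke-width=\"3\"/>\n"
    }
    svg += "</svg>\n"
    return svg
}

Each finger stroke becomes one <path> element, so the drawing stays resolution-independent instead of being rasterized.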
I would like to:
read a RAW image in Swift,
access the image's Bayer matrix as a MTLTexture,
modify the Bayer matrix,
demosaic the modified image.
However, I cannot manage to read a RAW image without demosaicing it. For example, when I load the image using a CIFilter, I can only access the demosaiced image, not the Bayer matrix.
Any help would be greatly appreciated.
I am using a dataset that provides depth images of humans. I need to extract the human from the image, or at least remove the distortions in the image that do not belong to the human body, in MATLAB.
A sample image is shown below:
This is the output I get when I use:
I = imread('39.jpg');
human = sum(I,3) > 10+10; % threshold the sum of the color channels
human                     % echo the resulting logical mask
Is there any way to do that? Thanks in advance.
For the image you show, where everything is grayscale except the human, which is red, just do:
so = imread('https://i.stack.imgur.com/hZOQv.jpg');  % read the sample image
human = sum(abs(diff(single(so),1,3)),3) > 20;       % keep pixels whose RGB channels differ
This essentially compares the differences between the RGB values of each pixel and keeps the pixels above a threshold. If you have proper PNGs, the threshold could just be 1; however, with JPEG artifacts you may need a higher value, and for this image 20 does the job.
There are some tiny artifacts in the result image, very likely due to JPEG compression. When you do science, you need to store your images as PNG. If you have absolutely no choice other than JPEG, then you may have artifacts.
I want to read a .fits image of a wide-field sky and display the RGB values contained in a star. Can you please suggest a method to do so?
I have used fitsread to read in the image, but I am not able to show the RGB values for specific locations (stars).
In order to do this, you'll need a proper RGB FITS file. The only .fits viewer I know of, ds9, does not support saving RGB FITS files, but rather saves the three separate (R, G, B) FITS images. You can use getpix from wcstools (http://tdc-www.harvard.edu/wcstools/) or scisoft (http://www.eso.org/sci/software/scisoft/) on the individual frames. Note that getpix returns the pixel value at a given image (x,y) location. ds9 does not report the physical image location, but rather the WCS coordinates, so you may have to convert to image coordinates before calling getpix.
I'll be doing medical image processing with the CLAHE method (I use the code at http://www.mathworks.com/matlabcentral/fileexchange/22182-contrast-limited-adaptive-histogram-equalization-clahe/all_files) and region growing (http://www.mathworks.com/matlabcentral/fileexchange/19084-region-growing/content/regiongrowing.m).
Those functions only run if I use the double data type for the image, but converting the image to double turns it into a binary image.
Does anyone know how to keep the image as double without it becoming a binary image?
If your image is img, then do im2double(img). See im2double on the MathWorks reference site.
If I've understood your comment correctly, you're trying to convert a binary image to a grayscale image. If so, this is not possible, as you've thrown away all the intensity information in favor of a simple 0/1 image.
If your question was on how to convert a color/grayscale image to double, then LightningIsMyName has the answer for you. Here's a small example that you can play around with to see what you really want:
img = imread('peppers.png');         %# read in MATLAB's stock image
imgDouble = im2double(img);          %# convert uint8 to double
imgGray = rgb2gray(img);             %# convert RGB image to grayscale
imgGrayDouble = im2double(imgGray);  %# convert grayscale image to double
Here's how your color and grayscale images should look:
I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iPhone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate), and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
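For what it's worth, here is an untested, minimal sketch of that pipeline in Swift; the specific modification (zeroing the blue channel) and the function name modifyPixels are just illustrative assumptions:

import UIKit

// Untested sketch: draw the UIImage into a bitmap buffer with a known layout,
// tweak the bytes, then rebuild a UIImage via CGDataProvider/CGImage.
func modifyPixels(of image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var pixels = [UInt8](repeating: 0, count: height * bytesPerRow)

    // Render into an RGBA8888 context so the byte layout is predictable.
    pixels.withUnsafeMutableBytes { buffer in
        guard let ctx = CGContext(data: buffer.baseAddress,
                                  width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return }
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    }

    // Example modification: zero out the blue channel of every pixel.
    for i in stride(from: 0, to: pixels.count, by: 4) {
        pixels[i + 2] = 0 // bytes are R, G, B, A in this configuration
    }

    // Wrap the modified bytes in a data provider and build a fresh CGImage.
    guard let provider = CGDataProvider(data: Data(pixels) as CFData),
          let newCG = CGImage(width: width, height: height,
                              bitsPerComponent: 8, bitsPerPixel: 32,
                              bytesPerRow: bytesPerRow, space: colorSpace,
                              bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                              provider: provider, decode: nil,
                              shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: newCG)
}

Drawing into your own context first, rather than reading the image's data provider directly, guarantees a known pixel format and row stride, which helps avoid the kind of channel or stride mismatch that can show up as colored stripes over the image.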