I would like to:
read a RAW image in Swift,
access the image's Bayer matrix as an MTLTexture,
modify the Bayer matrix,
demosaic the modified image.
However, I cannot manage to read a RAW image without demosaicing it. For example, when I load the image using a CIFilter, I can only access the demosaiced image, not the Bayer matrix.
Any help would be greatly appreciated.
Related
I am having difficulties converting a png file of simple black-coloured patterns I made using Illustrator into a bitmap. I need to do this in order to 3D print it (vector printer).
I was instructed to use MATLAB to do it. I tried using imread and imwrite, but I'm rather confused as to what the first argument of imwrite, A, should be. Is there a particular format I need to use for it to work?
I tried doing it with an online converter and it gave me the same exact image but of type .bmp. Is that what's meant to happen?
I would appreciate any insight on the problem.
Use imread to read your PNG, then imwrite to save it in BMP format.
Implementation:
pic = imread('mypic.png');        % pic is the image array returned by imread - this is the A argument of imwrite
imwrite(pic, 'mypic.bmp', 'bmp'); % write the same array back out as a BMP
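Here A is simply whatever imread returns: for a standard RGB PNG that is an M-by-N-by-3 uint8 array, and for a grayscale PNG an M-by-N array. No special format is needed, and imwrite can also infer the output format from the file extension, so the explicit 'bmp' argument is just being explicit.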
I want to rectify a stereo image pair in MATLAB. To rectify, I use the following call:
[J1,J2] = rectifyStereoImages(I1,I2, cameraParamsStereo);
If I do this, then I only get the so-called valid part of each image, which is smaller than the initial image size. If I specify the argument OutputView as 'full', then I get rectified images which are larger than the original ones.
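A sketch of the two variants, using the same variable names as above:
% default: only the fully overlapping 'valid' region is returned (smaller than the originals)
[J1, J2] = rectifyStereoImages(I1, I2, cameraParamsStereo, 'OutputView', 'valid');
% 'full' keeps every pixel of both rectified images (larger than the originals)
[J1full, J2full] = rectifyStereoImages(I1, I2, cameraParamsStereo, 'OutputView', 'full');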
Is there a way to get rectified images that have the same size as the original ones?
It is possible in principle, but rectifyStereoImages does not support this.
I have a small problem with finding the pixel size of an image. I need to find the size of nano- and micro-particles in my BW image. I used regionprops to get the area, and from that the diameter. Now I know the value in pixels. How do I convert it to the micrometer or nanometer scale? Do I take into account the sensor pixel size (6.5 µm × 6.5 µm) of my camera?
I use MATLAB for image processing.
Thank you
There is a function called imfinfo which returns a struct. In this struct you may find three fields (it depends on the coder used for the image format) called XResolution, YResolution and ResolutionUnit. Using these three fields you can easily get the pixel size; for example, if XResolution=10, YResolution=10 and ResolutionUnit='meter', then each pixel is 10 cm × 10 cm, i.e. 100 cm² (a bit unrealistic, I know :))
I hope this helps and that your image file contains the XResolution and YResolution information in its header.
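A minimal sketch of that approach (the file name is a placeholder, it assumes the header really does carry the resolution fields, and bwImage stands for the binary image from the question):
info = imfinfo('particles.tif');                      % metadata struct for the image file
% XResolution/YResolution are pixels per ResolutionUnit (whatever unit the file reports)
pixWidth  = 1 / info.XResolution;                     % physical width of one pixel, in ResolutionUnit
pixHeight = 1 / info.YResolution;                     % physical height of one pixel, in ResolutionUnit
stats = regionprops(bwImage, 'Area');                 % areas in pixels, as in the question
areaPhysical = [stats.Area] * pixWidth * pixHeight;   % areas in ResolutionUnit^2
If the header has no resolution information at all (common for raw microscope exports), the pixel size usually has to come from the imaging setup instead, e.g. the camera pixel pitch divided by the optical magnification, which is where the 6.5 µm sensor pitch from the question would come in.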
I want to read a .fits image of a wide-field sky and display the RGB values contained in a star. Can you please suggest a method to do so?
I have used fitsread to read in the image but I am not able to show the RGB values for specific locations (stars).
In order to do this, you'll need a proper RGB FITS file. The only FITS viewer I know of, ds9, does not support saving RGB FITS files, but rather saves the three separate (R, G, B) FITS images. You can use "getpix" from wcstools (http://tdc-www.harvard.edu/wcstools/) or scisoft (http://www.eso.org/sci/software/scisoft/) on the individual frames. Note that "getpix" returns the pixel value given an image (x,y) location. ds9 does not provide the physical image location, but rather the WCS coordinates, so you may have to convert to image coordinates before calling getpix.
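Since the question already uses MATLAB's fitsread, here is a minimal sketch for sampling the three single-band frames (the file names and the star's pixel coordinates are placeholders):
r = fitsread('field_r.fits');   % the three single-band frames exported from ds9 (names assumed)
g = fitsread('field_g.fits');
b = fitsread('field_b.fits');
row = 512; col = 384;           % image coordinates of the star (convert from WCS first if needed)
rgb = [r(row, col), g(row, col), b(row, col)];
fprintf('R=%g  G=%g  B=%g\n', rgb(1), rgb(2), rgb(3));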
I am using CGColorSpaceCreateDeviceRGB(), which gives me a PNG image (pixel format). Is there a way to save this image as a vector image? Basically I need to save the pictures drawn by the GLPaint application as a vector image.
I don't know anything about this function or about GLPaint, but you can't take a pixelated image and turn it into a vector image. Only humans and highly clever algorithms can do that (see http://vectormagic.com/).
If you have access to the input (gestures?) of GLPaint, you should convert them to SVG directly instead of passing through an RGB image.