I have a CMOS camera to capture images and I want to display the captured image on a VGA monitor. I am receiving data from the CMOS camera in RGB565 format, i.e. RRRRRGGGGGGBBBBB. I have an FPGA board with a VGA connector that has 3 RGB pins. How do I convert RGB565 to single-bit RGB?
You don't really mean "single bit" RGB.
You want to produce a single analogue output for each color component - R/G/B.
You'll need to build some kind of DAC outside the FPGA if you want shades. It can be a resistor ladder, for example. Choose its size to reflect the quality you need.
In the simplest case you can have just a single resistor in series with each of the R/G/B lines; that will limit your output quality down to "true single-bit" RGB :)
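For the bit selection itself, here is a minimal sketch in C (on the FPGA this is just three wires in your HDL, but the bit positions are the same):

    #include <stdint.h>
    #include <stdio.h>

    /* RGB565 layout: RRRRRGGG GGGBBBBB (R = bits 15..11, G = 10..5, B = 4..0).
       "True single-bit" RGB just keeps the MSB of each field. */
    static void rgb565_to_1bit(uint16_t px, unsigned *r, unsigned *g, unsigned *b)
    {
        *r = (px >> 15) & 1u;  /* MSB of the 5-bit red field   */
        *g = (px >> 10) & 1u;  /* MSB of the 6-bit green field */
        *b = (px >> 4)  & 1u;  /* MSB of the 5-bit blue field  */
    }

    int main(void)
    {
        unsigned r, g, b;
        rgb565_to_1bit(0xF81Fu, &r, &g, &b);   /* 0xF81F = full red + full blue */
        printf("R=%u G=%u B=%u\n", r, g, b);   /* prints R=1 G=0 B=1 */
        return 0;
    }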
At least let us know what board you have.
I was able to view and capture the image from the depth stream in MATLAB (using the webcam from the Hardware Support Package) from an F200 Intel RealSense camera. However, it does not look the same way as it does in the Camera Explorer.
What I see from MATLAB: [image]
I have also linked Depth.mat that contains the image in the variable "D".
The image is returned as a 3-dimensional array of uint8. I assumed that the depth stream is a larger number broken into bytes, one per plane, so I tried bit-shifting each plane and adding it to the next while taking care of the datatypes, then displayed the result using imagesc, but did not get a proper depth image.
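In C terms, the reconstruction I attempted looks roughly like this sketch (the assumption that one plane holds the low byte and the next the high byte is a guess, and may be exactly what is wrong):

    #include <stdint.h>

    /* Hypothetical reconstruction of a 16-bit depth value from two uint8
       planes; which plane is the low byte (if either) is the open question. */
    static uint16_t combine_planes(uint8_t lo, uint8_t hi)
    {
        return (uint16_t)(((uint16_t)hi << 8) | lo);
    }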
How do I properly interpret this image? Or, is there an alternate way to capture images in MATLAB?
I am trying to calibrate the IR camera of the new Kinect v2 sensor, following all the steps from here: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html
The problem I am having is the following:
The IR image looks fine, but once I put it through the program the image I get is mostly white (bright). See the pictures below.
Has anyone encountered this issue before?
Thanks
You are not reading the IR image pixels correctly. The format is 16 bits per pixel, with only the high 10 bits used (see the specification here). You are probably visualizing them as if they were 8 bpp images, so they end up saturated to white.
The simplest thing you can do is downshift the values by 8 bits (i.e. divide by 256) before interpreting them as a "standard" 8 bpp image.
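A minimal sketch of that downshift in C, assuming the raw frame is a flat array of 16-bit samples:

    #include <stddef.h>
    #include <stdint.h>

    /* Map 16 bpp IR samples (payload in the high bits) to 8 bpp by
       dropping the low 8 bits, i.e. dividing by 256. */
    static void ir16_to_gray8(const uint16_t *src, uint8_t *dst, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            dst[i] = (uint8_t)(src[i] >> 8);
    }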
However, in MATLAB you can simply use imagesc to display them with color scaling.
I am using an optical microscope and camera to record some videos which are post-processed in MATLAB.
Real-time acquisition and pixel statistics would be extremely helpful, because some of what I am looking at absorbs very little light (I am using transmission mode). For example, a blank (background) sample would give me an average pixel value across a 512x512 CCD array of something like 144 (grayscale), while an actual sample might have an average value of 140 or so. This subtle shift in pixel intensity would be useful in helping me focus the microscope.
Unfortunately, my camera setup is not supported by MATLAB, so I cannot use the Image Acquisition Toolbox for real time. So I was wondering: is there a way that I could 'fake' real-time image acquisition by selecting, say, a rectangle of my current desktop (the rectangle that is the video output of the microscope's camera) for MATLAB to record in real time?
Thanks
I've tried using PNGs with gradients as textures in my OpenGL ES based iPhone game. The gradients are not drawn correctly. How can I fix this?
By "not drawn correctly" I mean the gradients are not smooth and seem to degrade to sections of a particular color rather a smooth transition.
The basic problem is having too few bits of RGB in your texture or (less likely) your frame buffer. The PNG file won't be used directly by the graphics hardware - it must be converted into some internal format. I do not know the OpenGL ES API, but presumably you either hand it the .PNG file directly or you first do some kind of conversion step and hand the converted data to OpenGL ES. In either case, consult the relevant documentation to ensure that the internal format used is of sufficient depth. For example, a 256-color palettized image would be sufficient, as would a 24-bit RGB or 32-bit RGBA format. I strongly suspect your PNG is converted to RGB15 or RGB16, which has only 5 or 6 bits per color component - not nearly enough to display a smooth gradient.
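For example, with OpenGL ES 1.x on the iPhone you can keep 8 bits per channel by decoding the PNG yourself and uploading GL_UNSIGNED_BYTE data (a sketch; how you decode the PNG into an RGBA8 buffer is up to your loader):

    #include <OpenGLES/ES1/gl.h>   /* iPhone OpenGL ES 1.x header */

    /* Upload 8-bit-per-channel RGBA pixels so the driver has no reason
       to quantize the gradient down to a 16-bit internal format. */
    void upload_rgba8888(GLuint tex, GLsizei w, GLsizei h, const void *pixels)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }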
Can I get paletted textures with an RGB palette and an 8-bit alpha channel in OpenGL ES? (I am targeting the iPhone OpenGL ES implementation.) After peeking into the OpenGL documentation, it seems to me that there is support for paletted textures with alpha in the palette, i.e. the texture contains 8-bit indexes into a palette with 256 RGBA colors. I would like a texture that contains 8-bit indexes into an RGB palette plus a standalone 8-bit alpha channel. (I am trying to save memory, and 32-bit RGBA textures are quite a luxury.) Or is this supposed to be done by hand, i.e. by creating two independent textures, one for the colors and one for the alpha map, and combining them manually?
No. In general, graphics chips really don't like palettized textures. (Why? Because reading any texel requires two memory reads: one for the index and another into the palette - double the latency of a normal read.)
If you're only after saving memory, then look into compressed texture formats. The iPhone specifically supports PVRTC compressed formats at 2 bits/pixel and 4 bits/pixel. The compression is lossy (just like palette compression is often lossy), but the memory and bandwidth savings are substantial.
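A sketch of uploading pre-compressed 4 bpp PVRTC data (the size formula assumes a square power-of-two texture with width and height >= 8, as the iPhone requires):

    #include <OpenGLES/ES1/gl.h>
    #include <OpenGLES/ES1/glext.h>  /* defines GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG */

    /* Upload one mip level of 4 bpp PVRTC data. */
    void upload_pvrtc4(GLuint tex, GLsizei w, GLsizei h, const void *data)
    {
        GLsizei size = w * h * 4 / 8;  /* 4 bits per pixel */
        glBindTexture(GL_TEXTURE_2D, tex);
        glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                               GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                               w, h, 0, size, data);
    }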
Palette textures will be expanded on load for the GPU on the iPhone, so you won't gain any advantage from using them other than storage size. Your best bet for a cartoon-style game is to explore using PVRTC compressed textures (like NeARAZ says) and 16-bit textures (RGBA4444 for alpha, RGBA5551 for punch-through, RGB565 for opaque textures). Using a mixture of both is my own approach.
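The 16-bit uploads mentioned above look like this in ES 1.x (the pixel data is assumed to be packed into the matching 16-bit layout already):

    /* RGBA4444: 4 bits per channel, keeps soft alpha. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, pixels);

    /* RGBA5551: 1-bit "punch-through" alpha. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, pixels);

    /* RGB565: opaque textures. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);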
Disclaimer:
This info is based on the official OpenGL|ES spec. I have no idea whether the OpenGL|ES implementation on the iPhone supports compressed textures or not. In theory it should at least simulate compressed textures, but you never know.
You can load 8 bits/pixel textures via the glCompressedTexImage2D call. The compression type you most likely want to use is GL_PALETTE8_RGBA8_OES.
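Per the OES_compressed_paletted_texture spec, the data blob is the palette followed by the indices, and the whole thing goes through a single call (sketch):

    /* GL_PALETTE8_RGBA8_OES: a 256-entry RGBA8 palette (1024 bytes)
       followed by one 8-bit index per pixel. Level 0 = base image only. */
    GLsizei imageSize = 256 * 4 + w * h;
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_PALETTE8_RGBA8_OES,
                           w, h, 0, imageSize, data);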
Storing an additional alpha channel along with the palette is not directly possible. Either you use the second texture unit for the alpha component, or you take the alpha into account during color quantization and use a palettized format that contains alpha.
There is a command-line color quantization tool called BRIGHT out there somewhere on the net. It does an incredible job at quantizing images with alpha. You may want to give it a try.