Segment Characters - character

I am facing the problem of segmenting characters on a complicated background. I have tried splitting the image into its four C, M, Y, K channels, but the segmentation quality is still poor.
If anyone can suggest some ideas, that would be really great.
This is my source image:

An easy approach is a Gaussian blur, followed by enhancing the contrast of the image. After that, the image became OCRable with my (proprietary) OCR software, though it's not clear whether the same trick will work for your OCR software.
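For what it's worth, here is a minimal sketch of that pipeline in Python/PIL; the filename, blur radius, contrast factor and threshold are placeholders that will need tuning for your image:

from PIL import Image, ImageFilter, ImageEnhance

img = Image.open('plate.png').convert('L')            # work in greyscale
img = img.filter(ImageFilter.GaussianBlur(radius=2))  # smooth the busy background
img = ImageEnhance.Contrast(img).enhance(2.5)         # push characters towards black/white
img = img.point(lambda p: 255 if p > 140 else 0)      # optional hard threshold before OCR
img.save('plate_preprocessed.png')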

Related

How to reduce filesize of gradient PNG?

I am trying to create a background image for a webpage, similar to the 404 page used on Tumblr:
http://testing404image.tumblr.com/
Here we can see a PNG which is 1623*1064 pixels, yet its gradient appears reasonably smooth.
The direct link for the image is
http://testing404image.tumblr.com/images/status_bg.png?2
When I try to create a similar PNG (different colors, but the same size) in Photoshop CS4 for Mac, the resulting file ends up at > 400k, whereas Tumblr's is 90k.
I've tried playing with all the Photoshop options, including reducing the number of colors to 55, but I cannot get the image below ~240k.
I've also tried various optimising tools such as ImageOptim (http://imageoptim.com/), but to no avail.
Are there any properties of this PNG which result in such a low file size?
I tried using JPG, thinking it's better suited to gradient images, but even a 100% quality JPG resulted in noticeable aliasing, which an identical content/size PNG didn't have.
Thanks for any advice.
Hi there, I changed the colours with
Image > Adjustments > Hue/Saturation (in Photoshop CS4)
and this is the result:
As you can see, it's almost the same size (75k).
Try playing around under
Image > Adjustments
to get the colour you are looking for, and save as PNG with interlace set to NONE.
Photoshop is not very good with PNG: I simply opened and saved the file with the humble XnView (maximum compression) and got 74K. You can also convert it to a paletted image and do some extra little tuning - PNGoptim gives me a final size of 64,548 bytes. I wouldn't expect anything much better than that; the image is just too big.
BTW, be aware that a gradient this big and this smooth cannot be represented by a digital image (with 8 bits per channel) without some banding. That image is also really oversampled: you could resample it at 25% or less and display it scaled up, and the result would look basically the same.
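If you prefer to do this programmatically rather than in Photoshop, a rough PIL sketch of both suggestions (paletted conversion and downsampling) might look like this; the file names are placeholders:

from PIL import Image

img = Image.open('status_bg.png').convert('RGB')

# Paletted version: 256 colours is usually plenty for a smooth gradient.
img.quantize(colors=256).save('status_bg_pal.png', optimize=True)

# Downsampled version: a gradient this smooth survives heavy resampling,
# so store it at 25% and let the page scale it back up.
small = img.resize((img.width // 4, img.height // 4), Image.LANCZOS)
small.save('status_bg_small.png', optimize=True)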
The actual reason is that the source image you are looking at has a lower-quality gradient than the one you are making.
Just uncheck the Dither option (in the top toolbar in Photoshop) when filling the gradient. The quality and smoothness of the gradient decrease, and you therefore get a much smaller PNG output.

Matlab. Image processing on 8 ball pool flash game. Small cheat. Hehe

See the picture below. It's a flash game from a well known website :)
http://imageshack.us/photo/my-images/837/poolu.jpg/
I'd like to capture the images frame by frame using Matlab, and then lengthen the short line that goes from the 8 ball, so I can see exactly where it will go, and display another window in which the exact pool table appears, but with longer lines for the paths :)
I know, or can easily find out, how to capture the screen and so on; the problem is that I'm not sure how to start detecting those lines to see which direction they are heading. Can anyone suggest an idea on how to accomplish this, or any image processing techniques I could use to at least filter out everything except those lines?
I'm not sure where to even start looking, or for what.
And yeah, I know it's a cheat. But I've got programming skills, so why not put them into practice? :D Help me out, people, it's a fun project :)
Thanks.
I would try using the Hough transform in the Matlab Image Processing Toolbox.
EDIT1:
Basically, the Hough transform is a technique for detecting linear structures (lines) in an image.
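Not Matlab, but here is the same idea sketched in Python/OpenCV (in the Image Processing Toolbox the equivalent calls are hough, houghpeaks and houghlines); the frame filename and the thresholds are placeholders:

import cv2
import numpy as np

frame = cv2.imread('pool_frame.png')
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The aiming line is thin and bright, so edge-detect first, then vote for line segments.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)

# Draw each detected segment extended well past its endpoints.
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        dx, dy = x2 - x1, y2 - y1
        cv2.line(frame, (x1 - 10 * dx, y1 - 10 * dy), (x2 + 10 * dx, y2 + 10 * dy), (0, 0, 255), 1)

cv2.imwrite('pool_frame_lines.png', frame)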

Fake long exposure on iOS

I have to add long-exposure photo capabilities to an app. Since I know this is not really possible, I have to fake it. It should work like "Slow Shutter" or "Magic Shutter".
Sadly, I have no clue how to achieve this. I know how to take images with the camera (through AVFoundation), but I'm stuck on merging them to fake long shutter times.
Possibly I need to manipulate and combine all the images with Core Graphics, but I'm not sure about this (or even how). Maybe there's a better solution.
I would appreciate any help I can get here,
thank you, people!
You might try the plus lighter blend mode.
Well, I suppose it would be possible to average together the results of several shots. I've mucked around a bit with the Core Graphics stuff to resize images (averaging together adjacent pixels), but with lower-res images. The algorithm I used is here -- maybe it'll give you some ideas.
There may, of course, be a better way, and some tricks for working efficiently with high-res images. Can't help you there.
Convert the images to pixel bitmaps. Align and stack the bitmaps. Then try applying various 3D convolution filters to the 3D pixel array.
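Not iOS code, but the averaging/stacking idea fits in a few lines; here is a sketch in Python/NumPy (frame filenames are placeholders), which on the device you would translate into Core Graphics bitmap contexts or the Accelerate framework:

import numpy as np
from PIL import Image

frames = ['shot_0.jpg', 'shot_1.jpg', 'shot_2.jpg', 'shot_3.jpg']
stack = np.zeros(np.asarray(Image.open(frames[0])).shape, dtype=np.float64)

for path in frames:
    stack += np.asarray(Image.open(path), dtype=np.float64)

# The mean of the stack approximates a long exposure; clip the plain sum instead
# if you want the "plus lighter" look mentioned above.
average = (stack / len(frames)).clip(0, 255).astype(np.uint8)
Image.fromarray(average).save('long_exposure_fake.jpg')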

PIL convert('L') to greyscale distorts some images

I'm converting some images to greyscale with the easy-thumbnails Django app. Most of them are fine, but a handful are getting partially or totally messed up. The same error is occurring on two different machines, so I don't think it's an issue of a PIL install being corrupt or something.
Here are a couple examples:
Original image http://66.228.39.122/uploads/companies/alerts_logo.png Distorted version http://66.228.39.122/uploads/companies/alerts_logo.png.198x150_q85_bw_upscale.jpg
Original image http://66.228.39.122/uploads/companies/HashableLogo_Color_RGB.png Distorted version http://66.228.39.122/uploads/companies/HashableLogo_Color_RGB.png.198x150_q85_bw_upscale.jpg
Any suggestions? Are the original images themselves corrupt in some subtle way? Should I recompile PIL with --no-random-distort or something? I've looked at the relevant part of the easy-thumbnails source and it's just calling image.convert('L'), so I think the problem must lie either inside PIL or with the images themselves. The way the exclamation mark is distorted in the Alerts.com logo makes me think that perhaps there's some kind of encoding issue, since the outline of the exclamation mark is angled and its color appears to be contributing to a larger grey splotch; but the Hashable logo doesn't show quite the same problem, so maybe they are two different problems. Thanks for any suggestions/thoughts/advice.
Update: Actually, if I remove the bw filter and instead add replace_alpha="red", I get similar (though not greyscaled) distortions, so it looks like PIL is probably unhappy with how the images are encoded or something. Unfortunately I need non-technical people to be able to upload new images and have them work, so a programmatic solution is necessary, rather than just resaving the images manually.
Update 2: I actually experimented a bit and found that resaving the PNGs with GIMP solves the problem, if I resave once and check the box for saving color information for transparent pixels, and then resave a second time with that box unchecked. Just doing the first or the second of these doesn't solve the problem, and doing both and then the first again re-causes it. And it still doesn't handle partial transparency well. Since this is tedious and imperfect, a programmatic solution would still be great if there are any PIL or PNG encoding whizzes out there.
Here is some code that will convert an image to grayscale without losing quality.
from PIL import Image, ImageOps
image = Image.open('filepath')          # 'filepath' is the path to the source image
image = ImageOps.grayscale(image)       # equivalent to image.convert('L')
image.save('filepath', quality=95)      # quality only matters when saving as JPEG
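If the distortion comes from the transparency in those PNGs (which the replace_alpha behaviour suggests), it may also help to flatten the image onto a solid background before converting. A sketch, assuming a white background is acceptable; the helper name and file paths are placeholders:

from PIL import Image

def to_grayscale_flat(path, out_path, background=(255, 255, 255)):
    img = Image.open(path).convert('RGBA')        # normalises palette + transparency
    flat = Image.new('RGBA', img.size, background + (255,))
    flat = Image.alpha_composite(flat, img)       # composite respecting the alpha channel
    flat.convert('L').save(out_path)

to_grayscale_flat('alerts_logo.png', 'alerts_logo_bw.png')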

iPhone, image processing

I am building a night-vision application, but I can't find any useful algorithm that I can apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance.
With the size of the iPhone's lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run a noise-reduction filter and a smoother. Edge detection or condensation may allow you to make some areas of the image bolder. As for specific algorithms to perform each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name of the algorithm you need.
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera, not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark image into a well-lit one as if by magic ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there is a port for the iPhone, for example here: http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared; the other is passive, which manipulates the pixels of the image.
The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or with some sort of curve; either globally or locally in just the darker areas) and saturating them, and/or using a contrast/edge-enhancement filter.
If the camera and subject matter are sufficiently motionless (tripod, etc.), you could try summing each pixel over several image captures. Or you could do what some HDR apps do and try aligning the images before processing pixels across time.
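As a sketch of the "curve on the luminance" idea in Python/OpenCV (the filename and the gamma value are placeholders to tune): brighten the V channel of HSV with a gamma curve, then denoise, since lifting a dark sensor image amplifies the noise as well.

import cv2
import numpy as np

img = cv2.imread('dark_photo.jpg')
h, s, v = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))

gamma = 0.5                                   # values below 1 lift the shadows
lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
v = cv2.LUT(v, lut)

bright = cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)
bright = cv2.fastNlMeansDenoisingColored(bright, None, 10, 10, 7, 21)
cv2.imwrite('dark_photo_brightened.jpg', bright)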
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
I suggest conducting a simple test before trying to actually implement this:
Save a photo made in a dark room.
Open in GIMP (or a similar application).
Apply "Stretch HSV" algorithm (or equivalent).
Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.
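If you want a quick programmatic stand-in for the "Stretch HSV" step, PIL's autocontrast is a rough approximation (it stretches each RGB channel rather than the HSV channels; the filenames are placeholders):

from PIL import Image, ImageOps

img = Image.open('dark_room_photo.jpg')
ImageOps.autocontrast(img, cutoff=1).save('dark_room_stretched.jpg')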