How does an Android/iPhone device implement text-zooming?

Simple question - how is text-zooming implemented on an Android/iPhone device? Do they pre-compute frequently used bitmaps of a font and replace the text as the scale changes? Or do they extract the contours from the font files and render the text as vector graphics?

Text rendering is a fairly complex subject, so any answer here will gloss over a lot. Typefaces are generally stored in vector format, not as bitmaps. The system lays out the text by computing the metrics of each letter, builds a vector shape for the run, and rasterizes that shape into a bitmap that is displayed on the screen. It is unlikely that the system caches individual letterforms as bitmaps, because of the way antialiasing and subpixel rendering work. But at some point all vector graphics are converted into bitmaps, because the typical computer display is made up of pixels.
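As a rough illustration of that vector-to-bitmap step, here is a minimal C# sketch using System.Drawing (not how iOS or Android do it internally; the font name and output paths are placeholders). The point is that the TrueType outlines are re-rasterized at each requested size, rather than a cached bitmap being scaled:

using System.Drawing;
using System.Drawing.Text;

class FontZoomDemo {
    static void Main() {
        // Re-rasterize the vector outlines at each zoom level instead of
        // scaling a bitmap; this is what keeps zoomed text crisp.
        foreach (float size in new[] { 12f, 24f, 48f }) {
            using var font = new Font("Arial", size);   // assumes Arial is installed
            using var bmp = new Bitmap(512, (int)(size * 2));
            using var g = Graphics.FromImage(bmp);
            g.TextRenderingHint = TextRenderingHint.ClearTypeGridFit;
            g.DrawString("Zoomed text", font, Brushes.Black, 0f, 0f);
            bmp.Save($"zoom_{size}.png");               // illustrative output path
        }
    }
}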

Monogame rendering text using Graphics.DrawString (instead of SpriteBatch.DrawString)

Is there any downside to using Graphics.DrawString to render a (rather static) bunch of text to an offscreen bitmap, convert it to a Texture2D once, and then simply call SpriteBatch.Draw, instead of using the content pipeline and rendering text using SpriteFont? This is basically a page of text, drawn to a "sheet of paper", but the user can also choose to resize fonts, which means I would have to include spritefonts of different sizes.
Since this is a Windows only app (not planning to port it), I will have access to all fonts like in a plain old WinForms app, and I believe rendering quality will be much better when using Graphics.DrawString (or even TextRenderer) than using sprite fonts.
Also, it seems performance might be better, since SpriteBatch.DrawString needs to "render" the entire text in each iteration (i.e. send vertices for each letter separately), while by using a bitmap I only do it once, so it should be slightly less work on the CPU side.
Are there any problems or downsides I am not seeing here?
Is it possible to get alpha blended text using spritefonts? I've seen Nuclex framework mentioned around, but it's not been ported to Monogame AFAIK.
[Update]
From the technical side, it seems to be working perfectly: much better text rendering than through sprite fonts. If the text is rendered horizontally, I even get ClearType. One possible problem is that spritesheets for fonts may be more efficient in terms of texture memory than creating a separate texture for a page of text.
No.
There doesn't seem to be any downside.
In fact, you seem to be following a standard approach to text rendering.
Rendering text 'properly' is a comparatively slow process next to rendering a textured quad; even though SpriteFonts cut out all the glyph splining, rendering a page of text can still rack up a large number of triangles.
Whenever I've looked at different text rendering solutions for GL/XNA, people tend to recommend your approach: draw your wall of text once to a reusable texture, then render that texture.
You may also want to consider RenderTarget2D as a possible solution that is portable.
As an example:
// Render the new text on the texture
void LoadPageText(int pageNum) {
    string[] text = this.book[pageNum];
    GraphicsDevice.SetRenderTarget(pageTarget);
    // TODO: draw page background
    this.spriteBatchCache.Begin();
    for (int i = 0; i < text.Length; i++) {
        // SpriteBatch.DrawString takes the text before the position
        this.spriteBatchCache.DrawString(this.font,
            text[i],
            new Vector2(10, 10 + this.fontHeight * i),
            this.color);
    }
    this.spriteBatchCache.End();
    GraphicsDevice.SetRenderTarget(null);
}
Then in the scene render, you can spriteBatch.Draw(..., pageTarget, ...) to render the text.
This way you only need 1 texture for all your pages, just remember to also redraw if your font changes.
Another thing to consider is your SpriteBatch's sort mode; sometimes that can impact performance when rendering many triangles.
On point 2, as I mentioned above, SpriteFonts are pre-rendered textures, which means the transparency is baked into their spritesheet. As such, it seems the default library uses no transparency/anti-aliasing.
If you rendered them twice as large, white on black, used the source color as an alpha channel, and then rendered them scaled back down blending with Color.Black, you could possibly get it back.
You could also try mixing the colors per pixel (e.g. via direct pointer access to the pixel data):
MixedColor = (Alpha1 * Channel1 + Alpha2 * Channel2) / (Alpha1 + Alpha2)
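A direct transcription of that formula, assuming 8-bit channels and integer alpha weights (the names here are illustrative):

// Weighted per-channel mix of two colors by their alpha values,
// i.e. MixedColor = (Alpha1*Channel1 + Alpha2*Channel2) / (Alpha1 + Alpha2).
static byte MixChannel(byte c1, int a1, byte c2, int a2)
{
    if (a1 + a2 == 0) return 0; // both fully transparent
    return (byte)((a1 * c1 + a2 * c2) / (a1 + a2));
}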

Visualising L*a*b* space values in MATLAB or any software

I have a table of L, a, b values and want to visualise these colors in MATLAB (or any other suitable software). Is there any quick way, like a series of rectangles filled with the color values from the table?
There are several versions of the Lab color space, but presumably you're referring to the most common, CIELAB. You can use imwrite in MATLAB to create a TIFF image with 'cielab' specified for the 'Colorspace' option. I wouldn't trust MATLAB as a viewer for the resultant images, though. Photoshop in Lab mode (from the menu bar: Image > Mode > Lab Color) would be a good choice if you want to work with and see the closest thing to the actual CIELAB space. Other viewers/editors may convert to RGB or CMYK before rendering to the screen (likely without warning you), but maybe you don't mind. If you just want to convert from CIELAB to RGB, you might find these functions useful.
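If you would rather roll the conversion yourself outside MATLAB, here is a rough C# sketch of CIELAB to sRGB under a D65 white point (the constants are the standard published ones, but treat it as illustrative rather than color-managed). Painting each table row as a filled rectangle with the resulting RGB gives the swatch view you describe:

using System;

static class Cielab {
    // CIELAB (D65) -> sRGB, returning channels in 0..255.
    public static (byte r, byte g, byte b) ToSrgb(double L, double a, double b)
    {
        // Lab -> XYZ, using the D65 reference white (0.95047, 1.0, 1.08883)
        double fy = (L + 16.0) / 116.0;
        double fx = fy + a / 500.0;
        double fz = fy - b / 200.0;
        double X = 0.95047 * FInv(fx);
        double Y = 1.00000 * FInv(fy);
        double Z = 1.08883 * FInv(fz);

        // XYZ -> linear sRGB
        double rl =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;
        double gl = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
        double bl =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;

        return (Gamma(rl), Gamma(gl), Gamma(bl));
    }

    // Inverse of the Lab companding function
    static double FInv(double t) {
        const double d = 6.0 / 29.0;
        return t > d ? t * t * t : 3 * d * d * (t - 4.0 / 29.0);
    }

    // Linear -> sRGB gamma, clipping out-of-gamut values
    static byte Gamma(double c) {
        c = Math.Clamp(c, 0.0, 1.0);
        c = c <= 0.0031308 ? 12.92 * c : 1.055 * Math.Pow(c, 1.0 / 2.4) - 0.055;
        return (byte)Math.Round(c * 255.0);
    }
}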
After lots of research, I found that there is a plugin called 'Color Inspector' that can be used with ImageJ (all open-source tools). It has excellent capabilities for viewing and analysing different color spaces, and even has some color tools that MATLAB has yet to offer.
Here is ImageJ: http://rsbweb.nih.gov/ij/download.html
and the plugin:
http://rsb.info.nih.gov/ij/plugins/color-inspector.html
Hope this is useful to someone.

Transparency with JPEGs

JPEGs are smaller in size than PNGs, so I thought that if I could make a specific region in a JPEG file transparent with some code, maybe I could save some bytes.
So does anyone know how to achieve this with, for example, PHP or JavaScript?
No. You can't do this. JPGs do not support alpha channels and have no capacity to designate certain colors as transparent either (GIF-style).
There are several issues with this, all of which have to do with the fact that JPEG is a lossy compression format. The JPEG format is optimized for natural images, and sharp edges get blurred. If you intend a specific pixel to have the value #d67fff, there's no guarantee that after color conversion, FDCT, quantization, IDCT, and the color conversion back, the pixel will still have that value. There's also a strong possibility that this pixel value will occur in areas where you don't want it.
No. JPEG does not support transparency and is not likely to do so any time soon. http://www.faqs.org/faqs/jpeg-faq/part1/section-12.html
You cannot do that: the client renders the image and doesn't know that you want it to treat that color as transparent (plus the various compression methods in JPEG wouldn't work well with transparency anyway).
I believe you can go with an 8-bit custom-palette PNG, which should save you a lot of space. Otherwise, 24-bit PNG is your only high-color option.
You can convert your image to an SVG containing the color information as a JPEG and the alpha channel as a grayscale JPEG mask. Here is a tool I wrote to do it: https://github.com/igrmk/transpeg
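The resulting file looks roughly like this (a hand-written sketch rather than the tool's actual output; the image file names are placeholders):

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="640" height="480">
  <defs>
    <mask id="alpha">
      <!-- grayscale JPEG: white = opaque, black = transparent -->
      <image width="640" height="480" xlink:href="photo-alpha.jpg"/>
    </mask>
  </defs>
  <!-- the color JPEG, with the mask applied -->
  <image width="640" height="480" xlink:href="photo-color.jpg" mask="url(#alpha)"/>
</svg>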

Histogram equalization with color correction (iPhone/Objective-C)

I am trying to implement a histogram equalization method (HE) for a UIImage in my iphone app.
I read the following:
http://en.wikipedia.org/wiki/Histogram_equalization
But it says:
Still, it should be noted that applying the same method on the Red, Green, and Blue components of an RGB image may yield dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm. However, if the image is first converted to another color space, Lab color space, or HSL/HSV color space in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.
So would this be a feasible approach?
Grab UIImage data and convert from RGB to HSL
Apply HE on luminance channel
Convert data back to RGB
Create new UIImage from data
Will this be slow, I wonder? Also, will I have to deal with 8/16/24-bit data differently, given that I have no idea what kind of image will be used with my app? Or can I assume 24-bit for images on the iPhone?
I would appreciate any pointers to objective-C code that does color corrected histogram equalization.
I have looked at the library below, but it does not do any color correction for HE:
http://code.google.com/p/simple-iphone-image-processing/source/browse/#svn/trunk/Classes%3Fstate%3Dclosed
Thanks!
Yes, you can do it this way; that will work. Yes, it will "cost more" since you have to do the conversion back and forth, but that's the price you pay if you don't want to affect the hue and saturation. Is that worth it for the images you're correcting? That depends on your application: are you OK with a hit in performance in exchange for the best quality? You will likely only have to deal with 8-bit color components; you can assume "24-bit" for images, but that's 3 x 8-bit components. The only way to know for sure, though, is to try.
I recommend using the YUV color space, both for accuracy and for computational simplicity (it is a linear combination of RGB).
One method would be to apply the histogram equalization to the RGB image (call the result Image2).
Then let the user choose what they want: apply it only to luminance, or to all 3 channels (a sketch of the luminance-only path follows below).
For the first choice, take the U and V channels of the original image together with the Y channel of the equalized image, and convert back to RGB.
For the second choice, just give the user Image2.
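A minimal sketch of the luminance-only path (plain C# over an interleaved 8-bit RGB buffer with BT.601 YUV weights; not iOS-specific code, and the buffer layout is an assumption):

using System;

static class Equalize {
    // Equalize only the luma (Y) channel, leaving chroma (U/V) untouched.
    public static void EqualizeLuma(byte[] rgb)
    {
        int n = rgb.Length / 3;
        var y = new byte[n];
        var u = new double[n];
        var v = new double[n];

        // RGB -> YUV (BT.601 weights)
        for (int i = 0; i < n; i++) {
            double r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
            y[i] = (byte)Math.Round(0.299 * r + 0.587 * g + 0.114 * b);
            u[i] = -0.14713 * r - 0.28886 * g + 0.436 * b;
            v[i] = 0.615 * r - 0.51499 * g - 0.10001 * b;
        }

        // Histogram and cumulative distribution of Y
        var hist = new int[256];
        foreach (byte luma in y) hist[luma]++;
        var cdf = new int[256];
        for (int k = 0, sum = 0; k < 256; k++) { sum += hist[k]; cdf[k] = sum; }
        int cdfMin = Array.Find(cdf, c => c > 0);

        // Standard equalization mapping
        var map = new byte[256];
        for (int k = 0; k < 256; k++)
            map[k] = (byte)Math.Round(255.0 * (cdf[k] - cdfMin) / Math.Max(1, n - cdfMin));

        // Equalized Y + original U/V -> RGB
        for (int i = 0; i < n; i++) {
            double yy = map[y[i]];
            rgb[3 * i]     = (byte)Math.Clamp(Math.Round(yy + 1.13983 * v[i]), 0, 255);
            rgb[3 * i + 1] = (byte)Math.Clamp(Math.Round(yy - 0.39465 * u[i] - 0.58060 * v[i]), 0, 255);
            rgb[3 * i + 2] = (byte)Math.Clamp(Math.Round(yy + 2.03211 * u[i]), 0, 255);
        }
    }
}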
Since, after the transformation, you are dealing with I/V (intensity or value) as continuous quantities, you will have to apply some binning strategy, which results in a stepped histogram for the quantity you wish to equalize. You might therefore be able to speed this up by reducing the number of bins.
Just write the code applying HE to each of the RGB components. Although there is more calculation across the 3 components, the speed is OK. In most cases the contrast is improved, but the "look" of the image is changed, so I agree with transforming RGB into another space and then applying the HE. I am still looking for the formula and the right color space for the HE. Which color space is easiest?
I wrote the HE on the iPad platform, but I found that after opening a big image taken with my Canon, the whole program crashes after the UIPopoverController and UIImagePickerController calls. I think it may be because I am pushing too hard on the phone's OS, or because the OS allocates only a limited amount of memory for each app; if an app uses more than its preset quota, iOS kills it right away. So you must take care of the size of the input image, release unused memory, and watch for memory leaks. Using Xcode's Instruments tool to check for leaks is a must.

How do I turn an image into a vector on the iPhone like Adobe Illustrator's Live Trace?

I want to be able to create vector files like Illustrator does on the iPhone. Does anyone know of an algorithm?
For each pixel, try to grow a region by testing its neighbours for colour similarity against a threshold. Keep growing until no more expansion is possible because of the threshold; then make a path using the outermost border pixels. Now repeat for the other pixels in the original raster image that were not already included in your previous expansions.
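A minimal sketch of that region-growing step (the types and names are illustrative; pixels are assumed to be packed 0xRRGGBB ints, and a real tracer would follow this with border extraction, e.g. marching squares, to get the vector path):

using System;
using System.Collections.Generic;

static class RegionGrow {
    // Flood-fill from a seed pixel, accepting 4-connected neighbours whose
    // colour is within a threshold of the seed colour.
    public static List<(int x, int y)> Grow(int[,] pixels, int seedX, int seedY, int threshold)
    {
        int w = pixels.GetLength(0), h = pixels.GetLength(1);
        int seed = pixels[seedX, seedY];
        var region = new List<(int x, int y)>();
        var visited = new bool[w, h];
        var stack = new Stack<(int, int)>();
        stack.Push((seedX, seedY));
        visited[seedX, seedY] = true;

        while (stack.Count > 0) {
            var (x, y) = stack.Pop();
            if (Distance(pixels[x, y], seed) > threshold) continue; // outside region
            region.Add((x, y));
            foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) }) {
                if (nx >= 0 && nx < w && ny >= 0 && ny < h && !visited[nx, ny]) {
                    visited[nx, ny] = true;
                    stack.Push((nx, ny));
                }
            }
        }
        return region;
    }

    // Simple per-channel colour distance; a perceptual metric would do better.
    static int Distance(int c1, int c2)
    {
        int dr = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
        int dg = ((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF);
        int db = (c1 & 0xFF) - (c2 & 0xFF);
        return Math.Abs(dr) + Math.Abs(dg) + Math.Abs(db);
    }
}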