BMP image header - biXPelsPerMeter - bmp

I have read a lot about the BMP file format structure, but I still can't work out the real meaning of the fields biXPelsPerMeter and biYPelsPerMeter. I mean in a practical way: how are they used, or how can they be utilized? Any example or experience? Thanks a lot.

biXPelsPerMeter
Specifies the horizontal print resolution, in pixels per meter, of the target device for the bitmap.
biYPelsPerMeter
Specifies the vertical print resolution, in pixels per meter, of the target device for the bitmap.
It's not very important. You can leave them at 2835; it's not going to ruin the image.
(72 DPI × 39.3701 inches per meter yields 2834.6472, which rounds to 2835.)
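For illustration, here is a minimal sketch of filling in those two header fields from a DPI value; the helper name dpi_to_pels_per_meter and the BITMAPINFOHEADER-style struct usage are ours, not part of any standard API:

#include <stdint.h>
#include <math.h>

/* Convert dots-per-inch to the pixels-per-meter unit that the
   BITMAPINFOHEADER fields expect (1 meter = 39.3701 inches). */
static int32_t dpi_to_pels_per_meter(double dpi)
{
    return (int32_t)lround(dpi * 39.3701);  /* 72 dpi -> 2835 */
}

/* Usage, assuming a BITMAPINFOHEADER-compatible struct named bih:
   bih.biXPelsPerMeter = dpi_to_pels_per_meter(72.0);
   bih.biYPelsPerMeter = dpi_to_pels_per_meter(72.0);  */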

Think of it this way: the image bits within the BMP structure define the shape of the image with that much data (that much information describes the image), but that information must then be translated to a target device using a measuring system that indicates its applied resolution in practical use.
For example, if the BMP is 10,000 pixels wide and 4,000 pixels high, that tells you how much raw detail exists within the image bits. However, that image information must then be applied to some target; the DPI relationship between the image and its target gives the applied resolution.
If it were printed at 1000 dpi, it's only going to give you a 10" × 4" image, but one with extremely high detail to the naked eye (more pixels per square inch). By contrast, if it's printed at only 100 dpi, you'll get a 100" × 40" image with low detail (fewer pixels per square inch), yet both contain the same overall number of bits. You can actually scale an image without scaling any of its internal image data merely by changing the dpi to non-standard values.
Also, using 72 dpi is a throwback to old printing conventions (https://en.wikipedia.org/wiki/Twip) which are not really relevant going forward (except to maintain compatibility with standards), as modern hardware devices often use other values for their fundamental relationship to image data. For video screens, for example, Macs use 72 dpi as the default, while Windows uses 96 dpi. In theory you can set it to whatever you want, but be warned that not all software honors the internal settings and will instead assume a particular size. This can affect the way images are scaled within an app, even though the actual image data within hasn't changed.

Related

How to overwrite part of a png?

Given a PNG image and a set of data to write to it, is it possible to overwrite pixels in the existing PNG in a particular area of interest? For example, if I have a block of data in a rectangle between pixels (0,0) and (5,10), would it be possible to write this data as a block into a 10×10 PNG without any concern for the area not being overwritten? My use case is that I have map tiles where half the data will be in one tile and half in the other, with the blank pixels being white squares. I would like to combine them by simply writing the non-white pixels directly into the existing PNG as a block, without having to open, combine, and then re-write the entire PNG. Does the structure of a PNG allow this?
I'm loath to claim that this is impossible, but it is certainly complicated.
First of all, pixels of a PNG are (sometimes) interlaced, so you'd have to calculate the locations of your target pixels based on the Adam7 scheme.
Furthermore, each row is independently filtered, so you'd have to transform each row of your source using the filter of the target row. Depending on the filter, you'd also have to adjust additional bytes on the border of the updated target bytes. Straight from the horse's mouth:
Though the concept is simple, there are quite a few subtleties in the actual mechanics of filtering.
Finally, all the filtered bytes are compressed using a generic compression algorithm called "deflate." Unless you want to decompress the whole thing beforehand, you need to make sure both that (1) your source data can be properly decoded and (2) the bytes near the border of the target bytes are properly compressed in the context of their new neighbors.
I'm not a compression expert, so I won't argue in more detail. One piece of good news is that the algorithm preserves independence between distant regions thanks to its sliding-window scheme: data are only compressed with reference to a preceding window of at most 32,768 bytes.
If this seems at all easy to you, give it a try. If you're like me, though, you'll just decode the whole thing, overwrite the pixels as bitmap data, and encode the result.
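For completeness, here is a rough sketch of that decode–overwrite–encode route using libpng's simplified API (available since libpng 1.6). The function name, the RGBA assumption, and the block-copy layout are ours; error handling is elided:

#include <png.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: decode a PNG to RGBA, stamp a bw-by-bh block of RGBA pixels
   into it at (bx, by), and write the result back out. */
int merge_block(const char *in_path, const char *out_path,
                const uint8_t *block, int bx, int by, int bw, int bh)
{
    png_image image;
    memset(&image, 0, sizeof image);
    image.version = PNG_IMAGE_VERSION;

    png_image_begin_read_from_file(&image, in_path);
    image.format = PNG_FORMAT_RGBA;              /* force 4 bytes/pixel */
    uint8_t *pixels = malloc(PNG_IMAGE_SIZE(image));
    png_image_finish_read(&image, NULL, pixels, 0, NULL);

    for (int y = 0; y < bh; y++)                 /* copy the block in */
        memcpy(pixels + ((size_t)(by + y) * image.width + bx) * 4,
               block + (size_t)y * bw * 4, (size_t)bw * 4);

    png_image_write_to_file(&image, out_path, 0, pixels, 0, NULL);
    free(pixels);
    png_image_free(&image);
    return 0;
}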
This is practically impossible, because the pixel data (after a row-by-row "filtering" step) is compressed with zlib, and it's practically impossible to change part of a compressed stream.

How does one embed a file inside of an image? iOS iPhone

There is an app on the App Store called Active Photo (http://itunes.apple.com/us/app/active-photo/id366798464?mt=8) that allows you to embed a hidden image or .exe file inside of an image. I would like to know how to do this with regard to adding images to images, kind of like sub-images in the original image.
I've been looking into metadata, but no tag seems to be big enough to hold an NSData representation of the second picture.
How would one go about adding any type of file to an image, either through embedding or metadata, so that the image could be sent through email and/or text message and still retain the data?
Thank you.
This is known as steganography.
I would imagine the simplest way of hiding a file inside a JPEG image is just to alter its pixel data in such a way that the compression doesn't damage it but is subtle enough that an interceptor can't detect the hidden data.
I don't think it is possible with JPEG, because it uses lossy compression, so you would end up corrupting the embedded file. But PNG uses DEFLATE compression, which is lossless.
I have started writing a program like this. The idea is to hide bytes of data by splitting them into the least significant bits of the pixels' color channels. Let me give some examples.
An RGB-8 image represents a pixel with 3 bytes: one for red, one for green, and one for blue. I store 3 bits in the red channel, 2 in the green (the human eye is more sensitive to green), and 3 in the blue, so I embed one byte per pixel. Similarly, with an RGBA-8 image I do a 2-2-2-2 split. This of course involves some bitwise operations, as sketched below.
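A minimal sketch of that 3-2-3 split for one byte into one RGB-8 pixel; the function names are ours:

#include <stdint.h>

/* Hide one byte in one RGB-8 pixel: 3 bits in red,
   2 in green, 3 in blue (the 3-2-3 split). */
static void embed_byte(uint8_t *r, uint8_t *g, uint8_t *b, uint8_t data)
{
    *r = (uint8_t)((*r & ~0x07) | (data >> 5));          /* top 3 bits   */
    *g = (uint8_t)((*g & ~0x03) | ((data >> 3) & 0x03)); /* middle 2 bits */
    *b = (uint8_t)((*b & ~0x07) | (data & 0x07));        /* bottom 3 bits */
}

/* Recover the byte from the same pixel. */
static uint8_t extract_byte(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint8_t)(((r & 0x07) << 5) | ((g & 0x03) << 3) | (b & 0x07));
}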
Things become more interesting with RGB(A)-16 images, where there are two bytes per channel. I use the entire least significant byte of every channel with minimal distortion (worst case 255 / 65535 ≈ 0.39%) and store 3 (RGB) or 4 (RGBA) bytes of data per pixel. Not bad!
Moreover, there are no complex bitwise operations in this case; a single assignment does the job, as the sketch below shows.
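A sketch, assuming the 16-bit image is laid out as an array of uint16_t channel values in host byte order:

#include <stdint.h>

/* With 16-bit channels the whole low byte is ours: overwrite it
   with a payload byte in a single assignment per channel. */
static void embed_byte16(uint16_t *channel, uint8_t data)
{
    *channel = (uint16_t)((*channel & 0xFF00) | data);
}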
There is a lot of room for improvement. I thought of asking the user for a password, hashing it, and seeding a secure pseudo-random number generator; then, instead of moving pixel by pixel, I'd ask the generator for a new random index each time.
The drawback of this solution is that the more data has already been embedded, the slower it becomes, because the generator will return more and more already-occupied indices. But it is much more secure this way. To make it even safer, I thought of writing noise data into the untouched pixels, in order to hide the positions of the true data. A sketch of the index-picking loop follows.
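A sketch of that password-seeded index selection, using a toy xorshift generator and an occupancy array. Everything here is illustrative; in practice you would seed from a real password hash (e.g. SHA-256) and use a cryptographically secure generator:

#include <stdint.h>

static uint64_t state;  /* seed from the password hash; must be nonzero */

static uint64_t xorshift64(void)
{
    state ^= state << 13;
    state ^= state >> 7;
    state ^= state << 17;
    return state;
}

/* Pick the next unused pixel index; this gets slower as the image
   fills up, which is exactly the drawback described above. */
static uint32_t next_free_index(uint8_t *used, uint32_t n_pixels)
{
    uint32_t i;
    do {
        i = (uint32_t)(xorshift64() % n_pixels);
    } while (used[i]);
    used[i] = 1;
    return i;
}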
As you can see, you can do a lot with PNG images! If you are interested, I can share the code I've written so far.

CGImageGetBytesPerRow() returns different values on iOS simulator and iOS Device

I have an image taken by my iPod touch 4 that's 720×960. In the simulator, calling CGImageGetBytesPerRow() on the image returns 2880 (720 × 4 bytes), which is what I expected. However, on the device CGImageGetBytesPerRow() returns 3840, which looks like "bytes per row" computed along the height (960 × 4). Does anyone know why the behavior differs, even though the image I'm calling CGImageGetBytesPerRow() on has a width of 720 and a height of 960 in both cases?
Thanks in advance.
Bytes per row can be anything as long as it is sufficient to hold the image bounds, so it's best not to assume it will be the minimum needed to fit the image.
I would guess that on the device, bytes per row is dictated by some or other optimisation or hardware consideration: perhaps an image buffer that does not have to be changed if the orientation is rotated, or the image sensor transfers extra bytes of dead data per row that are then ignored instead of doing a second transfer into a buffer with minimum bytes per row, or some other reason that would only make sense if we knew the inner workings of these devices.
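The defensive approach is to always index with the stride that CGImageGetBytesPerRow() reports rather than assuming width × 4. A sketch, assuming a 32-bit RGBA pixel format (the function name is ours):

#include <CoreGraphics/CoreGraphics.h>
#include <string.h>

/* Read the pixel at (x, y), honoring the image's actual row stride. */
static void read_pixel(CGImageRef image, size_t x, size_t y, uint8_t out[4])
{
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    const uint8_t *bytes = CFDataGetBytePtr(data);
    size_t stride = CGImageGetBytesPerRow(image);  /* may exceed width*4 */

    memcpy(out, bytes + y * stride + x * 4, 4);    /* assumes 32-bit pixels */
    CFRelease(data);
}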
It may be slightly different because of internal memory allocation; the docs define it as "The number of bytes used in memory for each row of the specified bitmap image (or image mask)."
Consider using NSBitmapImageRep for some special tasks.

Image editing using iphone

I'm creating an image editing application for iPhone. I would like to enable the user to pick an image from the photo library, edit it (grayscale, sepia, etc.) and, if possible, save it back to the filesystem. I've done the image picking (the simplest part, as you know, using the image picker) and also the grayscale conversion. But I'm stuck on sepia and don't know how to implement it. Is it possible to get the value of each pixel of the image so that we can vary it to get the desired effects? Or are there other possible methods? Please help.
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM until the last moment when a user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but it might even be as simple as this to trigger it:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    /* Copies the image's backing pixel data; the caller must release it. */
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
const char* pixels = [[((NSData*)CopyImagePixels([myImage CGImage]))
autorelease] bytes]; /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory usage point of view, run Instruments.
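Since the question specifically asks about sepia: once you have the pixels in a writable buffer (e.g. by drawing the image into a bitmap context created with CGBitmapContextCreate), a sepia tone is just a per-pixel matrix multiply. A sketch using the commonly quoted sepia coefficients; the function name and the RGBA layout are assumptions:

#include <stddef.h>
#include <stdint.h>

/* Apply a sepia tone in place to a width*height RGBA-8 buffer. */
static void apply_sepia(uint8_t *rgba, size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = rgba + i * 4;
        double r = p[0], g = p[1], b = p[2];
        double nr = 0.393 * r + 0.769 * g + 0.189 * b;
        double ng = 0.349 * r + 0.686 * g + 0.168 * b;
        double nb = 0.272 * r + 0.534 * g + 0.131 * b;
        p[0] = nr > 255.0 ? 255 : (uint8_t)nr;   /* clamp to 0..255 */
        p[1] = ng > 255.0 ? 255 : (uint8_t)ng;
        p[2] = nb > 255.0 ? 255 : (uint8_t)nb;   /* alpha untouched */
    }
}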
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size, possibly larger amounts than are available to the application.
(d) It may be possible to use an alternate compression algorithm with (if necessary) its malloc rewired to use temporary memory mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024 pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024 pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of limitations on available RAM, you then have to make a lower-res copy by smooth-scaling it down yourself, or copy it into a modifiable buffer; if the image is large, you may need to write it in bands to a temporary file, release the unmodifiable data block, and load the data back from the file. And only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
EDIT 3
Here's perhaps a better idea: try using a destination bitmap context backed by a memory-mapped CFData block. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the mmap BSD APIs (sketched below, after EDIT 5).
EDIT 5
Added "const char*" and "pixels read-only" comment to code.

Get dpi settings via GTK

Using GTK, how do I query the current screen's dpi settings?
The currently accepted answer is for PHP-GTK, which feels a bit odd to me. The pure GDK library has this call: gdk_screen_get_resolution(), which sounds like a better match. I haven't worked with it myself, so I don't know if it's generally reliable.
Note that the height and width returned for the screen include the full multi-monitor size (i.e. the combined width and height of the display buffer used to render a multi-monitor setup). I haven't checked whether the mm (millimeter width/height) calls return the actual physical sizes, but if they report combined physical sizes, then a DPI computed by dividing one by the other would be meaningless for, e.g., drawing a box on screen that can be measured with a physical ruler.
See GdkScreen. You should be able to compute it using gdk_screen_get_height() and gdk_screen_get_height_mm(), or gdk_screen_get_width() and gdk_screen_get_width_mm(), as sketched below.
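A sketch of both routes (GTK 3 naming; note these GdkScreen size functions were deprecated in later GTK 3 releases, and the multi-monitor caveat above applies to the second route):

#include <gtk/gtk.h>

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);
    GdkScreen *screen = gdk_screen_get_default();

    /* Route 1: the logical resolution set by the desktop (font DPI). */
    gdouble dpi = gdk_screen_get_resolution(screen);  /* -1 if unset */

    /* Route 2: derive physical DPI from pixel and millimeter sizes
       (25.4 mm per inch). */
    gdouble dpi_y = 25.4 * gdk_screen_get_height(screen)
                         / gdk_screen_get_height_mm(screen);

    g_print("logical: %.1f, physical vertical: %.1f\n", dpi, dpi_y);
    return 0;
}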