How does one embed a file inside of an image? (iOS/iPhone)

There is an app on the App Store called Active Photo (http://itunes.apple.com/us/app/active-photo/id366798464?mt=8) that allows you to embed a hidden image or .exe file inside of an image. I would like to know how to do this with respect to adding images to images, rather like sub-images inside the original image.
I've been looking into metadata but no tag seems to be big enough to hold an NSData representation of the second picture.
How would one go about adding any type of file to an image, either through embedding or metadata, that would allow the image to be sent through email and/or text message and still retain the data?
Thank you.

This is known as steganography.
I would imagine the simplest way of hiding a file inside a JPEG image is to alter its pixel data in such a way that the compression doesn't damage it, but subtly enough that an interceptor can't detect the hidden data.

I don't think it is possible with JPEG, because it uses lossy compression, so you would end up corrupting the embedded file. But PNG uses DEFLATE compression (via zlib), which is lossless.
I have started writing a program like this. The idea is to hide bytes of data by splitting them across the least significant bits of the pixels' color channels. Let me give some examples.
An RGB-8 image represents a pixel with 3 bytes: one for red, one for green, and one for blue. I store three bits in the red channel, two in the green (the human eye is more sensitive to green), and three in the blue, so I embed one byte per pixel. Similarly, with an RGBA-8 image I do a 2-2-2-2 split. This of course involves some bitwise operations.
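Roughly, the 3-2-3 split looks like this (a Dart-style sketch of the idea, assuming plain 0-255 channel ints; not my exact code):
// Embed one data byte into the low bits of an RGB-8 pixel:
// 3 bits in red, 2 in green, 3 in blue.
int embedByte(int r, int g, int b, int data) {
  final r2 = (r & 0xF8) | ((data >> 5) & 0x07); // top 3 bits of data
  final g2 = (g & 0xFC) | ((data >> 3) & 0x03); // middle 2 bits
  final b2 = (b & 0xF8) | (data & 0x07);        // low 3 bits
  return (r2 << 16) | (g2 << 8) | b2;           // repacked 0xRRGGBB
}

// Extraction reverses the split.
int extractByte(int r, int g, int b) =>
    ((r & 0x07) << 5) | ((g & 0x03) << 3) | (b & 0x07);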
Things become more interesting with RGB(A)-16 images, where there are two bytes per channel. I use the entire least significant byte of every channel with minimal distortion (worst case 255 / 65535 ≈ 0.4%) and store up to 3 or 4 bytes of data per pixel. Not bad!
Moreover, there are no complex bitwise operations in this case; a single assignment does the job.
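As a sketch, that single assignment per 16-bit channel is just:
// RGB-16: overwrite the least significant byte of one channel.
int embedLowByte(int channel16, int dataByte) =>
    (channel16 & 0xFF00) | (dataByte & 0xFF);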
There is a lot of room for improvement. My idea was to ask the user for a password, hash it, and use the hash to seed a secure pseudo-random number generator; then, instead of moving pixel by pixel, ask the generator for a new random index each time.
The drawback of this approach is that the more data has already been embedded, the slower it becomes, because the generator returns more and more already-occupied indices. But it is much more secure this way. To make it even safer, I thought of writing noise into the untouched pixels, in order to hide the positions of the true data.
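A sketch of that index selection (Dart; Random and the plain integer seed are stand-ins for a real CSPRNG seeded from a password hash or KDF):
import 'dart:math';

// Yield pseudo-random, non-repeating pixel indices from a shared seed;
// embedder and extractor derive the same sequence from the same password.
Iterable<int> pixelIndices(int seed, int pixelCount) sync* {
  final rng = Random(seed);
  final used = <int>{};
  while (used.length < pixelCount) {
    final i = rng.nextInt(pixelCount);
    if (used.add(i)) yield i; // redraw when the index is already occupied
  }
}
The redraw loop is exactly the slowdown described above: the fuller the set gets, the more draws are rejected.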
As you can see, you can do a lot with PNG images! If you are interested, I can share the code I have written so far.

Related

Decoding Keyence LJ-X8000 Bitmap-Height Image

I have a Keyence Line Laser System LJ-X8000 that I use to scan the surface of different objects.
The controller saves the height information as a bitmap, with each pixel representing one height value. After a lot of tinkering, I found out that Keyence is not using the actual colors, but rather uses the 24-bit RGB triplets as a form of binary storage. However, no combination of these bytes seemed to work for me. Are there any common storage methods for 24-bit integers?
To decode those values, I did a scan covering the whole measurement range of the scanner, including some out-of-range values at the beginning and the end. If you look at the distribution of the values in each color plane, you can see that the first and third planes only use values up to 8/16, which means only 3/4 bits. This is also visible in the image itself, as it mainly shows a green color.
I concluded that Keyence uses the full byte of the green color plane, 3 bits of the first plane, and 4 bits of the last plane to store the height information. Keyence seems to have chosen a rather unusual 15-bit integer format to store its data.
With a little bit-shifting, and knowing that the scanner has a valid range of [-2.2, 2.2], I was able to build the following simple little Matlab script to calculate the height information for each pixel:
HeightValBin = bitshift(scanIm(:,:,2), 7, 'uint16') ...
    + bitshift(scanIm(:,:,1), 4, 'uint16') ...
    + bitshift(scanIm(:,:,3), 0, 'uint16');
scanBinValScaled = interp1([0, 2^15], [-2.2, 2.2], double(HeightValBin));
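For anyone decoding these outside Matlab, the same packing written out as a sketch in Dart (the interp1 call is just a linear map from [0, 2^15] to [-2.2, 2.2]):
// 15-bit height value from one RGB triplet: g contributes the full byte,
// r contributes 3 bits, b contributes 4 bits.
int heightBits(int r, int g, int b) =>
    (g << 7) | ((r & 0x07) << 4) | (b & 0x0F);

// Linear map [0, 32768] -> [-2.2, 2.2], same as the interp1 call.
double heightOf(int bits) => -2.2 + 4.4 * bits / 32768;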
Keyence offers software to convert those .bmp files into .csv files, but without an API to automate the process. As I will have to deal with a lot of these files, I needed to automate it.
The values calculated from the RGB triplets are actually more precise than the exported CSV, as the CSV only shows 4 digits after the decimal point.

Can Flutter render images from raw pixel data? [duplicate]

Setup
I am using a custom RenderBox to draw.
The canvas object in the code below comes from the PaintingContext in the paint method.
Drawing
I am trying to render pixels individually by using Canvas.drawRect.
I should point out that these rects are sometimes larger and sometimes smaller than the actual screen pixels they occupy.
for (int i = 0; i < width * height; i++) {
  // in this case the rect size is 1
  canvas.drawRect(
      Rect.fromLTWH((i % width).toDouble(), (i ~/ width).toDouble(), 1, 1),
      Paint()..color = colors[i]);
}
Storage
I am storing the pixels as a List<List<Color>> (colors in the code above). I previously tried differently nested lists, but they did not cause any noticeable differences in performance.
The memory on my Android Emulator test device increases by 282.7MB when populating the list with a 999x999 image. Note that it only temporarily increases by 282.7MB. After about half a minute, the increase drops to 153.6MB and stays there (without any user interaction).
Rendering
With a resolution of 999x999, the code above causes a GPU max of 250.1 ms/frame and a UI max of 1835.9 ms/frame, which is obviously unacceptable. The UI freezes for two seconds when trying to draw a 999x999 image, which should be a piece of cake (I would guess) considering that 4k video runs smoothly on the same device.
CPU
I am not exactly sure how to track this properly using the Android profiler, but while populating or changing the list, i.e. drawing the pixels (which is the case for the above metrics as well), CPU usage goes from 0% up to 60%.
Cause
I have no idea where to start since I am not even sure what part of my code causes the freezing. Is it the memory usage? Or the drawing itself?
How would I go about this in general? What am I doing wrong? How should I store these pixels instead?
Efforts
I have tried so many things that did not help at all that I will only point out the most notable ones:
I tried converting the List<List<Color>> to an Image from the dart:ui library, hoping to use Canvas.drawImage. In order to do that, I tried encoding my own PNG, but I have not been able to render more than a single row. However, it did not look like that would boost performance. When trying to convert a 9999x9999 image, I ran into an out-of-memory exception. Now I am wondering how video is rendered at all, as a few seconds of 4k video will easily take up more memory than a 9999x9999 image.
I tried implementing the image package. However, I stopped before completing it as I noticed that it is not meant to be used in Flutter but rather in HTML. I would not have gained anything using that.
This one is pretty important for the conclusion I will draw: I tried to just draw without storing the pixels, i.e. using Random.nextInt to generate random colors. When trying to randomly generate a 999x999 image, this resulted in a GPU max of 1824.7 ms/frame and a UI max of 2362.7 ms/frame, which is even worse, especially in the GPU department.
Conclusion
This is the conclusion I reached before trying my failed attempt at rendering using Canvas.drawImage: Canvas.drawRect is not made for this task as it cannot even draw simple images.
How do you do this in Flutter?
Notes
This is basically what I tried to ask over two months ago (yes, I have been trying to resolve this issue for that long), but I think that I did not express myself properly back then and that I knew even less what the actual problem was.
The highest resolution I can properly render is around 10k pixels; I need at least 1M.
I am thinking that abandoning Flutter and going native might be my only option. However, I would like to believe that I am just approaching this problem completely wrong. I have spent about three months trying to figure this out and have not found anything that led me anywhere.
Solution
dart:ui has a function that converts pixels to an Image easily: decodeImageFromPixels
(See the linked example implementation and the linked issue on its performance; note that it does not work in the current master channel.)
I was simply not aware of this back when I created this answer, which is why I wrote the "Alternative" section.
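A minimal sketch of calling it (assuming a flat RGBA byte buffer; the Completer just adapts the callback-style API to a Future):
import 'dart:async';
import 'dart:typed_data';
import 'dart:ui' as ui;

// Convert a raw RGBA buffer into a ui.Image without any PNG/BMP encoding.
Future<ui.Image> imageFromPixels(Uint8List rgba, int width, int height) {
  final completer = Completer<ui.Image>();
  ui.decodeImageFromPixels(
      rgba, width, height, ui.PixelFormat.rgba8888, completer.complete);
  return completer.future;
}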
Alternative
Thanks to @pslink for reminding me of BMP after I wrote that I had failed to encode my own PNG.
I had looked into it previously, but I thought it looked too complicated and had insufficient documentation. Then I found a nice article explaining the necessary BMP headers and implemented 32-bit BGRA (ARGB, but BGRA is the order of the default mask) by copying Example 2 from the "BMP file format" Wikipedia article. I went through all its sources but could not find an original source for this example; maybe the authors of the Wikipedia article wrote it themselves.
Results
Using Canvas.drawImage and my 999x999 pixels converted to an image from a BMP byte list, I get a GPU max of 9.9 ms/frame and a UI max of 7.1 ms/frame, which is awesome!
| ms/frame | Before (Canvas.drawRect) | After (Canvas.drawImage) |
|-----------|---------------------------|--------------------------|
| GPU max | 1824.7 | 9.9 |
| UI max | 2362.7 | 7.1 |
Conclusion
Canvas operations like Canvas.drawRect are not meant to be used like that.
Instructions
First off, this is quite straightforward; however, you need to populate the byte list correctly, otherwise you will get an error that your data is not correctly formatted and see no results, which can be quite frustrating.
You will need to prepare your image before drawing, as you cannot use async operations in the paint call.
In code, you need to use a Codec to transform your list of bytes into an image.
final list = [
0x42, 0x4d, // 'B', 'M'
...];
// make sure that you either know the file size, data size and data offset beforehand
// or that you edit these bytes afterwards
final Uint8List bytes = Uint8List.fromList(list);
final Codec codec = await instantiateImageCodec(bytes);
final Image image = (await codec.getNextFrame()).image;
You need to pass this image to your drawing widget, e.g. using a FutureBuilder.
Now, you can just use Canvas.drawImage in your draw call.
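A rough sketch of those last two steps (assuming the usual Flutter imports; PixelPainter and imageFuture are illustrative names, and I use a CustomPainter here rather than a raw RenderBox for brevity):
// Hypothetical painter: draws a prepared ui.Image in paint().
class PixelPainter extends CustomPainter {
  PixelPainter(this.image);
  final ui.Image image; // prepared beforehand, e.g. via the Codec above

  @override
  void paint(Canvas canvas, Size size) =>
      canvas.drawImage(image, Offset.zero, Paint());

  @override
  bool shouldRepaint(PixelPainter oldDelegate) => oldDelegate.image != image;
}

// In the widget tree, resolve the Future before painting:
FutureBuilder<ui.Image>(
  future: imageFuture,
  builder: (context, snapshot) => snapshot.hasData
      ? CustomPaint(painter: PixelPainter(snapshot.data!))
      : const SizedBox.shrink(),
)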

How to overwrite part of a PNG?

Given a PNG image and a set of data to write to it, is it possible to overwrite pixels in the existing PNG in a particular area of interest? For example, if I have a block of data in a rectangle between pixels (0,0) and (5,10), would it be possible to write this data as a block into a 10x10 PNG without any concern for the area not being overwritten? My use case is map tiles where half the data will be in one tile and half in the other, with the blank pixels being white squares. I would like to combine them by simply writing the non-white pixels directly into the existing PNG as a block, without having to open, combine, and then re-write the entire PNG. Does the structure of a PNG allow this?
I'm loath to claim that this is impossible, but it is certainly complicated.
First of all, pixels of a PNG are (sometimes) interlaced, so you'd have to calculate the locations of your target pixels based on the Adam7 scheme.
Furthermore each row is independently filtered, so you'd have to transform each row of your source using the filter of the target row. Depending on the filter you'd also have to adjust additional bytes on the border of the updated target bytes. Straight from the horse's mouth:
Though the concept is simple, there are quite a few subtleties in the actual mechanics of filtering.
Finally, all the filtered bytes are compressed using a generic compression algorithm called "deflate." Unless you want to decompress the whole thing beforehand, you need to make sure both that (1) your source data can be properly decoded and (2) the bytes near the border of the target bytes are properly compressed in the context of their new neighbors.
I'm not a compression expert, so I won't argue in more detail. One piece of good news is that the algorithm preserves independence between distant regions due to its sliding-window scheme: data are compressed only with reference to a preceding window of at most 32 KB.
If this seems at all easy to you, give it a try. If you're like me, though, you'll just decode the whole thing, overwrite the pixels as bitmap data, and encode the result.
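If you take the decode-overwrite-encode route in Dart, a hedged sketch using the third-party package:image (decodePng / setPixelRgba / encodePng; treat the exact API as an assumption, as it has changed between major versions):
import 'dart:io';
import 'package:image/image.dart' as img;

// Decode the whole PNG, overwrite one rectangle of pixels, re-encode.
void overwriteBlock(
    String path, int x0, int y0, int w, int h, int Function(int, int) rgbaAt) {
  final image = img.decodePng(File(path).readAsBytesSync())!;
  for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
      final p = rgbaAt(x, y); // 0xRRGGBBAA for the new pixel
      image.setPixelRgba(x0 + x, y0 + y, (p >> 24) & 0xFF, (p >> 16) & 0xFF,
          (p >> 8) & 0xFF, p & 0xFF);
    }
  }
  File(path).writeAsBytesSync(img.encodePng(image));
}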
This is practically impossible, because the pixel data (after row-by-row "filtering") is compressed with zlib, and it is practically impossible to change part of a compressed stream.

Histogram equalization with color correction (iPhone/objective-C)

I am trying to implement a histogram equalization method (HE) for a UIImage in my iPhone app.
I read the following:
http://en.wikipedia.org/wiki/Histogram_equalization
But it says:
Still, it should be noted that applying the same method on the Red, Green, and Blue components of an RGB image may yield dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm. However, if the image is first converted to another color space, Lab color space, or HSL/HSV color space in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.
So would this be a feasible approach?
Grab UIImage data and convert from RGB to HSL
Apply HE on luminance channel
convert data back to RGB
Create new UIImage from data
Will this be slow, I wonder? Also, will I have to deal with 8/16/24-bit data differently? I have no idea what kind of image will be used with my app. Or can I assume 24-bit for images on the iPhone?
I would appreciate any pointers to objective-C code that does color corrected histogram equalization.
I have looked at the library below, but it does not do any color correction for HE:
http://code.google.com/p/simple-iphone-image-processing/source/browse/#svn/trunk/Classes%3Fstate%3Dclosed
Thanks!
Yes, you can do it this way; that will work. It will "cost more" since you have to do the conversion back and forth, but that's the price you will have to pay if you don't want to affect the hue and saturation. Whether it is worth it for the images you're correcting depends on your application: are you OK with a hit in performance versus best quality? You will likely only have to deal with 8-bit color components; you can assume "24-bit" for images, but that's 3 x 8-bit components. The only way to know for sure, though, is to try.
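For the equalization step itself, here is a common simplified form on an 8-bit luminance plane (a Dart sketch rather than Objective-C, but the CDF remap translates directly):
import 'dart:typed_data';

// Histogram-equalize an 8-bit luminance plane in place.
void equalizeLuma(Uint8List luma) {
  final hist = List<int>.filled(256, 0);
  for (final v in luma) {
    hist[v]++;
  }
  // Build the cumulative distribution function.
  final cdf = List<int>.filled(256, 0);
  var running = 0;
  for (var i = 0; i < 256; i++) {
    running += hist[i];
    cdf[i] = running;
  }
  // Remap each value through the normalized CDF.
  for (var i = 0; i < luma.length; i++) {
    luma[i] = (cdf[luma[i]] * 255) ~/ luma.length;
  }
}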
I recommend using the YUV color space, both for accuracy and for computational simplicity (the conversion is a linear combination).
One method would be to apply histogram equalization to the RGB image (call the result Image2).
Then let the user choose what they want: apply it only to the luminosity, or to all 3 channels.
For the first choice, take the U and V channels of the original image together with the Y channel of the equalized image and convert back to RGB.
For the second choice, just give the user Image2.
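A per-pixel sketch of that recombination (Dart; standard BT.601 YUV constants, channel values assumed 0-255):
// Luma of an RGB pixel (BT.601).
double lumaOf(int r, int g, int b) => 0.299 * r + 0.587 * g + 0.114 * b;

// Keep the original pixel's chroma (U, V) but swap in the equalized luma.
List<int> withNewLuma(int r, int g, int b, double newY) {
  final y = lumaOf(r, g, b);
  final u = 0.492 * (b - y);
  final v = 0.877 * (r - y);
  int clampByte(double x) => x.round().clamp(0, 255).toInt();
  return [
    clampByte(newY + 1.140 * v),
    clampByte(newY - 0.395 * u - 0.581 * v),
    clampByte(newY + 2.032 * u),
  ];
}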
Since, after the transformation, you treat I/V as continuous values, you will have to apply some binning strategy, which results in a stepped histogram for the quantity you wish to equalize. You might therefore be able to speed this up by reducing the bin size.
Just write the code to apply HE to each of the RGB components. Although there is a lot of calculation across the 3 components, the speed is OK. In most cases the contrast is improved, but the "look" of the image changes, so I agree it is better to transform the RGB into another space and then apply the HE. I am still looking for the formula and the right color space for the HE. Which color space is easiest?
I wrote the HE on the iPad platform, but I found that after opening a big image taken with my Canon, the whole program crashes after the UIPopoverController and UIImagePickerController calls. I think it may be because I am pushing the OS too hard: iOS allocates only a limited amount of memory for each app, and if an app uses more than that, iOS kills it right away. So you must take care with the size of the input image, with releasing unused memory, and with memory leaks. Using Xcode's Instruments tool to check for leaks is a must.

Image editing using iPhone

I'm creating an image editing application for iPhone. I would like to enable the user to pick an image from the photo library, edit it (grayscale, sepia, etc.), and, if possible, save it back to the filesystem. I've done the image picking (the simplest part, as you know, using the image picker) and also the grayscale conversion. But I got stuck with sepia and don't know how to implement it. Is it possible to get the values of each pixel of the image so that we can vary them to get the desired effects? Or are there any other possible methods? Please help.
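On the sepia part: once you have a pixel buffer (see the answer below on obtaining one), sepia is just a per-pixel weighted sum of the channels. A sketch using one commonly quoted coefficient set (written in Dart for brevity rather than Objective-C; channel values 0-255):
// Sepia tone via a per-pixel weighted sum; clamp because the
// sums can exceed 255 for bright pixels.
List<int> sepia(int r, int g, int b) {
  int clampByte(double x) => x.round().clamp(0, 255).toInt();
  return [
    clampByte(0.393 * r + 0.769 * g + 0.189 * b),
    clampByte(0.349 * r + 0.686 * g + 0.168 * b),
    clampByte(0.272 * r + 0.534 * g + 0.131 * b),
  ];
}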
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM until the last moment, when the user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but it might be as simple as this to trigger it:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
const char* pixels = [[((NSData*)CopyImagePixels([myImage CGImage]))
autorelease] bytes]; /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory usage point of view, run Instruments.
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size, possibly larger amounts than are available to the application.
(d) It may be possible to use an alternate compression algorithm with (if necessary) its malloc rewired to use temporary memory mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024 pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024 pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of limitations on available RAM, you then have to make a lower-res copy by smooth-scaling it down yourself, or copy it into a modifiable buffer. If the image is large, you may need to write it in bands to a temporary file, release the unmodifiable data block, and load the data back from the file. And only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
EDIT 3
Here's perhaps a better idea: try using a destination bitmap context backed by a memory-mapped CFData block. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the BSD mmap APIs.
EDIT 5
Added "const char*" and "pixels read-only" comment to code.