How can I process an image to remove a watermark within my iPhone application?

I want to remove a watermark from a picture within my iPhone / iPad application. Is there any kind of image processing I can perform within this application to do this?

Can't be done, sorry.
The watermarked image was originally two images (the base and the watermark), which were merged together to form the result. The problem here is that the most common image formats (such as JPG, PNG, or GIF) have no concept of layers, so there's no way for the base to be one layer and the watermark another: the result is just one layer, onto which both were redrawn. This is somewhat similar to a physical painting: if you paint one image on paper using watercolors, and then another over the same spot, their colors mix and you won't be able to tell which parts belong to one or the other, as they have become a single image.
It's the same with computer image formats: there is only one "layer", which encodes exactly one color for every pixel. Only the current color exists, and the image doesn't keep track of what was on that pixel before.
Now, the information is irreversibly lost from the result; in other words, it is not possible to recover the base knowing just the result (or even the result and the watermark). By the way, that's exactly the point of watermarking.
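To make that concrete, here is the usual per-pixel "source over" blend as a minimal Swift sketch; the function name and the sample values are mine, just for illustration:

```swift
// Per-pixel "source over" compositing, the usual way a watermark is
// merged onto a base image:
//   result = alpha * mark + (1 - alpha) * base
// When the watermark pixel is fully opaque (alpha == 1), the base term
// vanishes: the original value is simply gone from the stored file.
func blend(base: Double, mark: Double, alpha: Double) -> Double {
    alpha * mark + (1 - alpha) * base
}

// blend(base: 0.3, mark: 0.9, alpha: 1.0) == 0.9 -- no trace of 0.3 remains.
```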
I have borrowed the image sprites of StackOverflow for a demonstration; the actual images used don't matter, as the technique would work just as well with any images. This was the watermark I used:
And this is the result image, after merging with the base:
Now, even though we have the exact watermark image that was used, there's no way to recover what was underneath that star in the original image. Through image processing operations we could almost remove the star from the result, but there's not enough data to tell us what used to be underneath: that information was erased in the merge at the beginning.
We could guess what used to be there, but then we're not doing recovery any more; we're interpreting the image and guessing what could plausibly have been there. That's pretty hard even for a human, and computers are really bad at it. This is the original image, before I watermarked it. I bet you were expecting something slightly different, no?

The watermark is almost certainly part of the image. (The only case in which it wouldn't be is something like PDF or SVG, where it could be a separate vector element.)
Watermarks are typically present on images for purposes of managing intellectual property; someone who has licensed an image for a particular use will usually receive access to a version of the image without a watermark. Wanting to "remove watermarks" is therefore likely to be treated as highly suspicious.

Watermarks are part of the image, there isn't going to be a magic way to remove them and recover the missing pixels in any tool.

Take a look at the source! Most of the current watermarking is done in PHP as an automated script. In most cases you will see the base picture in the page source.

Related

CoreML Image Detection

I want to implement an application that is able to recognize pictures from camera input. I don't mean classification of objects, but rather detecting the exact single image from a given set of images. So if, for example, I have an album with 500 pictures and I point the camera at one of them, the application should be able to tell me its filename. Most of the tutorials I find about CoreML are strictly about image classification (recognizing the class of an object), not about recognizing the exact image in the camera. This needs to work from different angles as well, and all I have for training the network is this album with many different pictures (a single picture per object). Can this somehow be achieved? I can't use ARKit Image Tracking, because there will be about 500 of these images, and I need to find at least a list of similar ones first with CoreML / Vision.
I am not sure, but I guess perceptual hashing might be able to help you.
It works by extracting a fingerprint from each reference image; for a given query image, you extract a fingerprint the same way and then look for the most similar fingerprints.
This way, even if the new image is not 100% identical to the image in the dataset, you can still detect it.
It is actually not very hard to implement, but if you would rather use an existing implementation, I think the pHash library is a good one.
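For illustration, here is a minimal average-hash ("aHash") sketch in Swift. It is the simplest member of the perceptual-hash family, not the actual pHash algorithm; the function names and the 8x8 size are just this example's choices:

```swift
import UIKit

// Average hash: shrink the image to 8x8 grayscale and set one bit per
// pixel depending on whether it is brighter than the mean. Similar
// images yield hashes with a small Hamming distance.
func averageHash(of image: UIImage) -> UInt64? {
    guard let cgImage = image.cgImage else { return nil }
    var pixels = [UInt8](repeating: 0, count: 64)

    let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(
            data: buffer.baseAddress,
            width: 8, height: 8,
            bitsPerComponent: 8,
            bytesPerRow: 8,
            space: CGColorSpaceCreateDeviceGray(),
            bitmapInfo: CGImageAlphaInfo.none.rawValue
        ) else { return false }
        context.interpolationQuality = .medium
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: 8, height: 8))
        return true
    }
    guard drawn else { return nil }

    let mean = pixels.reduce(0) { $0 + Int($1) } / 64
    var hash: UInt64 = 0
    for (i, value) in pixels.enumerated() where Int(value) > mean {
        hash |= UInt64(1) << UInt64(i)
    }
    return hash
}

// Hamming distance between two hashes; the smaller, the more similar.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}
```

You would hash all 500 album images once and store the results; for each camera frame, compute its hash and take the stored images within a small Hamming distance as candidates.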

Ignore extra white space in Unity3D Texture

I have different textures for a player's helmet, shirt and pants in order to render custom uniforms. They contain white space so that they lie on the model correctly, but this is causing the app's installed file size to be huge, because the game has over a hundred items and each texture is 2.7 MB.
How can I tell Unity to ignore parts of the image, or map the textures onto the player so that I do not need the white space? For example, cutting the whitespace out of the helmet image lowers its size to under a megabyte.
Thanks!
For the sake of others who read this:
The obvious answer is, cut out the empty spaces in an image editor. That will solve the problem in the way it really should be solved.
That being said, it's quite possible you are using poorly UV-mapped models that need that space and that you are unable to fix, as is the case for the person who asked this question.
If you're in a position where it might cost a little time or money to get someone to fix it, you should, because no matter what, you're wasting space, and it will add up. No one wants a 100 MB download for 50 MB worth of game. And if you paid someone for models and they came like this, consider taking it up with them, because this is a fairly major flaw.
The "real" answer:
The first thing you should do is enable compression. From your picture it appears you are using the RGBA 16-bit format. This is a lower-quality version of Truecolor (an uncompressed 32-bit format), but it is not compressed in the "traditional" sense.
You should use the "Compressed" image import setting (to see it you must turn off Advanced settings). This will select one of several compression formats (depending on the platform), all of which are highly optimized. You can specify a particular compression format in the Advanced window, but this is rarely needed, as Unity is good at choosing the right one for a given situation and can take special cases (such as specific chipsets) into consideration.
Depending on the compression algorithm, that white space could easily end up taking next to no space, and depending on the image, the compression might end up virtually undetectable.
The "Compressed" setting typically yields a several-fold reduction in texture size (DXT1, for example, stores half a byte per pixel, versus two bytes for RGBA 16-bit).
From there, if your image is still too large, you can experiment with the import size. This creates a fairly predictable trade-off between space taken and image quality. You are importing at 1024x1024 right now. Importing at 512x512 halves each dimension, so it takes roughly a quarter of the space at half the resolution; depending on the art style, the visual change can often be negligible.
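To put rough numbers on this (back-of-the-envelope arithmetic, assuming mipmaps are enabled, which add about a third; DXT1 is one of the desktop formats Unity may pick):

```swift
// Approximate sizes for a 1024x1024 texture:
let rgba16 = 1024 * 1024 * 2                  // 2 bytes/pixel   -> ~2.0 MB base level
let withMipmaps = Double(rgba16) * 4.0 / 3.0  //                 -> ~2.7 MB, matching the question
let dxt1 = 1024 * 1024 / 2                    // 0.5 bytes/pixel -> ~0.5 MB base level
let at512 = 512 * 512 * 2                     // quarter the pixels -> ~0.5 MB at RGBA16
```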
You can find more details on these changes in the documentation for the texture importer.

What are the differences between APNG and MNG?

I know that APNG is an extension of PNG, while MNG is more of its own format (albeit developed by the original PNG developers). MNG is barely supported in any browser at all, while APNG has native support almost exclusively in Firefox (for various backward-compatibility and decoding-related reasons, it seems).
Apart from all of these behind-the-scenes things, what are the differences between APNG and MNG? Does one have features the other doesn't (for example, storing only the parts that are modified instead of always whole frames)? Does one have better performance or file size than the other?
APNG can create a new frame by replacing the entire image or by overlaying or blending a smaller image over part of it. To display a "pong" game you'd need a new image of the ball in each different location. APNG has essentially the same capabilities as animated GIF, but it also allows 24-bit RGB with 8-bit alpha.
MNG can do that, and it can also retrieve an image that was previously defined in the datastream and place it over the previous frame at a new location. To display your "pong" game you'd only need to transmit one image of the ball and use it like a sprite.
Much more detail is available in the specifications:
APNG: https://wiki.mozilla.org/APNG_Specification
MNG: http://www.libpng.org/pub/mng/spec/mng-lc.html

How to segment an image in iOS to remove background and retain the foreground picture

I need to segment an image in iOS for a fashion app, keeping only the foreground and removing all the background parts of the image. The result should resemble what the background-removal tools in various photo editors produce. Please help me.
General background subtraction is an unsolved problem, so getting perfect results is going to be a big effort. With that said, you can probably get close. Here are a couple of suggested avenues:
I am guessing that your app will place clothes on a human, or something of the sort. Instead of getting a perfect segmentation, run a person detector, remove all of the image except for the detected person, and fit a part-based human model to the remaining image. Then you have the pose of the person, and can do your image processing accordingly.
Allow the user to input some strokes from the foreground and some strokes from the background, and run a graph-cuts-based image segmentation algorithm on the frame.
Begin your process by having the user not be present in your video stream. From this, learn the background distribution (start with a simple histogram of background pixels; there are much more elaborate schemes, but you need a starting place). Then, when the user enters the scene, create a binary image containing the connected components that don't fit the learned background distribution. This will not be perfect, but you will start to see something close to a binary image where the white pixels are your user and the black pixels are the background. Use morphological operators to join any large connected components that are slightly separated, and threshold your image to remove small noise from things like specular objects and illumination changes.
Like I said (and as mentioned in the comments), this is not an easy problem, but you can come up with a good approximation if you put some time into it. I suggest the third method I listed: it is achievable, and it can be broken down into small parts so you can tell when you're making progress.
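As a rough starting point for that third method, here is a minimal per-pixel differencing sketch in plain Swift. The function names and the threshold are illustrative only; a real implementation would add the morphology and connected-component steps on top of the mask this produces:

```swift
import UIKit

// Render an image into a grayscale byte buffer, one byte per pixel.
func grayscalePixels(of image: UIImage) -> (pixels: [UInt8], width: Int, height: Int)? {
    guard let cg = image.cgImage else { return nil }
    let width = cg.width, height = cg.height
    var buffer = [UInt8](repeating: 0, count: width * height)
    let drawn = buffer.withUnsafeMutableBytes { raw -> Bool in
        guard let ctx = CGContext(data: raw.baseAddress,
                                  width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue)
        else { return false }
        ctx.draw(cg, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return drawn ? (buffer, width, height) : nil
}

// Compare a frame against a background captured while the user was out
// of the shot: 255 = likely foreground (user), 0 = background.
func foregroundMask(frame: UIImage, background: UIImage,
                    threshold: Int = 30) -> [UInt8]? {
    guard let f = grayscalePixels(of: frame),
          let b = grayscalePixels(of: background),
          f.width == b.width, f.height == b.height else { return nil }
    return zip(f.pixels, b.pixels).map { fp, bp -> UInt8 in
        abs(Int(fp) - Int(bp)) > threshold ? 255 : 0
    }
}
```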
Good luck!

Can we identify image having any overlay using quartz?

I have to work with an image that comes from a server. We want to process that image to find out whether it contains any specific color region.
Is there any way for the image coming from the server to be an overlaid image, so that on the device side we can process it and check whether it contains those overlays?
I have never worked with Quartz. Can anybody suggest some other solution?
There are several ways to do this, depending on how complex you wish to get. OpenCV has image-segmentation functionality that can split an image into distinct regions. Or check out image-processing tools for blob extraction, and then look at the general pixel colouration within a single blob.
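If the "specific color region" check is all you need, you may not need OpenCV at all. Here's a hedged sketch in plain Swift; the function name, tolerance, and fraction defaults are illustrative, not tuned:

```swift
import UIKit

// A rough color-region check: draw the image into an RGBA bitmap and
// count the pixels that fall inside a tolerance band around a target
// color. If enough of the image matches, assume the region is present.
func containsColorRegion(_ image: UIImage,
                         target: (r: Int, g: Int, b: Int),
                         tolerance: Int = 20,
                         minFraction: Double = 0.01) -> Bool {
    guard let cg = image.cgImage else { return false }
    let width = cg.width, height = cg.height
    var buffer = [UInt8](repeating: 0, count: width * height * 4)
    let drawn = buffer.withUnsafeMutableBytes { raw -> Bool in
        guard let ctx = CGContext(data: raw.baseAddress,
                                  width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(cg, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return false }

    var matches = 0
    for i in stride(from: 0, to: buffer.count, by: 4) {
        if abs(Int(buffer[i])     - target.r) <= tolerance,
           abs(Int(buffer[i + 1]) - target.g) <= tolerance,
           abs(Int(buffer[i + 2]) - target.b) <= tolerance {
            matches += 1
        }
    }
    return Double(matches) / Double(width * height) >= minFraction
}
```

For example, `containsColorRegion(image, target: (r: 255, g: 0, b: 0))` checks whether a red region covers at least 1% of the pixels.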