Which approach should I take to implement this concept? - flutter

I'm planning on creating a simple color-lookup-table-based painting program in Flutter (one color palette can define the colors for every image in a set; the canvas defines a source coordinate and a palette coordinate for each pixel, and the color at the palette coordinate gives the color of the displayed canvas), and I'm torn between two approaches due to gaps in my knowledge.
In one approach, I can represent the canvas as cells of a GridView. This would allow me to look up the color in the palette per pixel, but I'm not sure how I would take the final layout and palette data and convert them to an image file format.
In the other, I can represent the canvas as an Image widget and use the image library to draw on it, later converting it to an image file with the image library's own functionality. However, I don't know how to implement the per-pixel lookup.
Any help would be appreciated. If I can handle an event per pixel on Canvas/CustomPaint, I'd be open to that as well. Thanks in advance!
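In case it helps frame the two approaches, here is a minimal sketch of the lookup model the question describes, with placeholder names; it is written in Swift only to match the other sketches in this thread, and the same structure maps directly to Dart:

    // palette[y][x] holds an ARGB color; paletteCoords[y][x] says where each
    // canvas pixel should look in the palette.
    struct PaletteCanvas {
        var palette: [[UInt32]]
        var paletteCoords: [[(x: Int, y: Int)]]

        // Displayed color of a canvas pixel = the palette color at its palette
        // coordinate, so repainting the palette recolors every canvas that uses it.
        func displayedColor(x: Int, y: Int) -> UInt32 {
            let p = paletteCoords[y][x]
            return palette[p.y][p.x]
        }
    }

Whichever widget displays the canvas, this per-pixel array of palette coordinates is the data that ultimately has to be rendered and exported.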

Related

Automatic creation of color-grading palette for UE

I'm trying to mimic the colors of a real camera in an instance of USceneCaptureComponent2D using its post-processing feature called a "Color grading lookup table".
The official manual describes how to create the lookup table for that feature by manually adjusting an image and visually checking how it looks.
I’ve got a pair of images of a photographic color grading palette (like one of these).
One image was captured by a real photo camera, and another one comes from the scene capture component.
I’ve got the PDF file with that palette, and to get the latter image I’ve put it on a wall texture inside a UE level.
Is there any method to create the LUT automatically?
My main issue is the color interpolation. Physical palettes contain about 20-50 colors, but the LUT contains 16x16x16 = 4096 entries.
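The question is open, but one possible approach (an assumption on my part, not something from the original post) is to interpolate the sparse measured pairs over the full 16x16x16 grid, for example with inverse-distance weighting of the per-sample color corrections. A rough sketch in Swift, with placeholder names:

    import Foundation

    struct RGB { var r, g, b: Double }    // components in 0...1

    // samples: (color rendered by the scene capture, color seen by the real camera)
    func buildLUT(samples: [(capture: RGB, camera: RGB)], size: Int = 16) -> [RGB] {
        func clamp01(_ x: Double) -> Double { min(max(x, 0), 1) }
        var lut: [RGB] = []
        lut.reserveCapacity(size * size * size)
        for b in 0..<size {
            for g in 0..<size {
                for r in 0..<size {
                    let p = RGB(r: Double(r) / Double(size - 1),
                                g: Double(g) / Double(size - 1),
                                b: Double(b) / Double(size - 1))
                    // Inverse-distance-weighted average of the corrections
                    // (camera - capture), applied on top of the neutral grid color p.
                    var wSum = 0.0, dr = 0.0, dg = 0.0, db = 0.0
                    for s in samples {
                        let d2 = pow(p.r - s.capture.r, 2)
                               + pow(p.g - s.capture.g, 2)
                               + pow(p.b - s.capture.b, 2)
                        let w = 1.0 / (d2 + 1e-6)         // avoid division by zero
                        wSum += w
                        dr += w * (s.camera.r - s.capture.r)
                        dg += w * (s.camera.g - s.capture.g)
                        db += w * (s.camera.b - s.capture.b)
                    }
                    lut.append(RGB(r: clamp01(p.r + dr / wSum),
                                   g: clamp01(p.g + dg / wSum),
                                   b: clamp01(p.b + db / wSum)))
                }
            }
        }
        return lut
    }

Serializing the result into UE's unwrapped 256x16 LUT texture layout is a separate step, and smoother interpolation schemes (radial basis functions, for example) would reduce artifacts where the samples are far apart.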

How to merge an image to fill in another image's shape?

I want to merge one image into another image within a shape. Example:
1- People image
2- Shape image
So how do I draw that? I have already implemented the merging, but it doesn't fill that shape.
It's possible to do this using the masking functions in the Quartz 2D framework. It's a little more involved than using the higher-level image functions of UIKit, but Quartz 2D gives you a lot more power to do cool graphics techniques.
The relevant Apple Developer guide to this can be found here: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
For this example, you'd want to create a mask shape for the inside part of the shape image. There are two ways you can do this. One way is to use image editing software to create a second mask image, the same size as your shape image, with pure black in the area where you want the people image to appear and white where you don't want it to appear. In this example, that would be the area inside the blue shape. It is important not to crop this image, or else they won't match up exactly.
The other way to create the masking image would be to do it dynamically based on the shape image, and honestly, this is the way I would do it. This would mean that you're including fewer images in your app, and if you made any changes to the shape image, you wouldn't have to recreate the mask image as well. You could do this by making a small change to the way your shape image is formatted. You would need to use a format that allows transparency - PNG is preferred - so that there is alpha transparency in the part of the image outside the shape, which is white in your JPEG image. Make sure the section in the center of the image is white (really, any color that is NOT USED in the wanted part of the shape image would work, but I'll say white for this example) and that you don't have parts of it that aren't pure white after image compression.
You will then use Quartz to select the area that's white, and create a mask from that. This technique is a bit more involved, but what you need can be found in the document I linked to above. Because of this, you might start with a static masking image, and then convert to the more involved technique after you've got the code to make the first technique work.
When you have your masking image, you would create the mask itself with the function CGImageMaskCreate(). You can then apply the mask to the people image using the function CGImageCreateWithMask(), which will give you an image with the person's portrait, with the correct shape cropped from the center.
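As a rough sketch of those two calls in Swift (where CGImageMaskCreate appears as a CGImage initializer and CGImageCreateWithMask as masking(_:)), assuming the mask image is already grayscale with no alpha channel and the image names are placeholders:

    import UIKit

    func maskedPortrait(people: UIImage, maskImage: UIImage) -> UIImage? {
        guard let peopleCG = people.cgImage,
              let maskCG = maskImage.cgImage,
              let dataProvider = maskCG.dataProvider else { return nil }

        // Build a Quartz image mask from the grayscale mask image
        // (black areas let the people image show through, white areas hide it).
        guard let mask = CGImage(
            maskWidth: maskCG.width,
            height: maskCG.height,
            bitsPerComponent: maskCG.bitsPerComponent,
            bitsPerPixel: maskCG.bitsPerPixel,
            bytesPerRow: maskCG.bytesPerRow,
            provider: dataProvider,
            decode: nil,
            shouldInterpolate: false
        ) else { return nil }

        // Apply the mask to the people image.
        guard let masked = peopleCG.masking(mask) else { return nil }
        return UIImage(cgImage: masked)
    }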
Finally, you would display this in your app by placing the masked people image on top of the shape image, and voila, you'll have what you're looking for.
Also, keep in mind, when using the Quartz 2D framework, you'll have to make sure you release images when they are no longer needed, or else you could have memory leaks.

Color overlaying algorithm

I'm looking for an algorithm to overlay a color on top of existing picture. Something similar to the following app (wall painter): http://itunes.apple.com/us/app/wall-painter/id396799182?mt=8
I want a similar functionality so I can paint walls in an existing picture and change them to a different color.
I can work in either YUV or RGB mode.
To successfully paint the walls in a picture, you have to do two steps:
Find the boundary of the wall within the picture (select the part of the image to be colored)
Apply the desired color to the selected area
The first step is the hard part. It's similar to what Photoshop's magic wand tool would do, and indeed a search for "magic wand algorithm" turns up a few good articles, such as this article with Objective-C code.
The second step is much easier and can be achieved with CGContextSetBlendMode and CGContextDrawImage.
You could try drawing into a graphics context with kCGBlendModeColor. From the documentation:
Uses the luminance values of the background with the hue and saturation values of the source image. This mode preserves the gray levels in the image. You can use this mode to color monochrome images or to tint color images.
Experimenting with other blend modes might also do the trick. See the documentation for details (search for "kCGBlendMode").
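As a small sketch of that blend-mode step (names like wallColor are placeholders, and this tints the whole image; in the real flow you would first clip the context to the wall region found in step 1, e.g. with CGContext's clip(to:mask:)):

    import UIKit

    // Keeps the photo's luminance but replaces its hue/saturation with wallColor,
    // which is what the .color blend mode (kCGBlendModeColor) does.
    func tinted(_ sourceImage: UIImage, with wallColor: UIColor) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: sourceImage.size)
        return renderer.image { _ in
            let rect = CGRect(origin: .zero, size: sourceImage.size)
            sourceImage.draw(in: rect)               // background: the original photo
            wallColor.setFill()
            UIRectFillUsingBlendMode(rect, .color)   // source: the flat wall color
        }
    }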
The RGB and YUV color models are not really great for changing colors in this way. I think the best color model for this is HLS.
Link: RGB to HLS and HLS to RGB conversion source code
H (hue) will change the base color
L (luminance) will change the brightness
S (saturation) will change the amount of color
You can evaluate the effect of these three components in a photo editing app like Photoshop or GIMP.
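For reference, here is the standard RGB <-> HSL conversion sketched in Swift, with all components in 0...1 (the function names are mine, not from the linked code):

    func rgbToHSL(r: Double, g: Double, b: Double) -> (h: Double, s: Double, l: Double) {
        let maxC = max(r, g, b), minC = min(r, g, b)
        let l = (maxC + minC) / 2
        guard maxC != minC else { return (0, 0, l) }   // achromatic (gray)
        let d = maxC - minC
        let s = l > 0.5 ? d / (2 - maxC - minC) : d / (maxC + minC)
        var h: Double
        switch maxC {
        case r:  h = (g - b) / d + (g < b ? 6 : 0)
        case g:  h = (b - r) / d + 2
        default: h = (r - g) / d + 4
        }
        return (h / 6, s, l)
    }

    func hslToRGB(h: Double, s: Double, l: Double) -> (r: Double, g: Double, b: Double) {
        guard s != 0 else { return (l, l, l) }         // achromatic (gray)
        func hue2rgb(_ p: Double, _ q: Double, _ t: Double) -> Double {
            var t = t
            if t < 0 { t += 1 }
            if t > 1 { t -= 1 }
            if t < 1 / 6 { return p + (q - p) * 6 * t }
            if t < 1 / 2 { return q }
            if t < 2 / 3 { return p + (q - p) * (2 / 3 - t) * 6 }
            return p
        }
        let q = l < 0.5 ? l * (1 + s) : l + s - l * s
        let p = 2 * l - q
        return (hue2rgb(p, q, h + 1 / 3), hue2rgb(p, q, h), hue2rgb(p, q, h - 1 / 3))
    }

Converting to HSL, nudging only H (or only L or S), and converting back is what gives the "repaint the wall but keep the lighting" effect.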

Dragging images from a scrollable region in Raphael?

I'm investigating the feasibility of using Raphael for a user-research project. One of the features allows users to drag images onto a canvas, and we record where they placed them. The pool of images is potentially quite large, and we'll have them in a scrollable box in the tool.
I put together a quick wireframe of the issue I'm looking into since it'll probably be clearer than my explanation.
Please see the wireframe:
I'd stick with straight HTML/CSS and use jQueryUI draggables, as you mention in your comment.
You don't appear to need any of the drawing/display features SVG offers, yet if you went that route, you'd have to build your own custom scrolling behavior (instead of setting a CSS overflow-y rule) and picture layout algorithms (again instead of using CSS floats or something).
You can create a scrollable region using Raphael.
Create the viewport with fixed dimensions (say 800x600).
Draw the images with increasing y values. After a few images, the y value will go beyond 600. Those images will be drawn but will not be visible in the viewport.
Create a scrollbar using Raphael rects. Attach drag events to the scrollbar handle rect.
When the handle is moved, translate all the images accordingly.
For example, let's assume in step 2 you had drawn all the images and the bottommost point of the last image has a y value of 2000. Assuming the scrollbar track has length 500, each dy movement of the handle has to translate the images by 2000/500 = 4dy (see the sketch below). You can calculate the handle length similarly using ratios.
Since everything is inside a single Raphael paper, the dragging of images will work seamlessly. You will have to maintain the position of each image.
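The ratio math is the only subtle bit; here is a tiny sketch of it (shown in Swift to match the other examples in this thread, since it's plain arithmetic in any language, and the names are placeholders):

    // How far to translate the images for a given movement of the scrollbar handle.
    func contentOffset(handleOffset: Double, contentHeight: Double, trackHeight: Double) -> Double {
        // Each unit the handle moves scrolls contentHeight / trackHeight units of
        // content, e.g. 2000 / 500 = 4 in the example above.
        return handleOffset * (contentHeight / trackHeight)
    }

    // The handle length follows the same ratio in the other direction.
    func handleLength(viewportHeight: Double, contentHeight: Double, trackHeight: Double) -> Double {
        return trackHeight * (viewportHeight / contentHeight)
    }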
You might find this demo similar.
Remember you can always use getBBox when you drop. In this case it's rects, but images would be the same.
http://irunmywebsite.com/raphael/additionalhelp.php?q=bearbones

Defining regions in an image

Being a complete noob in iPhone development, I was wondering what would be the best way to define regions in an image (for interaction). So far I've got two ideas:
use CGPath to basically draw the areas that I'm interested in, but I can quickly see it becoming tedious on complex graphics.
use a color-coded layer with regions containing different RGB values and return those as my regions.
Are those sensible approaches?
Depends on what you mean by interaction and whether you want the regions to be visible to the user.
A simple approach would be to just add UIButtons above your image. They can be transparent and any rectangular size that you like. Or they can contain images or colors to be visible to the user.
If you need arbitrary shapes then this solution won't be useful to you.
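For the rectangular case, here is a minimal sketch of that UIButton approach ("map", the frame values, and regionTapped are placeholders):

    import UIKit

    class RegionsViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()

            let imageView = UIImageView(image: UIImage(named: "map"))
            imageView.frame = view.bounds
            imageView.isUserInteractionEnabled = true   // UIImageView ignores touches by default
            view.addSubview(imageView)

            let regionButton = UIButton(type: .custom)
            regionButton.frame = CGRect(x: 40, y: 120, width: 100, height: 60)  // the interactive region
            regionButton.backgroundColor = .clear       // transparent, so only the image shows
            regionButton.addTarget(self, action: #selector(regionTapped), for: .touchUpInside)
            imageView.addSubview(regionButton)
        }

        @objc private func regionTapped() {
            print("Region tapped")
        }
    }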