I have a one-pixel-long color image and I want to scale it up to a new length value - iPhone

The color image's length will change dynamically; I will use it in a custom graph. What is the best solution for this? Right now I am using a Resize category that extends UIImage.
Thanks in advance.

For this task you could use the UIImage instance method stretchableImageWithLeftCapWidth:topCapHeight:. For example:
[imgObj stretchableImageWithLeftCapWidth:4.0f topCapHeight:5.0f];
This call leaves the 4 pixels on the left and right and the 5 pixels on the top and bottom of your image untouched, and stretches the remaining middle section to fill the new size.
In your case both cap values will be zero, so the entire one-pixel image gets stretched.
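A minimal sketch, assuming newLength and barHeight are values supplied by your graph code (both names are made up here):

UIImage *stretchable = [imgObj stretchableImageWithLeftCapWidth:0 topCapHeight:0];
// Option 1: let a UIImageView do the stretching; just change its frame.
UIImageView *bar = [[UIImageView alloc] initWithImage:stretchable];
bar.frame = CGRectMake(0.0f, 0.0f, newLength, barHeight);
// Option 2: render a standalone UIImage at the new size.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(newLength, barHeight), NO, 0.0f);
[stretchable drawInRect:CGRectMake(0.0f, 0.0f, newLength, barHeight)];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Either way you avoid needing a custom Resize category for this case.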

Related

Resizing command changes image shape

I have to resize an image, e.g. from 3456x5184 to 700x700, because my code needs an image with fewer pixels; otherwise it takes too much time to produce results. When I use the imresize command it changes the dimensions of the image, but it also changes its shape: a circle in the image, which I need to detect, looks like an oval instead of a circle. I would be grateful for suggestions on how to resolve this problem.
Resizing an image is done either by subsampling (to get a smaller image) or by some kind of interpolation (to get a larger image).
The input is either a scale factor or a final dimension for width and height.
The only way to fit a rectangle into a square by simply resizing it is to use different scales for width and height, which of course yields a distorted image.
To achieve what you want, either crop a 700x700 region from your image, or resize the image using the same factor for width and height. Then fit the larger dimension into 700 and fill the remainder along the other dimension with black or whatever you prefer, as in the sketch below.
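As a sketch of the arithmetic (written in Objective-C to match the rest of this page; in MATLAB the equivalent is calling imresize with a single scale factor):

// One scale factor for both axes keeps the circle a circle.
CGSize aspectFitSize(CGSize src, CGFloat box)
{
    CGFloat scale = box / MAX(src.width, src.height); // 700 / 5184 here
    // 3456x5184 becomes roughly 467x700; pad the width to 700 afterwards.
    return CGSizeMake(round(src.width * scale), round(src.height * scale));
}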

Hide UI/Image partially in Unity 5

Does Unity 5 support partial hiding of a UI/Image?
For example, the UI/Image in my scene is 100 pixels wide and 100 pixels high.
At time = 0, the UI/Image is hidden. At time = 5, the UI/Image shows only its first 50 pixels. At time = 10, the UI/Image is fully drawn.
The answer is to use a filled image:
1. Set the image type to Filled.
2. Set the fill method to Horizontal.
3. Set the fill origin to Left.
4. From a script, update the fill amount from 0 to 1 over the timespan.
On first thought, I can come up with two workarounds for this.
If the background of the image in question is a solid color, you can cover the actual image with another image of the same color as the background, so that the actual image looks partially revealed. Then just reduce the length of this covering image over time, using a coroutine, to achieve the revealing effect.
You can make multiple image files with alpha channels and swap the texture of the UI/Image over time. Each image acts as one step of the revealing effect: say you have 11 images, then the 6th image would have the first half revealed and the second half at alpha = 0. The more images you use, the smoother the effect.

Perspective correction of UIImage from Points

I'm working on an app where I'll let the user take a picture, e.g. of a business card or a photograph.
The user will then mark the four corners of the object they photographed, as seen in a lot of document/image/business card scanning apps.
My question is: how do I crop the image and fix the perspective according to these four points? I've been searching for days and have looked at several image processing libraries without any luck.
Can anyone point me in the right direction?
From iOS 8 onward there is a Core Image filter called CIPerspectiveCorrection. All you need to do is pass the image and the four points.
There is also a filter available from iOS 6 onward, CIPerspectiveTransform, which can be used in a similar way (it skews the image).
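A minimal sketch of the iOS 8 route, where photo and the four corner points are assumed inputs; note that Core Image puts its origin at the bottom left, so points taken from UIKit touches typically need their y coordinate flipped first:

CIImage *input = [CIImage imageWithCGImage:photo.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIPerspectiveCorrection"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:[CIVector vectorWithCGPoint:topLeft] forKey:@"inputTopLeft"];
[filter setValue:[CIVector vectorWithCGPoint:topRight] forKey:@"inputTopRight"];
[filter setValue:[CIVector vectorWithCGPoint:bottomRight] forKey:@"inputBottomRight"];
[filter setValue:[CIVector vectorWithCGPoint:bottomLeft] forKey:@"inputBottomLeft"];
CIImage *corrected = filter.outputImage; // cropped and perspective-corrected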
If this image were loaded in as a texture, it would be extremely simple to skew it using OpenGL. You'd literally just draw a full-screen quad and use the yellow correction points as the UV coordinates at each corner.
I'm not sure if you've tried the OpenCV library yet, but it has a very nice way to deskew an image. Here is a small snippet that takes an array of corners, your four corners for example, and a final size to map them into.
You can read the documentation for warpPerspective on the OpenCV site.
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat deskew(cv::Mat& capturedFrame, cv::Point2f source_points[], cv::Size finalSize)
{
    // Output of the deskew operation has the same color space as the source
    // frame and is mapped into finalSize, the caller's requested dimensions.
    cv::Mat deskewedMat = cv::Mat(finalSize, capturedFrame.type());
    // Deskew to the full output image corners. The order here must match the
    // order of source_points: lower left, upper left, upper right, lower right.
    cv::Point2f dest_points[4];
    dest_points[0] = cv::Point2f(0, finalSize.height);               // lower left
    dest_points[1] = cv::Point2f(0, 0);                              // upper left
    dest_points[2] = cv::Point2f(finalSize.width, 0);                // upper right
    dest_points[3] = cv::Point2f(finalSize.width, finalSize.height); // lower right
    // Build the quadrangle "deskew" transform matrix
    cv::Mat transform = cv::getPerspectiveTransform(source_points, dest_points);
    // Apply the deskew transform with cubic interpolation
    cv::warpPerspective(capturedFrame, deskewedMat, transform, finalSize, cv::INTER_CUBIC);
    return deskewedMat;
}
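On iOS you would call this from an Objective-C++ (.mm) file. A minimal sketch of the call site, assuming OpenCV's iOS conversion helpers UIImageToMat/MatToUIImage (declared in opencv2/imgcodecs/ios.h in OpenCV 3) and that the corners arrive in the same lower-left, upper-left, upper-right, lower-right order the function expects; the 700x700 output size is just an example:

#import <opencv2/imgcodecs/ios.h>

UIImage *deskewedCard(UIImage *photo, CGPoint corners[4])
{
    cv::Mat frame;
    UIImageToMat(photo, frame); // UIImage -> cv::Mat
    cv::Point2f src[4];
    for (int i = 0; i < 4; i++) {
        src[i] = cv::Point2f(corners[i].x, corners[i].y);
    }
    cv::Mat result = deskew(frame, src, cv::Size(700, 700));
    return MatToUIImage(result); // cv::Mat -> UIImage
}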
I don't know an exact solution for your case, but there is an approach for trapezoids: http://www.comp.nus.edu.sg/~tants/tsm/TSM_recipe.html - the idea is to build up the transformation matrix step by step. In theory you could add a transformation that converts your shape into a trapezoid.
There are also many questions like this one: https://math.stackexchange.com/questions/13404/mapping-irregular-quadrilateral-to-a-rectangle - but I haven't checked the solutions.

Composite Chart + Objective C

I want to implement a chart structure like the one shown below.
Explanation:
1. Each block should be clickable.
2. If a block is selected, it will be highlighted (i.e. the red block in the figure).
I initially googled for this but was unable to find anything. What should the drawing logic for this be, including the animation? Thanks in advance.
I think you need to use MCSegmentedControl.
Generally speaking, I'd use an image for the outline with a transparent middle, then dynamically create colored blocks behind it in the appropriate colors, with dynamic labels. The highlighting is a little tricky, but could be done with a set of image overlays. You could also try shrinking and expanding fixed images for the bars/highlighting, but the iPhone scales images poorly.
(Will it always be 4 blocks? There are a couple of other ways to manage it using fixed-size images overlaying each other.)
Maybe you should look into using CALayer for this?
You need to implement this type of logic using buttons; just scale each button's width according to its percentage.
To get the rounded-rect appearance, use the code below, and don't forget to link the QuartzCore framework and import it in your class file.
To round only the outer corners of the first and last buttons, overlap them slightly with their neighboring buttons.
btn.layer.cornerRadius = 8.0;
btn.layer.borderWidth = 0.5;
btn.layer.borderColor = [[UIColor blackColor] CGColor];
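Putting that together, a rough sketch (segmentTapped:, parentView, and the percentages are all made-up names and data; run this inside a view controller):

#import <QuartzCore/QuartzCore.h>

NSArray *percents = @[@40, @25, @20, @15]; // hypothetical segment data
CGFloat totalWidth = 300.0f, height = 30.0f, x = 0.0f;
for (NSNumber *p in percents) {
    CGFloat w = totalWidth * [p floatValue] / 100.0f; // width from percentage
    UIButton *btn = [UIButton buttonWithType:UIButtonTypeCustom];
    btn.frame = CGRectMake(x, 0.0f, w, height);
    btn.layer.cornerRadius = 8.0;
    btn.layer.borderWidth = 0.5;
    btn.layer.borderColor = [[UIColor blackColor] CGColor];
    [btn addTarget:self action:@selector(segmentTapped:)
        forControlEvents:UIControlEventTouchUpInside];
    [parentView addSubview:btn];
    x += w;
}
// In segmentTapped: change the tapped button's backgroundColor to highlight it.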

iPhone SDK boundary checking for coloring

I'm creating an app where the user already has an image (with different objects) without colors. I have to detect the object the user touches and then color it with the respective color. How should I do this? Can anyone help me?
I would say that this is non-trivial. I can only give hints, since I have not written such an app yet.
First, you need to get the image as a CGImageRef, for example by calling [uiimage_object CGImage].
Next, you need to convert the CGImageRef into an array of pixel colors. You can follow the tutorial at http://www.fiveminutes.eu/iphone-image-processing/ for sample code. For your app you should treat the array as two-dimensional, indexed by the image's width and height.
Then, use the coordinates of the user's touch to look up the exact pixel color value in the array. Next, read off the color values of the surrounding pixels and determine whether each is similar to the touched pixel (you might need to read some Wikipedia articles on color comparison). If the color is similar, change it to the one you want, and recurse until the surrounding color is different, i.e. until you hit a boundary. This is essentially a flood fill.
When you are finished modifying the pixel color array, convert it back into a CGImageRef using the CGImageCreate function, then back to a UIImage with [UIImage imageWithCGImage:imageref].
Now you are on your own to implement these steps in code. It would be unreasonable to expect me to write all of it for you, wouldn't it?
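That said, a minimal sketch of the buffer round trip (the first, second, and last steps) might look like this, assuming an RGBA layout; CGBitmapContextCreateImage is used instead of CGImageCreate because it converts the same buffer back in one call:

CGImageRef cgImage = [uiimage_object CGImage];
size_t w = CGImageGetWidth(cgImage), h = CGImageGetHeight(cgImage);
// 4 bytes per pixel (R, G, B, A); treat as 2-D via index = (y * w + x) * 4.
unsigned char *pixels = calloc(w * h * 4, 1);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, w, h, 8, w * 4, cs,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cgImage);
// ... flood fill here: read and write pixels[(y * w + x) * 4 + channel] ...
CGImageRef filled = CGBitmapContextCreateImage(ctx);
UIImage *result = [UIImage imageWithCGImage:filled];
CGImageRelease(filled);
CGContextRelease(ctx);
CGColorSpaceRelease(cs);
free(pixels);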