Using Apple's PhotoScroller example and ImageMagick I managed to build my catalog app. But I'm hitting a rendering bug: the tiled images are rendered with a thin line between them.
My simple script using ImageMagick is this:
#!/bin/sh
# Build the file list once, before the loop, so the tiles generated by
# earlier passes are not picked up again by later ones.
file_list=$(ls *.JPG)
for i in 100 50 25; do
    for file in $file_list; do
        convert "$file" -scale ${i}%x -crop 256x256 -set filename:tile "%[fx:page.x/256]_%[fx:page.y/256]" +repage +adjoin "${file%.*}_${i}_%[filename:tile].${file#*.}"
    done
done
The code from Apple is the same. The bizarre thing is that the images they provide, already tiled, work like a charm in the same run, side by side with my images :(
My first guess was that the size of the tiles did not match the calculations in the code, but changing the sizes didn't fix it, neither in my script nor in the code. My images are usually smaller than those provided by Apple, about half the size actually.
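One quick sanity check is to list the generated tile sizes with ImageMagick's identify; apart from the tiles on the right and bottom edges, each should come out exactly 256x256 (the pattern below matches the 100% tiles produced by the script above):

identify -format "%f %wx%h\n" *_100_*.JPG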
Anyone got the same issue?
I had problems with both solutions. Damien's approach did not fully eliminate all lines at all zoom scales, and Brent's solution removed the lines but added some artifacts at tile borders.
After googling around for a while, I finally found a solution that worked nicely for me: http://openradar.appspot.com/8503490 (comment by zephyr.renner).
After all, Apple's assumption that CTM.a == CTM.d doesn't seem to be "safe" at all...
I have the exact same issue here, using the PhotoScroller code. The problem appears when the scale is not exact in - (void)drawRect:(CGRect)rect.
You need to round the scale... Add scale = 1.0f / roundf(1.0f / scale); after CGFloat scale = CGContextGetCTM(context).a; (it also prevents tiles from being drawn twice).
And draw the tiles 1 pixel larger... Add tileRect.size.width += 1; tileRect.size.height += 1; after tileRect = CGRectIntersection(self.bounds, tileRect);.
I have encountered this same PhotoScroller issue, and Damien's solution was very close but requires one minor correction to completely eliminate those pesky seams.
Drawing tiles one pixel larger didn't work at all zoom levels for me. The reason is that we are drawing the image at its original resolution, and it is then scaled down by the CTM to the screen resolution.
So the 1 pixel we added actually becomes 1/4 of a pixel when drawn at the 25% zoom level on screen.
Therefore, to enlarge the tile by one pixel on screen, we need to add 1.0/scale to the width/height, and this should be done before calling CGRectIntersection:
tileRect.size.width += 1.0/scale;
tileRect.size.height += 1.0/scale;
tileRect = CGRectIntersection(self.bounds, tileRect);
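Put together, here is roughly how both corrections land in PhotoScroller's TilingView drawRect: (a sketch following Apple's sample; tileForScale:row:col: is the sample's tile-loading method, so your names may differ):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Damien's fix: round the scale so it matches an exact zoom level
    // (this also prevents tiles from being drawn twice).
    CGFloat scale = CGContextGetCTM(context).a;
    scale = 1.0f / roundf(1.0f / scale);

    CGSize tileSize = [(CATiledLayer *)self.layer tileSize];
    tileSize.width /= scale;
    tileSize.height /= scale;

    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol  = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow  = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            UIImage *tile = [self tileForScale:scale row:row col:col];
            CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
                                         tileSize.width, tileSize.height);

            // Brent's fix: overlap each tile by one *screen* pixel,
            // then clamp to the view bounds.
            tileRect.size.width += 1.0 / scale;
            tileRect.size.height += 1.0 / scale;
            tileRect = CGRectIntersection(self.bounds, tileRect);

            [tile drawInRect:tileRect];
        }
    }
}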
I found out how to change the colour of constraints:
draw_options = pymunk.pygame_util.DrawOptions(screen)
draw_options.constraint_color = 200,200,200
But when drawing small objects, the size of the constraint appears to be too large and makes it look bad.
Is there a way to reduce the size of those pin joints? Instead of a radius of 5 pixels, I'd prefer 1 or 2 pixel radius joints/constraints.
An alternative would be to make it partly transparent, but adding an alpha component to the colour doesn't seem to work:
draw_options.constraint_color = 200,200,200,50
Unfortunately the debug-draw color for constraints doesn't work: https://github.com/viblo/pymunk/issues/160
But in general, if you want special drawing, it's probably easiest to do it yourself. The debug draw is mainly meant for debugging and quick prototyping, so if you need more than what's included, try drawing things yourself instead. There are some examples that do custom drawing and do not depend on the debug-draw code.
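For example, a minimal sketch that draws each PinJoint's anchor points with a small radius (assuming a pygame screen and a pymunk space as in your snippet; draw_pin_joints is a made-up helper name):

import pygame
import pymunk
import pymunk.pygame_util

def draw_pin_joints(screen, space, radius=2, color=(200, 200, 200)):
    # Draw each PinJoint's two anchor points as small circles,
    # instead of relying on the built-in debug draw.
    for constraint in space.constraints:
        if isinstance(constraint, pymunk.PinJoint):
            for body, anchor in ((constraint.a, constraint.anchor_a),
                                 (constraint.b, constraint.anchor_b)):
                p = body.local_to_world(anchor)
                pygame.draw.circle(screen, color,
                                   pymunk.pygame_util.to_pygame(p, screen), radius)

Call it each frame after drawing your shapes.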
I am currently recording on a single camera two images, one beside the other, of the same sample out of a microscope.
I have 2 issues with that, and I figured I could sort them out in post-processing with Matlab.
First, the two images on the camera are supposed to have the same pixel size, but one is just a little bigger than the other, probably because of the optical pathways. What is the appropriate Matlab function or approach to correlate the two images so that they have exactly the same pixel size in X and Y?
[Drawing: the two images on the same camera, one slightly bigger than the other]
Secondly, my sample is moving a little during the recording (while still staying in my field of view, of course). To make my analysis easier, I would like to correct the images so that the sample remains at the same place as in the first image, making calculations on it easier. What would be the appropriate Matlab function or approach to correct for this movement in the image?
[Drawing: the sample moving within the image during the recording]
Sorry for the poor quality of my drawings!
Thank you very much for your advice and help.
First, zero-pad the images so that they both have the size of the bigger one in each dimension:
size_padding = max(size(fig1),size(fig2));
fig1_pad = padarray(fig1,size_padding-size(fig1),'post');
fig2_pad = padarray(fig2,size_padding-size(fig2),'post');
Assuming the sample is the only feature present in the images, the best way to proceed is to use the xcorr2() function and find the lag corresponding to the maximum correlation, which gives the spatial shift between the two images:
xc = xcorr2(fig1_pad,fig2_pad);
[max_cc, imax] = max(abs(xc(:)));
[ypeak, xpeak] = ind2sub(size(xc),imax(1));
corr_offset = [ (ypeak-size(fig2_pad,1)) (xpeak-size(fig2_pad,2)) ];
You then use circshift() to shift one of the images by the lag you obtained in the last step:
fig2_shift = circshift(fig2_pad,corr_offset);
You now have two images of the same size in which, hopefully, the sample is in the same position. If you want to remove the padding zeros, crop the images to your liking with respect to the center using imcrop().
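As for your first question (the pixel-size mismatch), a hedged sketch, assuming you can estimate the relative scale between the two images (e.g. from the ratio of a feature's measured size in both): resample one image with imresize() before the padding and correlation steps above.

% The 1.02 here is a made-up value; use your own measured ratio.
scale_factor = 1.02;
fig2 = imresize(fig2, scale_factor, 'bicubic');

If the scale factor is unknown, imregtform() with a 'similarity' transformation type can estimate scale and translation together.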
I'm having a problem with this code:
rect = new Rect(saveTextures[0].width, saveTextures[0].height, saveTextures[1].width, saveTextures[1].height);
GUI.DrawTexture(rect, saveTextures[0]);
if (GUI.Button(rect, saveTextures[1])) {
    // do stuff
}
It should look exactly the same, and it does in the editor. It also looks totally the same on iPad 2, but on iPad 3 the top GUI.Button is scaled down to approximately 90%.
Any ideas what could be the problem?
I made a simple example of the problem. Here is how it should look, and how it does look on iPad 2.
And here is how it looks on retina screen:
The red part is the button; it covers the whole background in the first image but only about 90% in the second.
Make sure you set the texture type to GUI and that its max size is high enough (try 4096 if not sure).
Also, I noticed that your Rect constructor is a bit strange. It's new Rect(x, y, width, height), so you're using saveTextures[0]'s dimensions as the position and saveTextures[1]'s as the size, while displaying both in the same rect.
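If the intent is to draw the background texture and the button in the same rect at the first texture's own size, a sketch (the top-left position is an assumption; adjust the x and y values as needed):

// Rect takes (x, y, width, height) in GUI coordinates.
Rect rect = new Rect(0, 0, saveTextures[0].width, saveTextures[0].height);
GUI.DrawTexture(rect, saveTextures[0]);
if (GUI.Button(rect, saveTextures[1]))
{
    // do stuff
}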
I'm working on an app where I'll let the user take a picture, e.g. of a business card or photograph.
The user will then mark the four corners of the object they took a picture of, like in a lot of document/image/business card scanning apps:
My question is: how do I crop and fix the perspective according to these four points? I've been searching for days and have looked at several image processing libraries without any luck.
Anyone who can point me in the right direction?
From iOS 8 onwards there is a Core Image filter called CIPerspectiveCorrection. All you need to do is pass the image and the four points.
There is also one more filter, available from iOS 6, called CIPerspectiveTransform, which can be used in a similar way (it skews the image).
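A sketch of the usual pattern (Objective-C; photo is assumed to be your captured UIImage, and topLeft etc. the user's marked corners as CGPoints, in Core Image's bottom-left-origin coordinate space):

CIImage *image = [CIImage imageWithCGImage:photo.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIPerspectiveCorrection"];
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:[CIVector vectorWithCGPoint:topLeft] forKey:@"inputTopLeft"];
[filter setValue:[CIVector vectorWithCGPoint:topRight] forKey:@"inputTopRight"];
[filter setValue:[CIVector vectorWithCGPoint:bottomRight] forKey:@"inputBottomRight"];
[filter setValue:[CIVector vectorWithCGPoint:bottomLeft] forKey:@"inputBottomLeft"];
CIImage *corrected = filter.outputImage; // render via a CIContext to get the result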
If this image were loaded in as a texture, it'd be extremely simple to skew it using OpenGL. You'd literally just draw a full-screen quad and use the yellow correction points as the UV coordinates at its corners.
I'm not sure if you've tried the OpenCV library yet, but it has a very nice way to deskew an image. Here is a small snippet that takes an array of corners, your four corners for example, and a final size to map them into.
You can read the documentation for warpPerspective on the OpenCV site.
cv::Mat deskew(cv::Mat& capturedFrame, cv::Point2f source_points[], cv::Size finalSize)
{
    cv::Point2f dest_points[4];
    // The output keeps the source frame's type; choose a finalSize roughly
    // proportional to the area the document occupied, to reduce blur from
    // the scaling component.
    cv::Mat deskewedMat = cv::Mat(finalSize, capturedFrame.type());
    // Deskew to the full output image corners
    dest_points[0] = cv::Point2f(0, finalSize.height);               // lower left
    dest_points[1] = cv::Point2f(0, 0);                              // upper left
    dest_points[2] = cv::Point2f(finalSize.width, 0);                // upper right
    dest_points[3] = cv::Point2f(finalSize.width, finalSize.height); // lower right
    // Build the quadrangle "de-skew" transform matrix
    cv::Mat transform = cv::getPerspectiveTransform(source_points, dest_points);
    // Apply the deskew transform
    cv::warpPerspective(capturedFrame, deskewedMat, transform, finalSize, cv::INTER_CUBIC);
    return deskewedMat;
}
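Hypothetical usage (frame here stands for your captured photo as a cv::Mat; the corner coordinates are made-up values, ordered to match dest_points above):

cv::Point2f corners[4] = {
    cv::Point2f(18, 350),  // lower left
    cv::Point2f(30, 20),   // upper left
    cv::Point2f(460, 28),  // upper right
    cv::Point2f(470, 340)  // lower right
};
cv::Mat card = deskew(frame, corners, cv::Size(640, 400));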
I don't know the exact solution for your case, but there is an approach for trapezoids: http://www.comp.nus.edu.sg/~tants/tsm/TSM_recipe.html - the idea is to build up the transformation matrix step by step. Theoretically you could add a transformation that converts your shape into a trapezoid.
And there are many questions like this one: https://math.stackexchange.com/questions/13404/mapping-irregular-quadrilateral-to-a-rectangle , but I didn't check the solutions.
I am trying to use a UIImage with stretchableImageWithLeftCapWidth: to set the image in my UIImageView, but am encountering a strange scaling bug. Picture my image as an oval that is 31 pixels wide: the left and right 15 pixels are the caps, and the middle single pixel is the scaled portion.
This works fine if I set the left cap to 15. However, if I set it to, say, 4, I would expect to get a 'center' portion that is a bit curved as it spans the center, while the ends are a little pinched.
What I get instead is the left cap looking correct, followed by a long middle portion that looks as if the single pixel column at position 5 were stretched, then a portion at the right where the image expands and closes over a width about twice that of the original. The resulting image is like a thermometer bulb.
Has anyone seen odd behavior like this and might know what's going on?
Your observation is correct, Joey. stretchableImageWithLeftCapWidth: does NOT expand the whole center of the image as you would expect. It only expands the pixel column just right of the left cap and the pixel row just below the top cap!
Use UIView's contentStretch property instead, and your problem will be solved. Another advantage is that contentStretch can also shrink a graphic properly, whereas stretchableImageWithLeftCapWidth: only works when making the graphic larger.
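A sketch for the 31-pixel-wide oval described above (contentStretch is in unit coordinates, so the single stretchable column at pixel 15 becomes 1/31 of the width; the vertical values assume you want the same behavior vertically):

// Stretch only the middle 1/31 of the image in each direction.
imageView.contentStretch = CGRectMake(15.0f / 31.0f, 15.0f / 31.0f,
                                      1.0f / 31.0f, 1.0f / 31.0f);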
Not sure if I got you right, but leftCapWidth etc. is made for rounded corners: everything in the rectangle within the rounding radius is stretched to fit the space between the 'caps' on the destination button or such.
So if your oval is taller or wider than 4 x 2 = 8, whatever is in the middle rectangle will be stretched, and yours is, so it would at least look a bit ugly. But if it's not even symmetrical, something has affected the stretch. Maybe something to do with the origin or frame, or when it's set, or maybe it's set twice, or you have two different stretched images on top of each other giving the thermometer look.
I once created two identical buttons in the same place, using the same retained object, of course throwing away the previous button. Then I wondered why the heck the button didn't disappear when I set its alpha to 0... But it did; it's just that there was a 'dead' identical button beneath it :)