I'm having a problem with this code:
rect = new Rect(saveTextures[0].width, saveTextures[0].height, saveTextures[1].width, saveTextures[1].height);
GUI.DrawTexture(rect, saveTextures[0]);
if (GUI.Button(rect, saveTextures[1])) {
    // do stuff
}
It should look exactly the same, and it does in the editor. It also looks totally the same on iPad 2, but on iPad 3 the top GUI.Button is scaled down to approximately 90%.
Any ideas what could be the problem?
I made a simple example of the problem. Here is how it should look, and how it does look on iPad 2.
And here is how it looks on a retina screen:
The red part is the button; it covers the whole background in the first screenshot but only about 90% of it in the second.
Make sure you set the texture type to GUI and that its max size is high enough (try 4096 if not sure).
Also, your Rect constructor looks a bit strange. It's new Rect(x, y, width, height), i.e. left, top, width, height, so you're using saveTextures[0]'s dimensions as the top-left position and saveTextures[1]'s as the width and height, while drawing both textures in the same rect.
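If the goal is a full-screen background with the button drawn over it, a minimal corrected sketch (assuming the textures should fill the screen; the class name is illustrative) would be:

using UnityEngine;

public class SaveMenu : MonoBehaviour
{
    public Texture[] saveTextures; // [0] = background, [1] = button graphic

    void OnGUI()
    {
        // Rect takes position first, then size: Rect(x, y, width, height)
        Rect rect = new Rect(0, 0, Screen.width, Screen.height);
        GUI.DrawTexture(rect, saveTextures[0]);
        if (GUI.Button(rect, saveTextures[1]))
        {
            // do stuff
        }
    }
}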
Does Unity 5 support partial hiding of a UI/Image?
For example, the UI/Image I have in my scene has 100 width and 100 height.
At time = 0, the UI/Image is hidden. When time = 5, the UI/Image only shows the first 50 pixels. When time = 10, the UI/Image is fully drawn.
The answer to the question is in this link
Set the image type to Filled
Set the fill method to horizontal
Set the fill origin to left
From the script, update the fill amount from 0 to 1 over the timespan
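For the scripted part, a minimal sketch (assuming the Image is wired up as revealImage and the full reveal takes 10 seconds, matching the timeline in the question):

using System.Collections;
using UnityEngine;
using UnityEngine.UI;

public class RevealImage : MonoBehaviour
{
    public Image revealImage;    // image type Filled, horizontal, origin left
    public float duration = 10f; // seconds until fully revealed

    IEnumerator Start()
    {
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            // fillAmount runs from 0 to 1, revealing the image left to right
            revealImage.fillAmount = Mathf.Clamp01(t / duration);
            yield return null;
        }
        revealImage.fillAmount = 1f;
    }
}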
On first thought, I can come up with two workarounds for this.
If the background of the image-in-question is a solid color, you can use another image with the same color as background that covers the actual image, so that it looks like the actual image is partially revealed. Then, just reduce the length of this covering image over time to achieve a revealing effect using Coroutines.
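A rough sketch of that covering approach (names are illustrative; it assumes the cover Image's pivot sits on its right edge, so shrinking its width uncovers the picture from left to right):

using System.Collections;
using UnityEngine;

public class CoverReveal : MonoBehaviour
{
    public RectTransform cover; // solid-color Image placed over the actual image
    public float duration = 5f;

    IEnumerator Start()
    {
        float startWidth = cover.sizeDelta.x;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            // shrink the cover so the image underneath appears to be revealed
            cover.sizeDelta = new Vector2(Mathf.Lerp(startWidth, 0f, t / duration), cover.sizeDelta.y);
            yield return null;
        }
        cover.sizeDelta = new Vector2(0f, cover.sizeDelta.y);
    }
}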
Or you make multiple image files with alpha channels and swap the texture of the UI/Image over time, each image acting as one frame of the revealing effect. Say you have 11 images: the 6th would have its first half revealed and its second half at alpha = 0. If you want smoothness, you will need a larger number of images.
I need to include many images of unknown origin in a report. I have no idea what the images might be: portrait or landscape photos, large or small, or even something with an atypical shape, like a 400x80 logo.
I'd like to scale down images with the following rule: proportionally downscale until the larger side is 200. And resulting image shouldn't take more space than needed (i.e. 1000x600 should be downscaled to 200x120, not to 200x200), so that there are no unneeded blank margins around non-square images.
Is what I need possible with JasperReports?
EDIT:
To clarify: "real size" mode is almost what I need. However, I don't see a way to limit the height of the resulting image. As a result, if the image I want to print is a portrait photo (or is even taller relative to its width), the generated PDF looks ugly; in that case I would prefer to downscale it to a smaller width.
I solved the problem of resizing images of various sizes to a fixed area with "RetainShape" by writing an ImageResizer, based on the idea of the ImageTransformer from https://stackoverflow.com/a/39320863/8957103 and using https://github.com/rkalla/imgscalr for scaling the image.
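The core of such a resizer can be quite small. A sketch using imgscalr (the class and method names and the 200px limit are illustrative):

import java.awt.image.BufferedImage;
import org.imgscalr.Scalr;

public class ImageResizer {
    // Proportionally shrink so the longer side is at most maxSide pixels.
    // Mode.AUTOMATIC keeps the aspect ratio, so 1000x600 becomes 200x120,
    // not 200x200, and no blank margins appear around non-square images.
    public static BufferedImage shrinkToFit(BufferedImage src, int maxSide) {
        if (src.getWidth() <= maxSide && src.getHeight() <= maxSide) {
            return src; // small images are left alone rather than upscaled
        }
        return Scalr.resize(src, Scalr.Method.QUALITY, Scalr.Mode.AUTOMATIC, maxSide);
    }
}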
We're creating an iOS photo app. In doing this, we have to create dynamically sized images up to about 2500x1600px. Once this image has been created, we want to draw smaller images on top of the big one reasonably quickly.
The problem, as we see it, is that it's impossible to get a context larger than the screen resolution. The call does not crash, but it returns a nil context. How can such a seemingly simple task be achieved?
Secondly, once this context is created, what is the fastest way to draw a small image at a given position on top of the big one?
Edit:
We found the solution. CGBitmapContextCreate returns nil because the width and height parameters were passed as floats, not ints. Sometimes the solution is right in front of you and you're too blind to see it. Hopefully this answer can help other people who run into the same problem.
Make sure you specify integer widths and heights as the arguments to CGBitmapContextCreate; otherwise it returns nil. Beyond that, the size of the context should not matter, as long as you can malloc enough memory for it.
It should be possible to get a context for almost any bitmap for which you can allocate/malloc enough memory, in your case multiples of 2500x1600x4 bytes of ARGB pixels.
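For reference, a minimal sketch of creating such a context and stamping a smaller image into it (the image name and position are illustrative):

#import <UIKit/UIKit.h>

size_t width  = 2500;
size_t height = 1600;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Integer width, height and bytesPerRow; float arguments here are what
// produced the nil context in the question.
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                         8,          // bits per component
                                         width * 4,  // bytes per row (ARGB)
                                         colorSpace,
                                         kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);

// Drawing a small image on top of the big one at a given position:
UIImage *smallImage = [UIImage imageNamed:@"overlay"];
CGContextDrawImage(ctx, CGRectMake(100, 100, smallImage.size.width, smallImage.size.height), smallImage.CGImage);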
You might also want to look into using a CATiledLayer, where you would only have to draw into the tiles covered by the smaller image. You may have to tile to support older devices which are limited by the max tile size that will fit into the GPUs texture cache.
Using the PhotoScroller example by Apple and ImageMagick I managed to build my catalog app.
But I'm having a rendering bug. The tiled images are rendered with a thin line between them.
My simple script using ImageMagick is this:
#!/bin/sh
# Cut each JPG into 256x256 tiles at 100%, 50% and 25% scale for PhotoScroller.
for i in 100 50 25; do
    for file in *.JPG; do
        convert "$file" -scale "${i}%x" -crop 256x256 \
            -set filename:tile "%[fx:page.x/256]_%[fx:page.y/256]" \
            +repage +adjoin "${file%.*}_${i}_%[filename:tile].${file#*.}"
    done
done
The code from Apple is unchanged. The bizarre thing is that the images they provide, already tiled, work like a charm at run time, side by side with my images :(
My first guess was that the size of the tiles didn't match the calculations in the code, but changing the sizes didn't fix it, either in my script or in the code. My images are usually smaller than those provided by Apple, about half the size actually.
Anyone got the same issue?
I had problems with both solutions: Damien's approach did not fully eliminate the lines at all zoom scales, and Brent's solution removed the lines but added some artifacts at tile borders.
After googling around for some while, I finally found a solution that worked nicely for me: http://openradar.appspot.com/8503490 (comment by zephyr.renner).
After all, Apple's assumption that CTM.a == CTM.d doesn't seem to be "safe" at all...
I had the exact same issue here, using the PhotoScroller code. The problem appears when the scale is not right in -(void)drawRect:(CGRect)rect.
You need to round the scale: add scale = 1.0f / roundf(1.0f / scale); after CGFloat scale = CGContextGetCTM(context).a; (this also prevents tiles from being drawn twice).
And draw the tiles 1px larger: add tileRect.size.width += 1; tileRect.size.height += 1; after tileRect = CGRectIntersection(self.bounds, tileRect);.
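Put together, the relevant part of PhotoScroller's TilingView drawRect: then looks roughly like this (a sketch following Apple's sample, where tileForScale:row:col: is the sample's tile loader):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGFloat scale = CGContextGetCTM(context).a;
    // round the scale; this also prevents tiles from being drawn twice
    scale = 1.0f / roundf(1.0f / scale);

    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;

    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol  = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow  = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            UIImage *tile = [self tileForScale:scale row:row col:col];
            CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
                                         tileSize.width, tileSize.height);
            tileRect = CGRectIntersection(self.bounds, tileRect);
            // draw each tile one pixel larger to hide the seams
            tileRect.size.width += 1;
            tileRect.size.height += 1;
            [tile drawInRect:tileRect];
        }
    }
}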
I have encountered this same PhotoScroller issue and Damien's solution was very close but requires one minor correction to completely eliminate those pesky seams.
Drawing tiles one pixel larger didn't work at all zoom levels for me. The reason is that we are drawing the image at the original resolution and then it is scaled by the CTM to the screen resolution.
So, the 1 pixel we added actually becomes 1/4 of a pixel when drawn at the 25% zoom level on the screen.
Therefore, to enlarge the tile by one pixel on the screen we would need to add 1.0/scale to the width/height. (and this should be done before calling CGRectIntersection)
tileRect.size.width += 1.0/scale; tileRect.size.height += 1.0/scale;
tileRect = CGRectIntersection(self.bounds, tileRect);
I am trying to use a UIImage with stretchableImageWithLeftCapWidth: to set the image in my UIImageView, but am encountering a strange scaling bug. Picture my image as an oval that is 31 pixels wide: the left and right 15 pixels are the caps, and the middle single pixel is the scaled portion.
This works fine if I set the left cap to 15. However, if I set it to, say, 4, I would expect to get a 'center' portion that is a bit curved as it spans the middle, while the ends are a little pinched.
What I get is the left cap looking correct, followed by a long middle portion as if only the single pixel at position 5 were scaled, then a portion at the right of the image where it expands and closes over a width about twice that of the original image. The resulting image looks like a thermometer bulb.
Has anyone seen odd behavior like this and might know what's going on?
Your observation is correct, Joey: stretchableImageWithLeftCapWidth: does NOT expand the whole center of the image as you would expect. It only expands the pixel column just right of the left cap and the pixel row just below the top cap!
Use UIView's contentStretch property instead, and your problem will be solved. Another advantage to this is that contentStretch can also shrink a graphic properly, whereas stretchableImageWithLeftCapWidth only works when making the graphic larger.
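A small sketch of the contentStretch approach (the 15px cap and the image name are illustrative; contentStretch takes a rect in unit coordinates, 0..1):

UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"oval"]];
// stretch only the thin column in the middle; keep 15px caps on each side
CGFloat capX = 15.0f / imageView.image.size.width;
imageView.contentStretch = CGRectMake(capX, 0.0f, 1.0f - 2.0f * capX, 1.0f);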
Not sure if I got you right, but leftCapWidth etc. are made for rounded corners: everything in the rectangle inside the rounding radius is stretched to fit the space between the 'caps' on the destination button or such.
So if your oval is taller or wider than 4 x 2 = 8, whatever is in the middle rectangle will be stretched. And yours is, so it would at least look a bit ugly! But if it's not even symmetrical, something has affected the stretch. Maybe something to do with origin or frame, or when it's set, or maybe it's set twice, or you have two different stretched images on top of each other, giving the thermometer look.
I once created two identical buttons in the same place, using the same retained object, of course throwing away the previous button. Then I wondered why the heck the button didn't disappear when I set its alpha to 0... But it did; it's just that there was a 'dead' identical button beneath it :)