I am setting a source surface and drawing, and when the bitmap is stretched I get a border of the target surface's color at the edges.
I believe this is because the interpolation at the edge is sampling alpha = 0. How can I make it clamp the interpolation to the colors at the edge of the source texture? This would be the equivalent of GL_CLAMP_TO_EDGE in OpenGL.
I found the answer. CAIRO_EXTEND_PAD is what I needed.
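For reference, a minimal sketch of what that looks like in Cairo, assuming cr is an existing cairo_t* and source is the source surface (the scale factors sx and sy are placeholders for whatever stretch is applied):

// Stretch the source by scaling the context (sx, sy are placeholder
// scale factors), then clamp sampling to the source's edge pixels --
// the Cairo equivalent of GL_CLAMP_TO_EDGE.
cairo_scale(cr, sx, sy);
cairo_set_source_surface(cr, source, 0, 0);
cairo_pattern_set_extend(cairo_get_source(cr), CAIRO_EXTEND_PAD);
cairo_paint(cr);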
I've noticed that the GIMP layer mask, which I believe should be at worst an 8-bit (256 values) grayscale layer, does not have full precision. In RGB 8-bit precision mode I can create an opaque image with a smooth transition between white and black, using all the expected 0-255 values for R, G, B, i.e. (0,0,0), (1,1,1), (2,2,2), ..., (255,255,255).
When I create a layer mask from it and view the layer mask, the result is an immediate jump from (0,0,0) to (13,13,13).
The layer mask should have 0-255 values from black to white, no? It is grayscale (you can't colorize it). It's not an issue with the conversion, because attempting to edit the layer mask with a smooth gradient results in the same blotchy transition.
Working at a higher color precision should not be necessary, and it only helps when viewing the image: exporting a PNG in 8-bit RGBA results in the same lack of precision in the A channel.
The layer mask is in linear-light mode, and this is a known issue: https://gitlab.gnome.org/GNOME/gimp/-/issues/3136
I am completely new to this, so I expect I may be doing something trivial incorrectly. I have a regular sphere in my Unity3D scene and a .jpeg image with a number in various places and orientations. When I apply the image to the sphere as a texture, the numbers located centrally in the image display fine on the sphere, but those closer to the top or bottom of the image appear warped on the sphere. For example, with the number 12, the bases of the 1 and the 2 are bigger and the number tapers the further up you go when rendered on the sphere.
This is not an error on your part; this is the way the texture is mapped to a sphere by default.
To 'fix' this you would have to compensate for that distortion in your texture directly, or modify the UV coordinates of the sphere.
What I want to do is color an irregular shape when the user touches within that path.
It is the same idea as flood fill, but I found that flood fill is too costly in terms of performance/speed/memory. So I have an idea, but I don't know how to implement it: CGContextFillPath fills an irregular shape.
So my question is: can we get the bounding path/border line of that shape so that we can color that region?
It sounds like you have an image with a shape in it, where all the pixels in the shape are one color, and the boundary of the shape is a different color.
If I understand you correctly, you would have to use a flood-fill algorithm to find the boundary of the shape so you could turn that boundary into a CGPath. There's no magic way to get a path for the boundary of the shape without looking at the pixels.
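If the fill itself is the goal, a plain flood fill over the raw pixel buffer is still the most direct route. Here is a minimal sketch, assuming a tightly packed 32-bit RGBA buffer and an exact color match against the pixel under the touch (the function name, buffer layout, and exact-match rule are all assumptions, not a library API):

#include <stdint.h>
#include <stdlib.h>

// 4-connected flood fill over a width*height buffer of 32-bit pixels.
// (x, y) is the touched point; fillColor uses the buffer's byte order.
static void floodFill(uint8_t *pixels, int width, int height,
                      int x, int y, uint32_t fillColor) {
    uint32_t *p = (uint32_t *)pixels;
    uint32_t target = p[y * width + x];
    if (target == fillColor) return;

    // Explicit stack of pixel indices; each pixel is colored when it is
    // pushed, so it can never be pushed twice.
    int *stack = malloc((size_t)width * height * sizeof(int));
    int top = 0;
    p[y * width + x] = fillColor;
    stack[top++] = y * width + x;

    while (top > 0) {
        int idx = stack[--top];
        int px = idx % width, py = idx / width;
        int neighbors[4] = { idx - 1, idx + 1, idx - width, idx + width };
        int inBounds[4]  = { px > 0, px < width - 1, py > 0, py < height - 1 };
        for (int i = 0; i < 4; i++) {
            if (inBounds[i] && p[neighbors[i]] == target) {
                p[neighbors[i]] = fillColor;   // claim it before pushing
                stack[top++] = neighbors[i];
            }
        }
    }
    free(stack);
}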
When I render a cube and texture it I end up with white edges along the cube. I've checked the vertex and texture coordinates and they look fine to me. My texture is a power of 2. It is a texture map containing 4x4 textures in which each texture is 16x16 pixels. Does anyone have any suggestions?
I guess you are experiencing texture bleeding. You can solve it either by using GL_CLAMP on your textures or by slightly adjusting your UV coordinates, e.g. to 0.0005 and 0.9995 instead of 0 and 1, to compensate for texture sampling artifacts.
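For example, the wrap mode is set once per texture (this only helps at the outer edges of the atlas), and per-tile UVs can be inset by half a texel so the sampler never reads the neighbouring tile. A sketch assuming the 64x64 atlas of 16x16 tiles described in the question (textureId, tileX and tileY are placeholders):

// Clamp sampling at the texture's outer edges. GL_CLAMP_TO_EDGE is the
// modern name; GL_CLAMP is the older one mentioned above.
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Inset each tile's UVs by half a texel (one texel = 1/64 of the atlas)
// and use (u0, v0)-(u1, v1) as the quad's texture coordinates.
float texel = 1.0f / 64.0f;
float u0 = (tileX * 16 + 0.5f)  * texel;
float v0 = (tileY * 16 + 0.5f)  * texel;
float u1 = (tileX * 16 + 15.5f) * texel;
float v1 = (tileY * 16 + 15.5f) * texel;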
I am rotating my image with the following code:
CGAffineTransform rotate = CGAffineTransformMakeRotation( [ratio floatValue] );
[imageView setTransform:rotate];
But the edges of the rotated image come out jagged rather than clean; does someone know a solution for this?
Here's the image I get:
Using the UIImageView's CALayer to rasterize the image will provide the best antialiasing.
imageView.layer.shouldRasterize = YES;
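On Retina screens it is usually worth also matching the layer's rasterization scale to the screen, otherwise the rasterized bitmap can look soft (assuming imageView is the rotated view):

// Rasterize at the screen's scale so the cached bitmap stays sharp
// on Retina displays.
imageView.layer.rasterizationScale = [UIScreen mainScreen].scale;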
The edges of the image itself look jagged because they are being placed into a pixel grid directly, and not being interpolated. Nearest Neighbor Interpolation is the simplest kind of interpolation, where if you have pixel grid A and you move your image to pixel grid B, the pixels in grid B are chosen by simply choosing the closest pixel from grid A. Other forms of interpolation choose a weighted average of the closest pixels to arrive at the pixel value in grid B.
Your image, with its jagged edges, looks like it's using nearest-neighbor interpolation, which may be the default type of interpolation for an affine transform on an iPhone.
When you use an interpolation scheme other than nearest neighbor, you get resampling artifacts instead: the subsampling isn't perfect as you transfer from one pixel grid to another, and that makes edges in the image seem blurrier than they otherwise would.
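If the interpolation itself is the problem, another option (a sketch, not something from the answers here; image and angle stand in for the original image and the rotation in radians) is to render the rotated image offscreen through Core Graphics with a higher interpolation quality and assign the result to the view:

// Render the rotated image into an offscreen bitmap using a
// higher-quality resampling filter, then show the pre-rotated bitmap.
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
CGContextSetShouldAntialias(ctx, true);
// Rotate about the centre of the view's bounds.
CGContextTranslateCTM(ctx, imageView.bounds.size.width / 2,
                           imageView.bounds.size.height / 2);
CGContextRotateCTM(ctx, angle);
[image drawInRect:CGRectMake(-image.size.width / 2, -image.size.height / 2,
                             image.size.width, image.size.height)];
imageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();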
Just add a 1px transparent border to your image; the transparent edge gives the renderer something to blend the outermost pixels against, so the border comes out antialiased:
// Draw the image inset by 1px into a context of the original size,
// leaving a transparent one-pixel border around it.
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContext(imageRect.size);
[image drawInRect:CGRectMake(1, 1, image.size.width - 2, image.size.height - 2)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Any time you transform an image (except in 90° increments), pixel interpolation is going to leave the edges slightly softer.
There is a key that you can set in Info.plist that enables antialiasing of the edges: UIViewEdgeAntialiasing.
As described in the following answer: When I rotate an UIImageView with the transform property, the edges pixellate
The simplest solution:
Simply add this key-value pair to your Info.plist:
UIViewEdgeAntialiasing set to YES.
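In the Info.plist XML source that entry looks like this (the key name is taken from the answers above; the value is a Boolean):

<key>UIViewEdgeAntialiasing</key>
<true/>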
https://stackoverflow.com/a/12066215/1469060